1 INTRODUCTION
PETER M. SHANE, JOHN PODESTA, AND RICHARD C. LEONE
This book, like the conference from which it is drawn, is rooted in a straightforward conviction: Even at a time of great national risk, we must resist the reflexive equation of secrecy with security. The government’s embrace of information technology as a tool for social control rather than one of human empowerment threatens to squander profound opportunities for democratic revitalization. The equation of safety with secrecy threatens not only our liberty, but our security as well. We need much more careful reflection on the appropriate shape and content of our national policy regarding public information, as well as on our use of technology in relation to information policy. Such reflection is essential if we are to avoid the tragic errors of earlier historical periods, in which the United States sacrificed liberty unnecessarily in the name of national security. By the mid-1990s—just a very few years after Tim Berners-Lee invented the World Wide Web—it was common to hear cyberoptimists sounding as if the advent of digital information and communications technologies was about to cure all ills besetting advanced Western democracies. The Internet had made possible unprecedented human connectivity. The Web would enable cheap and easy access to a virtually unlimited world of information. The digital divide seemed to be all that lay between the too-frequent apathy, ignorance, and alienation of the American voter and a world of fully informed, fully engaged, democratic citizens.
Few things could have shaken this imaginary link between technology and freedom more than the September 11, 2001, attacks on New York City and Washington, D.C. These monstrous acts dramatically exposed the security vulnerabilities of the United States. Our critical infrastructure—including power grids, banking, and systems of water supply, communications, and transportation—was entirely dependent on networks of computers, and seemed to embody an open invitation to catastrophic terrorist attack. Our nation was suddenly confronted with enormously complex questions about both technology and information policy: How do we prevent terrorist groups from exploiting our open and highly networked society? Are traditional methods of either law enforcement or defense preparedness adequate to defend our domestic security? Can key information about our vulnerabilities be kept out of the wrong hands? Can the world prevent such atrocities from happening again? In the ensuing years, it has become crystal clear that our national responses to these questions have implications not only for our security, but also for our liberty. Today, we are effectively told that, in the name of national security, we must relinquish our privacy, surrender access to critical government information, and voice opposition to government actions timidly. Such is portrayed as the price of safety. Technology that so recently offered the promise of engagement and liberation is now touted for its potential to help monitor and control. The government has sought and obtained vastly increased power to monitor the activities of ordinary Americans. With the passage of the USA PATRIOT Act, anyone using a computer at the same public library, public university, or even on the same block as someone suspected of committing a crime can unwittingly become the subject of clandestine surveillance facilitated by digital networks. But as the government acquires more and more information about its citizens, its citizens are receiving less and less information about their government. The Justice Department, despite a legal mandate to defend the public’s right to know, has become the preeminent champion of government secrecy. Thousands of public documents have been stripped from government Web sites. The Bush administration has taken unprecedented steps to shield both contemporary information about basic domestic law enforcement policy and purely historical records regarding the presidency. In the face of this official retreat from openness, the essays that follow—contributed by leading thinkers in law, engineering, public
policy, statistics, and cognitive psychology—yield a thoughtful framework for developing new national policies regarding public information and the uses of technology to maintain an informed and engaged public. They illuminate the uniqueness of our historical moment, in terms of both the possibilities for deploying new technologies in the cause of openness and the potential for using those same technologies in ways that assault our privacy, liberty, and security. They describe aspects of the current administration’s campaign for unprecedented secrecy and the importance of reconceptualizing the relationships among our most precious values if we are to achieve a twenty-first century public information policy that does justice to American democratic values. Taken together, these nine essays yield six critical tenets that ought to frame how the United States now moves forward in creating a new national policy toward technology and public information.
THE FREE FLOW OF INFORMATION IS ESSENTIAL TO THE SECURITY AND PROSPERITY OF THE UNITED STATES. In the wake of September 11, it is all too easy to imagine the devastation that could arise from inattention to our domestic security. In the face of concrete scenarios in which terrorists could exploit our information systems to their horrific advantage, generalizations about the relationship between freedom of information and the American way of life may seem platitudinous. As documented by our authors, however, the linkage is concrete, undeniable, and fundamental to the public interest. The rule of law, the advance of knowledge, and the capacity of a free people to determine its future through democratic self-governance all depend on the free flow of information. Withholding information from the public can jeopardize security in multiple ways. Secrecy is most obviously dysfunctional when, as John Podesta points out, it prevents communities from assessing and responding to their vulnerabilities. Secrecy likewise undermines the public’s capacity to hold accountable those officials who are responsible for tending to national vulnerabilities. Secrecy also undermines trust at a time of public risk. As documented by Baruch Fischhoff, there is broad consensus among those who have studied risk communication policy regarding “the importance of trust for risk management, and the centrality of open, competent communication in securing” that trust. Victor Weedn takes the point further, showing how government’s failure to involve the public proactively in the development of risk communication systems profoundly reduces the capacity of experts to craft
risk messages that communicate effectively what the public needs to know in order to behave responsibly. The fundamental link between national security and information openness is also demonstrated by Alice Gast’s analysis of the influence of information restrictions on research in science and technology. As Gast points out, over half of the growth in the American economy during the second half of the twentieth century can be traced to the impact of technological innovation—an area in which the United States competes fiercely for world leadership. That leadership, however, depends on three threatened mechanisms for promoting the free flow of information: the housing of the U.S. research enterprise in institutions of higher education that are committed to free inquiry; our hospitality to foreign scholars and students on America’s campuses; and a high degree of freedom for U.S. scholars to pursue international collaborations and academic exchange. Anyone who fantasizes that national security can be maintained as effectively without these practices should contemplate the fate of the Soviet Union, which, of course, took much the opposite tack. For this reason, a sound twenty-first century approach to public information policy must take account of the ways in which openness is not only compatible with, but essential to, security.
THE IMPULSE TOWARD SECRECY INEVITABLY METASTASIZES. Despite every nation’s legitimate need for confidentiality in certain contexts, demands for official secrecy—once indulged by public opinion— inevitably spread in destructive ways that take confidentiality outside its legitimate realm. John Podesta recounts how new laws have given unprecedented power to classify information to domestic regulatory agencies, including the Environmental Protection Agency, the Department of Health and Human Services, and the Department of Agriculture. This trend is potentially disruptive to our entire scheme of government, for it has long been conventional wisdom that the president’s national security powers do not extend to these purely domestic realms. When President Truman claimed inherent power to seize American steel mills on the ground that his military powers extended to the domestic economy, Justice Robert Jackson wrote: “[N]o doctrine . . . would seem to me more sinister and alarming than that a President whose conduct of foreign affairs is so largely uncontrolled, and often even is unknown, can vastly enlarge his mastery over the internal affairs of the country by his own commitment
of the Nation’s armed forces to some foreign venture.”1 The same is true about the danger of extending the president’s domestic powers on the grounds of terrorist risk. The executive branch’s responsibility to protect our homeland security should not be an excuse to shield the government from accountability in protecting our health, environment, and food supply. Big business likewise successfully used September 11 as a handy ticket to jump on the secrecy train. Businesses can now use exemptions from the Freedom of Information Act to shield themselves from public disclosure of the environmental hazards they create or the safety violations they commit. They need only assert that such hazards are related to our homeland security. Of course, the capacity of business concerns to transmogrify into security concerns is not new. As Alice Gast points out, export controls—routinely used to maintain American economic superiority—are now also used to limit academic exchanges, all in the name of national security. The ultimate limits of secrecy’s spillover are hard to predict. In March 2002, White House Chief of Staff Andrew Card ordered executive branch agencies to “maintain and control” “sensitive information related to America’s homeland security that might not meet . . . standards for classification.”2 This means that agencies may now shield information about which the classifying authority cannot “identify or describe” any “damage to the national security” that disclosure of the information “reasonably could be expected to cause.”3 It is reasonable to ask why. The creep of security concerns into irrelevant areas is likewise evident, according to Alice Gast, in the extension of security reviews to international students and scholars whose fields of study bear little or no relevance to any plausible view of national security.
PUBLIC INFORMATION POLICY AND TECHNOLOGY POLICY ARE INESCAPABLY LINKED. Tensions between the values and risks associated with the free flow of information are familiar to everyone. On a day-to-day level, the particular tension that most people experience is not the relationship between security and freedom, but the relationship between personal privacy and the benefits that we realize as a society through the relatively easy flow of information, even about individuals. In this complex domain, George Duncan writes that no policymaker can hope to reconcile the reasonable demands of privacy with the benefits of widespread access to information “[w]ithout
understanding the impact of advances in computer and telecommunications technology.” Those advances, of course, have made information about virtually all of us potentially available on a global basis, and in ways significantly beyond any individual’s capacity to control. Duncan’s point is generalizable to the field of public information policy as a whole. In every area, both the supply and demand for information are affected by what he describes as “[r]apidly falling costs across the spectrum of the information process, including data capture, data storage, data integration, and data dissemination.” It has thus become radically incomplete, if not incoherent, to talk about the future of public information policy without accounting for the now-inevitable migration of information storage, manipulation, and dissemination to digital media. Decisions made about the regulation and pricing of telecommunications services, standards for commercially available hardware and software, and government investment in research and development will now all have critical, if sometimes only indirect, impacts on the degree to which we remain an open society in terms of information access. Debates over technology policy will determine much about the quality of our “information commons.” This conversation must not be limited to engineers and economists.
NEW TECHNOLOGIES HOLD UNPRECEDENTED PROMISE FOR MAXIMIZING THE VALUE OF INFORMATION TO AN EMPOWERED CITIZENRY. In one respect, the cyberoptimists are undeniably correct: Effectively deployed, under propitious social conditions, the information and communications capacities now facilitated by digital networks hold the promise of revolutionizing democratic practice. As George Duncan writes, broad access to information supports democratic decisionmaking. The key, however, lies not simply in the expansion of people’s access to data, but rather in combining enhanced access to data with the capacity to organize human communities online in order to interpret and make use of information for public purposes. It is the active engagement of citizens in political life, through informed discussion, that has the potential to revitalize democracy. Peter Shane argues that administrative agencies ought to use online citizen consultations to help turn data into usable public knowledge about key issues. He offers two examples: using computer-mediated discussions to help agencies create meaningful online
libraries in real time about “hot issues” facing agency decisionmakers, and creating online discussions to assist in agency compliance with the demands of the Data Quality Act. Victor Weedn offers a similar vision, explaining how community-based risk communication systems can incorporate online opportunities for citizens to become involved not only in supplying risk-relevant information, but also in critiquing and interpreting potential risk messages, to help ensure that available information is being communicated to the public most effectively. In truth, we are probably only at the very beginning of imagining the technical possibilities for using computer-mediated communication to enhance and intensify the “real space” activities that constitute our democratic life. Technology can aid communities in identifying, debating, and addressing public issues with a level of inclusiveness and accountability previously unheard of. In developing public policy at the intersection of technology and public information, we should therefore be especially careful to avoid approaches that would impede the deployment of technology-enhanced techniques for enlarging democratic participation and deepening citizen engagement with public life.
BECAUSE TECHNOLOGY IS JANUS-FACED, “DEMOCRATIC INTENTION” IS AS CRITICAL TO SHAPING THE FUTURE AS SOUND ENGINEERING. Technologies are not inherently democratic. Even tools with the potential to support an unprecedented degree of dialogue, collaboration, and access to information can be deployed to sustain unprecedented surveillance and control. If we are to realize the full democratic potential of new information technologies, a key ingredient in their design and deployment must be what cyberactivist Steve Clift has called “democratic intent.” George Duncan and Joel Reidenberg make a powerful case that, with regard to computer-mediated access to personal data, democratic intent actually favors a robust concern for personal privacy. Duncan’s research focuses especially on the use of large public databases. He emphasizes the degree to which lowered costs of information capture, storage, and integration have made databases about individuals both easier to compile and infinitely more useful for social decisionmaking. Whether such databases are used routinely to bolster democratic policymaking, however, may depend significantly on whether citizens become fearful of their privacy implications. Even databases that purport to use anonymous data may compromise individual privacy because the
potential exists for linking records across different databases in ways that reveal individual identities. If the privacy costs of using such databases appear to be too great, communities will wind up having less access to such data, at some cost to the quality of relevant public policymaking. What Duncan shows, however, is that we need not sacrifice data access in order to protect privacy. Mathematical techniques are available that maintain the social utility of databases, while at the same time masking the links between specific behavioral data points and identifiable persons. A public policy guided by democratic intent—or what Duncan calls a democratic “ethical framework”—will reconcile the demands of data access and privacy by advancing both. Reidenberg explains that the United States might well look to the privacy policies of other nations for examples. He observes that other nations recognize citizen confidence in the fair treatment of their personal information to be a significant aspect of personal security, and understand also that citizens need a robust realm of privacy in order to exercise their sense of autonomous participation in democratic life. Pervasive surveillance can destroy not only individual expectations of privacy, but also the trust in government that is essential to any community’s effective exercise of collective self-determination. With important exceptions, American law generally presumes that individuals lose their privacy interests in particular information once they voluntarily disclose that information in any public setting. Our government’s seemingly insatiable appetite for collecting information, combined with public access laws that tend to underplay the value of privacy, means that a great deal of personally identifiable and potentially sensitive data is automatically considered public information. Moreover, our laws are relatively inattentive to the secondary use of those data by the private sector. We are more inclined to treat each person’s privacy in the sphere of commerce merely as an economic good to be protected or not by market forces. By contrast, Europe is very concerned with the uses of personal information by both private actors and the government. Europe, like most of the world, treats privacy as a human right, not just as an economic good. U.S. policies based on similar democratic intent would thus be more vigilant and restrictive than those now in place and would change our definition of privacy for the better. Privacy is not only compatible with but essential to an open society;
our governing metaphor should therefore not be one of “trading off,” but of enlarging the pie.
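Duncan makes two claims here: that nominally anonymous records can be re-identified by linking them with other databases, and that mathematical masking can protect identities while preserving a database's social utility. The essays do not specify the techniques he has in mind; the sketch below, in Python with invented names and records, is only a minimal stand-in for the far more rigorous statistical disclosure-limitation methods (such as k-anonymity or calibrated noise addition) applied in practice.

# A minimal sketch, with hypothetical data, of the linkage risk and the
# masking remedy described above; not any author's actual method.

# An "anonymous" research table: names removed, attributes left exact.
health_records = [
    {"zip": "53715", "birth_year": 1961, "sex": "F", "diagnosis": "asthma"},
    {"zip": "53715", "birth_year": 1974, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "53703", "birth_year": 1988, "sex": "F", "diagnosis": "flu"},
]

# A public roster (say, a voter roll) sharing the same quasi-identifiers.
voter_roll = [
    {"name": "A. Jones", "zip": "53715", "birth_year": 1961, "sex": "F"},
    {"name": "B. Smith", "zip": "53715", "birth_year": 1974, "sex": "M"},
    {"name": "C. Davis", "zip": "53703", "birth_year": 1988, "sex": "F"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def link(records, roster):
    """Re-identify any record whose quasi-identifiers match exactly one person."""
    matches = []
    for rec in records:
        hits = [p for p in roster if all(p[k] == rec[k] for k in QUASI_IDS)]
        if len(hits) == 1:  # a unique match attaches a name to the record
            matches.append((hits[0]["name"], rec["diagnosis"]))
    return matches

print(link(health_records, voter_roll))
# Every "anonymous" record is re-identified by name.

def generalize(rec):
    """Mask the linkable detail: truncate ZIP codes, bucket birth years by decade."""
    return {**rec, "zip": rec["zip"][:3] + "xx",
            "birth_year": rec["birth_year"] // 10 * 10}

masked = [generalize(r) for r in health_records]
print(link(masked, voter_roll))
# The exact-match linkage now fails, while aggregate counts by region,
# decade, or diagnosis (the database's social utility) survive.

The design point of the sketch is Duncan's: the privacy harm comes not from any single field but from the joint precision of the quasi-identifiers, so coarsening them can break the link to named individuals while leaving the statistics that policymakers need intact.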
WE NEED NEW INSTITUTIONS. The framers of the U.S. Constitution understood that the merits of good ideas would not alone ensure their adoption. Measures in the public interest might always be defeated by arguments from self-interest and passion—unless, that is, our public institutions could be so configured as to ensure a just and inclusive deliberation over all matters of public policy. Right now, we do not have an institutional structure with the right mix of checks and balances to ensure adequate consideration of the arguments in favor of freedom of public information and the protection of personal privacy. Each is the mission of every agency, and so, as Sally Katzen and Peter Shane imply, neither is the central preoccupation of any. We ought, therefore, to have at least two new significant and well-funded offices to help shape the American information society of the twenty-first century: a government-wide Freedom of Information Office and an Office of Privacy Protection. For the latter, clear models already exist. As discussed by Reidenberg, government data privacy offices around the world help ensure that governments justify the fairness of their information practices. They typically act also as neutral third parties to which businesses likewise have to justify practices that have the potential to compromise individual privacy. A new Freedom of Information Office could have at least three critical functions. The first would be overseeing key policy choices about new information technologies that would serve public access to government. As Shane recounts, a host of technical questions regarding engineering and design can affect the quality of public access: matters from search engine development to user interface design to the construction of back-end databases, to name but a few. These choices ought to be overseen by an office whose primary accountability, in Shane’s words, is “for the degree of public access and transparency [the office] achieves for the American people.” Second, such an office might facilitate what Sally Katzen, citing an Office of Management and Budget circular, calls the “A-130 approach” to government disclosure of information. Under such an approach, the burden in achieving widespread and accessible dissemination of government information would shift from individual requesters to the government. Such an office would need a rigorous
set of internal checks and balances to warrant public confidence that the office was genuinely pursuing openness in a nonpartisan and objective way. Finally, such an office might serve as a neutral arbiter concerning claims of so-called deliberative privilege against the public disclosure of executive branch information. That is, the Freedom of Information Act might be amended to call for the disclosure of all internal deliberative documents, at least as related to official proceedings such as administrative rulemaking. Exemptions would occur only if an agency could persuade the Freedom of Information Office that disclosure of the information could reasonably be expected to cause an articulable harm to the interests of the United States. These six tenets do not belie the importance of secrets to the security of even the freest society. Releasing tactical and strategic information into the wrong hands can plainly jeopardize peace, prosperity, and human life. At the same time, we must not forget that the free flow of information is essential to those aspects of our national life that make us strongest. In the prescient words of James Madison, “A popular government without popular information . . . is but a prologue to farce or tragedy or perhaps both.”4 Now is a time for both courage and imagination. Too much is at stake for Americans to accept either that the key issues facing us with regard to our public information policy are too technical for popular understanding or that tragic tradeoffs among security, privacy, and liberty are simply inevitable. In pursuing our highest priority values, we should eschew talk of “tradeoffs” as long as possible in favor of a national dialogue about how to optimize their common realization. We are justified in chastising leaders who embrace too readily the temptations of secrecy over the virtues of transparency. It is imperative that concerned citizens reassert the value of openness in formulating new and more productive approaches toward reconciling the imperatives of both security and freedom, and the deployment of technology to advance both.
2 NEED TO KNOW: GOVERNING IN SECRET
JOHN PODESTA
In late January 2003, U.S. spy satellites photographed North Korean trucks pulling up to the Yongbyon Nuclear Complex to begin the process of removing nuclear fuel rods from the complex’s storage tanks. The fuel rods, which had been safely stored pursuant to a 1994 agreement between the United States and North Korea, contained enough plutonium to make five to six nuclear weapons. In the hands of terrorists, the plutonium, once reprocessed, would constitute the makings for a series of devastating dirty bombs. The photographs signaled a dramatic escalation of the crisis brewing over the North Korean nuclear weapons program. The administration of President George W. Bush reacted to this new evidence of North Korea’s dangerous intentions with a strategy that has come to characterize its approach to governing: keep it secret; cover up the evidence. This was not a case of the intelligence community protecting its sources and methods of intelligence-gathering from the enemy. The North Koreans were operating completely out in the open, with absolutely no deception, well aware of the U.S. ability to see exactly what was going on. Indeed, the North Koreans were virtually waving at the cameras. It was not the North Koreans who were being kept in the dark; it was the American people and their representatives in Congress. While top administration officials deny even now that they were
suppressing intelligence for political reasons, it seems clear, as more candid officials have admitted on background, that the administration was slow to confront the North Koreans publicly for fear that an escalating crisis on the Korean Peninsula would interfere with its public relations offensive against Saddam Hussein. The administration’s embrace of secrecy, so vividly demonstrated by its handling of the Korean episode, has been evident since shortly after President Bush’s inauguration and has increased exponentially since September 11. In addition to the secret workings of the new Department of Homeland Security, three other domestic agencies—the Department of Health and Human Services, the Environmental Protection Agency (EPA), and the Department of Agriculture—have been given unprecedented power to classify their own documents as “secret, in the interests of national security.” The Justice Department, formerly charged with defending the public’s right to know, has become a veritable black hole when it comes to the release of government information. Neither the House nor the Senate Judiciary Committee has been able to get essential information about how the Justice Department has implemented the USA PATRIOT Act’s enforcement authorities, despite the fact that all the members of these two committees have security clearances. The House committee chairman, Representative James Sensenbrenner, a Republican, had to threaten to subpoena Attorney General John Ashcroft before the Justice Department would answer even the most basic questions on the use of USA PATRIOT Act authorities. In October 2001, Ashcroft also radically changed the Justice Department’s interpretation of the Freedom of Information Act (FOIA). He urged all government agencies to withhold documents if there was any possible legal reason to keep them secret. He told those who rejected FOIA requests to “rest assured”—the Justice Department would defend their decisions unless they “lacked a sound legal basis.” In so doing, the attorney general reversed the fundamental principle behind FOIA: the presumption of disclosure. Moreover, the Justice Department also steadfastly refuses to release the names of the hundreds of Muslim men whom it detained after September 11. In a recent ruling, Judge Gladys Kessler of the U.S. District Court for the District of Columbia called the secret detentions “odious to a democratic society” and “profoundly antithetical to the bedrock values that characterize a free and open one such as ours.” She found that withholding the names of those imprisoned was not
permitted under the FOIA and ordered their release. The order has been stayed pending appeal. The focus on secrecy clearly has the blessing of the White House. On March 20, 2002, Andrew Card, chief of staff to the president, issued a memo that ordered an immediate reexamination of all public documents posted on the Internet. The memo encouraged agencies to consider removing “sensitive but unclassified information.” Six thousand public documents have already been removed from government Web sites. The number of decisions to make a document, video, or audio recording classified, already up 18 percent under the Bush administration before September 11, continues to grow rapidly. The breadth and the scope of the redaction of government information are astounding:
◆ the National Imagery and Mapping Agency has stopped selling large-scale digital maps;
◆ the Federal Aviation Administration has removed from its Web site data on enforcement actions against air carriers;
◆ the Bureau of Transportation Statistics has removed transportation spatial mapping data from its Web site;
◆ the Department of Transportation has removed pipeline-mapping information from its Web site;
◆ the Agency for Toxic Substances and Disease Registry has dropped its report on chemical site security;
◆ the Nuclear Regulatory Commission’s Web site was completely down for six months and now has extremely limited information;
◆ public access to the Envirofacts database posted by the EPA has been severely limited;
◆ the EPA Risk Management Plans, which provide important information about the dangers of chemical accidents, including emergency response plans, were removed even after the Federal Bureau of Investigation (FBI) admitted there was no unique terrorist threat;
◆ the Department of Energy’s Web site for national transportation of radioactive materials was taken down;
◆ Federal Depository Libraries have been asked to destroy CD-ROMs of U.S. geological water supplies;
◆ even access to the Internal Revenue Service reading room has been restricted in the name of national security.
While the administration has been grappling with how to apply the new and slippery concept of “sensitive but unclassified” to its own records, it has pressured the scientific community into applying it to peer-reviewed research. At a January 2003 meeting, convened by the National Academy of Sciences (NAS), administration officials warned that if scientific journals did not voluntarily censor themselves, the government would likely step in to mandate censorship. Subsequently, the NAS made a pact, drafted with the help of administration officials, to censor articles that could compromise national security. Censorship decisions will not take into consideration the scientific merit of the article. While most scientists recognize the need to have better controls on feedstocks that can be converted to bioweapons, many believe that the new policy may deter research and prevent the dissemination of information that could lead to new defenses against biological attacks (such as immunization and quarantine strategies). When it comes to protection against terrorist attacks, however, the administration has more faith in silence than in knowledge. The extent of the current administration’s preoccupation with secrecy has even bled into the unprecedented effort, begun by President Bill Clinton, to declassify historically valuable records from World War II, the early days of the Cold War, and Vietnam. That initiative, which led to nearly one billion pages of formerly classified records being made available to scholars and historians, has nearly ground to a halt despite the lack of any conceivable connection to the terrorist threat.
THE ULTIMATE REGULATION
George W. Bush is certainly not the first president to use secrecy and the control of government information as a weapon to mold public attitudes in support of administration policy. Modern history is replete
with examples—from the Cold War to Vietnam to Iran-Contra—of presidents of both parties who sought to avoid public oversight of controversial policies by keeping accurate information from the public. But President Bush’s efforts have been unprecedented in promoting policies that expand government secrecy at almost every level, in restricting public access to vital health and safety information, and in removing publicly generated information from the public domain. Indeed, this president’s policies reverse important trends of the last four decades toward more openness in government. The administration’s advocacy of more secrecy in government is often couched in terms of a necessary national security reaction to September 11—but it well predates those tragic events. It is entirely appropriate and necessary for our country to reexamine the balance among the rights of individuals, the values we cherish as an American community, and the need to secure our nation from the threat of transnational terrorism. But President Bush’s embrace of this new culture of secrecy will not only leave our democratic institutions weaker, it may leave the country less secure in the long term. Of course, there are secrets worth protecting. It is beyond dispute that some information must be closely held to protect national security and to engage in effective diplomacy. Often our interest in protecting the method by which information was obtained is even greater than our interest in protecting its content. For example, when disclosures of classified information mention telephone intercepts, other nations often take heed and find more secure ways to communicate. It is also beyond dispute that unauthorized disclosures can be extraordinarily harmful to U.S. national security interests and that far too many such disclosures occur. They damage our intelligence relationships abroad, compromise intelligence-gathering, jeopardize lives, and increase the threat of terrorism. Today, we are confronted with an enemy that operates in the shadows—an enemy that will not only tolerate but also actively seek out civilian casualties. These are people hell-bent on acquiring weapons of mass destruction and putting them to use. The operational requirements of a global war against terrorism only enhance the government’s legitimate needs to mount clandestine actions. Whether those actions are arresting suspected terrorists in Naples, Italy, or firing a Hellfire missile from an unmanned Predator aircraft at al Qaeda leader Abu Ali in Yemen, secrecy is essential. But the troubling aspect of this administration’s approach to secrecy is its conversion of the legitimate desire for operational security into an
excuse for sweeping policies that deny public access to information and public understanding of policymaking. President Bush was right to say, in his 2002 State of the Union address, that “America is no longer protected by vast oceans. We are protected from attack only by vigorous action abroad and increased vigilance at home.” But openness does not destroy security; it is often the key to it. The American people cannot remain vigilant if they remain ignorant. To be sure, some critical defense and security information must be kept from public view, but strengthening homeland security requires public knowledge of potential threats and the public will to take corrective action to deal with unacceptable risks. Let us recognize secrecy for what it is: government regulation of information. The more tightly one controls information, the more stringent and complex the regulations must be. The late Senator Daniel Patrick Moynihan once said that secrecy is the ultimate form of regulation because the people do not even know they are being regulated.
THE RISKS OF SECRECY
One has to ask whether it was genuine security concerns or the pleas of the business lobbyists that led the administration to insist on the secrecy provision buried in the legislation that created the Department of Homeland Security. This provision effectively guts the FOIA with respect to vital public health, safety, and environmental information submitted by businesses to the federal government. FOIA already prohibited the disclosure of information that could threaten national security. But this new provision prohibits the disclosure of information that relates in any way to the protection of “critical infrastructure” that private industry labels “sensitive” and discloses to the government. Not only does the public lose its right to know anything about hazards that could affect their communities, now the government is under an affirmative obligation to keep this information secret. The exemption provides a convenient way for businesses to conceal from public disclosure even routine safety hazards and environmental releases that violate permit limits. Shielded from public scrutiny, these hazards are much less likely to be addressed. Senate negotiators had worked out a compromise, one that was more narrowly tailored to encourage businesses to enhance security
protection for critical infrastructure without upending community right-to-know laws. But that compromise was rejected by the administration. The enacted new provision will expose Americans unknowingly to more dangers than they might otherwise have faced. Similar concerns arise from the administration’s approach to dealing with the serious homeland security threat posed by the storage of dangerous toxic chemicals. Industrial manufacturing facilities storing acutely toxic chemicals such as chlorine gas, ammonia, and cyanide present a potentially enormous and devastating opportunity for terrorists. The EPA has estimated that at least 123 plants store toxic chemicals that, if released through explosion, mishap, or terrorist attack, could result in deadly toxic vapor plumes that would put more than 1 million people at risk. In the U.S. Army Medical Department’s worst-case estimate, a terrorist attack on such a chemical plant would lead to about 2.5 million deaths. While there are more than 75,000 chemicals in commerce and some 20,000 industrial manufacturing facilities storing industrial chemicals across the country, only a small number of chemicals—probably fewer than two dozen—would be of keen interest to terrorists because they explode into large lethal plumes that kill or maim on contact. It is thus possible to sharply reduce threats in the chemical industry by focusing on a small number of the worst chemicals and a small number of the most dangerous plants. Furthermore, very practical steps are available at these priority facilities to minimize or eliminate them as terrorist targets. The facilities can substitute less toxic alternatives for their most acutely hazardous ingredients; they can convert to “just-in-time” manufacturing, whereby the most highly toxic molecules are synthesized immediately before use rather than synthesized separately and stored in bulk reserve; and they can reduce storage volumes of the most acutely toxic chemicals. When originally assessing the threat terrorism posed to the chemical manufacturing industry, the administration, led by the EPA and Tom Ridge (then the assistant to the president for homeland security), embraced a strategy of risk reduction. They planned to inspect the worst facilities to ensure that practical and necessary steps to reduce unnecessary risk and to ensure public safety had been taken. But after receiving intense pressure from the chemical industry, the administration backed down, settling for voluntary efforts by the industry to strengthen site security by building stronger fences and adding guard dogs—measures that do nothing to eliminate the target or
reduce the risk of catastrophic accident. Because the EPA is not even requiring that companies report to the government the steps they have voluntarily taken at their facilities, the government lacks needed information about the extent to which this very dangerous class of terrorist targets has been minimized. And thanks to the new secrecy provisions, people living immediately adjacent to these potential targets know less than ever about what is going on behind the chain-link fences. Therefore, local citizens who might be affected the most are less able or likely to demand corrective action. Now the Department of Justice has floated a new draft of terrorism legislation, dubbed the USA PATRIOT Act II. One might have hoped the new proposals would contain the kind of clear regulatory authority sought by the EPA administrator, Christine Todd Whitman, and Secretary of Homeland Security Tom Ridge to reduce the threat posed by these industrial facilities. Instead, the bill contains a provision that would further restrict public access to existing chemical company reports. These reports, mandated by the Clean Air Act, describe the worst-case scenarios that would result from chemical spills, industrial accidents, or explosions.
SHOWING STRIPES, PRE–SEPTEMBER 11
Although the administration justifies this broad expansion of government secrecy as a response to new security threats, the administration’s preference for secrecy predates September 11 and includes areas that have never before been viewed as matters of critical national security. For example, the administration has removed from government Web sites information regarding the use of condoms to prevent HIV/AIDS, the fact that abortions do not increase the risk of breast cancer, Labor Department statistics on mass layoffs, and budget information showing state-by-state cuts in federal programs. Withdrawing access to such materials seems to have more to do with satisfying the Republican base or avoiding embarrassment than with protecting national security secrets from Osama bin Laden. In the same vein, shortly after his inauguration, President Bush ordered a review of current policy regarding the disclosure of presidential records. He later signed an executive order allowing the current or a future president to block the release of any presidential
record—an order he then used to block the release of documents from Ronald Reagan’s administration that were potentially embarrassing to members of the current administration. That executive order violated the spirit, if not the letter, of a 1978 law that affirms that presidential records belong to the public and requires that they be released within twelve years after a president leaves office—subject to narrow exceptions, including national security. The law was passed in response to Richard Nixon’s claim that he personally owned all his presidential records. For his part, Vice President Dick Cheney spent nearly a year and a half blocking the efforts of Congress and the General Accounting Office to acquire information about his energy task force, including the names of energy company lobbyists who attended task force meetings and how much these sessions cost the government. A U.S. district court judge, appointed by President George W. Bush, rejected a lawsuit that Comptroller General David Walker filed against Vice President Cheney to obtain the names of the lobbyists. There can be little doubt: from its early days, this administration has placed little value on the principle of an informed public.
DRAWING LINES
Before September 11, the administration’s predilection for secrecy had aroused a reasonable degree of scrutiny in the media, best exemplified by front-page reports about the vice president’s efforts to conceal details about his energy task force. But the post–September 11 environment has given the administration cover to act with virtual impunity. The government is now concealing important actions with only the most convoluted connections to the war against terrorism. The nation is sure to pay a steep price—as it has so often in the past when its citizens have been kept unjustifiably in the dark. Taken together, the secrecy initiatives of the Bush administration take us back to an era we had all but forgotten: the advent of the Cold War. Duct tape and plastic sheeting may have replaced fallout shelters, but the existence of a massive bureaucracy to control government information is all too familiar. Such information control can lead today to an invidious, paranoid culture of secrecy, just as it did in the 1950s. By deeming everything under the sun a secret, President George W. Bush has affected our ability to distinguish what is really
a secret from what is not. He has infected the entire system of security classification with ambiguity and weakened the argument for nondisclosure. By doing so, he has, paradoxically, undermined our security by denying the public the vital information we need to strengthen security here at home. Are we made more secure by concealing the fact that any one of 123 chemical plants around the country could endanger a million or more people if attacked? Or are we better off informing the public so that people can demand that the risk of terrorist incidents or catastrophic accidents be reduced at those plants? Similarly, does concealing the fact that U.S. customs inspectors are only able to examine 1 to 2 percent of the shipping containers entering the United States make us more secure? Or are we better off if the public knows enough to demand that the inspection process be improved by identifying vulnerable loading docks and tracking the movement and condition of each container from the point of origin to its final destination? Is concealing the Department of Energy’s plan to ship high-level nuclear waste within a mile of congressional office buildings the route to security? Or are we better off if the public knows so that they can demand new routes or storage solutions that do not put the Capitol at risk? We clearly need to find a different approach to these issues, one that better reflects our fundamental values and our commitment to informed public discourse and debate. In formulating that approach it would be wise to start with three questions:
◆ Does the information fall within a class that should presumptively be kept secret? Operational plans, troop movements, human source identities, technological methods of surveillance, and advanced weapons designs must continue to command the highest level of protection. But even in those categories there can be circumstances in which public disclosure is appropriate and warranted—a classic example being Secretary of State Colin Powell’s United Nations Security Council briefing on declassified intelligence on Iraq’s program of weapons of mass destruction.
◆ Does the information’s important public value outweigh any risk of harm from public disclosure? For example, in the Clinton administration, the White House worked with the EPA and the FBI to disclose information in the EPA’s toxic release inventory, including emergency evacuation plans. The public was able to receive important public safety information that the FBI had concluded was of no unique value to terrorists. Likewise, under the leadership of Vice President Al Gore, the overhead imagery dating back to the 1960s from the CORONA, ARGON, and LANYARD intelligence satellite missions was declassified. Disclosing the capabilities of our oldest spy-satellite systems caused no harm to our security, and the imagery proved to be of great value to scholars as well as to the natural resources and environmental communities.
◆ Does release of the information educate the public about security vulnerabilities that, if known, can be corrected by individuals or public action? Justice Louis Brandeis said that sunlight is the best of disinfectants. He meant that without openness people would lose trust in their government and government would lose its ability to do its work. But you can take another meaning out of Brandeis’s statement: Security flaws in our nation, just like security flaws in our computer software, are best put in the sunlight—exposed, patched, and corrected.
Openness not only enhances important democratic values, it is also an engine of technological and economic growth. America has been a world leader in technology for more than a century for one main reason: Information flows more freely within this country than anywhere else in the world. Scientists and researchers share their results and benefit from a highly developed peer-review system. The need for technological advancement has never been greater. The problems of terrorism are so complex that many of the solutions lie in technologies not yet developed or even imagined. Public knowledge, public scrutiny, and the free exchange of scientific information may not only provide the breakthroughs necessary to stay ahead of our adversaries but may also offer a better long-term national security paradigm. As the NAS president, Bruce Alberts, noted, “Some of the planning being proposed [on restrictions of scientific publications] could severely hamper the U.S. research enterprise and decrease national security.” And while we certainly need better controls on the distribution of materials and technologies that can be used to
create weapons of mass destruction, we need to resist reestablishing the Cold War culture of secrecy across many sciences and disciplines. A new culture of secrecy is bound to influence the direction of discovery, the efficient advancement of scientific knowledge, and our ability to understand fully the costs that come from a science program unchecked by public scrutiny. September 11 seared into our consciousness the realization that there are strong forces in the world that reject the forces bringing our world together: modernity, openness, and the values we cherish as Americans. But in addressing the problems of international terrorism and homeland security, it is paramount that we remember what we are fighting for. We are fighting for the survival of an open society—a country where people are free to criticize their government, where government is truly an extension of the people. We cannot protect this society by abandoning the principles upon which it was founded. When we relinquish our role as a beacon of government transparency, we derail our own mission to create a more secure, democratic world. Ultimately, stability can be achieved only through open institutions, where citizens are involved, not excluded from the governing process. This is particularly true in developing nations, where terrorists are most likely to find safe harbor. A government can earn the trust of its people only by conducting its work in the light of day, by exposing itself to scrutiny and criticism, and eventually by finding a system that citizens will accept and respect. Finding the right balance between confidentiality and an informed public opinion is certainly more difficult than a policy of absolute secrecy or one of unconditional disclosure. But that is the challenge our nation has struggled with for generations, and it is one we will all face in the future. At this critical moment in our history, we owe it to ourselves and our posterity to strike this balance and protect our tradition of liberty. President Dwight D. Eisenhower, a great military leader, made the argument succinctly: “Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together.”
3 THE IMPACT ON SCIENCE AND TECHNOLOGY OF RESTRICTING INFORMATION ACCESS
ALICE P. GAST

Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.
—Vannevar Bush, Science, the Endless Frontier, 1945
BACKGROUND: THESE ARE NOT NEW ISSUES
The dynamic tension between the free exchange of ideas and the concern for their exploitation is a longstanding challenge. Following World War II, great advances in science and technology were fueled by significant investments by the federal government. Several times during these technological revolutions, concerns for security created impediments to the broad dissemination of research. Several committees and panels investigated this phenomenon, and much has been written about it.1 In the 1950s, colleges and universities were growing increasingly dependent on federal support for their research programs—and much of that federal funding (84 percent in 1957) was directed specifically at military research.2 A 1950 National Academy of Sciences report for the Department of State stated: “The principal damage of unnecessary restrictions lies in the creation of a furtive atmosphere in
which the flow of information necessary to progressive science is brought to a halt.”3 Thus, concerns at the time surrounded the security restrictions on communications from federally funded research and those by private industry as well. In addition, the community questioned the balance in the U.S. science research portfolio caused by the lack of a federal rationale and emphasis on immediate practical results.4 The space race brought on by Sputnik and the Gemini and Apollo projects inspired a new generation of scientists and engineers, those who had grown up practicing “flash” drills at elementary school by crawling under their desks to hide from atomic bombs. The Cold War brought about heightened tensions regarding nuclear weaponry and delivery capabilities. After détente collapsed in 1979, national security controls affected professional conferences, visiting foreign scholars and study programs, and courses available to students from certain countries.5 One conference, an American Vacuum Society meeting in 1980 on magnetic bubble memory devices, ran into export control restrictions and ultimately rescinded invitations to conferees from Hungary, Poland, and the Soviet Union. Other attendees were required to sign an agreement not to “re-export” information to the named countries. That same year, Russian scientists, including one working at the University of Texas, were prevented from traveling to an Institute of Electrical and Electronics Engineers and Optical Society of America conference on lasers, electro-optical systems, and inertial confinement fusion; the open display of technical equipment was deemed sensitive.6 In 1981, the Department of State sought to restrict the access of a Hungarian engineer visiting Cornell University. After being informed that he would not be able to participate in private seminars or discussions and could not have access to preprints of papers, Cornell cancelled the visit. Also that year, MIT declined a $250,000 contract from the Air Force because of federal control over the publication of research results. The presidents of these and other universities called for clarification that export control regulations “are not intended to limit academic exchanges arising from unclassified research and teaching.”7 Unfortunately, there was no resolution of this issue and difficulties for international exchange persisted. U.S. Customs officials undertook several startling actions in the name of protecting U.S. technology, including confiscating books and documents from departing academic visitors.
In 1985, Mikhail Gorbachev brought a reform movement, perestroika, and a new leadership valuing glasnost, or openness, as one of its fundamental principles. As the Soviet Union loosened its hold on Eastern Europe, democratic reforms flowed through the former Communist bloc, culminating with the destruction of the Berlin Wall and free elections in Hungary, Czechoslovakia, Poland, and, finally, the new Russian Federation. While the Cold War ended, the national security laws restricting scientific and technological communication in the past remained on the books and would be used in other contexts against new enemies.8 Meanwhile, a new threat emerged as a trade war and technology race developed in the Pacific Rim. By the 1980s, concerns with the Japanese trade surplus and growing superiority in some commercial sectors elevated commercial interests to the same threshold as strategic concerns.9 In fact, since this period, U.S. economic health has become an important consideration in security discussions. Export control laws, long a mechanism to control the transfer of goods having military applications, became a means to limit the export of goods or technologies having commercial value.10 This dual focus contributes to some of the difficulties experienced in university research administration today.
UNITED STATES LEADERSHIP IN SCIENCE AND TECHNOLOGY

Today, the national and world economies, as well as our national security, increasingly are driven by science and technology.11 Indeed, technological innovation has been responsible for more than 50 percent of the growth of the U.S. economy during the past fifty years, and this trend is continuing and accelerating in the age of knowledge-based economies.12 The National Science Foundation (NSF) has found that "scientists and engineers contribute enormously to technological innovation and economic growth" despite their constituting less than 5 percent of the workforce. Most nations regard science and technology "as a key determinant of economic growth,"13 and, consequently, the demand for scientists and engineers is projected to significantly exceed that for other occupations in this decade.14

The United States has a uniquely effective research and development (R&D) system, in which universities are the dominant source of
fundamental scientific and engineering research. The federal government funds university research projects selected through merit-based competitions in a marketplace of ideas. In addition to serving the purposes of the federal government, many of these results are moved into the private sector through entrepreneurship and the licensing of patents. Each federal dollar serves the nation in two ways: it supports science and technology research while also educating the next generation of scientists and engineers.

This close relationship between scientific research and education is a hallmark of U.S. leadership in science and technology. Germany and Russia struggle with systems of premier laboratories separated from their university structures.15 This separation hinders the innovation that arises from student–faculty interactions and weakens recruiting for science and technology research in both academia and industry.

Today, the nation's "international economic competitiveness . . . depends on the U.S. labor force's innovation and productivity."16 However, the NSF warns that "[s]cience is a global enterprise [and has been so] long before 'globalization.'"17 To achieve that innovation and productivity, it is increasingly necessary to pursue national and international collaborations.18 The United States cannot educate students, undertake research, or produce technology for society in isolation from the rest of the world if it hopes to maintain its leadership in science and technology.
THE U.S. SCIENCE AND TECHNOLOGY MELTING POT

American leadership in the global science and technology enterprise has arisen in part from the continual influx of the world's best minds in science, engineering, and technology. Foreign students and scholars are critical to the vitality of American innovation. Many stay and contribute significantly to our economy and national research efforts. They provide much of the leadership and skilled workforce of our high-tech sector. A recent trade magazine noted that the number of foreign-born CEOs in U.S. companies has nearly quadrupled since 1996, with CEOs from almost one hundred foreign countries now leading American companies.19 Nearly 40 percent of U.S. engineering faculty are foreign-born,20 and more than a third of U.S. Nobel laureates are foreign-born.21 Additionally, nearly half of the scientific and medical professionals at the National Institutes of Health are
foreign nationals.22 Many others return to influential leadership positions in their native countries with an understanding and appreciation of American values. Their contributions to global security are immeasurable.

Startling trends warn us of impending problems. The National Science and Technology Council "has expressed concern about the nation's ability to meet its technical workforce needs and to maintain its international position in [science and engineering]."23 Because the college-age population in the United States, Europe, and Japan has steadily declined in the past two decades, major industrialized countries have sought foreign students to satisfy the demand for graduate students in science and engineering.24 U.S. production of bachelor's degrees in engineering, mathematics, computer sciences, and physical sciences has generally been declining since 1986.25 Forty-one percent of engineering graduate students and 39 percent of math and computer science graduate students in the United States are international.26 Even though the decline of college-age students in the United States has begun to reverse, and many efforts are being made at the K–12 level to encourage more U.S. students to take an interest in science and engineering, in the near term our nation's ability to maintain leadership in the science and engineering marketplace will depend on the increasing participation of under-represented minorities and women, and on sustaining the international student population.27 It will become difficult to recruit both domestic and foreign talent to science and technology research if burdensome regulations and an atmosphere of secrecy prevail.
INTERNATIONAL CONFERENCES AND COLLABORATIONS

U.S. scientific and technological leadership is made possible by a creative environment fostered by international exchange and collaboration. High-level conferences, symposia, and collaborations provide the feedback, replication, and cooperation that are essential to the production of cutting-edge science and technology. Such interactions between leading American and foreign scientists and engineers have become pervasive in science today. Indeed, internationally coauthored papers now account for 32 percent of multiauthored papers.28

Yet in 2002, visa delays prevented almost one hundred scientists from attending the World Space Congress in Houston.29 Chinese-American Frontiers of Science (the premier binational meeting of
young scientists, hosted by the U.S. National Academy of Sciences) had to be postponed because of visa impediments.30 Distinguished foreign visitors and collaborators are being turned away from our institutions, disrupting invaluable research collaborations and exchanges.

The National Security Entry-Exit Registration System (NSEERS) recently redefined its criteria to include country of origin rather than citizenship. As a result, distinguished Canadian professors with world-renowned research programs are being treated as if they were criminals, fingerprinted and photographed on both entry into and exit from the United States. The understandable refusal of the affected scientists to travel to the United States—as well as a solidarity movement by colleagues not personally affected—will have a tremendously detrimental impact on U.S. research programs, conferences, and international collaborations. The number of NSEERS countries has grown to twenty-five and may continue to expand.31

Increasingly, complex scientific challenges demand large international facilities, national laboratories, and international collaborations. The use of Department of Energy (DOE) synchrotron radiation facilities by international colleagues grew from 6 percent in 1990 to 40 percent today, while the overall number of users grew from 1,600 to 6,000 per year.32 The D-Zero project, a world-class $5 million-a-year experiment located at the Fermi National Accelerator Laboratory (a DOE facility), is an example. Five hundred scientists from eighteen countries work on the experiment. Visa problems have delayed and prevented some of the collaborators from entering the country, impairing the collaborations that fuel hardware and software advances. Left to grow unchecked, these problems will undoubtedly drive important interchanges and opportunities away from the United States, to the detriment of our community, our security, and our economy.
COLD WAR ISOLATION?

After the release of Scientific Communication and National Security (the Corson report) in 1982, George A. Keyworth II, then director of the Office of Science and Technology Policy, commented, "The last thing we want to do is ape the repressive Soviet model which stifles technological innovation through its obsession with secrecy."33 Clearly, there were many factors contributing to the decline
of Soviet science and technology research during the Cold War.34 One key element, however, was the atmosphere of secrecy and repression in the scientific enterprise. Large, impersonal university departments in the USSR did not foster the kind of collegial interaction and breadth of research that characterize American universities.35 Evidence of Soviet isolation is found in any U.S. university's list of international students and scholars from the former Soviet Union; at MIT, their numbers were near zero until 1985, rose briefly into the single digits, then fell again until 1991, when they began to climb steadily to the dozens.36

The self-imposed isolation of the Soviet Union during the Cold War severely interfered with international exchanges. This isolation contributed to other problems, such as the lack of a peer-review system, a noncompetitive research funding process, poor communication and collaboration between laboratories, and a lack of awareness of breakthroughs in the West. The result was the utter collapse of Soviet science and technology and a mass exodus of talent. In the 1990s, significant numbers of Soviet scholars emigrated to Germany, Israel, and the United States, and the brain drain continues today.37

We must prevent a similarly disastrous scenario in the United States. Although this may sound alarmist, the fact is that Europe and Asia each now graduate more Ph.D.s in science and engineering than does the United States, and both have world-class research, especially in emerging fields like nanotechnology.38 Moreover, Britain, Australia, and Canada have realized gains in foreign student enrollments in each of the past three years.39 Foreign students and scholars have choices, and the United States must compete to attract the world's brightest minds.

Particularly damaging are the now-common situations in which students, scholars, or faculty already admitted to the United States and engaged in study and research travel outside the country (often for academic or serious personal reasons) and are unable to return. The cases are troubling, with research projects suspended, classes unattended, and (in some cases) family members and belongings stranded in the United States. The enhanced scrutiny for returning visa holders seems a poor use of finite homeland security resources. If the United States does not promptly find a means to expedite visas for high-quality students, scientists, and technologists, our nation will pay a heavy price that will be measured in the talent and opportunities that are lost to foreign competitors.
History seems destined to repeat itself. The stories today give us déjà vu, yet they are perhaps more compelling since our relationships with international colleagues are so much stronger twenty years after the "bubble memory" incident. These recent anonymous examples (from among many) illustrate the burden immigration problems are putting on universities:

◆ Dr. B has been coming to TechU for visits of varying lengths since 1993, as a guest of a faculty member and in conjunction with a large international project. Dr. B is a member of the Russian Academy of Science as well as the European Academy of Science, and is chair of a large department in a top Russian university. He has been granted J visas on many occasions by the U.S. consulate in Moscow, so he is no stranger to the consulate's systems. In 2003, Dr. B's J visa was stalled, despite the intervention of a U.S. agency contact at the Moscow embassy. The TechU project is on the brink of its definitive test, but the visa delay may prevent B's participation.

◆ An assistant professor of electrical and computer engineering has been working at a U.S. university under an H-1B visa. In the summer of 2002, the professor and his family (wife, a British citizen; two children, U.S. citizens) went to Egypt to visit family. His wife and one child have returned to the United States, but the professor's return has been delayed by security clearance requirements. He applied for an H-1B nonimmigrant visa at the U.S. consulate in Cairo, was twice interviewed by the consul, and talked to the U.S. ambassador, who could not expedite matters. The dean of the College of Engineering was informed by the U.S. embassy in Cairo that a visa could not be issued prior to obtaining the required clearance from Washington, D.C. The professor's scheduled fall class was cancelled, for no other available faculty member was qualified to teach it.

◆ A faculty member with J visa status went home to China in summer 2002, and in July he applied for a new visa to return to the United States. He is still waiting for a decision on his application. When he did not return in time, his department arranged for someone to teach his class temporarily. Now the department is planning to make the temporary assignment permanent and will try to find another role for the faculty member to fill, while he remains in China on the university payroll.
Two mechanisms to characterize and track international students and scholars deserve our utmost attention and care. The Technology Alert List (TAL), which triggers security reviews of foreign students and scholars, has recently been expanded to cover areas of study and research that appear to have no relevance to security (for example, architecture, planning, housing, community development, environmental planning, landscape architecture, and urban design), and to cover many others with tenuous links (for example, civil engineering). Adding these areas of study to the TAL has created a debilitating backlog in cases and undermines the review process. A new mechanism should be put in place to ensure a continuing and intensive collaboration between the scientific community and the government to narrow the focus and maintain the accuracy and efficiency of the TAL as guidance to consular officers.

The Student and Exchange Visitor Information System (SEVIS) can be a great improvement in security and record keeping. The ability to have a database of the information previously handled by cumbersome paper forms will be a great asset. The university community strongly supports the implementation of SEVIS and is working hard to deploy it. Unfortunately, however, the launch of SEVIS has been plagued by problems. Technical defects have had very serious consequences for students and scholars. Students and scholars erroneously flagged face immediate deportation. These technical flaws must be rectified immediately. SEVIS must also have the capacity to accept corrections arising from human error without creating unduly harsh consequences for the individuals or the universities. The academic community should continue to work with the government to make SEVIS more robust and accurate.

These immigration issues indicate that the U.S. role as an intellectual magnet for the best and the brightest minds is precarious. Barriers to immigration and scientific visits could rapidly tip the scale and make other nations more attractive. It is in our nation's interest to bring in the best students and scholars by the most efficient and safest means.
OPENNESS IN UNIVERSITY RESEARCH

Science and engineering are fast-paced, interactive, and collaborative enterprises that follow unpredictable paths and continually build on the work of others in unexpected ways. Nothing is more crucial to the progress of research than communication with colleagues and the
cross-fertilization of ideas. This interchange occurs at all levels, from students in neighboring laboratories to faculty attending international conferences. Brilliant advances often arise from discussions with colleagues working in entirely different fields. The fallacious idea that research can be subdivided, cordoned off, or kept apart is deadly to our progress and our leadership. Research thrives on openness and suffers in isolation.
TODAY'S PROBLEMS DEMAND GLOBAL SOLUTIONS

The recent worldwide epidemic of Severe Acute Respiratory Syndrome (SARS) highlights the global integration of society and our resulting interdependence. The Global Outbreak Alert and Response Network of the World Health Organization (WHO) coordinated an international effort uniting thirteen laboratories in ten countries to identify the cause of SARS. This international team makes use of laboratories with significant experience in infectious diseases to improve diagnoses and to develop a treatment.40 Clearly, global exchange and international scientific collaboration and cooperation are critical to solving problems of this magnitude. Within a month, the collaboration identified a new pathogen, a member of the coronavirus family never before seen in humans, as the cause of SARS. "The pace of SARS research has been astounding," said Dr. David Heymann, Executive Director of the WHO Communicable Diseases Programmes. "Because of an extraordinary collaboration among laboratories from countries around the world, we now know with certainty what causes SARS."41

The spread of the disease and the delay in effective testing and treatment were exacerbated by the initial secrecy regarding the outbreak in China. An interim report on the SARS outbreak in Guangdong Province by a WHO team of experts recommended improved collaboration between virological laboratories in China to facilitate exchanges of results, specimens, and reagents.42 The potential importance of "super-spreaders"—source cases that, for reasons not yet understood, infect large numbers of people—shows how critical global communication, collaboration, and response are to epidemics of this type. Clearly, these and other aspects of health and homeland security require a worldwide approach.

The university research community responds effectively in times of national need. We have a strong history of public service, and technological superiority has been a hallmark of our strategic might. While today MIT believes that we can best serve the nation by pursuing
open, unclassified research, the institute does provide mechanisms for faculty to participate in classified programs at other institutions.43 MIT President Charles Vest recently stated, “As we respond to the reality of terrorism, we must not unintentionally disable the quality and rapid evolution of American science and technology, or of advanced education, by closing their various boundaries. For if we did, the irony is that over time this would achieve in substantial measure the objectives of those who disdain our society and would do us harm by disrupting our economy and quality of life.”44
RESTRICTIVE RESEARCH CONTRACTS REDUX

The tumultuous years of the early eighties motivated much dialogue and many panels, committees, and reports. In 1985, President Reagan recognized the critical importance to the nation of preserving the principle of openness while maintaining necessary national security. He promulgated National Security Decision Directive 189 (NSDD-189), which reinforced a core principle for the conduct of basic and fundamental research at colleges and universities: namely, that there be no restrictions on the publication of research results. In so doing, NSDD-189 confirmed classification as the appropriate means to control federally sponsored university research with national security implications. Supported strongly by universities and sustained by every subsequent administration, NSDD-189 remains in effect today.45

It is therefore both startling and disappointing to find restrictions today placed on publication, and on the participation of foreign students (or requirements for their approval by the sponsor), in unclassified research projects. Two agreements from the same agency may contain different and inconsistent language. This clouds the government granting process and does little to serve the goals of national security and scientific and technical progress. It also leads to debilitating delays of months and years in contract negotiations. Some universities have begun to refuse research funding rather than accept such restrictions.46 Of particular concern is the ad hoc nature of restrictive research contract clauses and the use of vague categories like "sensitive but unclassified." MIT and the Council on Government Relations have maintained a list of examples of restrictive clauses found in research contracts.47

Despite the persistent nature of this issue in recent history, the current state of science and technology makes openness even more pressing today, for three reasons:
◆ Science is increasingly interdisciplinary. Interdisciplinary work requires open communication and collaboration, with new ideas arising from serendipitous interactions. Working in the spaces between traditional disciplines requires breaking down barriers of discipline, culture, and disciplinary arrogance; researchers need to learn each other's languages.

◆ Science is fueling technology more rapidly than ever. The distance between fundamental breakthroughs and applications affecting society is shrinking, and technology transfer is accelerating. Note how quickly USB "memory sticks" have replaced CD burners as the backup medium of choice. The effective transition of technology to society requires openness and communication among universities, government laboratories, and industry.

◆ U.S. leadership in science and technology relies on a close marriage between research and education. Research and education go hand in hand and are the most productive ways to produce new knowledge and to train new scientists. These, in turn, fuel our economy and quality of life. The close interplay between research and education requires open communication, formal and informal teaching settings, and the unencumbered participation of students in laboratories and research groups.
CONCLUSIONS

What are the consequences of restricting information access on science and technology today?

◆ The control of information restricts its dissemination domestically as well as internationally. This undermines our system of peer review, competitive proposal evaluation, and collaboration, and prevents the serendipitous discoveries arising from casual access. Openness must be preserved in research supported by industry as well as by the government.

◆ Restrictions on research results discourage young people from pursuing research in heavily regulated fields. If the restrictions become too onerous, scientists will migrate to other work, depriving us of the very talent we need to face these challenges. This could be particularly devastating if it diverts the talent pool from the areas of greatest need, such as infectious disease or homeland security.

◆ Controls on information and on international participation in research damage our relationships with allies. Science is an international language, and scientists are great diplomats and great communicators. Once foreign scientists begin to boycott U.S. conferences or avoid sabbaticals in the United States, the nation will be gravely harmed.

◆ Restrictions on the dissemination of research results undermine the close synergy between research and education that makes our system great and propels our leadership in innovation. The ability of students to access the latest breakthroughs in research is an important element in our pursuit of leading-edge research with the best and brightest students.

◆ When restrictions are not carefully considered and weighed, absurd examples of government controls create an atmosphere of distrust or contempt within a community of researchers. These attitudes are difficult to mend and divert attention from the areas of greatest need.

◆ Hindering our ability to recruit the best and brightest international students and scholars harms our productivity and leadership in science and technology.
RECOMMENDATIONS

Clearly, the challenge of balancing our very real concerns for security with the need to protect our effective and innovative research environment calls for considered dialogue between policymakers and the scientific community. We should fortify the dialogue that has been carried on so ably for so many years. Recent statements by senior administration officials demonstrate an appreciation for these issues and a desire to engage the scientific community in this dialogue.
In a recent keynote address to the Science and Technology Policy Colloquium hosted by the American Association for the Advancement of Science (AAAS), White House Office of Science and Technology Policy (OSTP) Director John Marburger agreed that SEVIS was in need of some work and offered a series of recommendations to correct the "visa situation" and keep the United States an attractive destination for visiting science and engineering personnel. Marburger advised increasing the involvement of the "expert communities within the federal government in providing guidance to the process," eliminating duplicate operations among the screening processes, improving the "impact reporting among affected institutions," bettering the knowledge "among all parties regarding how the visa system works," and, lastly, instilling "a frame of mind within the technical and higher education communities that perhaps falls short of patience, but rises above hysteria."48

At another venue, the newly appointed secretary of the Department of Homeland Security, Tom Ridge, addressed an Association of American Universities (AAU) gathering of sixty-two university presidents. Ridge said that the effective implementation of SEVIS would depend on proper training. While thanking the university presidents for their "hard work to deploy" SEVIS, he said that he understood their "legitimate concerns." "We know that your foreign students are indispensable to America's continued leadership in science and in medicine and in technology," he said. "And as we secure America from terrorists, we do not want to risk losing the next Enrico Fermi or Albert Einstein. We would be a far poorer nation in many, many ways." Mr. Ridge said the system must balance the privacy rights of students with national security needs.49

I am hopeful that this understanding of the importance of protecting our science and technology communities from ineffective restrictions will continue. Here are three suggestions for maintaining this momentum:

◆ Institutionalize the dialogue between government and universities. This important period in the development of new policies related to the Department of Homeland Security provides an excellent opportunity to establish an ongoing dialogue between government leaders and American universities on the issues of openness, science, and technology. We must establish new mechanisms to foster collaborative engagement and workable, effective policy.

◆ Ensure openness in research while protecting national security. The policy tone and standards must be set at the highest levels of the government. They should ensure that classification remains the primary means of control for federally financed university research. We should embrace concepts such as "tall fences around narrow areas" from learned groups such as the Corson panel.50 Reaffirming these principles would make government-funded research more effective, more efficient, less contentious, and more successful.

◆ Develop clear and effective immigration policies. The U.S. government should develop consistent, efficient, and effective practices for reviewing and issuing visas to foreign students and scholars. We cannot afford to lose our ability to attract top students and scholars to the United States.51
4 REALISTIC RISK DISCLOSURE IN NEWLY NORMAL TIMES BARUCH FISCHHOFF
As if life did not hold enough risks already, the confrontation with terrorism has added more. It has also changed the shape of some old risks, altering either the processes creating them or access to information about them. Moreover, the new risks and the new wrinkles are often intellectually and emotionally challenging ones. They involve complex, novel phenomena. They require expertise distributed over multiple disciplines and cultures. They are dynamic and uncertain. They pose difficult tradeoffs: your money or your life, your life or your liberty, my freedom or yours. They require vigilance from already weary people. They evoke social tensions in an already complex time.

Although most people could do without the additional risks, the legacy of existing risks provides them with resources for dealing with the new ones. At the individual level, these resources take several forms. One is knowledge about related risks and control mechanisms. A second is inferential strategies for extrapolating beyond direct evidence. A third is methods for collecting information and critically evaluating its sources. Those sources may include officials, trusted peers, and entertaining speculators (for example, talking heads on cable news, disembodied voices on talk radio, Web sites of uncertain credibility). A fourth resource is experience in managing the emotional overlay of risks threatening physical, social, or economic security. It
may include psychological coping mechanisms (for example, deep breathing, distraction, mobilization), social supports, and material preparations (saving money, emergency supplies).

These strategies doubtless vary widely in their efficacy and in the hopes vested in them. For example, it is hard to access a balanced, comprehensible, properly qualified suite of information about many risks. In its absence, appearances may be deceiving, producing misunderstandings (albeit logically defensible ones). Without explicit instruction, some risky processes are hard to understand (for example, the difference between radiation and radioactivity), while some inferences frustrate intuition (for example, how small risks mount up through repeated exposure). Information sources that are reliable for some risks may be out of their depth for others. Coping mechanisms may not always work or may come at an unacceptable price (for example, the emotions may have a mind of their own, defying attempts to manage them; constant vigilance can produce fatigue, reducing both performance and quality of life). To varying degrees, these strengths and weaknesses are understood and subject to corrective measures.1

The institutional resources for risk management parallel those employed by individuals, and are subject to analogous limitations. Institutions arise to overcome individuals' limitations. However, their design and execution rely on individuals, with unruly emotions, mixed motives, intellectual blinders, misplaced commitments, agency issues, and the like. In order to justify trust, institutions must be competent, as well as committed to serving the public's interest. Managing risks often takes experts beyond the range of proven solutions (for example, SARS—to take today's headline), forcing them to rely on judgment. To a first approximation, the judgmental processes of experts in areas of great uncertainty may not be that different from those of other generally educated individuals.2

When any new risk arises, these management functions must somehow be filled. Arguably, that is best done with a team combining domain-specific expertise and familiarity with risks in general. However, there are natural barriers to such collaborations. The stress of risk management is not conducive to bringing in outsiders. Institutions have shown a preference for the kinds of expertise already on their payroll or in their Rolodex. Insiders with the most relevant expertise will assert their authority, even when the new problem differs fundamentally from its predecessors. For example, risk communication
may seem like a natural extension of public relations; yet the former is undermined by the same spin that is essential to the latter. When an institution goes shopping for new expertise, it may be challenged to discern quality, perhaps succumbing to marketing pressure or settling for safe, unchallenging choices. The imperfect risk communications regarding anthrax contamination, smallpox vaccination, duct tape, and national terror alert levels are some recent examples of these general tendencies.

The following sections briefly review research and practice with individual and institutional risk management in order to provide an idea of what is already available "in inventory." They draw heavily on experience with environmental and health risks, which have a legacy of problems with "wicked" properties like those of terrorism. Indeed, some of the same issues recur, insofar as terrorists also attack essential aspects of our lives. Usable approaches constitute a sort of "defense spin-off" from investments in peaceful research. The concluding section discusses some properties of managing terror risks that might require dedicated research.
INSTITUTIONAL APPROACHES TO RISK MANAGEMENT

NORMAL RISKS

Environmental and health risk communication has been an important arena for shaping general relationships between citizens and authorities.3 In the Western democracies, the evolving standard endorses a degree of shared responsibility that would have been hard to imagine a generation ago. It reflects a faith and a hope that authorities can create the facts that the public needs for effective decisionmaking, and deliver them in a comprehensible, credible form. The outcome of this social experiment will depend on both political commitment and technical execution.

Figure 4.1 shows the Canadian Standards Association's conceptualization of this process.4 On the right appear the standard steps of risk management—unusual only in requiring an explicit evaluation at each transition between steps. Thus, the model recognizes the possibility of having to repeat the work until it has been performed successfully—and that this goal might not be achieved.
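Read as control flow, the figure's right-hand column is a staged loop with an explicit gate after every stage. The following is a minimal sketch of that structure in Python; the stage names come from the figure, while the evaluate and communicate functions are placeholders, not part of the Q850 standard:

    # Control flow of the Q850 simple model: six stages, each followed by
    # an explicit decision to end, go back, or proceed, with stakeholder
    # communication at every step. `evaluate` and `communicate` are
    # placeholders to be supplied by the risk-management team.
    STAGES = ["initiation", "preliminary analysis", "risk estimation",
              "risk evaluation", "risk control", "action/monitoring"]

    def q850_process(evaluate, communicate):
        i = 0
        while i < len(STAGES):
            communicate(STAGES[i])           # two-way communication at each step
            decision = evaluate(STAGES[i])   # returns "end", "go back", or "next"
            if decision == "end":
                return "ended at " + STAGES[i]
            i = max(0, i - 1) if decision == "go back" else i + 1
        return "completed"

Note that the loop, like the model it mimics, permits endless repetition of a stage; nothing guarantees that the process terminates successfully.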
[FIGURE 4.1. STEPS IN THE Q850 RISK MANAGEMENT DECISIONMAKING PROCESS—SIMPLE MODEL. A flowchart of six stages (Initiation, Preliminary Analysis, Risk Estimation, Risk Evaluation, Risk Control, Action/Monitoring), each followed by an explicit decision point: end, go back, or proceed to the next step and/or take action. Nested groupings label successive spans of the stages as risk analysis, risk assessment, and risk management, with risk communication running alongside every stage. Note: Risk communication with stakeholders is an important part of each step in the decision process. Source: Canadian Standards Association, Risk Management: Guidelines for Decision Makers (Q850) (Ottawa: National Standard of Canada, 1997).]
The left-hand side shows a commitment to two-way risk communication at each stage of the process. Both the comprehensiveness and the reciprocity are noteworthy. In this view, citizens have a right to hear and to be heard from the very beginning, when risk analyses are initially formulated. Moreover, they have recognized expertise for shaping the terms of the analysis and informing its content. This is a striking departure from the historic philosophy of decide-announce-defend.

Similar policies have been advanced by the Presidential/Congressional Commission on Risk, the Environmental Protection Agency, the UK Royal Commission on Environmental Pollution, the UK Health and Safety Executive, the UK Parliamentary Office of Science and Technology, and the UK Cabinet Office.5 All endorse sharing risk analyses with the public, focusing on its informational needs and taking advantage of its expertise. All recognize the importance of trust for risk management, and the centrality of open, competent communication in securing it.

The evolution of this philosophy is reflected in a series of reports from the U.S. National Academy of Sciences. The 1983 "red book" recognized that risk analyses inevitably reflect both science and politics.6 It advocated their explicit integration, presaging elements of the current standard. However, the immediate response was a quest for fuller separation: scientists would assess the situation, while politicians would decide what to do about it. That would keep scientists from adding spin to their results and politicians from claiming unwarranted expertise. The possibility of separation seemed particularly appealing to scientists hoping to perform "objective" research. The aspiration ignored the role of politics in setting the research agenda and the power that flows from having (or blocking) research on a topic.

The academy's 1989 Improving Risk Communication asserted the public's right to know the results of risk analyses.7 That commitment envisioned a less passive role for the public than the red book, which sought to serve the public by managing its risks well. Advocating a more active role required asserting the public's ability to understand risks—and confronting the widespread belief in citizen incompetence (captured in phrases like "real versus perceived risks" or "hysterical public"). One form of evidence supporting active public involvement was the success of many scientifically developed and evaluated communications. A second form was research providing a richer
context for interpreting disagreements between citizens and experts—rather than just blaming them on citizen ignorance or stupidity. That context included differences arising from (a) terminological usage (for example, treating different outcomes as the "risk" of a technology), (b) judgmental heuristics (deliberate but imperfect rules whose occasional mistakes may be the price of their general usefulness), (c) self-serving biases (for example, assuming the worst about other people, when interpreting disagreements), and (d) information availability (that is, how experts limit what citizens can know).

These reports sought to make science a better public servant. Subsequent ones challenged conventional views of science. Science and Judgment in Risk Assessment recognized the central role of judgment in risk analyses—given the great uncertainty surrounding many novel issues involving complex conjunctions of environmental, industrial, social, psychological, and physiological processes.8 It recast lay–expert comparisons as being between two sets of beliefs, from individuals differing in training and experience. It offered standards for diagnosing and disclosing the role of expert judgment as well as for eliciting it in a disciplined way.9

In 1996, Understanding Risk revived the red book's theme of the intertwining of science and values.10 It challenged the assumption that scientists can simply do their work and leave the politics to others. Rather, it showed how a risk analysis's framing inevitably expresses some values. They are seen in the choice of topics (why some outcomes are studied and not others) and the definition of terms (for example, risk, exposure). For example, "risk" could mean just mortality or include morbidity. Any definition assigns relative weights to different adverse consequences. Even deciding to weight all deaths equally expresses a value. It represents a decision not to assign greater weight to deaths among young people (with more lost years) or those exposed to a risk involuntarily.11 The report argued that these definitional choices should be made explicitly—and by individuals representing those whose fate depends on them.

Involving citizens in priority setting was further endorsed by the congressionally mandated Committee on Setting Priorities for the National Institutes of Health.12 It led to the creation of a Citizens' Advisory Panel, chaired by the head of the Institutes. In the same year, Toward Environmental Justice called for "participatory science," involving citizens in the design and conduct of studies affecting their community.13 That participation takes advantage of citizens'
expertise (for example, in exposure processes), while seeing that they learn as much about their conditions as the outsiders examining them know. It should improve citizens’ scientific and policymaking sophistication, while increasing the chances of their accepting the results of collaboratively produced analyses.
TERROR RISKS

The implications of applying this evolving standard to managing terror risks might best be seen in specific examples.
SMALLPOX VACCINATION. At this writing, the United States is attempting to vaccinate some number of first responders and healthcare workers using vaccinia, a virus closely related to familiar strains of smallpox. It is an important test of terror-related risk management and communication processes. Healthcare workers, reasonably, expect informed consent procedures that reflect their concerns and the special features of vaccination risks. The full story of the campaign has yet to unfold, much less be studied and documented. However, anecdotal reports suggest violations of the two key conditions for trust: competence and caring.

In the terms of Figure 4.1, it is not clear that the program's initial formulation considered healthcare workers' worries about the medical and economic consequences of adverse reactions. It seems not to have collected or disseminated authoritative information about the workers' personal vulnerability and opportunities to spread the disease through their work activities. The controversy over compensating workers for adverse reactions suggested that some people in government mistrusted healthcare workers (as malingerers who would falsely claim problems) or simply did not value them and their willingness to take risks in the national interest. Adding smallpox vaccination to public health's mission, without providing additional resources, intensified the strain and ill will. Even if the final story is different, the stumbling process will complicate future risk management by souring relations—as it has with postal workers, following the anthrax crisis and the subsequent ricin scare.14

DOMESTIC SURVEILLANCE. In qualitative, if not quantitative, terms many of terrorism's risks are known. So are many of the risks of the antiterror campaign (see the other chapters in this volume). By and large, though, these risks seem not to have been viewed in risk-management
terms. There is nothing like the consultative process of Figure 4.1. As a result, there is growing concern that civil liberties issues are being neglected—or at least not treated as creatively as they might be. Doing so would require a process incorporating the concerns and expertise of vulnerable citizens and those concerned about their welfare. Similar apprehensions can be found in immigrant communities, which are often treated as though they are harboring suspects. The procedures in place to balance national security and civil liberties risks may be defensible, but the public cannot know that without overt, proactive consultation and communication.

Nor is it clear that terror risks themselves are managed with the reality-checking and corrective procedures depicted in the figure. If not, then that risk management may be inefficient and inadequately monitored. Such institutional failures would not be surprising. With a complex, novel risk problem, it is implausible that the right set of expertise just happened to be assembled from the start, much less that it identified the best solutions. With a problem that is embedded in a whole society, like terrorism, insight into possible solutions may be particularly widely dispersed. As time goes on, however, the lack of a proper risk-management structure raises questions about institutional competence and motivation. Perhaps the institutions in charge do not realize the vulnerabilities; perhaps they are not interested in public concerns; perhaps the system has prematurely frozen on early solutions, unable to treat them as experiments.
INFORMATIONAL APPROACHES

NORMAL RISKS

High stakes ride on the success of these social experiments in risk analysis and communication. If they succeed, they strengthen the case for a participatory society. If they fail, they strengthen the case for paternalism. Even if it holds citizens' benefits paramount, a paternalistic society is not a partnership. As a result, it faces an increased risk of distrust and alienation, when citizens question official promises and resent the lack of consultation. Without communication, citizens have less chance to acquire the knowledge needed to understand policy choices or make ones in their own best interests.
Of course, relations between citizens and their government are also matters of political philosophy. Philosophical matters, however, should be advanced as such and not hidden behind unsubstantiated claims about citizens' competence. If citizens believe that they have been denied a fair chance to participate, they should distrust those in authority. If authorities underestimate citizens' competence, the attendant mistrust will needlessly sour their relationships. If authorities exaggerate public competence, they may leave citizens without needed protections.

Thus, successful communication requires mutually respectful channels. Once in place, those channels need meaningful content. On the one hand, citizens must express themselves clearly enough to shape analyses around their concerns and reveal their informational needs. On the other hand, citizens must understand how analyses are framed and what critical facts they produce. Ideally, this would be a back-and-forth process. Without some understanding of the facts, citizens cannot know what is at stake. Without knowing what citizens value, analysts cannot achieve the proper focus. Thus, citizens and analysts need to educate one another. Nonetheless, many of the design challenges are apparent in the single round of communication needed to create a standardized message.

The logic of defensible risk communication is simple (a schematic sketch in code follows this list):

◆ Analytically, determine the facts most relevant to predicting the outcomes that matter most to most citizens.

◆ Empirically, determine what citizens already know.

◆ Design messages to close these gaps in order of decreasing importance—taking advantage of research into information processing.

◆ Evaluate the messages' impact.

◆ Repeat the process, as needed, until an acceptable level of understanding has been achieved.15
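Procedurally, the steps just listed form an iterative design loop. One way to express it is the sketch below; the draft and test arguments stand in for the hard analytic and empirical work, and the comprehension target is an invented placeholder, not a figure from the chapter:

    # Schematic of the message-design loop described above. `draft` and
    # `test` stand in for the hard parts: drafting messages informed by
    # information-processing research, and empirical evaluation with
    # members of the intended audience. The 0.8 target is illustrative.
    def design_message(relevant_facts, audience_knowledge, draft, test,
                       target=0.8, max_rounds=5):
        # Steps 1-2: facts that matter to the decision but are not yet
        # known, kept in order of decreasing importance.
        gaps = [f for f in relevant_facts if f not in audience_knowledge]
        message, score = None, 0.0
        for _ in range(max_rounds):
            message = draft(gaps)      # step 3: close gaps in priority order
            score = test(message)      # step 4: measure the message's impact
            if score >= target:        # step 5: stop once understanding suffices
                break
        return message, score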
Prioritizing information is important because the communication channel is often narrow. Citizens may have other things on their minds or be under great stress (for example, when suffering from a
medical condition or angered because they do not feel respected). Poorly chosen content can narrow the channel further: Why pay attention to experts who are saying things that are irrelevant or that go without saying? Why trust "communicators" who omit crucial facts or treat one like an idiot?

People poised to make well-formulated personal or policy decisions need quantitative estimates of the probability and magnitude of each relevant consequence. Often, though, people also need qualitative information regarding the processes underlying those estimates; that is, the processes of creating and controlling risks. Such process knowledge can give the quantitative estimates intuitive credibility, allow citizens to follow the debate over a risk, and give citizens a feeling of competence as decisionmakers.16 Two examples may suggest the nature of such deliberate communication development.
INFORMED CONSENT FOR MEDICAL PROCEDURES. In some states, physicians are held to a professional standard for informed consent: they need to say what their peers say. Although professional standards pool the judgments of a community, they also can entrench flawed practices. Other states have a materiality standard: physicians must say whatever is "material" to their patients' decisions. One way to meet this demand is to provide patients with a laundry list of possible side effects. Such lists, however, may mean little to patients—especially when their time, energy, and cognitive capacity are limited. If materiality is interpreted as focusing on what people most need to know, then it might be operationalized in value-of-information analysis terms.17 Namely, information is material to the extent that receiving it affects the expected utility of recipients' choices.

Table 4.1 shows the results of such an analysis, applied to a common medical procedure: carotid endarterectomy.18 Scraping out the artery leading to the brain can reduce the risk of stroke for patients with arteriosclerosis there. However, many things can go wrong, some of which are listed in Table 4.1. They are a lot to consider, especially under the stress of a life-threatening illness and the need to balance the risks against possible positive consequences of the surgery: increased quality and quantity of life.

The results in Table 4.1 reflect one procedure for conducting a value-of-information analysis. The table creates a population of hypothetical patients, each of whom would rationally choose the surgery
were there no risks (and were money no object). These patients vary in their physical condition, represented by probability distributions over possible health outcomes (indicating the expected variation in their response to the surgery). They also vary in their values, represented by distributions of utilities for those outcomes. Statistical methods create individual patients by sampling values from these distributions.

The expected utility of the surgery decision is calculated for each such patient, ignoring all risks. By definition, it is positive because these patients are all better off with the surgery. The expected utility is then recalculated, after "revealing" the probability of each possible consequence. The materiality of that information equals its effect on the expected utility of the surgery. If the expected utility changes from positive to negative, then the surgery is no longer recommended.

TABLE 4.1. SENSITIVITY TO INFORMATION ABOUT POTENTIAL SIDE EFFECTS OF CAROTID ENDARTERECTOMY

Side effect                 Percentage of patients who should decline
                            surgery after learning of each risk
death                       15.0
stroke                       5.0
facial paralysis             3.0
myocardial infarction        1.1
lung damage                  0.9
headache                     0.8
resurgery                    0.4
tracheostomy                 0.2
gastrointestinal upset       0.09
broken teeth                 0.01

Source: Jon Merz, Baruch Fischhoff, Dennis J. Mazur, and Paul S. Fischbeck, "Decision-Analytic Approach to Developing Standards of Disclosure for Medical Informed Consent," Journal of Toxics and Liability 15 (1993): 191–215.

Table 4.1 shows the percentage of patients who should decline the surgery after learning about each potential side effect. Thus, 15 percent should decline the surgery on learning the probability of immediate death. Another 5 percent should decline when told the risk of iatrogenic stroke. An additional 3 percent should be dissuaded by hearing the risk of facial paralysis. Among the many other possible
side effects, few would affect many choices. Although nothing should be hidden, communications should focus on ensuring that patients understand these three risks. That means understanding both the probability and nature of each. These three probabilities are large enough to be readily understood (and not tiny fractions of a percentage). Moreover, whether they occur is resolved at one time (during surgery), avoiding the challenges of thinking about risks distributed over time.19 Furthermore, even rudimentary knowledge of the surgery should allow patients to imagine these events occurring and create a mental model that affords meaning to the statistics. The nature of the first two side effects (death, stroke) should also be familiar; candidates for the surgery face these risks already. Thus, a message might get as far as conveying what the third side effect (facial nerve paralysis) is really like, before overloading recipients.

These results assume patients who know nothing about the probabilities of the possible outcomes. That seems reasonable for an unfamiliar surgery. For patients with prior beliefs, the analysis would consider how different facts would update their beliefs. If something goes without saying, then the message can say something else. If an important fact proved too hard to communicate, skipping it would save recipients' time—while acknowledging a limit to their understanding.

To sum up, those bearing a duty to inform must first analyze the available science in order to determine patients' information priorities. They need to determine what people know already and what mental models organize that knowledge. Their communications should bridge the critical gaps, drawing where possible on basic cognitive research. Finally, they must evaluate their work and repeat the process, as needed.
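The logic behind Table 4.1 is easy to mimic in a Monte Carlo sketch. The version below is a toy reconstruction, not the published model: the side-effect probabilities and utility ranges are invented placeholders, and each simulated patient starts with a positive risk-free expected utility, declining at the first revealed risk that turns it negative.

    import random

    # Toy value-of-information analysis for informed consent.
    # Probabilities and utility ranges are invented placeholders,
    # not the estimates behind Table 4.1.
    SIDE_EFFECTS = {"death": 0.015, "stroke": 0.020, "facial paralysis": 0.010}

    def simulate(n_patients=100_000, seed=0):
        rng = random.Random(seed)
        first_tip = {effect: 0 for effect in SIDE_EFFECTS}
        for _ in range(n_patients):
            # Expected utility of surgery with all risks hidden: positive
            # by construction, since these patients all benefit absent risk.
            eu = rng.uniform(0.0, 1.0)
            for effect, prob in SIDE_EFFECTS.items():  # reveal risks in turn
                eu -= prob * rng.uniform(0.0, 60.0)    # sampled disutility
                if eu < 0:                             # surgery now inadvisable
                    first_tip[effect] += 1             # count the tipping fact
                    break
        return {e: n / n_patients for e, n in first_tip.items()}

Information is material, in these terms, exactly when its revelation flips a patient's expected utility from positive to negative; tabulating the first fact that tips each simulated patient mirrors the incremental percentages reported in Table 4.1.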
ENVIRONMENTAL HEALTH EMERGENCY SYSTEMS. A common communication challenge arises when otherwise benign (or beneficial) systems dramatically misbehave. The effectiveness of the responses to such crises depends, first, on how quickly warnings reach their intended audience and, then, on how well these warnings are understood and followed. Thus, the risks depend on human behavior, which depends, in turn, on communication effectiveness. Figure 4.2 shows a model for responses to outbreaks of Cryptosporidium, a protozoan parasite that can infect public water supplies.20 There was a massive outbreak
of Cryptosporidium in Milwaukee about ten years ago, which killed about 100 people and sickened 400,000. Cryptosporidium is shed through the feces of infected mammals as oocysts—spheroidal eggs 3–5 microns in diameter. It enters the water supply through human sewage effluent discharges and fecally contaminated storm runoff (for example, from feedlots). Most drinking water treatment does not fully remove or deactivate oocysts. As a result, they may find their way into tap water.

Symptoms appear after one to seven days and include nausea, vomiting, diarrhea, and low-grade fever. Although there is currently no medical treatment, many infected individuals are asymptomatic or recover within two weeks. The disease can, however, be fatal to immunocompromised individuals. Those with AIDS are especially vulnerable. Crypto is considered a top problem in U.S. drinking water.21 In 1998, an apparent outbreak in Sydney, Australia, caused great turmoil, but was subsequently traced to a false positive from poor testing.22

[FIGURE 4.2. An influence diagram of the factors determining Cryptosporidium outbreak risk. Nodes include a trigger event, contamination of drinking water, routine testing results and special studies, utility awareness and treatment options, a joint task force, utility communiqués, health department and medical awareness, media coverage and miscellaneous announcements, well and tap tests, consumer awareness and averting behavior for public systems and private wells, consumption of treated and well water, and health effects, with other information sources feeding consumer awareness.]

Figure 4.2 shows the complex of physical, biological, and social factors that determine outbreak risk. It has the form of an influence diagram, in which each node represents a variable. Two nodes are connected with an arrow if knowing the value of the variable at the tail facilitates predicting the value of the variable at its head. For example, the greater the water utility's awareness of outbreak potential, the greater the chances of its conducting special studies or creating a multi-agency task force.

Estimating the model requires input from multiple disciplines, including microbiology (dose-response relationships), civil engineering (filtration and testing), ecology (land use), communications (message penetration), and psychology (perceived risk, actual response). The model's computational version specifies values for each variable and dependency at a given site, then predicts the risks and attendant uncertainties (for example, from relying on judgment in the absence of directly relevant studies).

This model was created as the integrating core of a project designed to reduce Crypto risks by communicating better with consumers. However, running the model revealed that current testing is so ineffective that an outbreak will likely have passed (or at least peaked) before its source is detected. Thus, even if every consumer got a perfect message and followed it exactly, an emergency system relying on "boil water" notices would not protect the most vulnerable. Under these circumstances, vulnerable populations require other forms of protection, such as routine provision of bottled water.
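The influence-diagram machinery described above is simple to represent in code. Below is a sketch using a small fragment of the figure's nodes; every probability is an invented placeholder, and each node is computed from its parents, the variables at the tails of its incoming arrows:

    import random

    rng = random.Random(1)

    # A fragment of the outbreak model: node -> (parents, function of
    # parent values). All probabilities are placeholders for illustration.
    MODEL = {
        "contamination":     ((), lambda: rng.random() < 0.05),
        "utility_awareness": (("contamination",),
                              lambda c: c and rng.random() < 0.6),
        "media_coverage":    (("utility_awareness",),
                              lambda u: u and rng.random() < 0.8),
        "averting_behavior": (("media_coverage",),
                              lambda m: m and rng.random() < 0.7),
        "health_effects":    (("contamination", "averting_behavior"),
                              lambda c, a: c and not a),
    }

    def run_once():
        values = {}
        for node, (parents, fn) in MODEL.items():  # listed in topological order
            values[node] = fn(*(values[p] for p in parents))
        return values

    # Estimate the risk by repeated simulation, as a computational
    # version of such a model does.
    risk = sum(run_once()["health_effects"] for _ in range(10_000)) / 10_000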
Without better detection procedures, an emergency system relying on consumer communication may actually harm health by creating an illusion of control. This conclusion depends on the specifics of the decision. The same system may be adequate for a readily detected pollutant, like E. coli. Even current Crypto testing may be adequate for land-use planning or filtration system investment—if testing eventually provides a (forensic) diagnosis of an outbreak's source.

In semistructured interviews, citizens often raised questions about the system that was creating and controlling these risks (for example, why this was a problem, who was issuing "boil water" notices). As a result, a credible message might need to convey (a) why water is vulnerable, (b) that all uses pose danger (even tooth brushing), and (c) what "boiling water" entails.
TERROR RISKS
SMALLPOX VACCINATION CAMPAIGN. Although the consultative process preceding the smallpox vaccination campaign was widely criticized by healthcare workers, resourceful individuals potentially had access to much information regarding the vaccine and disease. That availability reflected the campaign's outreach, media attention, and historical experience. The record included not only some essential statistics, but also personal experiences rendering those statistics meaningful. (One of the more remarkable ones came from a New York Times staff member who had been seriously injured by the vaccine during the last major U.S. outbreak.)23 With somewhat greater difficulty, one could also learn some things about the uncertainty involved in extrapolating historical risk estimates to the new campaign. Although perhaps not quantified, publicly available material raised concerns about possible changes in the sensitivity of healthcare workers—and those whom they might expose. Other program features were harder to infer, such as its postvaccination surveillance, compensation, and employment risk.

Perhaps the greatest uncertainties lay outside the usual informed-consent process: just what benefits the vaccination would confer. Estimating those benefits requires risk analyses that consider the threat of smallpox being used, the probability that the vaccine will provide protection against the weaponized strain, the opportunities for post-exposure vaccination, and the needed set of vaccinated workers.
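As a rough illustration of the decision-analytic structure just described, here is a minimal sketch in Python. Every number and the simple multiplicative form are hypothetical assumptions for exposition; an actual analysis would model attack likelihood, exposure, strain, and timing in far more detail.

```python
def expected_benefit(p_attack, p_exposed, efficacy, p_postexposure_rescue):
    """Rough expected reduction in smallpox risk per pre-vaccinated worker.

    Every input is a judgment, not a datum: the chance the weapon is used,
    the chance a given worker is exposed, vaccine protection against the
    (possibly weaponized) strain, and how often post-exposure vaccination
    would have protected the worker anyway.
    """
    baseline = p_attack * p_exposed
    risk_without = baseline * (1 - p_postexposure_rescue)  # rely on post-exposure shots
    risk_with = baseline * (1 - efficacy)                  # pre-event vaccination
    return max(risk_without - risk_with, 0.0)

# Hypothetical inputs: 1% chance of use, 10% exposure given use,
# 95% efficacy, 50% rescue by post-exposure vaccination.
print(expected_benefit(0.01, 0.10, 0.95, 0.50))  # ~0.00045
```

Even with these generous assumptions, the expected benefit per worker is small, which is why the uncertainties surrounding such inputs dominated the informed-consent picture.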
DOMESTIC SURVEILLANCE. Citizens evaluating domestic intelligence policies need both quantitative and qualitative information. On the one hand, they need to know the magnitude of the threat, the chance of identifying legitimate suspects, and the damage done by "false positives." On the other hand, they need to know how the system works. For example, pulling in a class of individuals (for example, male students from a given country) could yield valuable information about them, as individuals and as a class. However, it is a form of reactive measurement. It might deter some of its targets while radicalizing others. It could have unpredictable effects on future visitors from that country. The attention paid to civil liberties issues when conducting such surveillance could affect these processes. It might reduce the yield from questioning by restraining interrogators, or increase it by convincing participants that they can speak freely. It could alienate or draw in the target community. These issues ought to interest all citizens, whether concerned about others' welfare or their own.

No one knows with any confidence where any of this will lead. The historical record is most relevant to the extent that the old rules apply. Those who claim that the world changed on September 11 (or clearly revealed its changes that day) may also argue for changed surveillance rules. Whatever the merits of those claims, they add a level of uncertainty. A consultative risk management approach might help to design a process that best achieves multiple goals, including societal trust.
EVALUATING COMMUNICATIONS AND CITIZENS
NORMAL RISKS
These examples show the kinds of analyses needed to identify the information critical to effective risk management; others can be found in the literature.24 Such analyses allow us to determine the content of risk communications and evaluate the adequacy of (citizen or expert) understanding. Each project involves a formal analysis, considering the natural and social processes affecting citizens' welfare. Each analysis omits details precious to some technical specialists, which might have cluttered a communication or risk analysis. Each requires expert judgment regarding issues that are poorly understood but critical to the model. In some applications, just getting the full set of issues on the
table, without quantification, may have significant marginal utility for framing analyses and communications.25

Some model of the processes creating and controlling risks is implicit in any communication, as is some image of what recipients know and can understand. That image might reflect convention (what we have always said), parochialism (what we specialize in), strategy (what we would like the public to believe), or prejudice (what we believe about the public's knowledge and capacity). When the image is wrong, communications fail the public in the short run by denying them the information needed for effective decisionmaking (for example, ill-advised surgeries, insufficient attention to land-use practices that create waterborne parasite risks). Over the longer run, poor choices can undermine respect for the public; it is easier to fault the public's understanding than the communications.26 Table 4.2 summarizes some of the consequences of poor communication.

Treating the public fairly requires diligently integrating risk analysis and communication. Without such commitment, the technical community may misdirect its efforts (by studying and communicating marginal topics) and misdirect society (by distorting relationships between citizens and authorities). It may also miss the research opportunities that arise from combining disciplines in the pursuit of complex, novel problems. The carotid endarterectomy and Cryptosporidium examples connected experts and laypeople through structured interviews. Done well, these methods can fulfill much of the information transfer function of Figure 4.1's process. They cannot, however, create social relations, which can confer legitimacy beyond technical competence.
TABLE 4.2. POOR (TERROR) RISK COMMUNICATION CAN:
◆ Undermine effective decisionmaking
◆ Create feelings of helplessness
◆ Erode public faith in authorities
◆ Erode authorities' faith in the public
◆ Compound actual risk by instilling needless anxiety
◆ Erode the social coordination produced by sharing information
TERROR RISKS
In familiar domains, citizens have some notion of the institutions that stand behind risk analysis and communication. With terror, little of that is in place. Citizens must learn who the players are, whom to trust, and how to get a hearing. Conversely, institutions must learn their new roles. A common, intuitive, but destructive institutional response is to shut out the public until the institution has figured things out.27

Conveying an incomplete picture, or even misinformation, is a risk of early communication. Citizens typically want to know what is happening, however, even if the news is troubling.28 In that case, the responsible message is one of uncertainty—better than unwarranted claims of expertise, which end up denying citizens the truth. Social trust is eroded when citizens feel treated like incompetents, victims, or enemies.29 Later information sharing might dispel suspicions of more sinister motives. However, erasing a shadow of doubt can take a long time, especially after it has had a chance to permeate people's impressions.

Terror poses unique challenges for a society. These include creating new relations between its citizens and institutions, which are suddenly interdependent in unfamiliar ways. If these ties are mismanaged, society's response will be less efficient. Institutions will be less able to serve citizens' priorities, exploit their understanding, and enlist their help. Citizens will be less able to find reliable information, protect themselves, and go about their normal lives. Moreover, the failures may erode trust between citizens and institutions—indirectly facilitating the terrorists' objectives.

Although terror is unique in some ways (for example, the nature of the damage and delivery method, the need for some secrecy), it is like other risks in many others. Arguably, its management should build on the lessons learned in managing other risks. That experience provides both social and analytical tools. They are the product of hard-earned trial and error, which has left bruised industries (for example, nuclear power) and communities in its wake. With a whole society on the line, it seems foolish to start afresh.
5 PUBLIC INFORMATION AND RISK COMMUNICATION IN TIMES OF CRISES VICTOR W. WEEDN
THE PROBLEM
In times of crisis, such as bioterrorist attacks, the public will need and demand information on breaking news. Perhaps the most important information, and the most difficult to relate, will be information pertaining to continuing threats and risks. This risk or threat element of a crisis makes the public an involved participant, because citizens are in jeopardy of being placed in harm's way. Accordingly, the public will have an abnormally keen interest in knowing the nature of the threat, the degree of risk, and what is being done and should be done in response. Thus, the breadth, nature, and significance of the information needed by the public when a crisis involves the homeland are far different than for a foreign war.

Difficulties in risk communication are varied. Interloping, image-grabbing politicians, agency rivalries, and an absence of clarity about who is in charge are often obvious problems, despite the best of emergency planning. Risk communication messages are sent to citizens with varying levels of education and awareness. Diversity in the public also implies differences in perspective, such that the messages
are interpreted in different ways. Furthermore, not everyone will, in fact, share the same risk. Moreover, human perception of risk is not straightforward: psychological and emotional factors complicate the interpretation of the messages. Those in charge may not wish to divulge certain information that could inform terrorists, hamper law enforcement efforts, or cause undue concern. Perhaps the most difficult complication is the inherent element of the unknown in crisis incidents.

The consequences of poor risk communication are manifold and potentially catastrophic. At worst, panic may ensue. Distrust of government leadership may be generated, causing some to ignore governmental instructions and admonitions. Inherently stressful events may be made more fearful and stressful, resulting in widespread mental and psychosomatic illness. Emergency rooms and hospitals could be seriously compromised by large numbers of patients with confusing presentations seeking care despite no direct harm. The public may be left in a quandary, not knowing what to do. Government help lines may become clogged by individuals who want answers to their questions. Chaotic behavior may compound the woes of government officials.

Recent terrorist incidents have highlighted the difficulty we as a society have in communicating risks.
◆ In late 2001, at least four letters containing aerosolized anthrax were sent to two U.S. Senators and to U.S. media offices. Twenty-three individuals, including postal and mailroom employees and other unintended victims, contracted anthrax, and five people died. Secretary of Health and Human Services Tommy Thompson initially attempted to serve as the government spokesperson to the public about this attack, but was embarrassed when asked questions he could not answer. Later, the technical experts who were brought in disagreed with one another. Then the recommendations for obtaining oral and nasal swabs were dropped, suggesting to many that the administration did not know what it was doing.
◆ In December 2002, President Bush initiated a program to protect the United States against a terrorist smallpox threat. The program was to have three phases: in the first, military troops in high-risk areas would be required to be vaccinated; then civilian
medical and emergency first responders would be urged to volunteer for vaccinations; and finally, the general public would be offered vaccinations. Within a year the program was ended, because the rate of vaccinations had dropped as the administration failed to convince first responders that the risk of vaccination was acceptable. Fewer than 40,000 of the intended 440,000 civilian healthcare workers opted for the vaccinations.
◆ In March 2002, President Bush signed a directive to create a Homeland Security Advisory System to provide a means to disseminate the federal government's assessment of terrorist threats to federal, state, and local authorities, as well as to the American public. The system is based on a set of five graduated "threat conditions" (green = low, blue = guarded, yellow = elevated, orange = high, and red = severe) that require federal agencies to execute corresponding "protective measures" to reduce vulnerabilities or increase response capabilities during periods of heightened alert. This color "threatcon" warning system of Homeland Security Director Tom Ridge has received mixed reviews. The repeated warnings of threats of an ill-defined nature have left the public wondering what to do.
◆ In February 2003, the U.S. Department of Homeland Security publicly recommended that U.S. citizens purchase items useful in biological, chemical, and radiological terrorist attacks, including a three-day supply of water and food, battery-operated radios, and duct tape and plastic sheeting to seal off windows and doors ("in-place sheltering"). The duct tape and plastic sheeting recommendation was politicized and mocked, and later withdrawn as the central recommendation.
As a consequence of these and other incidents, risk communication has become a hot topic within the public health community and within government.
SCHOOLS OF THOUGHT
Three main schools of thought have emerged among investigators of this topic, each emphasizing a different consequence. The first is the "panic" school of thought. This group believes that the main point of
risk communication is the avoidance of public pandemonium and irrational behavior. Its members would seek to reassure the public and not announce all potential threats and their consequences, for fear of public overreaction. They do, however, note the dire consequences of weapons of mass destruction and the growing potential for their use. In other words, they emphasize worst-case scenarios and are sincerely concerned about how the public will react to them. This group seeks government funding for studies on how to avoid panic.

In fact, history has demonstrated that panics are unlikely—for example, neither the World Trade Center disaster nor the anthrax letters produced anything that resembled a panic. Perhaps modern media generate a sufficient information flow that citizens feel the risk is distant and contained. This group, however, might point to the months-long closing of post offices and the Senate Hart building as examples of an irrational societal paralysis akin to panic. It is true that the "terror" caused by terrorists extends, and is intended to extend, beyond direct damage.

A second school of thought might be called the "medical model." This group, the largest of the three, believes that risk communication is important to prevent undue stress to the public. Surveys indicate that most of the New York City population had difficulty sleeping in the aftermath of the World Trade Center attacks. Hospital wards did, in fact, see an increase in mental health patient visits. Subtle effects were manifest in a variety of ways. The medical effects of stress are well known; the term post-traumatic stress disorder has been applied to mental symptoms and multiple idiopathic physical symptoms. The underlying assumption of this medical model is that any individual can react negatively to stress. This group emphasizes reassurance and would probably tend to minimize the risks.

A third school of thought is the "empowered citizen" model. This group believes that information should be disseminated so that individual citizens can respond in their own best interests. In the end, government cannot do everything for the public—there will always be things that will have to be, or ought to be, done by individuals themselves. The underlying assumption is that individual members of the public are healthy, rational beings who can react appropriately to a situation if given correct information. Unlike the medical model, citizens in this model are assumed to be mentally capable of receiving important information about a threatening situation. This school of thought would tend to give more rather than
less information, and places greater emphasis on what is said than on how it is said.

Of course, the three perspectives are not entirely mutually exclusive. Messages can at once be reassuring, minimize stress and panic, and still give pertinent information to the public. The perspectives are more a matter of emphasis. The "empowered citizen" perspective, however, unlike the first two schools, will err on the side of giving more, not less, information about a given threat.
CURRENT RISK COMMUNICATION TRAINING
Officials may or may not have media training, and may or may not have been educated or experienced in risk communication principles. Public health officials have generally had some training in risk communication. In the public health literature, risk communication is the process by which a complex issue is made understandable to the public. The literature arose from the many environmental safety concerns that have perplexed the general public because of seemingly incomprehensible, incongruous, and scary messages. For instance, the public health community had to discuss publicly the arcane and complex health risks of polychlorinated biphenyls (PCBs) in the environment and what to do about them. For this historical reason, and because public health officials face a variety of topics and situations, the training has been about risk communication principles, such as the repeated delivery of simple, clear messages from a trusted voice. Process, not content ("how to say," not "what to say"), is the focus of the teachings, which differ little from basic media training.

Unfortunately, public health risk communication discussions are devoid of specific substantive message content. They do not seem to be significantly informed by psychosocial research in risk perception and decisionmaking, despite its seemingly obvious applicability. The discussions are not based on extensive research on actual messages, their effectiveness, their understandability, or public perceptions. There is little research on what the public needs to know for given threats.

Risk communication is not limited to public health officials. Emergency services directors, law enforcement officers, mayors, and governors may all find themselves having to communicate risks to the public in times of crisis. These individuals may have had
some media training, but are not likely to be trained in risk communication as a discipline, the way public health officials are. Public affairs officers, information officers, media coordinators, and public spokespeople who communicate governmental messages for a living will generally have more extensive media training, and perhaps education in communication theory, but are unlikely to have formal training in risk communication or psychosocial cognition. Furthermore, government officials and spokespeople are often not subject matter experts on the scientific and technical aspects of a given crisis. Thus, they are ill-equipped to advise the public properly, as the Tommy Thompson example above illustrates.
MINDSET BARRIERS TO GOOD RISK COMMUNICATION
The first mindset barrier to effective risk communication is preoccupation with the unfolding event. Even when the official has been properly trained, he or she will be focused on the official actions needed to respond to the incident, rather than on what the public needs to do or know. Immediately after an incident, responsible officials are overwhelmed with the task of gathering relevant information and responding to the event. Simultaneously, the media will react to the incident, causing officials to make public statements before they are ready. Unfortunately, the early phase of a crisis is also the most critical time to provide information and reassure the public.

During the sniper spree of late 2002, Police Chief Charles Moose of Montgomery County, Maryland, responded to the question of what to tell the public in the initial press conference by stating that he was doubling the police force on the streets. When the reporter reworded the question and asked what the public should know, Chief Moose repeated his answer. The consequence of this reactive, preoccupied mindset is that it is very difficult to truly implement and institutionalize good risk communication.

Another mindset barrier is the "crisis mentality" of responding officials and their need to maintain control. Indeed, the command and control paradigm requires an authoritarian leader—otherwise, a crisis may create chaos. Despite strong command and control, information should be disseminated to empower citizens to act responsibly. This need is easily drowned out in times of crisis. The tenor of the
communication should be adult-to-adult and respectful of the autonomy of individual citizens. The message should be "we are in this together," not "we know, so do as we say." Even in times of martial law, citizens must know the extent of their rights of assembly, free speech, and self-protection. Pre-event planning that enables citizens to understand and protect themselves, while acting altruistically in urgent situations, is critical.

Even in the absence of a crisis, risk communication is not a subject easily grasped. "Risk communication and health information dissemination" was one of six focus areas for the $918 million in bioterrorism preparedness funding distributed to the states in fiscal year 2003 by the Centers for Disease Control and Prevention (CDC). States often spent this money on making radio systems compatible and creating computer Internet and Intranet networks. When the money actually funded the communication of risks, it was typically for media training or risk communication education; funding for message content was not embraced.

In addition, risk communication may be perceived as unimportant and thus not given high priority. Emergency managers across the United States have generally adopted the Incident Command System. This system involves five major activities that are independently staffed: command, planning, operations, logistics, and finances. Communications is a mere detail, treated solely as interoperable radio linkage. This neglect of risk communication is dangerous; it must be recognized as an important component of government response.
LACK OF PUBLIC FEEDBACK
Current emergency messaging is strictly one-way: from government officials to the public. Risk communication discussions have focused on communications from the government to the public, not on feedback from the public. Yet for every message delivered, new questions will arise. Knowing what new questions have arisen in the mind of the public could help the government formulate its next message. Of course, these new messages will stimulate new questions in turn, to which still more messages could respond, and so on. Allaying public fears would seem to require knowing what those fears are and how significant they are.
Beyond addressing new questions, feedback would allow the government to know whether its message is being properly interpreted and understood. Messages are not always received as intended, and different segments of the population will likely receive a message differently. Furthermore, the public may have information and fresh ideas for the government in responding to the crisis. A government that acts on what it believes to be the best solution, without permitting input from the governed, will erode confidence and risk disenfranchising its citizens.
RISK ASSESSMENT
Recognition of a threat is the first step in risk assessment. Recognition is sometimes obvious and at other times not. Regardless, early recognition is desirable, to permit preparations and to intervene at a time when a full crisis may still be averted or minimized. Currently, such recognition in cases of terrorism is made by a government official based on intelligence reports, the news media, weather agencies, law enforcement, public health sources, or other means. The Department of Homeland Security intends to build an information technology system that would mine data from a broad array of sources to assist the recognition of threats. A community-based alert system also might assist in the detection of certain crises when incidents and intelligence indicate the need for heightened awareness. Care must be taken to avoid false positives, as they may have serious economic consequences and lead to cavalier attitudes. Risk detection is an area of significant government investment, but nonetheless remains an area for improvement.

After the recognition of a threat or crisis event, the risks must be assessed in order to respond appropriately. Initially, this assessment will focus on the level of the risk and whether it merits triggering other actions. Homeland Security Secretary Tom Ridge instituted the Homeland Security Advisory System in response to the September 11 attacks. Whether or not to alert the public on the basis of scant, questionable, but alarming intelligence involves little more than guesswork. Intelligence suggesting terrorist activity appears daily, and Ridge and his colleagues hope to avoid public panic in the face of sketchy intelligence and the people's inability to protect themselves. The color-code threat warning system is an attempt to convey the risk level, but it conveys no specific information on which the citizenry can act.
Nonetheless, the trigger was pulled and the public was notified seventeen months after the September 11 attacks, when interrogations of captured terrorists and other intelligence suggested an imminent terrorist attack that might even involve chemical or biological weapons or radioactive "dirty bombs" aimed at lightly guarded targets like hotels or apartment buildings. Millions of people stocked up on duct tape and plastic sheeting, and some parents pulled their children out of school and topped off their gas tanks to be ready for a quick evacuation—clear expressions of public anxiety. Later, Secretary Ridge tried to calm the public, saying that there was no new intelligence to suggest the need to raise the alarm any further. He suggested that some people had overreacted and stated that individuals and families should not seal their doors.

Critics argue that the government's system for analyzing terrorist threats and sharing the information with the public is not making the public any safer, but instead is unnecessarily creating fear, and that government pronouncements take away any perspective people should have about their lives. Nonetheless, the administration believes that its new procedures for analyzing threats and making the information public are far better than those of the first days after the September 11 attacks, when it issued a series of vague terror alerts without offering the public any substantive guidance about how to respond, apart from a repeated call for "vigilance."

The government does not have a method to classify and quantify risks systematically. A system of scenario-tested, algorithm-based risk communications must be developed to address social crises optimally, including those induced by terrorists. Such a scheme would permit a more rational approach to the diverse risks with which government is confronted, and would facilitate learning from accumulated experience. Analytic frameworks for risk-based decisionmaking are well described in the academic literature, but are seemingly not generally applied by governments. Regardless, better methods of risk assessment are needed.
SOLUTIONS
I propose three synergistic solutions to the problems of risk communication. The first involves a paradigm shift to proactive treatment of the topic. The second, the creation of a Risk Communication Team,
can be performed without substantial effort or cost. The third solution involves an information technology system to assist the process.
ANTICIPATORY EFFORTS. It is possible to anticipate many future scenarios. For instance, an anthrax attack from an aerosol plume is one possible mode of biothreat attack. We must prepare not only for the medical and logistic responses to such an attack, but for the communication response as well. Some progressive emergency programs may already have template press releases for many types of incidents, but the emphasis in risk communication plans has been the process by which information is released, not the substance of the message itself. We have the ability to design, research, and refine messages for the most likely threats. Further, cognitive psychologists could test the messages during mock exercises. We also can perform research on the messages delivered in real crisis situations in order to develop a body of scientific knowledge on crisis messages and risk communication. Thus, it is possible to develop proactively a series of "evidence-based messages," rather than develop messages in a perpetually ad hoc fashion as situations arise.

RISK COMMUNICATION TEAMS. Teams should be created whose mission is to contemplate the messages that should be given to the public. Currently, information dissemination and media response are often assigned to spokespeople and public affairs officers. These people are trained to be communicators of spoken and written words—not in what substantive messages should be released to empower citizens to act responsibly. A Risk Communication Team could focus on what the public should know, rather than on how the public should be told. This team should operate during an incident to deal with the specifics of the given crisis. Members should not be burdened with many other responsibilities, so that they can concentrate on this task. Between contingencies, the team should be actively engaged in developing messages for anticipated scenarios. The team should also be involved in disaster training exercises.

The composition of the team can take various forms. A core team should be augmented by subject matter experts for a given incident. The team should include government media relations, health authorities, emergency services authorities, and first responders,
among others. These government officials should come from different levels of government and different agencies to help foster intergovernmental cooperation. The team could be augmented by local university groups, which may help infuse personnel, energy, and a different perspective. Psychosocial researchers may be of particular use in helping to test messages. Citizen volunteers also may help to broaden the perspective of the team and keep it more sensitive to public needs.
INTELLIGENT RISK COMMUNICATION SUPPORT SYSTEM. Finally, an information technology system could significantly assist current risk communication efforts. Such a system would support, not replace, government officials. Moreover, it could address existing barriers to implementing and institutionalizing optimal risk communication practices. The creation of the system would force a rational and systematic approach to risk assessment and communication. It would overcome mindset barriers by automatically prompting officials with evidence-based messages created proactively for such use. Information technology also can be harnessed to create message feedback systems.

The front end of the system would address risk assessment. Specifically, it would assess the type, level, and geographic and temporal trend of a given risk situation. This risk assessment engine would incorporate models of given threat scenarios. The system would then map the evidence-based messages that had been developed proactively to the assessed risk category and level. The results could be used to address three audiences: the internal government response team and their partners, the media, and the public directly. In the case of the internal response team, the information system could provide Intranet communications, but also could automatically pull up refresher training vignettes. In the case of the media, the information system could automatically pull up evidence-based messages incorporated into press release templates, which would be modified in any given incident to relate to the specifics of the event. In the case of the public, the information system could incorporate synthetic interview technology and e-government community network systems that would permit message feedback. A minimal sketch of this assess-then-map flow follows.
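The sketch below, in Python, is one way such a front end could be structured. The scenario names, threat levels, and message texts are all invented for illustration; they are not drawn from any actual agency system or message library.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    threat_type: str   # e.g., "anthrax_aerosol" (hypothetical label)
    level: int         # 1 (low) through 5 (severe)
    region: str        # geographic scope of the risk

# Pre-developed, pre-tested messages keyed by scenario and level.
MESSAGE_LIBRARY = {
    ("anthrax_aerosol", 3): "Residents of {region}: stay indoors; watch "
                            "for official antibiotic distribution points.",
    ("smallpox", 4): "Vaccination shortly after exposure still protects; "
                     "clinic locations in {region} will be announced.",
}
FALLBACK = "No pre-tested message on file; escalate to the Risk Communication Team."

def messages_for(a: Assessment) -> dict[str, str]:
    """Map an assessed risk to messages for the three audiences."""
    text = MESSAGE_LIBRARY.get((a.threat_type, a.level), FALLBACK)
    text = text.format(region=a.region)
    return {
        "response_team": f"[{a.threat_type}, level {a.level}] {text}",
        "media": "PRESS RELEASE TEMPLATE: " + text,
        "public": text,
    }

print(messages_for(Assessment("anthrax_aerosol", 3, "Montgomery County")))
```

The design point is that message content is authored and tested between crises, so that during an incident the system retrieves and adapts messages rather than composing them under pressure.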
Synthetic interview technology allows the public to use the Internet or the telephone to ask unstructured questions and receive consistent and well-thought-out prerecorded answers from credible experts and authoritative officials. This component of the system would augment or replace the usual telephone help line and could handle orders of magnitude more inquiries. If the common questions were answered automatically, government workers would be free to address other needs.

E-government community networks would permit citizens to participate in government function through input and discussion with government officials. Such networks could be utilized directly for message feedback. Furthermore, important topics of discussion could be extracted automatically from public use of synthetic interviews and e-government networks, for use as indirect feedback to government officials. For instance, in the case of a smallpox attack, it would be important for government officials to know if most questions and discussion revolved around reimbursement for adverse reactions to vaccination. Moreover, registering geographic information with questions from the public would permit localization and monitoring of the spread of specific developments in the jurisdiction as they unfold.

Eventually, the information system might be able to assess group and individual risks and automatically tailor individualized communications with appropriate responsive recommendations. Such targeted risk communications would take the concept of risk communication to a wholly new level and would be made possible only through information technology.

Such a system will most naturally fall to the Department of Homeland Security, which has absorbed the Federal Emergency Management Agency. The system must be secure and developed to enable a seamless, interoperable homeland security infrastructure from the neighborhood to the metropolitan, regional, state, national, and global levels. It must operate in a dynamic, rapidly changing, open, global environment. It should be developed with the input of American citizens and their communities of interest. It must be designed to be sensitive to issues of cultural diversity, and it must be constructed to empower the citizen and local decisionmaking.

The greatest social dividend may be derived from an information infrastructure that can be utilized simultaneously for other, nonurgent commercial, government, and community benefits.
The system, built from the impetus of homeland security, could act as a model for government to interact with its constituency in other areas. Better knowledge management and greater social participation can only further strengthen our society.
6 EXPLORING THE TENSION BETWEEN PRIVACY AND THE SOCIAL BENEFITS OF GOVERNMENTAL DATABASES GEORGE T. DUNCAN 1
PRIVATE LIVES AND PUBLIC POLICIES—OR IS IT PUBLIC LIVES AND PRIVATE POLICIES?
We cannot ignore the fact that government databases—whether of the Social Security Administration, the Internal Revenue Service, the Department of Homeland Security, or the Bureau of the Census—amass detailed and sensitive information about each of us. Accessing these databases bestows power, power with the capacity to support positive functions of our society as well as strike negative blows to our fundamental values. Positively, this power can guide and validate policies for the common good. Negatively, it can trash the lives of those who just want to be let alone.

The book Private Lives and Public Policies, published in 1993, highlights this tension: "Private lives are requisite for a free society. To an extent unparalleled in the nation's history, however, private lives are being encroached on by organizations seeking and disseminating information. . . . In a free society, public policies come through the actions of the people. . . . Data . . . are the factual base needed for informed public discussion about the direction and implementation of these policies."2
While subject to largely exogenous and ever-changing circumstances, a society uses its technology to manage the gathering, maintenance, merging, and propagation of information. From start to finish, this can be viewed as the CSID data process—data Capture, data Storage, data Integration, and data Dissemination. Desirably, the CSID data process is implemented in pursuit of a society's goals—just as Thomas Jefferson argued successfully that the U.S. Decennial Census of 1800 should go beyond a mere count of the populace to include data on people's ages, so as to provide a factual basis for policies enhancing longevity.

Underlying all the policy and management considerations about information processes is the tension between privacy/confidentiality—the protection of the data provider—and data access—the support of the data user. Government protects informational privacy by avoiding excessive intrusion as it undertakes the C stage, data capture. Government also explicitly promises confidentiality in the S, I, and D stages of the surveys and censuses it conducts for statistical purposes. Also, in gathering and receiving information for administrative purposes, there is often at least an implicit understanding that data are to be held confidential. Creating the tension is the simultaneous requirement that government provide the data in ways that further its operational mission. Furthermore, as Amitai Etzioni has argued, this data access must help maintain our democratic political process, encourage our free market economy, and sustain our social institutions.3

The task of government in the information process is both remarkably enhanced and terribly complicated by our dynamic technological environment. Advances in information technology have sharply lowered the costs of data capture, storage, integration, and dissemination. Accordingly, the information flow enabled by new technology bestows ever more power, power that amplifies the tension between maintaining private lives and forming public policies. This tension between the needs of the individual to be free in the pursuit of happiness and the needs of the community for the free exercise of its responsibilities is palpable and must be resolved. It is manifest in the strain between information access and informational privacy. Informational privacy, suggest Duncan, Jabine, and de Wolf, "encompasses an individual's freedom from excessive intrusion in the quest for information and an individual's ability to choose the extent and circumstances under which his or her beliefs, behaviors,
opinions, and attitudes will be shared with or withheld from others."4

And so, with this understanding of our goals, we continued—notably with the Health Insurance Portability and Accountability Act, struggles about Internet regulation, digital rights management, considerations about the digital divide, sunshine laws, and much more—all with the aim of creatively struggling to get the best of private lives and public policies. And, ten years ago, that was just the struggle we thought we should be engaged in. But with doubly crashing finality, September 11 radically altered our thinking about the role of information in our society. Fears of terrorism, of rogue states, even of corrupt corporate executives, appear to have shifted our concerns. A cynic might argue that we now stupidly settle for the worst of public lives and private policies.

As I intend here, a public life is not that of Britney Spears or Bill Clinton, a life of one who voluntarily chooses to glow in the floodlights of publicity. Instead, it is that of the rather ordinary person, caught by the inescapable lens of the surveillance camera, the electronic capture of every credit card transaction, and the required registrations and divulgence of information to obtain government services. A private policy is made not in the sunshine of open government and corporate operations, but instead is screened from public scrutiny, with no general access to the data needed for informed judgments or to input from individuals and advocacy groups.

I make four fundamental points in the sections that follow:

1. Information ethics. Dealing with the tensions, challenges, and conflicting interests demands guiding principles consistent with an ethical framework for information about individuals. Expanding on developments in Private Lives and Public Policies, I will suggest here four enduring principles of information ethics: democratic accountability, constitutional empowerment, individual autonomy, and information justice.

2. Managing the tension. There exist both restricted access and restricted data approaches to resolving the tension between privacy/confidentiality and data access.

3. Societal reality. Information policy must be responsive to changes in societal reality, and these changes can be abrupt and far reaching.

4. Changes in technology. Information policy must be responsive to changes in technology, and these changes have been, and will continue to be, profound and not fully predictable.
Before examining these four fundamental points, we must understand salient aspects of how databases function in our society today.
PERSONAL DATA IN GOVERNMENT DATABASES
While government databases can provide socially beneficial information about organizations and establishments, I will focus on government databases that provide information about individuals, because such information most directly raises concerns about privacy and confidentiality. Government agencies depend on individuals to provide data that accurately reflect some of the most personal and sensitive aspects of their lives. Some data provision is mandated by legislation; individuals must, for example, file an income tax return with the Internal Revenue Service. Other data provision is voluntary, as when the Substance Abuse and Mental Health Services Administration interviews people at their residences about use of licit and illicit drugs.

Data users span a diverse range of individuals and organizations. They include academic researchers at the University of Chicago, policy analysts for the American Association of Retired Persons and the National Association of Home Builders, business economists for Chase Manhattan bank, and statisticians for the Centers for Medicare & Medicaid Services (CMS). They include reporters for the Los Angeles Times, marketing analysts for Amazon.com, advocates for the Consumer Federation of America, and medical insurance underwriters for Cigna. In general, data users employ the data they obtain for end uses such as policy analysis, commercial and academic research, advocacy, and education.

Often unanticipated are data users who force access through legal action, often as part of a discovery process involving a court-issued subpoena. Many statistical agencies lack adequate legal authority to protect identifiable statistical records from mandatory disclosures for nonstatistical uses.5 An example is the ruling that the Energy Information Administration could not protect company survey responses from the Department of Justice's Antitrust Division for use in compliance activities.
With advances in information technology, databases are not simply systems of records.6 A system of records is defined by several attributes:

1. Records. Generally flat files, with each row pertaining to a single individual, such as a visa applicant, and columns giving values on various attributes of that individual, such as sources of income, HIV status, and whether or not the person visited Afghanistan during 1997–2001 (a minimal sketch of such a record follows this list).

2. Autonomy. Each record has a separate, identifiable existence.

3. Durability. The basic structure of the collection of records, as opposed to which individuals are represented or what specific data values may be, does not change radically over time.

4. Control. A single organization exercises authority over the database, authorizing changes and access.
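For illustration only, a single row of such a flat file might be modeled as below; the field names echo the visa-applicant example above and are otherwise hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VisaApplicantRecord:
    """One row of a flat file: one individual, fixed columns."""
    name: str
    income_sources: list[str]
    hiv_positive: bool
    visited_afghanistan_1997_2001: bool

# One hypothetical row; a system of records is, in essence, a list of
# these rows, held and controlled by a single agency.
row = VisaApplicantRecord("J. Doe", ["wages", "rental income"], False, True)
print(row)
```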
Quite unlike systems of records, today's databases are heterogeneous. They have complex structures determined by the purposes for which they were constructed, and they are plagued by difficulties in semantic interoperability because of different vocabularies and different perspectives on the use of the data. Further, they are often maintained at multiple sites, are capable of linking records across databases, and may not be under the control of a single authority. This makes the application of existing law and administrative procedures problematic. And yet the issue must be addressed, because government databases contain highly sensitive and valuable information. We can hope that the Confidential Information Protection and Statistical Efficiency Act of 2002 (CIPSEA)7 will help create some uniformity in the policy governing these databases.

Government captures enormous amounts of data, stores it in very large databases, analyzes some of it, and disseminates information products to individuals, governments, businesses, and other organizations. Many of these data sets are obtained directly from respondents in surveys and censuses, through systems of administrative records based on a variety of citizen interactions with government, or through government-mandated data provision from organizations. Surveys include:

◆ Face-to-face interviews, as with the National Longitudinal Surveys of Young Women conducted by the Bureau of Labor Statistics

◆ Telephone surveys, as with the Behavioral Risk Factor Surveillance System conducted by the Centers for Disease Control and Prevention, which estimates current cigarette smoking and use of smokeless tobacco

Administrative records from citizen interaction include:

◆ Licensing data, as with state Departments of Motor Vehicles and local building permits

◆ Internal Revenue Service tax returns

Government-mandated data provision from organizations includes:

◆ Employer-furnished data, as with Social Security Administration earnings records

◆ Public health data, as with birth certificate information sent to the National Center for Health Statistics

◆ Records of session times and durations, temporarily assigned network (IP) addresses, and means and source of payments—including credit card or bank account numbers—obtained from Internet Service Providers (ISPs) with a simple subpoena (no court review required) under Sections 210 and 211 of the USA PATRIOT Act
The Internet has accelerated the demand for access to government information services, primarily by broadening the range of potential data users. Demand for access is in commensurate tension with concerns about privacy and confidentiality. The National Science Foundation affirms, "Given the inexorable progress toward faster computer microprocessors, greater network bandwidth, and expanded storage and computing power at the desktop, citizens will expect a government that responds quickly and accurately while ensuring privacy."8
Broad access to data supports democratic decisionmaking. Access to government statistical information supports public policy formulation in areas ranging through demographics, crime, business regulation and development, education, national defense, energy, environment, health, natural resources, safety, and transportation. Thrust against the evident value of data access is the counter value that private lives are requisite for a free society.

This chapter deals with an important aspect of the tension between information privacy/confidentiality and data access—the proper handling of personal information that is collected by government throughout the CSID data process. Other privacy issues, such as video surveillance, telephone interception (bugging), Internet censorship, the protection of children, encryption policy, and physical intrusion into private spaces, are outside my purview.

To quickly illustrate the scope of government databases, consider a Web site that, as of April 2, 2003, provided links to 13,338 free searchable public record databases.9 One example of the tension between individual privacy and serving the common good is the list, on New Mexico's Web site, of the state's "25 most wanted" parents who do not pay child support. From this list, one can easily learn the name of the most egregious "deadbeat dad," Ernest Marchbanks, along with his date of birth, his address, and the amount owed—$32,789.93.
TENSION NEED NOT MEAN TRADEOFF
Given the tension between the values of privacy/confidentiality and data access, it is tempting to seek some appropriate tradeoff between the two, reconciling the choice based on some politically meaningful weighting of the two attributes. Such an approach has evident appeal, substantial historical precedent, and notional justification through the usual reasoning of microeconomics. All the same, I argue that a tradeoff mentality overly restricts the action space and inhibits creative thinking about alternatives. Certainly in a static world of fixed choices, a reasoned tradeoff is the best we can do. But with appropriate imagination and some good "thinking outside the box," we may well find that the evident choices are overly constrained, and new choices may become available to us. In that case, we may be able to gain on both dimensions—improve privacy/confidentiality and improve data access.
A useful way to conceptualize the issues here is the R-U confidentiality map framework.10 The idea is to consider a specific information process through all of its stages: data capture, data storage, data integration, and data dissemination. A quantified measure of threats to privacy/confidentiality is developed and called the disclosure risk R.11 A quantified measure of the usefulness of the resulting data product is developed and called the data utility U. In a typical context, release of the data in its full form, while having substantial data utility, would have unacceptably high disclosure risk. To deal with this, disclosure limitation methods could be employed. Most attention has been paid to developing and employing these methods at the data dissemination stage of the information process. At this stage, disclosure limitation (DL) methods are mathematical transformations—deterministic or stochastic—that mask a database and create a data product suitable for release.12 Nonetheless, disclosure limitation procedures could also be employed at the other stages of the information process. They can limit the nature of the data collected, modify the way it is stored, and restrict data integration.

In its most basic form, an R-U confidentiality map is a set of paired values of disclosure risk R and data utility U, as generated by a family of DL strategies. Generally, any such disclosure limitation procedure can be applied more or less stringently—that is, a DL procedure can be thought of as parameterized. For example, consider developing a public use data product, a form of data dissemination to the general public. A statistical agency will commonly use a procedure called topcoding. For the 1999 New York City Housing and Vacancy Survey, for example, the Census Bureau replaced any actual contract rent above $2,950 with a value of $3,817, which is the (conditional) mean rent for these cases. The topcode value of $2,950 is then the chosen parameter value of the DL procedure. Raising it would lower the stringency of masking; lowering it would increase the stringency.

The R-U confidentiality map shows the tradeoff between disclosure risk and data utility as a function of the parameter value. The R-U confidentiality map can also display such a tradeoff for a different DL procedure, for example adding noise to high rental values. Figure 6.1 gives a schematic for how disclosure risk R and data utility U change for two different DL procedures.
If a maximum tolerable disclosure risk is set (perhaps not constant, but increasing with data utility, as in Figure 6.1), then for a given DL procedure the optimal stringency of disclosure limitation is determined by the intersection of the maximum tolerable risk and the R-U curve. The case illustrated by Figure 6.1 shows the importance of going to the DL method with the right-most curve, rather than settling for the best tradeoff available from other DL methods.
[FIGURE 6.1. R-U CONFIDENTIALITY MAP appears here: a schematic plotting disclosure risk R (vertical axis) against data utility U (horizontal axis), with a curve for each DL procedure running from "no data" (zero risk and utility) toward the "original data" point (maximal risk and utility), and a "maximum tolerable risk" line crossing the curves.]
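The topcoding example can be turned into a toy R-U map. The Python sketch below uses made-up rents and deliberately simple stand-ins for R (share of records released at their exact value) and U (one minus average distortion); real agencies use richer measures of both.

```python
import statistics

rents = [650, 800, 900, 1200, 1500, 2200, 3100, 4800, 6500, 9000]  # made up

def topcode(values, cap):
    """Replace every value above the cap with the mean of those values."""
    above = [v for v in values if v > cap]
    fill = statistics.mean(above) if above else 0.0
    return [fill if v > cap else float(v) for v in values]

def r_u_point(values, cap):
    released = topcode(values, cap)
    # Toy risk: fraction of records released at their exact (identifying) value.
    risk = sum(r == v for r, v in zip(released, values)) / len(values)
    # Toy utility: 1 minus mean absolute distortion, relative to the mean.
    distortion = statistics.mean(abs(r - v) for r, v in zip(released, values))
    utility = 1 - distortion / statistics.mean(values)
    return risk, utility

# Sweeping the topcode parameter traces one curve on the R-U map; a second
# sweep over, say, noise levels would trace Figure 6.1's other curve.
for cap in (2000, 3000, 5000, 10000):
    risk, utility = r_u_point(rents, cap)
    print(f"cap={cap:>6}  R={risk:.2f}  U={utility:.3f}")
```

As the sweep shows, raising the topcode parameter increases both R and U together, which is exactly why a maximum tolerable risk line, not either measure alone, determines the chosen stringency.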
GUIDING PRINCIPLES: AN INFORMATION ETHICS
What ethical principles should guide data stewardship? The principles set forth here expand on those from Private Lives. The United States, and a growing list of other countries, embraces a freedom that
recognizes pluralism, public decisionmaking based on representative democracy, and a market-oriented economy. Broadly, in such a society a variety of contrasting and competing interest groups vie for influence and benefit. To have this process work properly requires extensive generation of information about persons, and its dissemination under appropriate constraints.

Consistent with this ethos, an ethics of information can be built on four principles: democratic accountability, constitutional empowerment, individual autonomy, and information justice. These principles can provide a functional guide for assessing the societal impacts of information, and all aim to improve the trustworthiness of government information operations. None of these principles reaches the level of asserting a right, an entitlement that should not be compromised.13 Rather, they are desiderata—they assert values that must be weighed against other values to reach compromises.
DEMOCRATIC ACCOUNTABILITY. Democratic accountability is the assurance, through institutional mechanisms, culture, and practice, that the public obtains comprehensive information on the effectiveness of government policies. Kenneth Prewitt has explored this concept.14 The technology of the Web is a most exciting implement for fostering democratic accountability. A quick click to the Social Security Administration (SSA) Web site on January 6, 2003, yielded its Performance and Accountability Report for fiscal year 2002.15 It provides full disclosure of the SSA's financial and programmatic operations. This Web presentation allows the agency's commissioner, Jo Anne B. Barnhart, to assert, "We are committed to providing data that is complete and reliable to those who use it for decisionmaking." I find it noteworthy that by putting this information up on its Web site, the SSA has included the general citizenry among the set of "those who use it for decisionmaking."

CONSTITUTIONAL EMPOWERMENT. Constitutional empowerment is the enhanced capability of citizens to make informed decisions about political, economic, and social questions. Constitutional practice emphasizes restraints on executive excess through the separation and balance of powers, and confers broad access to the political process. Many government agencies have enthusiastically adopted Web technology as a vehicle for providing information broadly to the citizenry. Cutting-edge research is being conducted under the
Digital Government Program, including work on confidentiality.16 A powerful exemplar of this data empowerment in action is FedStats, which provides an easy gateway to statistics from more than one hundred U.S. federal agencies.17
INDIVIDUAL AUTONOMY. Individual autonomy is the capacity of a person to function in society as an individual, uncoerced and cloaked by privacy. Individual autonomy is compromised by the excessive surveillance sometimes used to build databases; by a lack of informed consent from subjects who are not told about the purpose, sponsorship, risks, and benefits of voluntary research before deciding whether or not to participate; by unwitting dispersion of data; and by a willingness by those who collect data for administrative purposes to make them available in personally identifiable form.18

Government agencies have both ethical and pragmatic reasons to be concerned about individual autonomy. Ethically, agencies ought to respect individual dignity and protect the personal information entrusted to them. Pragmatically, without attention to individual autonomy, agencies will find it difficult to enlist the voluntary cooperation that smooths operations. Subverting individual autonomy are covert information gathering, deception in how information is to be used, and the dissemination of misinformation. Also subverting individual autonomy is the use of coercion, whether real or perceived threats or excessive rewards, in data capture.

Despite the similar look of the two words, I believe it important to mark the distinction between anonymity and autonomy. With anonymity you act while hiding your real identity. On the positive side, anonymity can free you from retribution, as may be desirable for whistleblowers and for dissidents under an oppressive government. On the negative side, anonymity can loosen social constraints and allow compromise of the privacy of others. While autonomy can be viewed as an essential good, only to be compromised when in conflict with other essential goods, anonymity has no such privileged status. Indeed, much as an individual might value anonymity, it inherently conflicts with accountability. Yes, there is privacy in anonymity, as one acts and is present while unknown, but it comes with costs that must be ethically justified in each instance. David Brin quotes one Hal Norby on this topic: “sacrificing anonymity may be the next generation’s price for keeping precious liberty, as prior generations paid in blood.”19
INFORMATION JUSTICE. Information justice is the fairness with which information is provided to individuals and organizations. Information injustice can occur when data are captured from only some individuals, according to unacceptable selection criteria. It can occur when data are denied to some groups or individuals inappropriately. Ubiquity of access may well be an appropriate initial position affirming information justice.

In implementing the four fundamental principles of democratic accountability, constitutional empowerment, individual autonomy, and information justice, a government agency should follow a policy of functional separation.20 This policy distinguishes administrative data from statistical data. The distinction is on the basis of use:

◆ Administrative data are used so that data on an individual have a direct impact on that individual.

◆ Statistical data are used to create aggregate measures that have an impact on individuals only through membership in a substantial group.
Thus, April Dawn’s application for a commercial pilot’s license is initially administrative data, since it is used to determine whether to issue April a license. When a database of such applications is used to determine whether females are issued pilot’s licenses as frequently as males, that constitutes a statistical use. Conceivably, such a study might affect administrative practice regarding license issuance, in which case it might affect the chances of April Dawn’s subsequent application. That impact would be due solely to April’s being female and would be only negligibly determined by her particular data.

The distinction between administrative data and statistical data, while important both for ethical reasons and for conformity to the provisions of CIPSEA, can be difficult to draw in particular cases. What happens when a statistical collection is used to regulate an industry, say on safety issues? Then the data on a particular firm may lead to regulations that indeed have substantial impact on that firm. This issue arises because the number of firms may be small. So while clear distinctions between administrative data and statistical data might be made with demographic data (those dealing with people), the distinctions may be hard to draw with establishment data.
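The distinction can be made concrete with a toy sketch in Python; the applicants, the fields, and the 250-hour rule are all invented for illustration.

import pandas as pd

# Hypothetical pilot's-license applications.
apps = pd.DataFrame({
    "applicant": ["April Dawn", "Bo Chen", "Cal Reyes", "Dee Ng"],
    "sex":       ["F", "M", "M", "F"],
    "hours":     [260, 190, 310, 275],
})

# Administrative use: April's own record directly decides her own case.
april = apps[apps["applicant"] == "April Dawn"].iloc[0]
issue_license = april["hours"] >= 250

# Statistical use: an aggregate that affects April, if at all, only
# through her membership in a group.
approval_rate_by_sex = (apps["hours"] >= 250).groupby(apps["sex"]).mean()
print(issue_license)
print(approval_rate_by_sex)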
MECHANISMS FOR MANAGING CONFIDENTIALITY AND DATA ACCESS FUNCTIONS

Wide-ranging mechanisms exist to deal with conflicts about the capture and dissemination of data. They span federal legislation, interorganizational contractual arrangements, intraorganizational administrative policies, and ethical codes. They also include technological remedies, such as the release of masked data that may satisfy data users’ needs for statistical information while posing little risk of disclosure of personal information.21 In managing confidentiality and data access functions, government agencies have two basic tools for the responsible provision of information: restricted data and restricted access. As developed in Private Lives, these concepts have the following interpretations:

◆ Restricted data. Data are transformed to lower disclosure risk. This is accomplished through disclosure limitation techniques such as (1) releasing only a sample of the data, (2) including simulated data, (3) “blurring” the data by grouping or adding random error, (4) excluding certain attributes, and (5) swapping data by exchanging the values of certain variables among data subjects. (A code sketch of several of these techniques follows this list.)

◆ Restricted access. Administrative procedures impose conditions on access to data. These conditions may depend on the type of data user; conditions may differ for interagency data sharing and for external data users. An example of an institutional arrangement for restricted access by external data users is the Census Research Data Center at Carnegie Mellon University.22
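As a rough illustration of techniques (1), (3), (4), and (5), here is a sketch in Python with pandas. The records and variables are invented, and real agencies apply far more carefully calibrated procedures; technique (2), inserting simulated records, is omitted for brevity.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age":    rng.integers(18, 90, size=1_000),
    "zip":    rng.choice(["10025", "10026", "10027"], size=1_000),
    "income": rng.lognormal(10.5, 0.7, size=1_000).round(),
})

masked = df.copy()

# (1) Release only a sample of the records.
masked = masked.sample(frac=0.2, random_state=1).reset_index(drop=True)

# (3) "Blur" the data: group ages into five-year bands and add
#     random noise to income.
masked["age"] = (masked["age"] // 5) * 5
masked["income"] = masked["income"] + rng.normal(0, 2_000, size=len(masked))

# (4) Exclude an identifying attribute entirely.
masked = masked.drop(columns=["zip"])

# (5) Swap income values between randomly chosen pairs of records.
idx = rng.permutation(len(masked))[:100]   # 50 disjoint pairs
a, b = idx[:50], idx[50:]
col = masked.columns.get_loc("income")
tmp = masked.iloc[a, col].to_numpy()
masked.iloc[a, col] = masked.iloc[b, col].to_numpy()
masked.iloc[b, col] = tmp

print(masked.head())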
Technical procedures for restricting data have been proposed over the past twenty years in both the statistical and computer science literature.23 Unlike restricting access, restricting data is a technical device involving stochastic and mathematical transformations to mask the database and limit disclosure. Typical disclosure-limiting masking involves grouping into categories, adding noise, topcoding, or swapping attribute values. Statistical disclosure limitation practices have allowed agencies to provide increasing amounts of data to the research community. Thomas Jabine gives an excellent summary of
statistical disclosure limiting practices for selected U.S. agencies.24 The techniques proposed depend on the nature of the data, whether in tabular, microdata, or online form.

Various restricted access policies have been implemented in the last twenty years.25 Notable have been the American Statistical Association/National Science Foundation fellowship programs with a variety of federal agencies, including the Bureau of Labor Statistics, the Bureau of the Census, the National Center for Education Statistics, and the Department of Agriculture. The fellowship programs allow the data needs of specific research projects to be evaluated. If approved, data users relocate to the agency to gain access to unrestricted data. Statistical agencies sometimes employ licensing approaches.26

The Bureau of the Census has long sought a mechanism by which it could make detailed census information more readily available to researchers, while maintaining the integrity and confidentiality of that data. With this in mind, the bureau has established eight Census Research Data Centers across the country. Through access to such valuable data, the centers have attracted nationally renowned scholars to engage in interdisciplinary, collaborative research on important policy issues.27

Government faces a variety of predicaments as well as opportunities as it seeks to fulfill its responsibilities for both confidentiality and access to data. The problems and possibilities are accentuated by economic and cultural changes, and importantly by developments in information technology. The next two sections probe these ideas.
RESPONSES TO CHANGES IN SOCIETAL REALITY

Changes in societal realities should prompt a rethinking of how government operates generally, and this is no less true for government information processes. Accordingly, many of us are not surprised that the dramatic terrorist attacks of September 11 provoked responses in how government deals with data. Importantly, these events triggered a shift in national security thinking from how to deal with the threat from nation-states to how to deal with the threat from individuals and small groups. Dealing with such threats clearly requires data on individuals and their relationships, a much different imperative from seeking information on the Soviet Union’s nuclear program.
Thus, many people have changed their perspectives. Richard Smith resigned in November 2001 as chief technology officer for the Privacy Foundation and began a career as an independent public safety and security consultant. In an article in Wired News, Smith said:

Most citizens, including me, have now put privacy concerns on the back burner. Sept. 11 completely changed everything, and one of the things it changed is that people are far less concerned about what the private sector is doing with information, and far more concerned about what the government is doing to keep them safe. The first day I was just numb, wondering if anyone I knew had been killed. Then over the next couple of weeks I watched as information about the hijackers was pulled out of various databases such as the Internet reservations sites where they had booked their tickets, photos from security cameras and the ATMs they had used, and I started seeing that all the data collected as we go about our daily lives can be used for a very good purpose—such as tracking down murderers.28
President George W. Bush signed the USA PATRIOT Act into law on October 26, 2001, just six weeks after September 11. This far-reaching law has many implications for government information processes. Specifically, Section 508 sets aside confidentiality protections under the National Center for Education Statistics (NCES) Act of 1994 for individually identifiable data provided to NCES. Under Section 508, the attorney general or an assistant attorney general may apply to a court of competent jurisdiction for an ex parte order to collect individually identifiable information held by NCES that is relevant to an investigation of terrorism.29

In his 2003 State of the Union address, President Bush announced plans to develop a Terrorist Threat Integration Center, which would pull together information from government databases.30 More ambitious still would have been the Total Information Awareness (TIA) program to be initiated by the Defense Advanced Research Projects Agency (DARPA) in an office once headed by John Poindexter.31 This project raised concerns among advocacy groups such as the Electronic Privacy Information Center (EPIC) and the U.S. Association for Computing Machinery, as well as in Congress.32
We are not surprised that the responses—whether individual, legislative, or administrative—are potentially serious overreactions in some areas and underreactions in others. They tend to be long on dramatic actions and short on support of fundamental change.
RESPONSES TO CHANGES IN TECHNOLOGY

Without understanding the impact of advances in computer and telecommunications technology, information policy and management cannot resolve the increasing tensions between privacy/confidentiality and data access. Rapidly falling costs across the spectrum of the CSID data process both require and facilitate new ways of doing things. For example, using relatively inexpensive and existing technology, it was possible to implement the Smart and Secure Tradelanes (SST) initiative, under which U.S.-bound containers would be equipped with electronic seals containing information on the container’s origin; its planned routing; its actual location; the personnel involved in packing, inspecting, and moving the container; and the contents of the container.33 Readers of these seals would transmit to Customs and other agencies all information about the container’s status: origin, location, contents, and security.

William Scherlis provides a valuable guide to how technological change must be considered in information policy.34 Some important elements of these technological changes fall under the following general rubrics:

1. Information management. Some notable developments are:
   a. The XML “meta standard” for metadata potentially allows easier record linkage across heterogeneous databases. It lessens the need for construction of humongous data warehouses, and suggests that privacy advocates worried about the construction of such warehouses may be misplacing their concerns. (A minimal linkage sketch in code follows this list.)
   b. New data mining and search strategies potentially allow detailed profiles of persons to be drawn.

2. Human–computer interaction. Research has made it possible to:
   a. Make information accessible to a much broader range of the public.
   b. Seek “every-citizen” usability, addressing language differences, cultural differences, and disabilities, and providing access anytime, anywhere.

3. Network infrastructure. New developments here have expanded broadband access and wireless technologies.

4. Encryption methods. Advances support confidentiality, integrity, authentication, authorization, and auditing.
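As a minimal illustration of item 1a, the following Python sketch links two hypothetical agency exports on a shared XML tag, with no central data warehouse involved. The documents, tags, and values are all invented.

import xml.etree.ElementTree as ET

# Two hypothetical exports with different layouts; the shared <taxId>
# tag makes the common field explicit in each document's markup.
labor = ET.fromstring("""
<records>
  <person><taxId>111-22-3333</taxId><employer>Acme</employer></person>
  <person><taxId>444-55-6666</taxId><employer>Widgets</employer></person>
</records>""")

housing = ET.fromstring("""
<tenants>
  <tenant><taxId>111-22-3333</taxId><rent>1450</rent></tenant>
</tenants>""")

# Link the records on the shared tag -- the kind of cross-database
# profiling the text describes.
rents = {t.findtext("taxId"): t.findtext("rent") for t in housing}
for person in labor:
    tax_id = person.findtext("taxId")
    if tax_id in rents:
        print(tax_id, person.findtext("employer"), rents[tax_id])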
CONCLUSIONS

Information systems and technology are increasingly integral to organizational and societal decisionmaking, and are thoroughly pervasive in the way our society works. Much of this development has been driven by sharply lower costs at each stage of the CSID information process: data Capture, data Storage, data Integration, and data Dissemination. An ongoing consequence is that the demand for privacy and confidentiality is inextricably bound in tension with the demand for access to data. This forces explicit policy formulation to deal with privacy/confidentiality and data access issues. In resolving this tension consistent with our political traditions of cherishing private lives and active involvement in public policies, we are obliged to pay close attention to four factors:

1. Policy ought to be grounded in ethical principles, including democratic accountability, constitutional empowerment, individual autonomy, and information justice.

2. Two effective mechanisms exist for resolving the tension between privacy/confidentiality and data access: restricted data and restricted access. The R-U confidentiality map provides a conceptual framework for examining the tradeoff between disclosure risk and data utility under different degrees of stringency in disclosure limitation. It also allows examination of the impact of “breaking out of the box” in developing new procedures.

3. Policy for dealing with information processes must be responsive to changes in the realities faced by society. As September 11 demonstrates, these changes can be unexpected and abrupt, and they can have far-ranging consequences.

4. Information processes change drastically as new technology is implemented. These changes must be reflected in policy formulation.
At this juncture, we have to acknowledge fully that the world we face is different because of September 11. While recognizing this fundamentally altered circumstance, we as a nation must reaffirm our historical commitment to private lives and public policies. The task is challenging and requires new thinking and a full awareness of the implications of new information technology. Failing this challenge, we risk slipping into the quagmire of public lives and private policies.
7 INTERNATIONAL APPROACHES TO PUBLIC AND PRIVATE SECTOR DATA PRIVACY AND SECURITY JOEL R. REIDENBERG
The threat and fear of international terrorism challenge the balance in democratic societies between two fundamental values: security and privacy. For democracy to function, citizens must have confidence that government will be able to protect their physical safety. Public confidence in the fair treatment of personal information is also essential for citizens to have a sense of security. In democratic society, citizens must have privacy to be able to exercise their rights of participation.

The U.S. Department of Defense’s “Terrorist Information Awareness” (TIA) project1 crystallizes the competition in American policy between these two basic values. The Defense Department hopes that information gathering and profiling through the use of sophisticated technologies will provide greater national security:

Terrorists must engage in certain transactions to coordinate and conduct attacks against Americans, and these transactions form patterns that may be detectable. Initial thoughts are to connect these transactions (e.g., applications for passports, visas, work permits, and drivers’ licenses; automotive rentals; and purchases of airline tickets and chemicals) with events, such as arrests or suspicious activities. For this
research, the TIA project will use only data that is legally available and obtainable by the U.S. Government.2

The project anticipates combining data collected by the government directly from citizens with data sold by commercial entities to public authorities. But the type of pervasive data surveillance contemplated by TIA risks destroying public expectations of privacy in personal information. Extensive surveillance diminishes trust in government, and trust was further undermined by the original oversight role of Admiral John Poindexter, the program’s head, known best for his conviction in the Iran-Contra scandal.3

Privacy, however, need not be incompatible with security. International approaches to public and private sector data privacy offer useful insights and alternatives to the competition between privacy and security inherent in current U.S. policy. To start, outside the United States the term “personal information” covers a broad range of data, while the term “public information” is a much narrower concept and not particularly relevant to the debate over security. Foreign statutes tend to impose restrictions on the secondary use of personal information that apply equally to government and the private sector. These restrictions seek to promote trust in data processing. Consequently, governmental uses of commercial databases face more important restraints than in the United States. Similarly, citizens have greater rights against private sector data processing. Many foreign regulatory systems establish data protection supervisory agencies to enforce data privacy rights. These public authorities often have independence from the government and institutionalize checks and balances against the erosion of data privacy. These key differences create difficult international tensions for the United States, and they suggest interesting options for developing technological means of protecting privacy.
PERSONAL INFORMATION AND PUBLIC INFORMATION

Data privacy issues revolve around two key concepts: “personal information” and “public information.” Across national legal systems, the meaning of these concepts varies. Personal information relates to data about individuals. The concept may be defined broadly to encompass a wide range of data or narrowly to address only select information about individuals. Public information encompasses data
that has one of three distinctive characteristics: (1) the information is generally available to the public, whether from disclosures of public records or from other disclosures; (2) the information relates to the transparency of government; or (3) the information is collected and stored by the government.

With respect to personal information in the United States, narrow protections for privacy tend to apply only to “individually identified information” that is nonpublic. Privacy laws such as the Gramm-Leach-Bliley Act, the Health Insurance Portability and Accountability Act, and the Telecommunications Act of 1996 each focus on data that is collected about specific individuals and that is not publicly available. The U.S. approach effectively denies citizens an expectation of privacy for information that is available to the public, such as home mortgage information, car loan data, or information about observable medical conditions.

By contrast, outside the United States, personal information or “personal data” is generally defined to include any information relating to an “identified” or “identifiable” individual. This definition is enshrined in international instruments such as the Council of Europe Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data and the OECD Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, as well as in European national laws. The national laws across Europe implement the definition found in Article 2 of the European Directive 95/46/EC:

“personal data” shall mean any information relating to an identified or identifiable natural person (“data subject”); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity.
Privacy protection will attach to any data falling within this broad definition. The scope of the broad internationally accepted definition covers a larger range of information than would typically gain protection under U.S. law. Under U.S. law, for example, an Internet user’s Web surfing patterns that are tracked through cookies are unlikely to receive any legal protection.4 European national law must, however,
determine whether a person is “identifiable” by taking account “of all the means likely reasonably to be used . . . to identify the said person.”5 Cookies linked to Web surfing data are likely to be considered “personal data,” though European countries diverge on the threshold of difficulty that serves to qualify information as “identifiable.”6 The overall significance is that more information outside the United States is likely to qualify for protection as “personal information,” and the profiling of such information is more likely to be viewed as unwarranted government surveillance.

The distinctive meaning of public information is important for information privacy in the United States. Public information relates first to data that is generally available to the public. Personal information may become public through an individual’s interactions in society. For example, the fact that an individual wears glasses can be observed on the street and is generally available to the public.

Public information also relates to the transparency of government. In democracies and open societies, government activities are open to public participation and scrutiny. Access to information generated by the government and access to data held by the government provide citizens with a check on state power. Since the U.S. federal and state governments collect and store significant amounts of data about citizens and the United States has a strong commitment to open government, public access transforms a wide range of personal information into public information. For example, asset ownership records such as real property records, motor vehicle registrations, boat registrations, and purchase money security interests are all matters of public record. Any personal information contained in court filings is similarly part of the public record unless protected by a judicial or special confidentiality order.

In the United States, if information is part of a public record or generally available to the public, the expectation of privacy is usually lost. Indeed, once personal information is in the public domain, U.S. law recognizes few limitations on the public’s use of it.7 The Driver’s Privacy Protection Act is a rare example.8 While information collected by a state for licensing drivers is considered public information and may be disclosed by the government, the federal statute creates an opt-out right for certain commercial uses of the personal information found on driver’s licenses.9

In contrast, outside the United States, the general availability of personal information does not diminish privacy rights. Personal data
TABLE 7.1. PERSONAL INFORMATION/PUBLIC INFORMATION

CASE STUDY—VIDEO SURVEILLANCE BY PRIVATE PARTY
A building owner sets up digital video surveillance cameras to monitor the entrances from the public street.

U.S.A.: Pedestrians have no legal expectation of privacy. Private surveillance of a public space is not restrained.

Europe: Private surveillance of the public space raises serious legal concerns. Images are “personal information” and their digital processing will be subject to government licensing.*

CASE STUDY—VIDEO SURVEILLANCE BY PUBLIC AUTHORITY
A police department sets up digital video cameras to monitor spectators at a sporting event held in a public stadium.

U.S.A.: Surveillance in public spaces is allowed.

Europe: Video surveillance by public authorities only escapes general privacy regulations if conducted for the purposes of public security, defense, or national security.**

* See, for example, France: Loi No. 95-73 du 21 janvier 1995, Art. 10 (legal permission from a regional governmental authority is required and recordings may not be stored longer than thirty days).
** See European Directive 95/46/EC, Recital 16.
is protected whether it is in the public domain or not. Sweden, for example, has a long tradition of open government. For more than two centuries, Sweden has granted significant rights of access to information held by public authorities.10 Yet the public nature of personal information does not vitiate data protection. Fair information practice standards apply to any information about identifiable individuals and are not based on the public character of the data.

The scope of public information is also narrower outside the United States, thus providing a higher level of privacy against nonstate processing of personal information. Government abroad is generally less transparent, and most records typically made available to the public in the United States, such as drivers’ license information, home mortgage information, or car loans, are not disseminated publicly elsewhere. For example, many civil law countries tend to expunge identity information from published judicial opinions. The sheer volume of public records is simply greater in the United States than elsewhere.

The extent of public information in the United States means that the U.S. private sector gains access to a wide range of personal data that would be unavailable outside the United States. Private profiling and surveillance are thus facilitated without the protection of fair information practice standards. Similarly, the narrow scope of “personal information” in the United States means that data collected by the private sector may readily be made available to the government and, again, profiling and surveillance outside the scope of privacy protections are facilitated. In effect, this erosion of the distinction between public and private treatment of personal information brings about a creeping diminution of citizens’ privacy protection. The erosion translates into a greater weight for security compared to privacy.
GENERALIZED RULES OF FAIRNESS IN PUBLIC AND PRIVATE USE AND DISSEMINATION OF DATA

The greater weight given to security may also be seen in the general approach to data privacy. While the United States treats data protection as a market issue, data privacy is generally accepted internationally as a fundamental human right.11 This political rights approach outside the United States translates into comprehensive standards that define and protect a basic right to data privacy. International agreements and broad national data protection laws increasingly shape these standards. The European Convention for the Protection of Human Rights and Fundamental Freedoms, for example, sets out in Article 8 a right of privacy that may be abridged only in accordance with law and where necessary to protect national security or public safety. The protection of data privacy thus becomes an essential part of public law.

Both the Organization for Economic Cooperation and Development (OECD) and the Council of Europe have elaborated a set of broad principles for fair information practices that address the acquisition, use, storage, transmission, and dissemination of personal information. The OECD Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data
TABLE 7.2. GENERALIZED RULES OF FAIRNESS

CASE STUDY—PROFILING ONLINE BEHAVIOR BY PRIVATE SECTOR
A Web site logs online surfing patterns for data mining to profile visitors for internal and marketing uses.

U.S.A.: Consumers only rarely have legal rights to object to the profiling and secondary use of traffic data by Web sites.

Europe: Data protection laws will generally prohibit the data mining of online transaction information without the individual’s consent.

CASE STUDY—PROFILING ONLINE BEHAVIOR BY PUBLIC AUTHORITY
A government Web site tracks visitors to the site for profiling and data mining.

U.S.A.: Few restrictions would apply.

Europe: Important restrictions would apply, though logging Web traffic data to safeguard national security and public order is permissible without the citizen’s consent.
are designed for voluntary adherence,12 while the Council of Europe’s Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data is an international treaty with the force of law in signatory states.13

Throughout Europe, nations have enacted data protection laws that express the full set of privacy principles. The European Directive on data privacy, formally known as Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data,14 harmonizes the domestic law of the member states at a shared, high level of protection for “the fundamental rights and freedoms of natural persons, and in particular their right to privacy.” Elsewhere—in Latin America (notably in Argentina), in Australia, in Canada, and in an increasing number of other countries—the European model of data protection has proved more influential than the ad hoc, reactive model of U.S. privacy law.

The foreign data privacy statutes typically establish a complete set of standards for the fair treatment of personal information. The European Directive requires that:
personal data must be . . . processed fairly and lawfully; . . . [must be] collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes; . . . [must be] adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed; . . . [must be] accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that data which are inaccurate or incomplete . . . are erased or rectified; . . . [must be] kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the data were collected or for which they are further processed.15
The rights and obligations of data protection statutes generally apply in an equal fashion to the public and private sectors. National data protection laws will thus cover the collection and use of information such as online transaction data whether by companies or by the government. In general, outside the United States data may not be used for purposes other than those relating to the reasons for collection. For example, foreign laws will generally prohibit the data mining of online transaction information without the individual’s consent.

Nevertheless, special exceptions apply to some of the general data privacy rights in order to safeguard national security and public order. For example, use limitations and access rights may conflict with security needs. Log files of online activity may become a critical resource for security investigations, yet the original purpose of collection may be incompatible with the law enforcement use. Similarly, if a terrorist suspect has access to the data file, then security can be severely compromised. Conversely, false and inaccurate information without transparency may result in wrongful targeting of innocent citizens. As a result, foreign laws allow derogations from basic principles for narrowly tailored security needs.16 Such measures are justified only when safeguard conditions are satisfied. This provides a degree of flexibility for law enforcement activities.
INSTITUTIONALIZED CHECKS AND BALANCES

Alongside the generalized rules of fairness, foreign laws typically establish an independent supervisory authority to provide oversight for data privacy rights. Both the Council of Europe Convention and
TABLE 7.3. INSTITUTIONAL CHECKS AND BALANCES

CASE STUDY—PUBLIC SECTOR DATA PROCESSING
A government agency seeks to require the inclusion of health data on a national benefits card.

U.S.A.: Privacy Act notice and public rulemaking provide some transparency and allow comment on the proposed rule, but the proposing agency makes all final decisions with few constraints.

Europe: Data protection supervisory authorities must be consulted and can object to or constrain the proposed processing.

CASE STUDY—PRIVATE SECTOR DATA PROCESSING
A company decides to sell a list of clients with demographic information.

U.S.A.: Clients, if even made aware of the sale of their personal information, have few options, and few restrictions would apply to the corporate sale of personal information.

Europe: The data protection supervisory authority can intervene to determine whether the sale is a secondary use and whether consent is required. The external review imposes a degree of self-assessment on corporate behavior.
the European Directive require such agencies.17 These data protection supervisory authorities serve as an institutional counterweight to creeping erosions of privacy, whether through government action or private action. In effect, the existence of data protection agencies institutionalizes an assessment of the impact of new technologies and new uses of personal information on privacy.

The European experience demonstrates the importance of this institutional check. Governments must be able to justify the fairness of new information practices, particularly when those practices involve or might lead to citizen surveillance activities. Businesses must also be able to justify the legitimacy of their information practices to an independent party, rather than rely on self-interested evaluations. While these data protection agencies have enforcement powers, few sanction cases have actually been prosecuted. The agencies tend to emphasize informal negotiation and public education. In the
context of heightened security concerns and greater public surveillance, the particular value of an independent institutional check on government surveillance is significant. The lack of an independent agency in the United States means that privacy will necessarily be underweighted in the development phases of government security programs.
TENSIONS

The divergence between the United States and other countries in the treatment of personal information creates increasing international tension. European regulations limit transborder data flows when destination countries do not offer adequate privacy protection.18 The U.S.–EU Safe Harbor Agreement sought to facilitate data flows to the United States.19 The accord, however, is a substantive failure. At the outset, the agreement was more politically expedient than legally enforceable.20 As the first anniversary of the agreement approached in July 2001, the European Commission requested an independent expert review of the implementation by U.S. companies of the Safe Harbor requirements.21 The expert review found deficiencies of such magnitude that the European Commission acknowledged in a subsequent staff report that the Safe Harbor does not work.22 The Commission’s staff admitted that the number of adherents is “lower than expected” and that “a substantial number of organizations that have adhered to the Safe Harbour are not observing the expected degree of transparency as regards their overall commitment or the contents of their privacy policies.”23

More recently, the European Union has objected to the transfer of airline passenger data from Europe to the U.S. Customs Service.24 U.S. law requires airlines that land in the United States to provide information about passengers to U.S. authorities.25 The program, known as Computer Assisted Passenger Pre-Screening (CAPPS), profiles passengers and seeks to identify potential terrorists. The profiling system and the subsequent use of the European data raise great concern and many issues under European law.26 The negotiations over privacy protections for European passenger data are quite sensitive because the information transfers implicate basic security concerns for the United States and fundamental political rights for European citizens.
In addition to the international tensions, the divergences also create technological tensions. European data protection supervisory authorities are beginning to focus on the interface between data protection law and technical designs. More specifically, the European supervisory authorities have started to examine technological compliance with European standards of fair information practice. For example, Microsoft’s .Net Passport faced scrutiny to assure that its authentication services could be used in a fashion compatible with European data protection requirements.27 Microsoft agreed to modify the product following negotiations with the European Union’s Article 29 Working Party of data protection commissioners.28
LESSONS AND RECOMMENDATIONS

The international perspective and tensions offer a number of lessons and recommendations for U.S. policy to resolve the competition between security and privacy. As European Commissioner Frits Bolkestein observed, the fight against terrorism and the right to privacy must “strike an appropriate balance. It is also necessary to be practical and not theoretical.”29 To strike the appropriate balance, four key aspects emerge from international approaches that would be valuable to incorporate into U.S. policy:
DEVELOP AND ACCEPT A VALUES HIERARCHY. The boundaries between security and privacy must be developed in conjunction with a hierarchy of values. Fundamental security needs challenge the ability of democratic society to safeguard citizen privacy. Government must be able to protect citizens from imminent threats against their physical safety. The conditions under which, and the extent to which, privacy will unavoidably give way to fundamental security needs in democratic society must be proportionate. For example, the risks of privacy invasion from a bioterrorist attack in any major metropolitan center may be greater than those of a well-regulated and controlled data monitoring system that prevents such an attack. Quarantines enforced by online surveillance cameras are just one example of the more intrusive practices that are likely to arise from future fears of bioterrorism.30 An executive order in the United States already authorizes the quarantine of individuals suspected of having
certain communicable diseases.31 The public needs to have confidence that the proportionate relationship between surveillance for security and compromises on privacy is appropriate, so that democratic institutions and values prevail in the struggle against terrorist violence.
IDENTIFY PUBLIC INFORMATION NEEDS. The creeping loss of privacy that arises from narrowly defining personal information and exempting public information from protection calls for a reevaluation of U.S. policy toward public information. The need for personal information to be “public information” must be identified clearly and narrowly. This reduces the erosion of public and private distinctions and the corresponding loss of citizen privacy. At the same time, the use of public information should be restricted to the purpose for which the personal information was made public. Such a policy promotes basic fairness in the treatment of personal information and minimizes the adverse impact on privacy without compromising the objectives of open government.

INSTITUTIONALIZE CHECKS AND BALANCES. In defining boundaries between security and privacy and in identifying public information needs, independent, institutional safeguards are essential. Oversight independence, such as a data protection agency, assures that security incursions on privacy occur only when threats are imminent and the dangers substantial. Similarly, independent institutional assessments provide validation of the need for personal information to be disseminated through the public record. Likewise, safeguards such as sunset provisions on surveillance programs assure that privacy can be restored once the security threat has diminished.

IMPLEMENT ARCHITECTURAL CONTROLS. Lastly, technology and architectural designs must be deployed to minimize potential privacy intrusions. The international tension over airline passenger data reflects the utility of this approach. Europe is concerned about the subsequent use and sharing of transferred data. U.S. Customs can work to design its database so that sharing and subsequent use do not initially take place on an identified basis. Only at the point of a “hit” or probable match could identifying information be released through the information processing system. Such architecture would build safeguards directly into the data processing system.
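A schematic sketch of such an architecture, in Python. Everything here is hypothetical: a data custodian keeps the key and the name table, pseudonymizes both the passenger manifest and the watchlist, and releases an identity only on a hit. A production design would add key management, auditing, and due process around the release step.

import hashlib
import hmac

# Held by the data custodian only; screeners never see it.
SECRET_KEY = b"custodian-secret-key"

def pseudonym(identifier: str) -> str:
    """Keyed hash, so screeners cannot re-identify by brute force."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Custodian side: retain the mapping, export only pseudonyms.
passengers = ["A. Traveler", "B. Suspect", "C. Citizen"]
vault = {pseudonym(name): name for name in passengers}
shared_manifest = list(vault)                    # what screeners receive

# The custodian also pseudonymizes the watchlist names it is given.
watchlist = {pseudonym("B. Suspect")}

# Screening side: matching happens pseudonym-to-pseudonym.
hits = [p for p in shared_manifest if p in watchlist]

# Only on a hit is the custodian asked to release the identity.
for h in hits:
    print("release identity for hit:", vault[h])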
As a matter of U.S. policy, privacy and security interests should be reconciled to the greatest extent possible through the use of technological configurations. Indeed, independent oversight of a technological configuration will often be easier than oversight of particular data uses.

Although security and privacy may appear at first blush to compete irreconcilably, international approaches suggest that mechanisms can be developed in U.S. law and policy to accommodate both sets of interests. These mechanisms have the advantage of modernizing U.S. data privacy for an age of security-conscious data processing. At the same time, they are likely to reduce the looming international conflicts that arise from the weaker, divergent U.S. approach to citizen privacy.
8 DO NEW TECHNOLOGIES SUPPORT A BROADENING OF PUBLIC INFORMATION RIGHTS? SALLY KATZEN
In the past three decades, we have witnessed a rate of change in technology and its varied uses that is virtually unparalleled in our experience—a not-so-virtual “Information Revolution,” producing a much-heralded “Information Age.” Among other things, there has been an enormous expansion of our ability to sort and analyze data from multiple sources at incredible speeds. One of the byproducts of this is an ability to disseminate that information—in the form of raw data (sometimes referred to as databases) or in an almost infinite number of arrangements or compilations—to multiple sources (far and wide) at virtually no cost.

New technology enables us to do more with the information we have, and thus we obviously should take a new look at our information policies, including public information rights. But what changes are called for is not immediately obvious. First, technology is not the only thing that is new. We also work differently, possibly because of our computers and possibly because we have come to think differently about communications. Two examples:

Thirty-five years ago, when I was working as an associate in a law firm, my assignments typically took the form of producing a memo—I should say a memorandum, because it was a fairly formal process. The memorandum included my research (at least the research that I
ultimately concluded was relevant) and my analysis. It was painstakingly written and carefully edited to be clear, precise, and, I hoped, persuasive. When the partner (or client) received the memo, I generally threw away the back-up notes, numerous drafts, and so forth, and the memorandum itself was placed in a file in a central repository (where it could be retrieved by anyone in the firm who knew the filing system developed by the very professional secretarial staff).

Today, even for those who produce such memoranda, the research and much of the other back-up have inevitably been downloaded onto their hard drives. The drafts of the memo are also somewhere in the innards of their computers (and can be retrieved by someone more technically proficient than I am), even if the drafts have been deleted. The memo itself is in their computers and, if it was electronically transmitted to the addressees, it is also in the addressees’ computers. If they forwarded it—or part of it—to others, with or without additions of their own that are now indistinguishable from the original work, it is being stored in countless other computers lacking any central retrieval process.

The second example of a change in how we work involves e-mail. When I was recently in the government, I received lots of e-mails, so many that, I am embarrassed to say, I often had my assistant print them out so I could read them during meetings or at home that evening. The e-mails often provided information I may or may not have wanted or needed, or they asked for a quick response to a question that had no lasting import. But not infrequently, they forwarded a lengthy article, memo, or draft position paper, with “comments?” or “what do you think?” or “reaction?” leading off or closing the e-mail.

We have all gotten used to responding to e-mails with a few (not necessarily well chosen and rarely edited) words: “fine,” “looks o.k. to me,” “weird,” and “who would have thought?” are just a few of the possible replies from those of us who still do not think of e-mail as having the same weight as something written on a piece of paper. In fact, some of those quick replies used to be communicated orally, sometimes not even in words but with a lifted eyebrow or a pointed gesture, and that was that; any follow-up that was appropriate or required would be done at a later time and would supersede the quick response (of which there was no record). Now, particularly if you work in the Executive Office of the President, your instantaneous reaction is preserved for all time, to be unearthed by a researcher (or voyeur) who may not appreciate humor or sarcasm,
horror or irritability, or the desire just to get rid of something not really worth the time to treat it seriously.

Apart from thinking about what we do and how we do it, we should also consider whether existing public information rights are the right rights—for the idea of “broadening public information rights” assumes that the rights we are contemplating broadening are appropriate, valid, effective, and efficient in achieving their objectives. This is not entirely a theoretical exercise, because we have learned a lot about how these rights have played out since the sixties and seventies, which produced the Freedom of Information Act, the Government in the Sunshine Act, and the Privacy Act.

At the risk of appearing oblivious to subtleties, I think it clear that the Freedom of Information Act (FOIA) and the Government in the Sunshine Act were essentially designed to afford the public a look into how government functions. The FOIA was to enable the public to look at the records the government keeps, and the Sunshine Act was to enable the public to observe decision meetings (at least in multimember commissions), so that the discussion as well as the decision would be “on the record.” The proponents of both acts relied heavily on the oft-quoted phrase “sunshine is the best disinfectant,” and they genuinely believed that the transparency that would result from the mandated openness would not only make the government more accountable, which was a driving force behind these acts, but would also cause the government to reach “better” decisions.

Both the FOIA and the Sunshine Act assiduously avoided making any distinctions on the basis of the identity or motive of the requester/observer, whether he was a researcher doing sound academic work, a news reporter engaged in her craft, or just a curious individual on a fishing expedition looking for whatever he might catch. Indeed, having to reveal why we wanted the information or why we wanted to attend a particular meeting would implicate First Amendment values, because it would imply that someone in the government would then say whether the request should (or should not) be granted, and some would argue that the government should not be distinguishing among requesters on the basis of content.

There were certain (intended to be limited) exceptions in both acts, such as for protected trade secrets or predecisional intragovernmental communications. And these exceptions were complemented by the Privacy Act, which, again at the risk of overgeneralization, recognized that when the government had personal information about an
individual, that person had an interest in not having the government disseminate that information to others without his consent. (Incidentally, our concept of privacy, and our concerns about threats to it, have changed a great deal since the seventies, but that is another topic altogether.) In short, a balance had been struck: the government could release information it held unless there was a specific, well-recognized reason not to release it. Any costs that might result would likely be trivial and certainly well worth the increased government accountability. And the public would be better informed, and democracy would flourish with a more enlightened citizenry.

So, have these acts lived up to expectations? It is difficult to be definitive because there is no alternative baseline—we cannot know what we would have experienced if the laws did not exist. Nonetheless, I am clearly in the camp that thinks they have been salutary and, while not perfect, generally successful in achieving their objectives. Let me, however, spend a few moments on the imperfections, because I believe they are real and must be taken into account as we rethink information policy.

Take FOIA. As a lawyer in private practice in the seventies, I applauded the requesters and cheered when, long after the statutory deadline, they would take the nonresponsive agencies to court and demand access to the information they had requested. Why were the agencies dragging their feet, taking so long to process requests, and being so stingy in giving out information? Shame on them.

Then I went into the government, specifically in 1979 as the General Counsel of the Council on Wage and Price Stability. Because of rising inflation, we had a “voluntary” program of wage and price controls. To monitor the major companies’ compliance with our standards, we asked them to report their prices (and for those that were being evaluated by increases in profit margins rather than prices, we asked for various data needed to calculate profit margins). Almost all of the covered companies provided the information, and we were busy sorting through it all to find questionable practices when we were literally inundated with FOIA requests from the reporting companies’ competitors, seeking access to the private price or profit information the covered companies had given us. I thought it obvious that this information fell within one or more of the exceptions and that all of these requests could be turned down. That turned out to be more complicated and difficult than I had supposed. Each request had to be read carefully by one of our lawyers to make sure there was not something
being requested that should be produced; if a request was not a slam-dunk denial, the lawyer had to prepare a memo setting forth the issues and the recommendations of the program officials, and inevitably there was a meeting to discuss and resolve the open questions. Even if we were going to reject the request outright, the lawyer still had to draft a letter denying the request. And even though it did not take us long to draft a form letter for this purpose, it took all the time of one of the three secretaries in the office (trying to keep up with the work of six lawyers) just to type the letters and prepare them for mailing, after I had spent the evening reading each response, looking back through the incoming letter and the assembled file, and eventually signing my name.

I probably overstate the disruption FOIA caused in our office, but it certainly did detract from our ability to focus on our “mission,” and with limited personnel (there are never enough people in government), that was a real problem. I thought at the time that this was an abuse of FOIA and that I should probably fault greedy clients or aggressive lawyering, rather than the act, for whatever inconveniences we experienced. I was, however, more understanding of the agencies’ reactions to FOIA and no longer cheered each time one was taken to a court of law or embarrassed in the court of public opinion.

When I returned to private practice, I began to get religion again and had almost fully rejoined the pro-FOIA camp when I became a senior policy official at the Office of Management and Budget (OMB). On one of the first days on the job, I was presented with a list of overdue FOIA requests. These could not possibly involve protected information, I said, so let us clear them out as soon as possible. Again, it was not so easy. Some of the requests were specific, and the requested documents were easy to find and easy to evaluate; once such requests got to the head of the queue, they were promptly taken care of. But there were always those requests that were impossible to process quickly. Some were quite expansive—“any document that relates in any way to any regulation under the Clean Air Act”—or difficult to evaluate—“any document supporting cost-benefit analysis.” How to gather the documents that were potentially covered (when any one of them could be in any one of the staff’s computers)? How to read through the stacks of assembled documents and decide whether each fell within the request and not within one of the exemptions? How to tee up the issues when there were serious questions, which there usually were for any deliberative documents (because that’s what OMB does:
deliberates public policy)? How to do all of the above when there were other important projects, assignments, and tasks demanding our attention? All of this led to a certain frustration at the staff level, a certain irritation at the policy official level, and an almost irresistible temptation to put the request on the back burner and wait for the next report of overdue FOIA requests.

Once, in a burst of overconfidence, I found myself thinking that if I could just talk to the requester and find out what he was really after, I could probably accommodate him in short order. I remember painfully the call we made. It began all right, but when I started to suggest that he tell me what he was looking for, the requester interrupted and told me it was none of my business what he was looking for—turn over the documents or he would see me in court after he had told all of his friends in the press about my attempt to intimidate him. My feelings about FOIA took another turn for the worse.1

The Sunshine Act has its own problems. I have had very little first-hand experience with the Sunshine Act, except that I remember hearing, during the Carter administration, about a “closed” pre-meeting that John Dunlop was arranging for the Advisory Committee of the Council on Wage and Price Stability. When I asked him about it and mentioned the Sunshine Act, he laughed in my face. He told me that he was going to do what he had to do to get his job done, that I had a lot to learn about life in general and life in Washington in particular, and that I should just run along and focus on the agenda, the hand-outs, whatever. I have always thought there must have been a pre-meeting, because the meeting itself, which was open to the public, went like an orchestrated dance.

Several years later, I was in the audience when Ervin Duggan, then a commissioner at the Federal Communications Commission (FCC), gave an eloquent (and quite humorous) speech about his experience with the Sunshine Act. He told of the commissioners at the FCC—each of whom was nominated by the president and confirmed by the Senate—sitting in their offices, unable to talk to more than one colleague at a time about any matter within their jurisdiction. In an attempt to get something done on the job, the commissioners would send their legal advisors or other assistants to talk with the legal advisors or other assistants of other commissioners, and report back on what their respective bosses were thinking, what compromises were possible, what new ideas any of the commissioners might be considering, and so forth. It seemed that one of the unintended
consequences of the Sunshine Act was that the staff—who are officially anonymous and unaccountable—became the movers and shapers of policy, rather than the commissioners, who had taken solemn oaths to promote the public interest. For all the speech’s humor, it was a very sad tale indeed. Notwithstanding these imperfections with FOIA and the Sunshine Act, I remain convinced that the values they reflect, the aspirations they embody, and the objectives they seek to advance are as important and valid today as they were when these laws were enacted decades ago. But before we look at whether, and if so how, to broaden these rights, we need to spend a moment on another important component of information policy, OMB Circular A-130. For those who are not true cognoscenti of information policy, A-130 (as it is fondly referred to by those in the know) is a document, issued by the OMB, that “establishes policy for the management of Federal information resources.” Some of it is devoted to the technology side—how agencies should think about obtaining and using information technology—but the relevant part for our purposes is Section 8a: “Policy.” This section admonishes agencies to collect or create only the information necessary for the proper performance of their functions. It is also unequivocal at the dissemination end: “Agencies have a responsibility to provide information to the public consistent with their missions.” There are a number of specifics, but the essence is that government information is a valuable public resource and its dissemination (on equitable, non-restrictive, and timely terms) is a public good and an important, indeed vital, obligation of the government. In some ways, this is the flip side of the FOIA process—the burden shifts from an individual requester asking for a specific document to the government’s having the responsibility of disclosing the information in the first place. There is an incentive for the agencies to disseminate as much information as possible, because the more the government disseminates, the less need there is for FOIA requests, which most agencies believe are a terribly inefficient, labor-intensive means of getting documents into the public realm—one request at a time, just the documents that are covered by the request, and at least initially being provided only to the requester himself. Technology, moreover, works wonders when the government is proactive. The cost of putting the information on the Web is a one-time hit for the agencies, and the multitudes who want the information can download it at virtually no cost and with virtually total anonymity for the
“requester.” It sounded so sensible and so “good government” that we took a number of steps to put this policy into practice during the Clinton administration. Once again, we found that it was easier to say something than to do it. Start with the issue of cost. Even with the best technology, information dissemination is not really cost free. Someone (namely, staff personnel) has to identify the information and put it online, and some of it has to be refreshed to remain valid. This takes time and effort (in other words, money). I am sure you will not be shocked to hear that information dissemination is not high on the hit parade of most policy officials; if there are only a limited number of nickels, they would rather use their precious coins for a sexy “new initiative” that will speak well for their tenure in office. Very few people would choose “making the government run better” as their legacy. And this view of how to allocate (scarce) resources is even more pronounced for members of Congress, who would rather put the money in a choice piece of pork they can bring home to their constituents than in some abstract “good government” public information project. We also ran into the problem that people outside the Beltway (indeed, often people inside the Beltway) knew that the government had information they wanted, but did not know which department or agency was the originator or holder of the data. Absent an address or a very good search engine, it is often difficult to get your hands on something even if you know it exists. Our first step was to begin consolidating—on a virtual basis only—sources of similar data. We started with FedStats—a single stop for statistical information that could have been generated or housed at the Bureau of Labor Statistics, or at the Bureau of the Census, or at any of the lesser known statistical agencies, such as those tucked away at the Department of Transportation, the Department of Agriculture, and so forth. With FedStats, people did not need to know in advance of their inquiry where the data were—they simply made their requests and were taken (virtually) to whatever agency (or agencies) had the information they wanted. We were quite pleased with the results (and the public’s reaction), and that encouraged us to take the next step—www.FirstGov.gov. We got a tremendous boost from Eric Brewer, of Inktomi, who contributed the search engines that would sift through (the technical term is spider) all the data we had online and retrieve the relevant documents for the requester. But if the search engine is effective (which
this was), and if there is a lot of information online (which there was), the requester would be presented with everything remotely relevant to the request, and we ran smack into the problem of too much information. Our data banks runneth over, and it is not an overstatement to say that too much information can be as difficult to deal with as too little information. Drown them in a data dump and they will never find the one thing they wanted! The large amount of information should not be surprising, because the advances in technology enable us to feed our growing appetites for facts/data/statistics to support/oppose every move we make, which leads the government to collect more and more information. (That is one reason the emphasis in A-130’s collection guidelines is on “practical utility that in fact advances the mission of the agency.” The wish list, and hence the burden on respondents, would otherwise be unbelievable.) At one point, we asked some of the information technology people at the agencies to review the results of various (not-so-hypothetical) searches to make sure that we were getting the materials the requester would have wanted and to impose some hierarchy on the results of the search. We had some success in tweaking the system, but not much. I am confident, however, that better search engines, better programming, and more experience can help solve information overload, and as technology continues to advance, the costs will surely decrease. There is a problem that technology cannot solve. A-130 dictates that the government disseminate information that the public wants and that can be disclosed without any harm to the government. Reasonable people, however, may differ on those judgments, particularly the “harm” issue. See, for example, the different approaches to FOIA reflected in Attorney General Janet Reno’s memo of October 4, 1993 (“presumption of disclosure” unless there is “foreseeable harm,” with a goal of “maximum responsible disclosure”) and Attorney General John Ashcroft’s memo of October 12, 2001 (“discretionary disclosure” only after “full and deliberate consideration of the institutional, commercial, and personal privacy interests that could be implicated”).2 Well before September 11, OMB went through a very difficult exercise called “worst-case scenario.” Briefly (and overgeneralizing as usual), the Environmental Protection Agency required facilities handling large quantities of extremely hazardous chemicals to develop risk-management plans, which were to include all of the possible “off-site consequences” of a hypothetical accidental chemical release.
These plans (including a detailed description of the worst-case scenario) were to be made available to the public as part of the “public’s right to know” effort—which had been extraordinarily successful in reducing the amount of noxious chemicals released into the air, water, and so forth. (The theory, and what occurred in fact, is that when companies publicly report on their use of selected chemicals, the people who live in the surrounding area become empowered with this information and can then bring not-so-subtle pressure on the company to decrease its use of those materials.) There was, however, significant concern in other federal agencies that, by widely disseminating these plans—as in putting them on the Internet—we were providing a detailed roadmap to anyone (here or abroad) who wanted to wreak maximum damage. We ultimately reached a compromise within the administration. Instead of putting the plans online, a hard copy would be placed in a public reading room located near the site. People could come into the reading room and read the plans (there was a limit on the number of plans any individual could see in a month), but no one could copy or take away any of the plans. This caused some people real anguish, for they saw the restrictions on access to these plans as cutting into the heart of the public’s right to know. After September 11, they felt even worse. A lot of information that had previously been online, from such diverse government agencies as the Departments of Energy and Agriculture, was suddenly yanked. And the Homeland Security Act created a whole new category of information—Homeland Security Information—that is not classified but is presumably not for public consumption. There is regrettably a tendency in the government, even in the best of circumstances, to take a protective stance—if in doubt, do not release it. For those who cheered the Reno memo, there was considerable disappointment that the current administration had shown a decided preference for confidentiality rather than disclosure before September 11; since then it has taken further steps in that direction, and it is considering still more. National security has trumped accountability. This is not the time or place to try to resolve who has the better of the argument on this issue, but rather to note that clearly people see the risks and rewards of dissemination of some types of information differently. And if those who see danger in dissemination are in positions of authority, the amount of material made available under A-130 (or FOIA for that matter) will be significantly reduced.
Reliance on A-130 raises another question. Apart from content considerations, what types of documents, databases, and notes should the agencies be making available to the public? (I am trying to avoid the term “records” because that is another can of worms.) There is no question about “published” documents or databases—surely, if the government prints something in the Federal Register or distributes it by CD-ROM, there should be no hesitation to put it online. There should also be no question about the contents of various agency docket file rooms, which might, for example, contain all the materials relevant to a particular rulemaking proceeding, including some of the drafts of the proposed or final rule.3 There are also various internal agency documents that might help the public navigate its way through the agency (directories, organization charts, and so forth). These are easy. But what about internal decision memoranda? Or drafts, never sent outside the agency, of what become final agency documents? What about intra-agency discussions (now given immortality by being preserved in e-mail)? What about those “quick responses” I mentioned at the beginning of this chapter? The fact that we can capture and preserve these items does not necessarily mean that they should be disseminated to the public—at least not without their being asked for—wholly apart from any of the FOIA exemptions. Again, this is not the time to decide on a definitive list of do’s and don’ts. I simply want to raise the issue that, when we look to the government to fulfill its responsibilities under A-130, we may not get, and may not want to get, every scrap of paper or every e-mail. By now, it should be clear that I am preoccupied with A-130 because, while I want to expand the amount of government information available to the public, I would rely on the A-130 model rather than focusing on an individual’s rights. A-130 has the advantages of efficiency, fairness (what one person gets to see, everyone gets to see), and a proactive, centralized decisionmaking process on what is to be made public (rather than an ad hoc reaction to an individual request). The agencies have an incentive to do a good job—if not because of pride or any competitive spirit, then to minimize the number of FOIA requests that would otherwise be made. And, given the way we now use computers to conduct business, each FOIA request, if it is taken seriously—as it should be—will inevitably result in an enormous processing effort that will reach virtually everyone in the agency and not necessarily produce information that is particularly probative of anything.
A-130’s success in achieving the goals of accountability and a better informed citizenry will obviously depend on the government’s good-faith commitment to improving and expanding public information, which in turn may depend on its being convinced that money spent on this endeavor will reap greater rewards than those produced by the high transaction costs of FOIA. I believe we will not be disappointed. But I am not so naive that I would rely exclusively on A-130. In fact, even if agencies do a masterful job of disseminating their information, there will still be some things that the agency (perhaps mistakenly) does not think the public is interested in or that the agency thinks should not be disseminated for some (possibly invalid) reason, and this brings us back to FOIA. FOIA should remain, not as the opening salvo, but as the last resort to ferret out information about which the government may not be forthcoming. One change I would like to see considered, however, concerns our traditional reluctance to inquire about (and to evaluate) the identity and motives of the requester. Some may immediately think that this will enable the government to turn down “legitimate” requests; it is also possible, however, that it will enable the government to provide information that might otherwise be withheld, delayed, or lost in the shuffle. I am suggesting this change because not all information is the same. My earnings records at the Social Security Administration or the contents of my FBI file are one thing; commercial pricing, trade secrets, or contracting information is another; and information about meetings with, or documents sent to, a high-ranking official in the Executive Branch shortly before an announcement of a policy change is yet another. Similarly, not all requesters are the same. I stand in a different place requesting information about me in the government’s files than does my neighbor. An academician conducting serious research or a newspaper reporter working on a story is different from a gadfly looking for dirt or someone on a fishing expedition, who is in turn different from a lawyer representing the competitor of the company about which he is seeking commercial information in the government’s files. I am not suggesting setting the bar at “good cause”—that is clearly too high. But what about a “non-frivolous” standard that takes into account what the information is and why it is being requested? Accountability is important, and the standard should be set so that those who seek to hold the government accountable will be able to meet it. But I am less sympathetic to the “just
curious” requester or to the one who says, “let me see what you have and I’ll pick what I want.” It bears emphasis that processing each request takes government personnel away (even temporarily) from their normal jobs, including their job of providing information to the public. And with the changes in technology and the way we conduct the business of government, that disruption and distraction are going to get worse, not better. To those who say I would be restricting rather than broadening information rights, I respectfully disagree. I would push hard on the A-130 model. Changes in technology enable the government to collect, generate, sort, analyze, and disseminate all sorts of information that would have been very difficult to gather and distribute in a paper world. Let the agencies plan to make this information available to the public in an efficient and effective way. Push the agencies to do more. And more. Retain our individual information rights, but ask those who want additional information to assume some responsibility for helping to make the system function for the benefit of all. This should produce a large net gain in public information, or at least a spirited debate.
9 PUBLIC INFORMATION, TECHNOLOGY, AND DEMOCRATIC EMPOWERMENT PETER M. SHANE
The federal Freedom of Information Act (FOIA)1 as enacted in 1966 cast agencies in three distinct roles with regard to public information: publisher, librarian, and clerk. As publisher, each agency was required to “currently publish” basic information about its organization and processes, as well as those substantive rules promulgated by the agency that are intended to have the force of law.2 As librarian, the agency was to “make available for public inspection and copying” several other varieties of fairly basic agency information, such as decisional orders in administrative adjudications, non-binding statements of policy and interpretation, and administrative staff manuals and instructions.3 As clerk, the agency was to make available any such other agency records as might be identified appropriately and reasonably in citizen requests, except for those records or portions of records that fell within a series of exemptions from mandatory disclosure.4 The 1996 Electronic Freedom of Information Act Amendments (E-FOIA Amendments)5 significantly blurred those roles. For example, the E-FOIA Amendments require that the information that agencies were originally compelled to “make available for public inspection and copying” now be made available through electronic means.6 This essentially dissolves the distinction between publisher and librarian. Moreover, the categories of information to be made
available for public inspection and copying now include all records released on request to individuals that “the agency determines have become or are likely to become the subject of subsequent requests for substantially the same records.”7 So much for the distinction between librarian and clerk. These shifts are important because they signal that the genuine transformation that new information and communications technologies (ICTs) could precipitate with regard to public access to government information need not be just about “more” and “faster.” Of course, new ICTs have revolutionized the volume and speed at which we can gather, store, manipulate, and disseminate information. That fact rendered all but inevitable the trend toward faster and higher-volume ICT-enabled public access to government information. The deeper potential transformation, however, is not quantitative, but qualitative—a transformation in the relationship of agencies to citizens, of citizens to government, and of people to information. Basically, the potential revolution not yet reflected in the E-FOIA Amendments is the capacity of ICTs to facilitate ongoing systems of communication between citizens and agencies, in which citizens are empowered to hold agencies more effectively to account, encouraged to reflect on and enhance the information available to agencies for their decisionmaking, and generally enabled not only to acquire agency data, but to understand those data in some meaningful context. These functions, in other words, recast the agency in a new role: enabler of democracy. I would like to outline the contours of this vision briefly, and then mention three probable implications for realizing its potential.
FOSTERING CITIZEN DIALOGUE AROUND PUBLIC INFORMATION

A good deal of the public excitement engendered by the Internet throughout the world has centered on the theoretical capacity of new ICTs to revitalize democracy. Plainly, we can see that the customary actors in our political life—candidates, parties, elected officials, public interest groups—are already using the Internet ambitiously to do conventional tasks in new ways: sharing information, raising money, and mobilizing support. But the democratic uses of the Internet run potentially to something rather different. New ICTs can enable a networking of
citizens with government, and of citizens with one another, that has the potential to generate new levels of citizen engagement in the processes of making public policy. This would not be so-called direct democracy. It would not realize the dream (or nightmare) of some early Internet theorists that the disintermediating effects of networked computing would render representative democracy obsolete.8 I do not imagine, much less desire, that each of us, sitting at his or her laptop, should be able to cast electronic ballots directly on virtually any question of public policy. A more plausible and constructive vision is rather a more robust institutionalization of our constitutional framers’ twin commitments to both representative and deliberative democracy.9 New ICTs can facilitate informed deliberations among both citizens and officials to help in the formulation of public policy that is genuinely in the public interest. Structured online citizen dialogue can provide officials a much keener sense of what the public thinks about issues—that is, what the public thinks if given a chance to be both informed and deliberate. There is clear evidence that the public would be receptive to reforms along these lines. A 2003 Hart-Teeter survey of 1,023 randomly selected adults in the United States, including an oversample among users of government Web sites, asked people to identify the most positive gains that might be realized from so-called e-government initiatives. Holding the government more accountable to citizens was the clear winner, chosen by 28 percent of respondents. Another 18 percent cited greater public access to information. This compares to 19 percent who expected government to become more efficient and cost-effective, and only 13 percent who thought more convenient government services would be the most positive outcome.10 At the same time, the development of government Web applications to support citizen deliberation is in its infancy. Most government-supported interactive Web applications soliciting citizen views are dedicated either to facilitating electronic mail between citizens and individual officials, or to maintaining bulletin boards on which citizens can post their views about particular concerns. There is plainly potential value to both applications, but neither can go very far in terms of empowering citizens to engage more significantly in the development of public policy. The most advanced initiatives for facilitating one-way citizen-to-government input are the ongoing federal
efforts to conduct online notice-and-comment rulemaking. Particularly impressive is the new Web site www.regulation.gov,11 which allows citizens to search by keyword for pending regulatory initiatives on any topic of interest, and then, through a single portal, to submit comments to any open rulemaking or link to previously submitted comments. Such applications have obvious potential to involve much larger numbers of citizens in the process of providing administrative agencies with additional views and relevant information. They fall short, however, of creating either dialogue or citizen deliberation. What I have in mind as a potential form of application is best illustrated by a recent effort, led by philosopher and political scientist James Fishkin, to conduct an online Deliberative Poll. A deliberative poll is a form of structured citizen dialogue that seeks to ascertain informed public opinion on public policy issues.12 In its face-to-face version, random samples of as many as 800–1,500 potential participants are interviewed by telephone on the issues under discussion and then invited to participate in an extended deliberation on those issues, which generally takes place over a weekend at a central location. Participants receive incentives for agreeing to attend and then actually showing up and completing all questionnaires. Organizers send participants advance briefing materials that present the major arguments regarding the contending positions on whatever question is at issue. Preparing the briefing materials is highly labor intensive, because they have to be assembled (or at least vetted) by a group that represents a broad spectrum of stakeholders and perspectives. The aim is to produce a document that all sides agree treats their respective positions fairly. When the 150–300 deliberators show up for their actual discussion, they get some preliminary instruction and are divided randomly into small groups of 12–20, each with a trained moderator. They then alternate between small group discussions of the issues presented and so-called plenary sessions, where questions developed by the participants in their small groups are put to panels of experts who represent the contending positions. At the conclusion of their deliberations, participants respond to the same questionnaire as in the pre-deliberation telephone survey. Nearly two dozen face-to-face deliberative polls have demonstrated the capacity of this technique to engender significant knowledge acquisition and opinion change among its participants, yielding a representative sample of the informed opinion of a local, regional, or even national public.
Recently, James Fishkin, along with political scientist Shanto Iyengar of Stanford, has demonstrated that a deliberative poll can be conducted online. In January 2003, Fishkin, Iyengar, and their associates divided a representative sample of 280 Americans into online groups of 10–20 discussants, each led by a moderator, to discuss the role of the United States in world affairs.13 The team’s apparent success in replicating the positive effects of face-to-face deliberation points to the eventual feasibility of conducting online dialogue in a variety of structured formats to help widely distributed groups of citizens develop and share better informed views of issues of public policy. Consider how this technology might be deployed one day to create an entirely new kind of agency–public relationship with regard to public information. Here are two different hypothetical scenarios:
◆ Imagine that substantial public interest exists with regard to how a particular administrative agency is handling a controversial issue. Dozens, perhaps even hundreds, of FOIA requests are pending regarding one or another aspect of the agency’s performance. Instead of responding on an ad hoc basis, the agency might be authorized (or required) to publish a notice that this is a public access “hot topic,” specifically inviting members of the public who are interested in access to information on this topic to submit online requests for the information they would like to see. At the close of the request period, the agency would create an online library responding comprehensively to the requests, either by disclosing all information requested or providing an explanation as to why information was not being disclosed. The agency could then arrange to stage an online deliberation, perhaps after the online library had been accessible for thirty days. The discussion could allow the FOIA requesters as a group or perhaps a scientifically selected and representative sample of Americans in general to deliberate on whether the information provided is sufficient to answer critical questions regarding the agency’s behavior, or whether additional information ought to be developed or disclosed. The agency might then be authorized (or required) to respond to the deliberation, either by developing or disclosing additional information or by explaining in writing why further additions to the online collection were unnecessary.
◆
Under the so-called Data Quality Act,14 the Office of Management and Budget (OMB) has issued “policy and procedural guidance” to federal agencies, requiring the agencies (1) to issue data quality guidelines “ensuring and maximizing the quality, objectivity, utility, and integrity of information (including statistical information) disseminated” by the agency; and (2) to establish “administrative mechanisms allowing affected persons to seek and obtain correction of information maintained and disseminated by the agency that does not comply with the guidelines.”15 Now that the individual agencies have established their processes, it is already clear that private parties are hoping to transform procedures for objecting to the quality of agency information into a weapon for delaying agency enforcement activity.16 Rather than waiting for litigation to determine whether it has met its data quality obligations, however, an agency might issue a public notice regarding an online public proceeding to assess the quality of such data as the agency proposes to issue in order to further some public policy objective. The information in question might, as above, be provided in an online library, and then the agency could convene an online “public hearing” to deliberate over any issues that might be raised regarding the data’s “quality, objectivity, utility, and integrity.” Again, this could be a hearing of a representative sample of American citizens. Alternatively, the agency could enlist a national panel of experts in relevant fields. The “hearing” could unfold in such a way that only registered panel members could contribute in real time, but anyone could comment on the substance of the deliberation through a specially designated online bulletin board.
These examples are hardly exhaustive, and, plainly, each raises significant questions of design and implementation. What their plausibility demonstrates, however, is the feasibility of using ICTs to do more than simply transmit to the public larger amounts of information at faster speeds. Technology makes it possible to recast the relationship between agencies and citizens in a fundamental way, and to use deliberations over public information as a vehicle for intensifying the public’s connection to policymaking and the achievement of government accountability. Indeed, the coupling of deliberation with public access addresses an obvious missing link in the FOIA model of open government. It is
not just “data points” that an informed public needs in order to become a fully engaged citizenry; it is understanding. This distinction is sometimes expressed as the difference between “information” and “knowledge.” Using ICTs to create online communities of citizens immersed in the analysis of government performance and public policy makes government records infinitely more valuable for citizen engagement in democratic self-governance. And it is not just the networking technologies themselves that will enrich this effort. New search engines, automatic summarization applications, and other tools designed to enable any individual to derive much greater meaning from large numbers of documents—all can be used to facilitate understanding and informed discussion. This scenario poses large questions at the level of operational detail. But I think it also requires us to rethink some fairly big issues if this kind of model is to achieve its full potential for producing a better informed citizenry.
ISSUE 1: OPENING THE ADMINISTRATIVE DECISIONMAKING PROCESS

From the standpoint of government accountability, how government makes policy, including the processes by which it arrives at key decisions, is plainly a matter of serious concern. Yet a persistent point of controversy in public information policy has been establishing rules for access to information about the internal policymaking deliberations of the executive branch. A lot of this material takes the form of so-called predecisional documents, that is, executive branch records that reveal the substance of policy-oriented discussions that precede public administrative initiatives (such as the promulgation of a regulation).17 Presidents have long claimed that all such records are presumptively immune from mandatory disclosure, whether in court, Congress, or any other forum. The FOIA permits the executive branch to fall back on that position even for everyday public requests for information. It exempts from mandatory disclosure “inter-agency or intra-agency memorandums or letters which would not be available by law to a party other than an agency in litigation with the agency.”18 Although the FOIA also states that it is not to be interpreted as authority to withhold information from Congress,19 so-called Exemption 5 practically
insulates a great mass of policymaking information from mandatory public release. The Reagan administration relied on Exemption 5 to shield information regarding its program of presidential regulatory oversight. In Wolfe v. Department of Health and Human Services,20 the D.C. Circuit held that the Department of Health and Human Services (HHS) did not have to disclose departmental records documenting the receipt and transmittal to the OMB of proposed and final regulations recommended by the Food and Drug Administration (FDA). Writing for the majority, then-Judge Robert Bork held that the records were protected by the “deliberative process” privilege embodied in Exemption 5. The penchant for secrecy arguably deepened under the first Bush administration, when the president assigned much of the work of reviewing regulations to the President’s Council on Competitiveness, chaired by Vice President Quayle.21 The council followed no regular procedures. It had no formal or informal agreement with Congress over legislative access to the documentation of its deliberative contributions. It did not establish any controls on the degree or nature of its substantive contacts with outside interests. It functioned chiefly through the personal staff of the vice president. It tried to exercise as much presidential influence as possible without creating any clear record of the nature of that influence. President Clinton, when he redesigned the regulatory review process in the OMB, took a very different tack. Through an executive order, he directed the OMB henceforth to make public not only the information sought in Wolfe, but, following the publication of a regulatory action, “all documents exchanged between OIRA [the Office of Information and Regulatory Affairs] and the agency during the review by OIRA.”22 This plainly shed a new intensity of sunlight on the profoundly important process of regulatory policymaking. Despite the much-noted penchant of the George W. Bush administration for secrecy, the president, to his credit, has kept this provision of the Clinton executive order in place. Yet more remarkably, it is possible now to check the status of regulations under review, as well as records of meetings and outside consultations with regard to rules under review, on the OMB Web site.23 Anyone with Internet access can determine, for example, that representatives of HHS and the Grocery Manufacturers of America met with OMB officials on March 7, 2003, to discuss the FDA’s proposed rules on bioterrorism
and recordkeeping. Last September, members of the National Renderers Association met with OMB officials about the FDA’s proposed rule on proteins prohibited in ruminant feed. In short, our most recent Democratic and Republican presidents have each recognized that providing the public with a much clearer look into the nature of executive branch policymaking is not inconsistent with preserving the values of full and frank deliberation that the doctrine of executive privilege supposedly protects. With that in mind, it would now be appropriate for Congress to reconsider and narrow the scope of Exemption 5. Such an undertaking would require reconsideration of the constitutional bounds of executive privilege, and one can imagine that hard cases will inevitably be litigated. If anything, however, a presumption against privilege should prevail whenever release of a document has no implications for military affairs, foreign policy, or law enforcement. It could well be helpful if, at the very least, the FOIA was amended to adopt the litigation posture that the Justice Department followed under both the Carter and Clinton administrations: namely, that the department would not defend attempts at nondisclosure under Exemption 5 unless the agency could provide an independent and articulable rationale why disclosure of the information in question would injure the public interest.24 The value of online public information libraries would be greatly enhanced by ensuring online access to virtually all routinely produced information in the government’s hands that relates to both the process and substance of significant policymaking.
ISSUE 2: OPENING UP CONGRESS

Yet another frontier involves the use of information and communications technologies to make elected representatives in our legislative branch more accountable. So far, executive branch innovation has far outpaced legislative branch efforts to create access to information about Congress’s own processes and policymaking. A recent independent assessment of legislative Web sites—both institutional and those hosted by individual members of Congress—found most to be “fair to poor” with regard to even a fairly basic set of objectives.25 Congress’s shortfall in using ICT to strengthen our democracy is well captured in this assessment:
There is a gap between what Web audiences want and what most Capitol Hill offices are providing on their Web sites. Constituents, special interest groups, and reporters are seeking basic legislative information such as position statements, rationales for key votes, status of pending legislation, and educational material about Congress. However, offices are using Web sites primarily as promotional tools—posting press releases, descriptions of the Member’s accomplishments, and photos of the Member at events.26
The most important legislative branch initiative toward fostering public access is the Library of Congress’s Thomas Web site, which provides free and comprehensive access to information about pending legislation, committee proceedings (including full text reports), and the Congressional Record.27 The problems with Thomas are threefold. First, you need to know a lot about Congress to know how to use it. Although it does have an FAQ feature (a collection of answers to commonly asked questions), the Web site does not provide a tutorial for those users who may be relatively clueless about congressional structure and process—which is to say, most Americans. Second, it is possible to find huge numbers of documents through a word search, but impossible to locate where those key words appear within the documents identified. Third, and perhaps most important, Thomas is not “one-stop shopping.” It offers nothing, for example, about members’ position statements and voting records. Indeed, it is arguable that information about the members’ political performance is both the most important information for a vigilant electorate, and the hardest to find online. There is nothing technologically challenging about creating a database that records all roll call votes in the House and Senate and that permits votes to be retrieved by members’ names; a brief sketch at the end of this section illustrates just how routine such a system would be. To the foreseeable objection by legislators that many such votes, taken in isolation, would be subject to misinterpretation, it seems a compelling response that the misinterpretation of actual information is unlikely to be a worse problem than wholly uninformed misinterpretation. It may be argued that sites administered by nongovernmental organizations can step forward to fulfill this role. For example, extensive information on members’ votes and political positions is now available through Project Vote-Smart.28 Through the PVS Web site, a voter can find any member by name or zip code. If the voter clicks
onto a member’s “voting record,” the voter gets links to the member’s votes organized by year and subject matter. The subject matter index is fairly detailed, so the absence of a search engine may not be fatal. But a great number of topics seem not to be up to date. For example, as of this writing (April 18, 2003), there are no votes recorded for my junior senator, Richard Santorum, on any “crime issue” since the year 2000. An additional problem exists, of course, with regard to a privately maintained Web site—namely, holding it accountable for accuracy. Under “About Us,” the PVS Web site indicates that the project was launched in 1992 by “40 national leaders, including former Presidents Gerald Ford and Jimmy Carter.” One looks through the site in vain, however, for the name of any individual who is actually accountable for the day-to-day operation of the site. This is, to put it mildly, a very odd resource to be made available on an anonymous basis. In sum, there is no technological impediment to making available online to all citizens comprehensive information about Congress’s legislative processes and output. An accessible Web site should facilitate access through one portal to political information about members, institutional research, committee reports, drafts of bills, floor debates, and enacted legislation. If ICT is to be an enabler of democracy, this is a glaring omission.
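To make the preceding point concrete, here is a minimal sketch, in Python with SQLite, of the kind of roll-call lookup described above. The schema, member names, and sample rows are hypothetical illustrations of mine, not a description of any actual House, Senate, or Thomas system.

import sqlite3

# Minimal sketch of a roll-call vote store. All names and rows are
# hypothetical; real data would be loaded from official House and
# Senate roll-call records.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE votes (member TEXT, chamber TEXT, bill TEXT, "
    "vote_date TEXT, position TEXT)"
)
conn.executemany(
    "INSERT INTO votes VALUES (?, ?, ?, ?, ?)",
    [
        ("Jane Doe", "Senate", "S. 123", "2003-03-07", "Yea"),
        ("Jane Doe", "Senate", "S. 456", "2003-04-15", "Nay"),
    ],
)

def votes_by_member(name):
    # Return (bill, date, position) for every recorded vote cast by
    # the named member, oldest first.
    cur = conn.execute(
        "SELECT bill, vote_date, position FROM votes "
        "WHERE member = ? ORDER BY vote_date",
        (name,),
    )
    return cur.fetchall()

print(votes_by_member("Jane Doe"))
# prints [('S. 123', '2003-03-07', 'Yea'), ('S. 456', '2003-04-15', 'Nay')]

Wrapping such a query in a searchable Web page is equally routine; as the text argues, the obstacles to publishing this information are institutional and political, not technical.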
ISSUE 3: THE NEED FOR NEW INSTITUTIONS

In enacting the FOIA, Congress had to decide not only on a set of rules governing the disclosure of government records, but also on processes designed to ensure the faithful implementation of those rules. The system uses judicial, administrative, and political pressures to ensure compliance. Record seekers may file suit to enjoin agencies from improperly withholding information.29 Adverse judgments against government agencies may result in administrative disciplinary proceedings against agency officials who have withheld information arbitrarily.30 Agencies are required to file reports with Congress on an annual basis regarding their implementation of the FOIA, and the attorney general must file a report that not only details the year’s FOIA litigation but also includes “a description of the efforts undertaken by the Department of Justice to encourage agency compliance” with the FOIA.31 As a
whole, this system has not worked badly, and it ought to be noted that, to the extent Congress may be disposed to create yet broader information rights for citizens, the Supreme Court has recently acknowledged that even implicit “informational injuries” arising under administrative statutes are adequate to sustain citizen standing to sue.32 At the same time, it remains problematic, as Sally Katzen observes in chapter 8, that maximizing public access to government information is not the central mission of any agency. The Department of Transportation promotes transportation. The Department of Agriculture promotes agriculture. Resources devoted to maximizing public engagement with agency information, even if those resources enable the public to understand and participate more effectively in monitoring agency performance and formulating public policy, are resources that agencies are likely to regard as having been diverted from more fundamental administrative tasks. Even the Justice Department, which is implicitly charged with encouraging compliance with the FOIA,33 varies from administration to administration as to whether the compliance it seeks to achieve is strict or permissive with regard to the invocation of FOIA’s exemptions. When it comes to making the value choices necessarily implicit in the implementation of public information policy, it is a problem that those values are not being assessed by any government official who is being held accountable primarily for the degree of public access and transparency he or she achieves for the American people. Deploying ICT in new ways to expand public access to information will only multiply occasions for value choice. The design of informational Web sites, as well as the composition of those databases that stand behind them, involves value choice. This is evident, for example, in sites that now seek public input on pending regulations. Consider that the Web site www.regulation.gov now offers a convenient one-stop portal for members of the public to comment on any pending regulation on any subject that has been proposed by any agency. A variety of agencies, however, have their own electronic rulemaking sites that are limited, of course, to a single agency and perhaps require greater sophistication to use. If, however, you are a telecommunications specialist, you might prefer the design of the Federal Communications Commission (FCC) online rulemaking site,34 because it provides a uniform process for submitting all legal materials to the FCC, including depositions or exhibits in administrative
adjudication. Both regulation.gov and the FCC rulemaking site may be excellent for their respective audiences, and, of course, it may be optimal for the government to support both. But that takes resources. And the intake of public comments through multiple sites may itself raise questions of coordination. Will a user of one Web site easily be able to review public comments submitted on the other? These highly technical questions make a difference in how responsive government Web sites are, and to whom. Should agencies be persuaded to follow my earlier recommendation and begin using ICT to support online dialogue and deliberation around public information issues, the number of politically laden process design issues will multiply exponentially. Who shall participate in such dialogues? Under what norms? Subject to what forms of moderation? What happens within the agency to the comments thus elicited? The deployment of new forms of ICT to enhance the democratic uses of public information will only reveal with a vengeance the foolishness of the oft-heard phrase, “technology solution.” Technology without policy offers no solutions. There is no one form of technological deployment with regard to information access that, in a politically neutral way, is inherently optimal. Questions will always exist as to how resources are to be prioritized and whose needs are to be served most effectively. The current administration has, of course, illustrated in an especially dramatic way the political side of public information policy. Aside from its commendable decision to retain the Clinton policy of openness with regard to OMB regulatory oversight, it has been more ambitious in the cause of government secrecy than any administration since the FOIA was enacted.35 From a civil liberties perspective, it is perhaps the post–September 11 practice of secret arrests and adjudications with regard to the deportation of noncitizens that is most alarming.36 For sheer creativity in the pursuit of secrecy, however, it would be hard to beat Executive Order 13233, which orders the National Archivist to withhold historical records of the presidency that are otherwise disclosable under the Presidential Records Act37 if a claim of executive privilege is levied not by the sitting president or even by a former president, but by a private delegate of a former president, such as his lawyer or a family member.38 It is hard to overstate the audacity of the notion that a private individual, publicly accountable to no one, may invoke a constitutionally based presidential privilege.
If the positive values associated with public access are to get their due in the competition for government resources and attention, then it is important to have an official champion of public access that can play both a standard-setting and ombudsmanlike role in fostering the optimal uses of ICT to support public access rights. It would also share authority for ensuring the preservation of electronic records; what ICT makes easier to store and transmit, it also makes easier to falsify and delete. And, although such an agency never could be entirely above politics—no agency that relies on congressional appropriations ever is—it ought to be at least as insulated from everyday pressures as, say, the Federal Reserve Board. Congress should recognize that dependability in preserving public access to information essential to public understanding of how we are being governed is as important as a steady rudder in our fiscal policy. In light of the highly partisan times in which we live, it would be appropriate to have a commission of six members, all nominated by the president, but three of whom would have to come from lists provided to the president by the highest ranking members of the opposition party in each House of Congress. Each proposal I have offered, especially if taken seriously, will have its critics. On due consideration, each proposal might be subject to substantial refinement and improvement. What they have in common, however, is an important impulse that should animate an energetic national debate regarding the deployment of technology in the service of public access to and understanding of government information. The questions we should address are not primarily questions of how to do old things faster or in larger volume, but, rather, how to use the informational resources of the government in innovative ways to empower citizens most effectively to be active participants in democratic self-governance. Lawyers, technologists, journalists, and government officials are likely to be early entrants into that conversation. But it ought to engage everyone.
NOTES
CHAPTER 1
1. Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579, 642 (1952) (Jackson, J., concurring).
2. Memorandum from Andrew H. Card, Jr., for the Heads of Executive Departments and Agencies, re: Action to Safeguard Information Regarding Weapons of Mass Destruction and Other Sensitive Documents Related to Homeland Security (March 19, 2002), available at http://www.fas.org/sgp/bush/wh031902.html.
3. Executive Order no. 12,958, §1.2 (a)(4), 60 Fed. Reg. 19825, 19826 (1995).
4. James Madison to W. T. Barry, August 4, 1822, in The Writings of James Madison, ed. Gaillard Hunt (New York: G. P. Putnam’s Sons, 1910), 9: 103.
CHAPTER 3
1. Harold C. Relyea, Silencing Science: National Security Controls and Scientific Communication (Norwood, NJ: Ablex Publishing Corporation, 1994); Edward Teller, “Secrecy: The Road to Nowhere,” Technology Review, 84: 12; Harold C. Relyea, ed., Striking a Balance: National Security and Scientific Freedom, First Discussions (Washington, DC: American Association for the Advancement of Science [AAAS], 1985); Committee on Science, Engineering, and Public Policy (U.S.), Panel on Scientific Communication and National Security (Dale R. Corson, Chair), Scientific Communication and National Security: A Report (Washington, DC: National Academy Press, 1982).
2. Relyea, Silencing Science, 13.
3. Lloyd V. Berkner and Douglas Whitaker, Science and Foreign Relations (Washington, DC: International Science Policy Survey Group, 1950).
4. Relyea, Silencing Science, 13.
5. Ibid., 118.
6. Stephen H. Unger, “National Security and the Free Flow of Technical Information,” in Striking a Balance, ed. Relyea, 33–34.
7. Gina B. Kolata, “Attempts to Safeguard Technology Draw Fire,” Science 212 (1981): 523.
8. Relyea, Silencing Science, 189–93.
9. Paul E. Gray, “Advantageous Liaisons,” Issues in Science and Technology (Spring 1990): 40–46.
10. The export of military hardware and technical data is controlled by the International Traffic in Arms Regulations (ITAR), dating back to 1954, while the export of commodities of commercial interest (and the technical data related to their design, manufacture and utilization) is controlled by the Export Administration Regulations (EAR) from 1979.
11. National Science Board, Science and Engineering Indicators 2002 (Washington, DC: National Science Foundation, 2002), O-8, 3-4, 3-10 to 3-11, and 6-5.
12. There is a large literature on the close correlation between technological innovation and economic growth; current trends are regularly summarized in the Science, Technology and Industry Outlook, published by the Organization for Economic Cooperation and Development. Much of this analysis draws on the pioneering work of Robert M. Solow, notably “A Contribution to the Theory of Economic Growth,” Quarterly Journal of Economics 70 (1956): 65–94, and “Technical Change and the Aggregate Production Function,” Review of Economics and Statistics 39 (1957): 312–20.
13. National Science Board, Science and Engineering Indicators 2002, 5-44.
14. Ibid., 3-27.
15. Loren R. Graham, Science in Russia and the Soviet Union: A Short History (Cambridge: Cambridge University Press, 1993), 177.
16. National Science Board, Science and Engineering Indicators 2002, 3-28.
17. Ibid., 3-27 to 3-28.
18. Ibid., 5-43 to 5-49.
19. David Lipschultz, “Bosses from Abroad,” Chief Executive 174 (2002): 18–21.
20. National Science Board, Science and Engineering Indicators 1998, 2-20.
21. “Chronology of Nobel Prize winners in Physics, Chemistry, and Physiology or Medicine,” Nobel e-Museum—The Official Web Site of The Nobel Foundation, http://www.nobel.se/index.html.
22. Bernard Wysocki, Jr., “Foreign Scientists are being Stranded by War on Terror,” Wall Street Journal, January 20, 2002.
23. National Science Board, Science and Engineering Indicators 2002, 2-7.
24. Ibid., 2-7.
25. Ibid., 2-19.
26. Ibid., 2-27.
27. Ibid., 2-16, 2-20 to 2-21, 2-25, 2-27.
28. Ibid., 5-45.
29. Wysocki, Jr., “Foreign Scientists.”
30. Staff of the National Academies, telephone conversations, March 2003.
31. Javad Mostaghimi, “Impact of the American NSEERS Law on the North American Academy” (University of Toronto, unpublished white paper, 2003).
32. Raymond Orbach, speech at Harvard Nanotechnology meeting, April 10, 2003.
33. Philip M. Boffey, “Security of U.S. Said to Be Hurt by Data Leaks,” New York Times, October 1, 1982.
34. Graham, Science in Russia.
35. Thane Gustafson, “Why Doesn’t Soviet Science Do Better than It Does?” in The Social Context of Soviet Science, ed. Linda L. Lubrano and Susan Gross Solomon (Boulder, CO: Westview Press, 1980), 31–67.
36. Data obtained from the MIT Registrar’s report included in the Reports to the President, 1883–2004.
37. See Graham, Science in Russia.
38. The Strategic Task Force on International Student Access, In America’s Interest: Welcoming International Students (NAFSA, Association of International Educators, 2003).
39. Ibid.
40. “WHO Coordinates International Effort to Identify and Treat SARS,” World Health Organization press release, March 17, 2003, http://www.who.int/mediacentre/notes/2003/np4/en.
41. “Update 31—Coronavirus never before seen in humans is the cause of SARS: Unprecedented collaboration identifies new pathogen in record time,” April 16, 2003, World Health Organization report, http://www.who.int/csr/sarsarchive/2003_04_16/en.
42. “Update 25—Interim Report of WHO Team in China, Status of the Main SARS Outbreaks in Different Countries,” April 9, 2003, World Health Organization report, http://www.who.int/csr/sarsarchive/2003_04_09/en.
43. In the Public Interest: Report of the MIT Ad Hoc Faculty Committee on Access to and Disclosure of Scientific Information, Sheila Widnall, Chair, June 12, 2002, http://web.mit.edu/faculty/.
44. Charles M. Vest, Response and Responsibility: Balancing Security and Openness in Research and Education, Report of the President for the Academic year 2001–2002, http://web.mit.edu/president/communications/rpt01-02.html.
45. Statement by John Marburger, Joint NAS/CSIS Workshop on Openness and Security, Washington, DC, January 9, 2003.
46. Richard Monastersky, “Publish and Perish?” Chronicle of Higher Education, October 11, 2002; Megan Twohey, “Colleges Fight New Restrictions on Use of Research,” The Federal Paper (January 20, 2003): 3.
47. Paul C. Powell, Julie T. Norris, and Robert B. Hardy, “Selected Troublesome/Unacceptable Clauses Related to Information Release and Foreign Nationals,” http://mit.edu/osp/www.
48. Andrew J. Hawkins, “Research community must have greater participation in student/scientist immigration issues, MIT’s Crowley tells AAAS,” Washington FAX, April 14, 2003, http://www.washingtonfax.com.
49. George Archibald, “College leaders see security rules hampering research,” Washington Times, April 16, 2003, http://www.washingtontimes.com.
50. Committee on Science, Engineering, and Public Policy (U.S.), Panel on Scientific Communication and National Security (Dale R. Corson, Chair), Scientific Communication and National Security: A Report (Washington, DC: National Academy Press, 1982).
51. This chapter was written with the help and advice of Charles Vest, Jack Crowley, Sheila Widnall, Claude Canizares, and Jamie Lewis Keith, and with input from the AAU and other university leaders. The author would also like to thank Helen Samuels and William Whitney for their research assistance.
CHAPTER 4
1. Baruch Fischhoff, Ann Bostrom, and Marilyn J. Quadrel, "Risk Perception and Communication," in Oxford Textbook of Public Health, ed. Roger Detels, James McEwen, Robert Beaglehole, and Heizo Tanaka (London: Oxford University Press, 2002), 987–1002; Reid Hastie and Robyn M. Dawes, Rational Choice in an Uncertain World (Thousand Oaks, CA: Sage, 2001); Paul Slovic, ed., Perception of Risk (London: Earthscan, 2001). 2. Baruch Fischhoff, "Eliciting Knowledge for Analytical Representation," IEEE Transactions on Systems, Man and Cybernetics 19 (1989): 448–61; Thomas Gilovich, Dale Griffin, and Daniel Kahneman, eds., Heuristics and Biases: The Psychology of Intuitive Judgment (New York: Cambridge University Press, 2002). 3. William Leiss and Christina Chociolko, Risk and Responsibility (Montreal & Kingston: McGill-Queen's University Press, 1994); Baruch Fischhoff, Sarah Lichtenstein, Paul Slovic, S. L. Derby, and R. L. Keeney, Acceptable Risk (New York: Cambridge University Press, 1981); Sheldon Krimsky and Alonzo Plough, Environmental Hazards: Risk Communication as a Social Process (Dover, MA: Auburn House, 1988). 4. Canadian Standards Association, Risk Management: Guidelines for Decision Makers (Q850) (Ottawa: National Standard of Canada, 1997). 5. Presidential/Congressional Commission on Risk Assessment and Risk Management, Risk Assessment and Risk Management in Regulatory Decision-Making, Final Report, vol. 2 (Washington, DC: Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997); Environmental Protection Agency Science Advisory Board, Toward Integrated Environmental Decision Making (SAB-EC-00-011) (Washington, DC: Environmental Protection Agency, 2000); Royal Commission on Environmental Pollution, Setting Environmental Standards (London: HMSO, 1998); Health and Safety Executive, Reducing Risks, Protecting People: HSE Decision-Making Process (London: HMSO, 2001); Parliamentary Office of Science and Technology, Open Channels: Public Dialogues in Science and Technology (London: House of Commons, 2001); Performance and Innovation Unit, Risk and Uncertainty (London: Parliamentary Cabinet Office, 2002). 6. National Research Council, Risk Assessment in the Federal Government: Managing the Process (Washington, DC: National Academy Press, 1983). 7. National Research Council, Improving Risk Communication (Washington, DC: National Academy Press, 1989). 8. National Research Council, Science and Judgment in Risk Assessment (Washington, DC: National Academy Press, 1994). 9. Fischhoff, "Eliciting Knowledge"; Robert Clemen, Making Hard Decisions (Belmont, CA: Duxbury, 1996); M. Granger Morgan and Max Henrion, Uncertainty (New York: Cambridge University Press, 1990). 10. National Research Council, Understanding Risk: Informing Decisions in a Democratic Society (Washington, DC: National Academy Press, 1996). 11. Fischhoff et al., Acceptable Risk; Baruch Fischhoff, Stephen R. Watson, and Chris Hope, "Defining Risk," Policy Sciences 17 (1984): 123–39. 12. Institute of Medicine, Scientific Opportunities and Public Needs (Washington, DC: National Academy Press, 1998).
13. Institute of Medicine, Toward Environmental Justice (Washington, DC: National Academy Press, 1998). 14. Patricia Thomas, The Anthrax Attacks (New York: The Century Foundation, 2003). 15. Fischhoff et al., "Risk Perception and Communication"; M. Granger Morgan, Baruch Fischhoff, Ann Bostrom, and Cynthia Atman, Risk Communication: A Mental Models Approach (New York: Cambridge University Press, 2001). 16. Baruch Fischhoff, "What Do Patients Want? Help in Making Effective Choices," Effective Clinical Practice 2, no. 3 (1999): 198–200; Baruch Fischhoff, "Why (Cancer) Risk Communication Can Be Hard," Journal of the National Cancer Institute Monographs 25 (1999): 7–13. 17. Clemen, Making Hard Decisions; Howard Raiffa, Decision Analysis (Reading, MA: Addison-Wesley, 1968). 18. Jon Merz, Baruch Fischhoff, Dennis J. Mazur, and Paul S. Fischbeck, "Decision-Analytic Approach to Developing Standards of Disclosure for Medical Informed Consent," Journal of Toxics and Liability 15 (1993): 191–215. 19. Shane Frederick, George Loewenstein, and Ted O'Donoghue, "Time Discounting and Time Preference: A Critical Review," Journal of Economic Literature 40 (2002): 351–401. 20. Elizabeth A. Casman, Baruch Fischhoff, Claire Palmgren, Mitch Small, and Felicia Wu, "Integrated Risk Model of a Drinking Waterborne Cryptosporidiosis Outbreak," Risk Analysis 20 (2000): 493–509. 21. See EPA 815-F-98-0014. 22. Casman et al., "Integrated Risk Model." 23. Michael Pollak, "A Distant Troubling Echo from an Earlier Smallpox War," New York Times, December 17, 2002, F1. 24. Stephanie Byram, Baruch Fischhoff, Martha Embrey, Wandi Bruine de Bruin, and Sarah Thorne, "Mental Models of Women with Breast Implants Regarding Local Complications," Behavioral Medicine 27 (2001): 4–14; Baruch Fischhoff, "Giving Advice: Decision Theory Perspectives on Sexual Assault," American Psychologist 47 (1992): 577–88; Morgan et al., Risk Communication; and Donna M. Riley, Baruch Fischhoff, Mitch Small, and Paul S. Fischbeck, "Evaluating the Effectiveness of Risk-Reduction Strategies for Consumer Chemical Products," Risk Analysis 21 (2001): 357–69. 25. Baruch Fischhoff, "Scientific Management of Science?" Policy Sciences 33 (2000): 73–87; Baruch Fischhoff, "Assessing and Communicating the Risks of Terrorism," in Albert H. Teich, Stephen D. Nelson, and Stephen J.
Lita, eds., Science and Technology in a Vulnerable World (Washington, DC: AAAS, 2002), 51–64. 26. National Research Council, Improving Risk Communication; Baruch Fischhoff, Paul Slovic, and Sarah Lichtenstein, “Lay Foibles and Expert Fables in Judgments About Risk,” American Statistician 36 (1983): 240–55. 27. Baruch Fischhoff, “Risk Perception and Communication Unplugged: Twenty Years of Process,” Risk Analysis 15 (1995): 137–45. 28. Jennifer S. Lerner, Roxana M. Gonzalez, Deborah A. Small, and Baruch Fischhoff, “Emotion and Perceived Risks of Terrorism: A National Field Experiment,” Psychological Science 14 (2003): 144–50. 29. Slovic, ed., Perception of Risk; George Cvetkovich and Ragnar Löfstedt, eds., Social Trust and the Management of Risk (London: Earthscan, 1999).
CHAPTER 6
1. George Duncan's research work was partially supported by grants from the National Science Foundation under Grant EIA-9876619 to the National Institute of Statistical Sciences, the National Center for Education Statistics under Agreement EDOERI-00-000236 to Los Alamos National Laboratory, and the National Institute on Aging under Grant 1R03AG1902001 to Los Alamos National Laboratory. My thanks to Dr. Virginia de Wolf and Dr. Eleanor Singer for helpful comments. 2. George T. Duncan, Thomas B. Jabine, and Virginia de Wolf, Private Lives and Public Policies: Confidentiality and Accessibility of Government Statistics (Washington, DC: National Academy Press, 1993). 3. Amitai Etzioni, The Limits of Privacy (New York: Basic Books, 1999). 4. Duncan et al., Private Lives, 22. 5. Duncan et al., Private Lives, 185–88. 6. Language used by the Privacy Act of 1974, 5 U.S. Code § 552a. 7. Title V of E-Government Act of 2002 (Public Law 107-347 § 207, 116 Statute 2916). 8. Digital Government program announcement (National Science Foundation, 1999). See http://www.digitalgovernment.org/about/origins.jsp. 9. See http://www.searchsystems.net. 10. George T. Duncan and Stephen E. Fienberg, "Obtaining Information while Preserving Privacy: A Markov Perturbation Method for Tabular Data," Eurostat: Statistical Data Protection 98 (1999): 351–62; George T. Duncan, Sallie Keller-McNulty, and S. Lynne Stokes, "Disclosure Risk vs. Data Utility:
The R-U Confidentiality Map," Technical Report 2003–6, Heinz School of Public Policy and Management, Carnegie Mellon University (2003), 1–30. 11. Diane Lambert, "Measures of disclosure risk and harm," Journal of Official Statistics 9 (1993): 313–31. 12. George T. Duncan and Robert W. Pearson, "Enhancing access to data while protecting confidentiality: prospects for the future," Statistical Science 6 (1991): 219–39. 13. Lawrence M. Hinman, Ethics: A Pluralistic Approach to Moral Theory, 3rd ed. (Fort Worth: Harcourt Brace, 2003), 204–41. 14. Kenneth Prewitt, "Public statistics and democratic politics," in Behavioral and Social Science: Fifty Years of Discovery, ed. N. J. Smelser and D. R. Gerstein (Washington, DC: National Academy Press, 1985), 113–28. 15. See http://www.ssa.gov. 16. See http://www.diggov.org; http://www.niss.org/dg. 17. See http://www.fedstats.gov. 18. David H. Flaherty, Protecting Privacy in Surveillance Societies (Chapel Hill: University of North Carolina Press, 1989); Duncan et al., Private Lives. 19. David Brin, The Transparent Society (Reading, MA: Addison-Wesley, 1998), 3. 20. Duncan et al., Private Lives, 134. 21. Duncan and Pearson, "Enhancing access." 22. See http://www.heinz.cmu.edu/census. 23. N. R. Adam and J. C. Worthmann, "Security-control methods for statistical databases: a comparative study," ACM Computing Surveys (CSUR) 21 (1989): 515–56; Patricia Doyle, Julia I. Lane, J. J. M. Theeuwes, and Laura V. Zayatz, eds., Confidentiality, Disclosure, and Data Access: Theory and Practical Applications for Statistical Agencies (Amsterdam: Elsevier Science, 2001); Christopher Mackie and Norman Bradburn, Improving Access to and Confidentiality of Research Data (Washington, DC: National Academy Press, 2000); National Science Foundation Digital Government Program Announcement, Directorate for Computer and Information Science and Engineering (Washington, DC, March 15, 1998); G. T. Duncan, "Confidentiality and Statistical Disclosure Limitation," in International Encyclopedia of the Social and Behavioral Sciences, ed. N. J. Smelser and Paul B. Baltes (New York: Pergamon, 2001), 2521–25. 24. Thomas B. Jabine, "Statistical Disclosure Limitation Practices of United States Statistical Agencies," Journal of Official Statistics 9 (1993): 427–54. 25. Jabine, "Statistical Disclosure"; Thomas B. Jabine, "Procedures for Restricted Data Access," Journal of Official Statistics 9 (1993): 537–89.
26. Marilyn Seastrom, "Licensing," in Confidentiality, Disclosure, and Data Access, ed. Doyle et al., 279–96. 27. See http://www.ces.census.gov/ces.php/rdc; Timothy Dunne, "Issues in the establishment and management of secure research sites," in Confidentiality, Disclosure, and Data Access: Theory and Practical Applications for Statistical Agencies, ed. Doyle et al., 297–314. 28. See http://www.wired.com/news/exec/0,1370,48197,00.html. 29. For a detailed discussion of this, see William Seltzer and Margo Anderson, "NCES and the Patriot Act: An Early Appraisal of Facts and Issues," prepared for presentation at the Joint Statistical Meetings, New York City, August 12, 2002, http://www.uwm.edu/~margo/govstat/jsm.pdf. 30. See http://www.whitehouse.gov/news/releases/2003/01/20030128-12.html. 31. See http://www.darpa.mil/iao/TIASystems.htm. 32. See http://www.epic.org/privacy/profiling/tia/; http://www.acm.org/usacm/Letters/tia_final.html; http://www.epic.org/privacy/profiling/tia/sa59.html. 33. See http://www.informationweek.com/story/IWK20030406S0001. 34. William L. Scherlis, ed., Information Technology Research, Innovation, and E-Government (Washington, DC: Computer Science and Telecommunications Board, National Research Council, 2002).
CHAPTER 7
1. The program name was recently changed from Total Information Awareness after the public and congressional outcry over the project's civil liberties implications. See U.S. Department of Defense, "Report to Congress Regarding the Terrorism Information Awareness Program" (May 20, 2003), http://www.darpa.mil/body/tia/tia_report_page.htm. 2. U.S. Department of Defense, Defense Advanced Research Projects Agency's Information Awareness Office and Total Information Awareness Project, available at http://www.darpa.mil/iao/iaotia.pdf. 3. U.S. v. Poindexter, 951 F.2d 369 (D.C. Cir., 1991) (reversing conviction for lying to Congress due to grant of immunity rather than truthfulness). 4. See In re: Pharmatrak Privacy Litigation, 329 F.3d 9 (1st Cir., 2003) (protecting Web sites' ability to control dissemination of information about Web surfers rather than protecting the Web surfers themselves). See also Chance v. Avenue A, Inc., 165 F. Supp. 2d 1153 (W.D. Wash. 2001); In re: DoubleClick Inc. Privacy Litigation, 154 F. Supp. 2d 497 (S.D.N.Y. 2001). 5. European Directive 95/46/EC, Recital 26.
6. See, for example, Joel R. Reidenberg and Paul Schwartz, "Online Services and Data Protection and Privacy: Regulatory Responses" (Eur-Op: 1998). 7. See, for example, Paul Schwartz and Joel R. Reidenberg, Data Privacy Law (Michie: 1996), 91–125. 8. 18 U.S. Code §§ 2721–2725. 9. Reno v. Condon, 528 U.S. 141 (2000). 10. Peter Seipel, "Sweden," in Nordic Data Protection Law 122, ed. Peter Blume (Copenhagen: 2001). Sweden's right of access to information held by the government dates back to 1766. 11. See, for example, Joel R. Reidenberg, Resolving Conflicting International Data Privacy Rules in Cyberspace, 52 Stanford L. Rev. 1315 (2000). 12. OECD Doc. No. C(80)(58) final, reprinted in 1981 International Legal Materials 422. 13. Euro. T.S. No. 108 (January 28, 1981), reprinted in 1981 International Legal Materials 377. 14. European Directive 95/46/EC (October 24, 1995), O.J. Eur. Comm., 23 November 1995, No. L281/31. 15. European Directive 95/46/EC, Art. 6. 16. See, for example, European Directive 95/46/EC, Art. 13. 17. The Council of Europe Convention Protocol requires the creation of a supervisory authority that operates in "complete independence." Euro. T.S. No. 181, Art. 1(3) (November 8, 2001). At the same time, the European Directive stipulates that each member state establish a public authority that "shall act with complete independence." European Directive 95/46/EC, Art. 28(1). 18. European Directive 95/46/EC, Art. 25. 19. See U.S. Department of Commerce, Issuance of Safe Harbor Principles and Transmission to European Commission, 65 Fed. Reg. 45,665, 45,665–686 (July 24, 2000); Commission Decision of July 26, 2000, pursuant to Directive 95/46/EC of the European Parliament and of the Council on the Adequacy of the Protection Provided by the Safe Harbor Privacy Principles and Related Frequently Asked Questions Issued by the U.S. Department of Commerce, 2000 O.J. (L 215) 7. 20. Joel R. Reidenberg, E-commerce and Trans-Atlantic Privacy, 38 Houston L. Rev. 717 (2001). 21. See "A Study Prepared at the Request of the European Commission Internal Market DG: The Functioning of the U.S.–EU Safe Harbor Principles" (September 21, 2001). 22. European Commission, Staff Working Paper SEC 2002/196 (February 13, 2002), 8–9.
23. European Commission, Staff Working Paper SEC 2002/196 (February 13, 2002), 6, 11. 24. See statement of Frits Bolkestein on airline passenger data transfers from the European Union to the United States before the Plenary Session of the European Parliament (March 12, 2003), http://europa.eu.int/rapid/start/. 25. See Aviation and Transportation Security Act, Public Law 107-71, codified at 49 U.S. Code § 44909(c) (2003) (requires airlines to transmit passenger information to U.S. Customs Service); Interim Rule: Passenger Name Record Information Required for Passengers on Flights in Foreign Air Transportation to or from the United States, Code of Federal Regulations, title 19, Part 122. 26. See Joint EU–U.S. statement on the transmission of Advanced Passenger Information System/Passenger Name Record data from airlines to the United States (March 2003), http://europa.eu.int/comm/internal_market/privacy/docs/adequacy/declaration_en.pdf. 27. See European Commission Article 29 Working Group, Working Document on Online Authentication Services, January 29, 2003, Eur. Doc. 10054/03/EN WP68, http://europa.eu.int/comm/internal_market/privacy/docs/wpdocs/2003/wp68_en.pdf. 28. See European Commission press release, "Data Protection: Microsoft Agrees to Change its .NET Passport System after Discussions with EU Watchdog," Doc. IP/03/151, January 30, 2003. 29. Speech by Commissioner Frits Bolkestein on airline passenger data transfers from the EU to the United States before the European Parliament Plenary Session, March 12, 2003. 30. See, for example, David Legard, "Singapore Enforces SARS Quarantine with Cameras," Bio-ITWorld, April 11, 2003, available at http://www.bioitworld.com/news/041103_report2302.html. 31. Executive Order no. 13,295; Code of Federal Regulations, title 42, Part 70.6 (2003).
CHAPTER 8
1. But see E-FOIA Amendments, 5 U.S. Code § 552(a)(6)(B)(ii). This section was added several years later, and it permits and encourages contact with the requester when the time for responding is inadequate. 2. See http://usdoj.gov/04foia/foiastat.htm; http://usdoj.gov/04foia/foia1011012.htm. 3. See Executive Order no. 12,866, § 6(a)(3)(E) (Sept. 30, 1993), http://www.whitehouse.gov/omb/inforeg.
CHAPTER 9
1. Public Law 89-554, Stat. 80 (1966): 383; as amended, codified at U.S. Code 5 (2000, Supp. II 2002), § 552. 2. 5 U.S. Code § 552(a)(1). 3. 5 U.S. Code § 552(a)(2). 4. 5 U.S. Code (2000, Supp. II 2002), § 552(a)(3). 5. Public Law 104-231, §§ 3 to 11, Stat. 110 (1996): 3,049–3,054. 6. "For records created on or after November 1, 1996, within one year after such date, each agency shall make such records available, including by computer telecommunications or, if computer telecommunications means have not been established by the agency, by other electronic means." 5 U.S. Code § 552(a)(2). 7. 5 U.S. Code § 552(a)(2)(D). 8. Jim Rubens, "Retooling American Democracy," 17 Futurist (1983), 59–64. 9. Peter M. Shane, "The Electronic Federalist: The Internet and the Electronic Institutionalization of Democratic Legitimacy," in Online Democracy: The Prospects for Political Renewal Through the Internet, ed. Peter M. Shane (forthcoming Routledge, 2004). 10. Council for Excellence in Government, The New E-Government Equation: Ease, Engagement, Privacy & Protection 19 (April 2003), http://www.excelgov.org/usermedia/images/uploads/PDFs/egovpoll2003.pdf. 11. See http://www.regulation.gov. 12. Robert C. Luskin, James S. Fishkin, and Dennis L. Plane, "Deliberative Polling and Policy Outcomes: Electric Utility Issues in Texas" (unpublished paper for the Annual Meeting of the Association for Public Policy Analysis and Management, Washington, DC, November 4–7, 1999), http://www.la.utexas.edu/research/delpol/papers/utility_paper.pdf. 13. First Online Deliberative Opinion Poll® Reveals Informed Opinions on World Problems, January 2003, available at http://www.pbs.org/newshour/btp/polls.html. 14. Treasury and General Government Appropriations Act, Fiscal Year 2001, § 515, Public Law 106-554, Stat. 114 (2000): 2763, codified at 44 U.S. Code note following § 3516. 15. Office of Management and Budget, "Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies," Federal Register 66 (September 28, 2001): 49,718.
16. See, for example, OMB Watch, NRDC Comments Threatened with Industry Data Quality Challenge, http://www.ombwatch.org/article/articleprint/1399/-1/83/. 17. The Supreme Court established the critical distinction, for FOIA purposes, between decisional and predecisional documents in NLRB v. Sears, Roebuck & Co., 421 U.S. 132 (1975). 18. 5 U.S. Code § 552(b)(5). 19. 5 U.S. Code § 552(d). 20. 839 F.2d 768 (D.C. Cir., 1988) (en banc). 21. See Jerry L. Mashaw, Richard A. Merrill, and Peter M. Shane, Administrative Law: The American Public Law System (St. Paul: Thomson West, 5th ed. 2003): 294–95. 22. Regulatory Planning and Review, Executive Order no. 12,866, § 6(b)(4)(D), Federal Register 58 (September 30, 1993): 51,735. 23. See http://www.whitehouse.gov/omb/oira/. 24. See http://www.whitehouse.gov/omb/oira/. 25. Congressional Management Foundation, Congress Online: Assessing and Improving Capitol Hill Web Sites (Washington, DC, 2002), available at http://www.congressonlineproject.org/congressonline2002.pdf. 26. Id. at iv. 27. See http://www.thomas.loc.gov. There are two potential treasure troves of policy-relevant information produced at the behest of Congress, neither of which is available through Thomas—reports of the General Accounting Office and reports of the Congressional Research Service. Web surfers who know of the GPO and its role will find access to reports through the GPO Web site (although its search engine leaves much to be desired). As of May 2003, the overwhelming majority of CRS reports are generally not available online at all. 28. See http://www.vote-smart.org. 29. 5 U.S. Code § 552(a)(4)(B). 30. 5 U.S. Code § 552(a)(4)(F). 31. 5 U.S. Code § 552(e)(5). 32. Federal Election Commission v. Akins, 524 U.S. 11 (1998). 33. 5 U.S. Code § 552(e)(5). 34. See http://www.fcc.gov/cgb/ecfs/. 35. See generally Gary D. Bass and Sean Moulton, The Bush Administration's Secrecy Policy: A Call to Action to Protect Democratic Values (October 2002), http://www.ombwatch.org/rtk/secrecy.pdf; Reporters Committee for Freedom of the Press, Homefront Confidential: How the
War on Terrorism Affects Access to Information and the Public’s Right to Know (3d ed. 2003), http://www.rcfp.org/homefrontconfidential/. 36. Hamdi v. Rumsfeld, 316 F.3d 450 (4th Cir. 2003). 37. Public Law 95-591, Stat. 92 (1978): 2523. 38. Further Implementation of the Presidential Records Act, Executive Order no. 13,233, § 10, Federal Register 66 (November 1, 2001): 56,025, 56,028.
INDEX
AAU (Association of American Universities), 36 Academic and research exchanges. See Research and development (R&D); Science and technology; Universities and colleges Administrative decisionmaking process, access to, 123–25 Administrative records: functional separation policy applied to, 82; preservation of electronic records, 130; as source of data in government databases, 76 Agency for Toxic Substances and Disease Registry, 13 AIDS and vulnerability to water contamination, 52 Airline passenger data, 98 Alberts, Bruce, 21 Alert color system. See Homeland Security Advisory System American Statistical Association/National Science Foundation (NSF) fellowship programs and data access policies, 84 American Vacuum Society 1980 conference, 24 Anonymity, 7, 81 Anthrax crisis, 45, 58 Anticipatory efforts of risk communication, 66
A-130 (OMB Circular), 9, 109, 111, 113–14, 115 Architectural controls for technological access to private information, 100–101 Ashcroft, John, 12, 111 Asset ownership records, 92, 93–94 Association of American Universities (AAU), 36 Australia: Cryptosporidium outbreak in water, 52; data protection laws, 95; science and technology foreign graduate students in, 29 Autonomy of individual, 81 Bank transactions recorded by ISPs, 76 Barnhart, Jo Anne B., 80 Behavioral Risk Factor Surveillance System, 76 Bioterrorism preparedness, 63, 65 Bolkestein, Frits, 99 Bork, Robert, 124 Brandeis, Louis, 21 Brewer, Eric, 110 Brin, David, 81 Britain, and science and technology research, 29 Building permits and collection of personal data, 76 Bureau of Census, 84. See also Census data
Bureau of Labor Statistics, 76, 84 Bureau of Transportation Statistics, 13 Bush (George) administration and secrecy, 124 Bush (George W.) administration: Korean episode, 11–12; presidential records, policy on disclosure of, 18–19; regulatory review process under, 124, 129; secrecy of, 2, 14–16, 19–22 Bush, Vannevar, 23 Canada: data protection laws, 95; science and technology foreign graduate students in, 29 Canadian Standards Association’s conceptualization of risk management, 41–43 CAPPS (Computer Assisted Passenger Pre-Screening), 98 Card, Andrew, 5, 13 Census data, 72, 75, 78, 83–84 Census Research Data centers, 83, 84 Centers for Disease Control and Prevention (CDC), 63, 76 CEOs, foreign-born, 26 Cheney, Dick, 19 China and secrecy surrounding SARS outbreak, 32 CIPSEA. See Confidential Information Protection and Statistical Efficiency Act of 2002 Citizen involvement: Deliberative Polls providing, 120–21; in priority setting, 44–45; understanding needed for, 123. See also Right to know and environmental information Classified documents, 13, 14
Clift, Steve, 7 Clinton administration: declassification of historical records by, 14; disclosure of environmental dangers and public safety information by, 20–21; OMB regulatory review process under, 124, 129 Cold War, 14, 24 Color code system for terror alerts. See Homeland Security Advisory System Committee on Setting Priorities for the National Institutes of Health, 44 Computer Assisted Passenger Pre-Screening (CAPPS), 98 Confidential Information Protection and Statistical Efficiency Act of 2002 (CIPSEA), 75, 82 Confidentiality. See Privacy vs. government databases Congress and accountability, 125–27 Consent. See Informed consent Constitutional empowerment, 80–81 Convention for the Protection of Human Rights and Fundamental Freedoms, 94 Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, 91, 94–95 Cookies and Web surfing, 91–92 Coping with risk. See Risk management Corson report, 28, 37 Costs of information dissemination, 6, 110 Council of Europe Convention for the Protection of Individuals
with Regard to Automatic Processing of Personal Data, 91, 94–95, 96 Council on Government Relations, 33 Court filings and decisions, 92, 94 Credit card transactions recorded by ISPs, 76 “Crisis mentality” and risk communication, 62–63 Cryptosporidium, 50–53 CSID data process, 72, 85, 86–87 Customs Service, 98 DARPA (Defense Advanced Research Projects Agency), 85 Data Capture, data Storage, data Integration, and data Dissemination (CSID data process), 72, 86–87 Data mining techniques, 86 Data Quality Act, 7, 122 Decisionmaking: administrative process, access to, 123–25; qualitative vs. quantitative information for, 48, 54 Defense Advanced Research Projects Agency (DARPA), 85 Defense, Department of. See Total Information Awareness (TIA) program Deliberative Polls, 120–21 Democratic accountability, 7–9, 80 Department of. See name of agency (e.g., Justice Department) Digital Government Program, 80 Directive. See European Directive 95/46/EC “Dirty bombs,” warning of, 65 Disclosure: advantages of openness, 21; limitation (DL) methods, 78, 83, 87; risks of, 39–56
Domestic surveillance, 45–46, 54, 99–100 Driver’s Privacy Protection Act, 92 Duggin, Ervin, 108 Duncan, Jabine, and de Wolf, 71, 72 Dunlop, John, 108 D-Zero accelerator project, 28 Earnings records collected by government, 76 E-FOIA Amendments, 117 E-government community networks and risk communication, 68 Eisenhower, Dwight D., 22 Electronic Freedom of Information Act Amendments (E-FOIA Amendments), 117 Electronic Privacy Information Center (EPIC), 85 Emergency services directors and risk communication, 61–62 “Empowered citizen model” of risk communication, 60–61 Encryption, 86 Energy, Department of (DOE), 14, 28 Energy task force and Cheney refusal to reveal information about, 19 Engineering. See Science and technology Environmental health emergency systems, 50 Environmental Protection Agency (EPA): disclosure of environmental dangers and public safety information by, 21; envirofacts database of, 13; and public right to know, 17, 111–12; risk management
policy and plans of, 13–14, 43; secrecy of, 12; toxic chemical facilities, regulation of, 17–18 Ethics, 73, 79–80, 87 European Commission and Safe Harbor requirements, 98 European Convention for the Protection of Human Rights and Fundamental Freedoms, 94 European Directive 95/46/EC, 91, 95–96, 97 European privacy laws, 8, 91, 95 European Union’s Article 29 Working Party, 99 Evaluation of communications and citizens for risk management, 54–56 “Every-citizen” usability of computer technology, 87 “Evidence-based messages” for risk communication, 66, 67 Executive Order 13233, 129 Exemption 5 under FOIA, 123–24, 125 Export control laws, 25 Fairness rules in data use and dissemination, 94–96, 97 False positives, effect of, 64 FBI disclosure of environmental dangers and public safety information, 21 Federal Aviation Administration, 13 Federal Communications Commission (FCC) online rulemaking site, 128–29 Federal Depository Libraries, 14 FedStats, 81, 110 Feedback and risk communication, 63–64; and intelligent risk communication support system, 67
FirstGov.gov, 110 First responders and smallpox vaccination, 45, 59 Fishkin, James, 120–21 FOIA. See Freedom of Information Act Foreign students and scholars, 26–31 Freedom of Information Act (FOIA): attorney's experiences with, 106–108; and burden on government, 109, 113–14; E-FOIA Amendments, 117; exemptions under, 5, 12, 123–24, 125; Justice Department approaches to, 111; and national security, 16; need for new institutions under, 9–10, 127–30; purpose of, 105 Freedom of Information Office, recommendation for, 9, 10 Functional separation, 82 Germany, 26 Global Outbreak Alert and Response Network coordinating SARS research, 32 Gorbachev, Mikhail, 25 Gore, Al, 21 Government databases, 71–87; functional separation as policy for, 82; and information ethics, 73, 79–80, 87; mechanisms for managing, 82–84; personal data in, 74–77; and R-U (disclosure risk R-data utility U) confidentiality map framework, 77–79, 87. See also Privacy vs. government databases; specific government agencies Government in the Sunshine Act. See Sunshine Act
Government officials and risk communication, 61–62 Gramm-Leach-Bliley Act, 91 Hart-Teeter survey on government initiatives, 119 Health and Human Services, Department of, 12, 124 Healthcare workers and smallpox vaccination, 45, 58–59 Health data collected by government, 76 Health Insurance Portability and Accountability Act (HIPAA), 91 Heymann, David, 32 Homeland Security, Department of: citizen preparedness recommendations of, 59; communication system recommended for, 68–69; information technology system to assist in recognition of threats, 64; science and technology policies recommended for, 36–37; secrecy of, 12 Homeland Security Act, 112 Homeland Security Advisory System, 59, 64 Homeland Security Information, 112 ICTs (Information and communications technologies). See New technology Immigrants, treatment of, 46 Immigration problems for scientists. See Visa delays for foreign scientists Improving Risk Communication (National Academy of Sciences 1989), 43–44 Incident Command System, 63 Indexing to congressional voting records, 127
Individual autonomy, 81 Influence diagram, 52 Information and communications technologies (ICTs). See New technology Information ethics, 73, 79–80, 87 Information justice, 82 Information overload and searches of government data, 111 Informed consent: for medical procedures, 48–50; for smallpox vaccination, 45 Institutional approaches to risk management, 40, 41–46 Intelligent risk communication support system, 67–69 Internal Revenue Service and tax information, 14, 74 International approaches to data privacy, 8, 89–101; fairness rules in data use and dissemination, 94–96, 97; institutionalized checks and balances, 96–98; public vs. personal information, 90–94; recommendations and lessons from, 99–101; tension with U.S. approach, 8, 98–99 International conferences and collaborations in science and technology, 27–28 Internet: access to government information services, 76; government-supported applications seeking citizen views, 119; log files of activity on, 96; predicted effect of, 1; surfing patterns, 91–92 Interviews and data collection, 67 Isolation imposed by current U.S. policies, 28–31 Iyengar, Shanto, 121 Jabine, Thomas, 83
Jackson, Robert, 4 Japan, trade with, 25 Jefferson, Thomas, 72 Justice Department: failure to disclose names of detained Muslim men, 12–13; and FOIA compliance, 12, 111, 127, 128; and FOIA Exemption 5 for nondisclosure, 125; proposed terrorism legislation and toxic chemical reporting, 18; USA PATRIOT Act implementation by, 12 Kessler, Gladys, 12 Keyworth, George A., II, 28 Latin American data protection laws, 95 Law enforcement officers and risk communication, 61–62 Library of Congress’s Thomas Web site, 126 Licensing data, 76, 92, 93 Madison, James, 10 Marburger, John, 36 Media: and intelligent risk communication support system, 67; training for risk communication, 61–62 “Medical model” of risk communication, 60 Metadata and XML standards, 86 Microsoft, 99 Mindset barriers to risk communication, 62–63 Mistrust of government, 47; and risk communication, 56; and smallpox vaccinations, 45; and TIA program, 90 MIT, 24, 32–33 Moose, Charles, 62
Motor Vehicles, Departments of, and collection of personal data for licensing purposes, 76 Moynihan, Daniel Patrick, 16 National Academy of Sciences on public disclosure of risk analyses: “red book” (1983), 43; report (1950), 23–24; report (1989), 43–44; report (1990), 44; report (1996), 44; self-censorship, 14 National Archivist and presidential records, 129 National Center for Education Statistics (NCES) Act of 1994, 85 National Center for Health Statistics, 76 National Imagery and Mapping Agency, 13 National Institutes of Health: Citizens’ Advisory Panel, 44; Committee on Setting Priorities for, 44; foreign nationals at, 26–27 National Longitudinal Surveys of Young Women, 76 National Science and Technology Council warning on technical workforce needs, 27 National Science Foundation (NSF), 25, 26, 76, 84 National security: checks and balances for, 100; and information access, 2–5; post–September 11 thinking about, 3, 84, 88, 112. See also specific government programs and agencies National Security Directive 189 (NSDD-189), 33 National Security Entry-Exit Registration System (NSEERS), 28
NCES (National Center for Education Statistics) Act of 1994, 85 Network infrastructure, 87 New institutions needed, 9–10, 127–30 New technology: architectural controls for access to private information, 100–101; and broadening of public information rights, 103–15; changes in working methods due to, 103–04; database information and enhanced capabilities, 72; and Deliberative Polls, 120–21; fostering citizen dialogue, 6–7, 118–23, 128–30; and intelligent risk communication support system, 67–69; international tensions over compliance with fair information practice, 99; responses to changes in, 73, 86–87 Nixon, Richard, 19 Nobel prize laureates from U.S. who are foreign-born, 26 Norby, Hal, 81 Normal risks: evaluation of communications and citizens for, 54–55; informational approaches to, 46–53; institutional approaches to, 41–45 North Korea, 11–12 NSDD-189 (National Security Directive 189), 33 NSEERS (National Security Entry-Exit Registration System), 28 NSF. See National Science Foundation (NSF) Nuclear Regulatory Commission's Web site, 13 Office of Information and Regulatory Affairs (OIRA), 124
Office of Management and Budget (OMB): Circular A-130, 9, 109, 111, 113–14, 115; policy and procedural guidelines for government data, 122; regulatory review process for, 124 Office of Privacy Protection proposed, 9 Organization for Economic Cooperation and Development (OECD) Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, 91, 94–95 “Panic model” of risk communication, 59–60 Passenger profiling, 98 Paternalism, 46 Patients and informed consent. See Informed consent PCBs, 61 Personal data in government databases, 74–77; administrative records as source of data, 76; subpoena for access to, 74; surveys to collect data, 75–76; systems of records distinguished from, 74–75; users of, 74; voluntary vs. mandated, 74; Web site links to searchable public record databases, 77 Personal information, defined, 90–94 Physicians, standards for informed consent, 48 Poindexter, John, 85, 90 Political philosophy and citizen relations, 47 Polychlorinated biphenyls (PCBs), 61 Postal workers and anthrax, 45, 58
Preservation of electronic records, 130 Presidential/Congressional Commission on Risk, 43 Presidential Records Act, 129 Presidential records and disclosure, 18–19, 129 President’s Council on Competitiveness, 124 Prewitt, Kenneth, 80 Priority setting, citizen involvement in, 44–45 Privacy Act, 105–06 Privacy Foundation, 84 Privacy vs. government databases, 5–6, 71–87; and changes in technology, 73, 85–86, 87; functional separation as recommended policy for, 81–82; guiding principles for, 73, 79–82; and information ethics, 73, 79–80, 87; international approaches to, 89–101; managing tension of, 7–8, 73; mechanisms for managing, 82–84; personal data in government databases, 74–77; and R-U (disclosure risk Rdata utility U) confidentiality map framework, 78–79, 87; and societal reality, 73, 84–85, 87; tradeoff mentality, arguments against, 10, 77–79; Web site links to searchable public record databases, 77 Private Lives and Public Policies (Duncan et al.), 71, 79, 83 Project Vote-Smart (PVS), 126–27 Publication of research results, 14, 24, 33 Public information: defined, 90–94; fostering citizen dialogue around, 6–7, 118–23; re-evaluation of U.S. policy
called for, 100. See also Government databases Public officials and risk communication, 61–62 PVS (Project Vote-Smart), 126–27 Qualitative vs. quantitative information for decisionmaking, 48, 54 Quarantines and online surveillance cameras, 99 Quayle, Dan, 124 Reagan administration, 21, 33 “Red book” (National Academy of Sciences), 43, 44 regulation.gov, 120, 129 Reno, Janet, 111 Research and development (R&D): federal funding for, 25–26; openness in, 31–34; publication of results, 14, 24, 33; U.S. compared to foreign, 26 Restricted data and restricted access, 83, 87 Ridge, Tom: and chemical manufacturing industry, 17, 18; and color code system of terror alerts, 64, 65; on SEVIS implementation, 36. See also Homeland Security, Department of; Homeland Security Advisory System Right to know and environmental information, 17, 111–12 Risk, definitions of, 44 Risk assessment, 64–65; and intelligent risk communication support system, 67 Risk communication, 3–4, 57–69; anticipatory efforts, 66; current training for, 61–62; difficulties in past terrorist incidents, 58–59; evaluation of,
54–56; “evidence-based messages” for, 66, 67; intelligent support system for, 67–69; and lack of public feedback, 63–64; low priority accorded to, 63; mindset barriers to, 62–63; recommendations and solutions for, 7, 65–69; schools of thought on, 59–61; teams for, 66–67 Risk disclosure, 39–56 Risk management: and collaboration, 40; evaluation of communications and citizens for, 54–56; individual strategies to cope with risk, 39–40; informational approaches to, 46–53; institutional approaches to, 40, 41–46; for terrorism, 45–46, 53–54. See also Normal risks Risks of secrecy, 16–18 R-U (disclosure risk R-data utility U) confidentiality map framework, 78–79, 87 Rulemaking, citizen ability to comment on, 120, 128–29 Russia/Soviet Union: brain drain from, 29; demise of USSR, 25; science and technology research environment in, 4, 26, 29; Space Race with, 24. See also Cold War Safe Harbor requirements and U.S.-EU relations, 98 SARS epidemic, 32 Satellite missions, declassification of information from, 21 Scherlis, William, 86 Science and Judgment in Risk Assessment (National Academy of Sciences), 44 Science and technology, 21–37;
advantages of openness in, 4, 21–22, 33–35; background, 23–25; censoring of published articles on, 14; effect of restricted information access on, 4, 23–37; foreign students and scholars as contributors to, 26–31; international conferences and collaborations in, 27–28; isolation imposed by current U.S. policies, 28–31; openness in research, 31–34; recommendations for, 35–37; restrictive research contracts in, 24, 33; shortage of graduate students in, 27, 29; U.S. leadership in, 25–26, 34. See also New technology Science and Technology Policy Colloquium, 36 Scientific Communication and National Security (Corson report), 28, 37 Secrecy of government, 2, 3–4, 12; removal of Web site information prior to September 11, 2001, 18. See also Freedom of Information Act (FOIA); National security; Science and technology Sensenbrenner, James, 12 September 11, 2001 attacks. See National security Severe Acute Respiratory Syndrome (SARS), 32 SEVIS. See Student and Exchange Visitor Information System Smallpox vaccination: campaign and risk disclosure for, 53–54, 58–59; for healthcare workers, 45 Smart and Secure Tradelanes (SST) initiatives, 85
Smith, Richard, 85 Smoking, data on, 76 Sniper in Washington, DC area, 62 Social Security Administration: earnings records collected by, 76; Performance and Accountability Report of, 80 Societal reality and privacy issues, 73, 84–85, 87 Soviet Union. See Russia/Soviet Union Space race, 24 SST (Smart and Secure Tradelanes) initiatives, 86 Statistical data: federal, 81, 110; functional separation policy applied to, 82. See also Census data Student and Exchange Visitor Information System (SEVIS), 31, 36 Subpoenas for government database information, 74 Sunset provisions on surveillance programs, 100 Sunshine Act, 105, 108–10 Surveillance and quarantines, 99. See also Domestic surveillance Sweden, 93 Synthetic interview technology and risk communication, 67 Systems of records, defined, 75 Tax returns, 76 Teams for risk communication, 66–67 Technology. See New technology; Science and technology Technology Alert List (TAL), 31 Telecommunications Act of 1996, 91 Terror alert color system. See Homeland Security Advisory System
Terrorism and security vulnerability, 2. See also National security; Terror risk Terrorist Threat Integration Center, 85 Terror risk: assessment of, 64–65; evaluation of communications and citizens for, 56; informational approaches to, 53–54; institutional approaches to, 45–46; presidential powers extended due to, 5 Thomas Web site (Library of Congress), 126 Thompson, Tommy, 58, 62 Topcoding, 78 Total Information Awareness (TIA) program, 85, 89–90 Toward Environmental Justice (Institute of Medicine 1998), 44–45 Toxic chemicals, safety and information concerns, 17–18 Trade and technology changes, 86 Training for risk communication, 61–62 Transportation, Department of, 13 Travel restrictions: during Cold War, 24. See also Visa delays for foreign scientists Treasury and General Government Appropriations Act. See Data Quality Act Truman, Harry, 4 UK Cabinet Office, 43 UK Health and Safety Executive, 43 UK Parliamentary Office of Science and Technology, 43 UK Royal Commission on Environmental Pollution, 43 Understanding of data, need for, 123
Understanding Risk (National Academy of Sciences 1996), 44 Universities and colleges: foreign-born faculty, scholars, and students, 26; international exchange of information, restrictions on, 4, 24, 25; openness in research in, 31–34; publication of research results, 14, 24, 33; R&D and federal funding in, 25–26. See also Science and technology USA PATRIOT Act, 2, 12, 76, 85 USA PATRIOT Act II, 18 U.S. Association for Computing Machinery, 85 U.S. Customs Service, 98 U.S.-EU Safe Harbor Agreement, 98 USSR. See Russia/Soviet Union Vaccinia, 45 Values hierarchy, 99–100 Vest, Charles, 33 Visa delays for foreign scientists, 27, 28, 29–31; recommendations to remedy, 36 Walker, David, 19 Warning system: for environmental health emergency, 50–53; for Homeland Security
Advisory System, 59. See also Risk communication Water: environmental health warning system of contamination, 50–53; geological supplies, information about, 14 Web sites: government removal of information prior to September 11, 2001, 18; Library of Congress’s Thomas Web site, 126; links to searchable public record databases, 77; Nuclear Regulatory Commission’s, 13; Project Vote-Smart, 126–27; reexamination of public information on, 13. See also Internet Whitman, Christine, 18 Wolfe v. Department of Health and Human Services (1988), 124 Women, data on, 76 World Health Organization (WHO) Global Outbreak Alert and Response Network coordinating SARS research, 32 www.FirstGov.gov, 110 www.regulation.gov, 120, 129 XML “meta standard,” 86
ABOUT THE CONTRIBUTORS
GEORGE T. DUNCAN is professor of statistics in the H. John Heinz III School of Public Policy and Management at Carnegie Mellon University, where his research centers on information technology and social accountability. He has published more than seventy papers in such journals as Statistical Science, Management Science, Journal of the American Statistical Association, Econometrica, and Psychometrika. He chaired the panel on Confidentiality and Data Access of the National Academy of Sciences, resulting in the book Private Lives and Public Policies. He is a fellow of the American Statistical Association and the American Association for the Advancement of Science and an elected member of the International Statistical Institute.
BARUCH FISCHHOFF is Howard Heinz University Professor in the Department of Social and Decision Sciences and the Department of Engineering and Public Policy at Carnegie Mellon University. He is president-elect of the Society for Risk Analysis and past president of the Society for Judgment and Decision Making. He is a member of the Department of Homeland Security Science and Technology Advisory Committee, the Environmental Protection Agency Scientific Advisory Board, and the Institute of Medicine of the National Academy of Sciences.
ALICE P. GAST is the vice president for research and associate provost at the Massachusetts Institute of Technology, as well as the Robert T. Haslam Professor of Chemical Engineering. Prior to beginning at MIT in November 2001, she was a professor of chemical engineering at Stanford University, where she taught for sixteen years. She is a member of, among others, the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the American Chemical Society, the American Institute of Chemical Engineers, and the American Physical Society.
SALLY KATZEN is a visiting professor at the University of Michigan Law School, and also worked as senior policy adviser for Joe Lieberman for President. She served almost eight years in the Clinton administration, first as administrator of the Office of Information and Regulatory Affairs in the Office of Management and Budget (OMB), then deputy assistant to the president for economic policy, and then deputy director for management at OMB. Since leaving government in January 2001, she has taught at Smith College, Johns Hopkins University, and the University of Pennsylvania Law School.
RICHARD C. LEONE is president of The Century Foundation, a public policy research foundation. His analytical and opinion pieces have appeared in numerous publications, and he is coeditor of Social Security: Beyond the Basics and The War on Our Freedoms. He was formerly chairman of the Port Authority of New York and New Jersey and state treasurer of New Jersey. He also was president of the New York Mercantile Exchange and a managing director at Dillon Read and Co., an investment banking firm. He is a director of several public companies, The American Prospect, and other institutions. He is a member of the Council on Foreign Relations and the National Academy of Social Insurance. He earned a Ph.D. and was a member of the faculty at Princeton University.
JOHN PODESTA serves as president and CEO of the Center for American Progress, a progressive think tank in Washington, D.C., and is a visiting professor of law on the faculty of the Georgetown University Law Center. From October 1998 to January 2001, he served as chief of staff to President Clinton. Earlier in the Clinton administration, he was a senior policy advisor to the president on government information, privacy, telecommunications security, and regulatory policy. He is the author of a book and several articles in this area, and has lectured extensively on privacy and technology policy.
JOEL R. REIDENBERG is professor of law at Fordham University School of Law. His major publications include two coauthored books on data privacy and a wide range of prominent law journal articles as well as book chapters in the United States and Europe on both privacy and Internet regulatory issues. He has testified before Congress and has served as an expert adviser to the Federal Trade Commission, state governments, and the European Commission. He is a former
chair of the Section on Law and Computers of the Association of American Law Schools and is also a former chair of the association's Section on Defamation and Privacy.
PETER M. SHANE is the Joseph S. Platt-Porter, Wright, Morris, and Arthur Professor of Law at Ohio State University and director of the Center for Law, Policy, and Social Science at the university's Moritz College of Law. Formerly dean of the University of Pittsburgh School of Law, he was highlighted in 1998 as one of forty "Young Leaders of the Academy" by Change: The Magazine of Higher Learning. As a faculty member at Carnegie Mellon University's Heinz School, he founded, in 2001, the Institute for the Study of Information Technology and Society (InSITeS), whose advisory board he now chairs. He has coauthored leading casebooks on administrative law and separation of powers law, most recently having edited and contributed to Online Democracy: The Prospects for Political Renewal Through the Internet (Routledge, 2004).
VICTOR W. WEEDN is a principal research scientist at Carnegie Mellon University with appointments in the schools of engineering, science, and public policy and management. He is best known for forensic DNA typing and creating the military's DNA identification program. In that role he helped to develop the PCR-on-a-chip technology that underlies many of the new biosensors. He is a former codirector of the University of Pittsburgh/Carnegie Mellon University BioMedical Security Institute and currently serves as the chair of the MMRS13 Risk Communication committee of Southwest Pennsylvania.