Humble Analysis
Humble Analysis
The Practice of Joint Fact-Finding

Clinton J. Andrews
PRAEGER
Westport, Connecticut London
Library of Congress Cataloging-in-Publication Data

Andrews, Clinton J.
Humble analysis : the practice of joint fact-finding / Clinton J. Andrews.
p. cm.
Includes bibliographical references and index.
ISBN 0-275-97588-6 (alk. paper)
1. Government consultants. 2. Policy sciences. 3. Political planning. 4. Knowledge, Sociology of. 5. Knowledge, Theory of. 6. Communication in small groups. I. Title.
JF1525.C6A53 2002
352.3'73—dc21 2001058035

British Library Cataloguing in Publication Data is available.

Copyright © 2002 by Clinton J. Andrews

All rights reserved. No portion of this book may be reproduced, by any process or technique, without the express written consent of the publisher.

Library of Congress Catalog Card Number: 2001058035
ISBN: 0-275-97588-6

First published in 2002

Praeger Publishers, 88 Post Road West, Westport, CT 06881
An imprint of Greenwood Publishing Group, Inc.
www.praeger.com

Printed in the United States of America
The paper used in this book complies with the Permanent Paper Standard issued by the National Information Standards Organization (Z39.48-1984).

10 9 8 7 6 5 4 3 2 1
Contents

Illustrations
Preface

Part I. Motivations and Concepts
1. Joint Fact-Finding
2. Analytical Angst
3. Fundam…

Part II. Serving Multiple Decision Makers
4. Assessing Technology for the U.S. Congress
5. Institutional Factors Affecting Analysts

Part III. Mixed Participation in Analysis
6. Comparing Environmental Risks
7. USEPA's Unfinished Business
8. Washington's Environment 2010
9. California's Toward the 21st Century
10. Minnesota's Risk-Based Environme…
11. Procedural Factors Affecting Analysts

Part IV. Multiple Decision Makers and Mixed Participation in Analysis
12. Analyzing New England's Electricity Alternatives
13. Analysis in Context
14. Eva…
15. Methodological Factors Affecting Analysts

Part V. The Practice of Joint Fact-Finding
16. Lessons Learned
17. Elements of Successful Joint Fact-Finding

Select Bibliography
Index
Illustrations

TABLES
1.1 Typology of Communicative Contexts for Analysts
6.1 U.S. and State-Level Comparative Risk Projects
7.1 USEPA Comparative Risk Project Ranking
8.1 Washington State Comparative Risk Project Ranking
9.1 California Comparative Risk Project Ranking
10.1 Minnesota Comparative Risk Project Ranking
13.1 Procedural Framework for the New England Project
15.1 Characteristics of Project Evaluation Techniques

FIGURES
1.1 Communication Flows in Conventional Information-Eliciting Procedures
1.2 Communication Flows in Conventional Dispute Resolution Procedures
1.3 Communication Flows in Joint Fact-Finding Procedures
2.1 Stages in Decision Making
13.1 The New England Project's Open Planning Approach
15.1 Flowchart for Tracking Normative Content of Analysis
15.2 Sampling the Future
15.3 New England Project Scenario Analysis Process
15.4 Range of Sulfur Dioxide Emissions with and without Repowering (1,080 Scenarios for Each Case)
Preface

The summons to jury duty came in April, and by August I had run out of excuses. I went down to the county courthouse, passed through the metal detectors, and signed in at the jury selection room. Hundreds of us from all walks of life waited for assignments. The bailiff gave us general instructions, and then started a video in which aspiring actors played out a mock trial for our benefit. After the video, the waiting began, and this was quite trying for some. A young doctor fumed and plotted: Didn't they realize how much his time was worth? He had three surgeries scheduled for that afternoon. If he got called, he'd say he was biased just to get out of serving. I was content to work through my over-loaded briefcase, and my elderly neighbor did crossword puzzles.

Finally, the call came, and two dozen of us trooped outside under the watchful eye of a sergeant-at-arms for a walk from the old to the new courthouse. We looked, bemused, at a group of toddlers who were being herded in the opposite direction towards the day care center. The sergeant got us safely to the new courthouse, counted us off, and then told us to wait outside the courtroom because the attorneys were filing motions, an activity that we were not allowed to witness.

Eventually we were allowed into the courtroom. There sat a be-robed judge, a plaintiff, a defendant, two attorneys, and a huge yellow machine that I took to be a cement mixer. The judge welcomed us, explained that this was a civil case involving product liability, and began the jury selection process. He needed six jurors and two alternates. It took a day and a half to select this group, and he had to dip deeply into the pool, dismissing over forty prospects before settling on myself and seven others. Some were dismissed for having heart conditions, bad backs, borderline mental abilities, or personal economic hardship. Most were dismissed through challenges by one or another attorney. Unsuitable prospects included knowledgeable people: a police officer, a doctor, an attorney, a financial analyst, and an engineer. Also dismissed were several grandmothers, young moms, and
anyone who had ever worked for a construction company. I presume that I survived the cut only because they thought I was a harmless, irrelevant academic; they apparently did not realize that I also carry a professional engineer's license. I would have told them if they had asked. So I felt like a "ringer" on this jury. My peers were a schoolteacher, a chef, and a few office drones and retirees.

Once picked, we received a long lecture from the judge. He "instructed us in the law," telling us that we were to be the judges of the facts but that he would remain the judge of the law; that is, of the implications of the facts. Since it was a civil case, the decision rule required a simple majority finding rather than unanimity. The tantalizing clues dropped earlier about the subject of this trial were confirmed: a young laborer had caught his hand inside a mortar mixer while building a swimming pool; he was suing the manufacturer of the mixer for faulty design. Was this accident the manufacturer's fault or not? If we decided yes, then some other jury would have to answer the contingent question of dollar damages.

Was our job a straightforward evaluation of the "facts"? Hardly. This trial took surprising twists and turns, featuring dueling experts, the wrath of a woman scorned, and more. It dramatized for me the concept of the "social construction of knowledge." It confirmed further that much of what passes for knowledge is really quite tentative, and that most of us therefore decide whom, and what, to believe based on secondary cues such as credentials, consistent behavior, and personal bearing. Now back to the story.

JUDGES OF THE FACTS

"All rise. Jurors are entering the courtroom." Each day in court began this way, and I grew to like the fanfare. My spouse used this line on me for weeks afterward, substituting "dining room," "bedroom," or "bathroom" as appropriate. But in fact it signified the magnitude of our responsibility as judges of the facts. There was a great irony embedded in this ritual, however, because we, as jurors, had to wait for information to come to us; we could not seek it out. The attorneys decided which information to present to us, and its order. If one party objected to any datum, we might have to ignore it. We were not even permitted to take notes or talk among ourselves for fear of prejudicing our subsequent deliberations. The judge insisted that there be only one record of the trial, the official record. All of the deliberations, our determinations of the facts, would be based on that transcript. Society's legal institutions truly constrained the construction of our knowledge about the case.

The plaintiff's attorney sketched his case, his version of the facts, as follows. Young Bruce was a rough-and-tumble guy. He came from a no-account family and never finished high school. He worked hard and played hard, and his employer was Jones Pools. His job was to help build swimming pools for rich people. In particular, he mixed cement and vermiculite into a slurry that other members of the work crew wheel-barrowed off to where the pool bottom was being formed. He did the mixing in an Anderson mortar mixer, which consisted
of a steel drum within which whirring steel paddles powered by a gasoline engine churned together whatever was added through a grate on top. Unfortunately, this grate was removable (just pull out a couple of cotter pins), and on the day in question, the grate had gone missing. So when young Bruce picked up a big bag of vermiculite and started to pour its contents into the mixer, the bag slipped into the drum and sucked his arm in with it. Bruce's arm was badly damaged by the crushing force of the steel paddles against the side of the steel drum. According to an expert mechanical engineer, the design of the mixer was defective because it could be operated without the protective grate in place. The grate should have been welded on to prevent easy removal, and a warning label intelligible even to Bruce should have been on the machine. It was a straightforward, plausible story.

Then the defendant's lawyer introduced us to her version of the facts. "Curiouser and curiouser, as Alice remarked about Wonderland, is how you'll feel about the plaintiff's story as this trial progresses." Young Bruce, who was a long-haired motorcycle hood until his haircut on the day of the trial, was a liar who also cheated on his live-in girlfriend, the mother of his child. He had done that pool job for five years in a row, and was an expert at running that mixer. He knew what the warning label said on the side of the machine, and he knew better than to use the mixer without its protective grate. The grate was designed to be removable so that it could be cleaned, a natural requirement given that concrete mix was constantly being poured through it and would therefore build up. All competing mixer designs had a similar arrangement, and never in the twenty-year history of this design had a product safety complaint been lodged with the manufacturer. An expert mechanical engineer reviewed both the machine and the circumstances of the accident and confirmed that the product was safe. It was not reasonably foreseeable that young Bruce would use the mixer without its grate in place. In fact, to do so would be stupid, because it would make Bruce's job harder. How? The grate incorporated an integral bag-breaker, sort of a knife-edge, to slice open the bag of concrete when it was hefted onto the grate. In the absence of the grate, one would have to rip open the bag of concrete, and then lift it up and pour the contents into the mixing drum. Tossing it against the grate with its built-in bag-breaker was so much easier. Here was another plausible story.

Following the opening statements (which were opinions, not facts, the judge reminded us), witnesses were called. One witness, the plaintiff's brother, didn't see the accident happen but saw its immediate aftermath, and he confirmed that the grate was missing from the job site that day. Another witness, a co-worker who had once spent the weekend in jail because of a prank pulled by Bruce, said that the grate was usually on the mixer, although he didn't know where it was on the day of the accident. Young Bruce himself took the stand, and testified that he had never seen the grate during the five years he had worked for Jones Pools, so how could he have known about the wonderful bag-breaker feature? So the circumstances of the accident were a little unclear.
LISTENING TO EXPERTS

Next the expert witnesses took the stand. The plaintiff's lawyer constructed his expert's credentials for the record by asking the man to recite his resume: a B.S. in mechanical engineering from what I considered to be a middle-tier school, twenty years of relevant work experience, and expert testimony in fifty previous product liability cases (always for the plaintiff, it was noted during cross-examination). The engineer produced photographs showing the mixer with bits of string and plastic bag wrapped around the paddles, a mockup of a grate with welds instead of cotter pins, and most important, a mound of safety engineering texts showing that accepted engineering practice was to "design for dummies" by making protection an integral, rather than detachable, feature of any potentially dangerous machine. The defendant's lawyer worked to deconstruct the credibility of the plaintiff's expert witness, using mostly irrelevant arguments (to my ears). These ranged from "how do we know you took pictures of the right mixer?" to "the American Society of Safety Engineers is not a government agency; it has no force of law."

Then the defendant's expert was called to the stand. Again, the construction of expertise: a B.S. in mechanical engineering from what I considered to be a good school, followed by a night-school M.S. and fifteen years of relevant experience, including dozens of product liability cases (sometimes for the plaintiff, other times for the defendant). This expert witness stressed the good safety record of the mixer design, the benefits of using the machine with the grate and its integral bag-breaker in place, the sound logic from a maintenance point of view in using cotter pins to allow removal of the grate for cleaning, and the lack of anything bad to be said about cotter pins anywhere in the safety engineering literature. The plaintiff's attorney took similarly ineffectual potshots at this witness's credibility.

We, the jury, were left with a jumble of facts that seemed to boil down to a slight difference in design philosophy between two modestly qualified engineers—cotter pins or not? Just as I was getting comfortable that, as a "ringer," I could resolve this conflict for my fellow jurors, a new witness blew away the whole logical edifice. The girlfriend and unmarried mother of Bruce's child took the stand. She reported that her uncle was the foreman at Jones Pools, which started to explain why that company had not been named in the lawsuit. She then described how Bruce had cheated on her, sleeping with her best friend, shortly after their baby had been born. The defendant's lawyer then played the tape recording of a phone call she made to the manufacturer immediately following that infidelity, saying "that incident with the mixer was no accident; Bruce stuck his hand in on purpose to collect some money!" Now, months later, reconciled with Bruce and on the witness stand, the girlfriend tearfully recanted her earlier statement. We didn't know which of the stories told by this sobbing young mother we should believe. The very idea that it was an accident now seemed in doubt.

There the testimony ended. We jurors were left to decide whether the accident was really an accident, and whether the product design was unsafe. The first question was unanswerable, and the second a judgment call, but we had to
provide a yes or no answer for each. After much discussion we decided that it was an accident and that the product design was safe. The plaintiff lost. Young Bruce did not collect any money.

MESSY FACTS, UNSATISFYING DECISIONS

Unanswered, and unasked, were several important questions. Had young Bruce been properly trained by his employer in the use of the mixer? Probably not. Was a bag-breaker designed for use with small, heavy bags of dense cement also useful on big, light bags of fluffy vermiculite? Probably not. Did manual laborers usually understand written operating instructions or warning labels? Probably not. From an engineer's perspective, it seemed as though the manufacturer could have done much to improve the design, and that it should have done so even if it technically was not at fault. It further seemed that the employer should have been named in the lawsuit, but for personal reasons Jones Pools was not Bruce's target. A messy, unsatisfying sort of justice was served that day.

THIS BOOK

The trial helped me to understand that society explicitly constructs factual knowledge, certifies experts, and legitimates decisions in specified ways. The trial is one of several experiences that have dramatically changed my thinking about the role of the expert in public decision-making. I used to collect degrees, professional certifications, and experiences that lent me authority, the right to speak. These experiences forced me to confront the symmetrical need to become a great listener. I became particularly interested in deliberative techniques such as joint fact-finding because they supported cooperative rather than adversarial interactions. As I learned more about these techniques from the twin perspectives of scholar and practitioner, I saw a gap in the literature; hence this book.

Briefly: Bardach (2000), Patton and Sawicki (1993), Stone (1997), and Weimer and Vining (1992) are among those who write about the practical problems of offering client-oriented advice on public decisions. They stand on the shoulders of Lasswell, Wildavsky, Wilson, and generations of economists and political scientists who fashioned the profession of policy analysis. Fisher and Ury (1981) and Susskind and colleagues (1999) provide valuable practical guidance for managing cooperative decision-making processes. Underpinning these works are important contributions to philosophy and political theory by the likes of Kant, Habermas, and Lindblom. Collectively, these writings provide both a normative rationale and detailed instructions for improving democratic deliberation.

Keeney and Raiffa (1976), Kleindorfer and colleagues (1993), and many others offer practical guidance for performing highly technical decision analysis, and group decision-making has become a focus in recent years. This literature builds on fundamental contributions to psychology, economics, and management science by scholars such as von Neumann, Simon, and Kahneman.
These writings collectively provide the rationales and instructions for performing quantitative decision support.

Finally, there is a literature on social studies of science contributed by Merton (1973), Price (1965), Brooks (1984), Jasanoff (1990), and others. This literature builds on fundamental additions to social theory by writers such as Bacon, Popper, and Kuhn. The "science wars" between postmodernists such as Latour and unrepentant modernists like Levitt have overshadowed the useful contributions of this genre to practical questions of science-in-policy. These writings collectively enrich our understanding of the scientific enterprise and its links to public decision-making.

Numerous studies draw on these four traditions to characterize theories, methods, processes, or institutions for technical analysis and public decision-making. However, excepting small gems by Ozawa (1991), Lee (1993), and a few others, there is relatively little to read if one wants to learn specifically about the practice of joint fact-finding, about the activities and perceptions of technically trained analysts working in a group decision support context. There are few published stories of professional planners, engineers, and scientists struggling to contribute information to collaborative decision-making processes. Much of what is written about planners, for example, merely derides their technocratic tendencies. This book instead takes a constructive look at evolving roles for technical analysts.

My strategy in writing this book emulates Schön's Reflective Practitioner and Forester's Deliberative Practitioner. I present and interpret stories taken from practice. Since my experience is in environmental planning and policy, that is the field of practice I write about; I hope that those from other fields can extrapolate appropriately. I rely on case studies in what inevitably becomes a personal, somewhat idiosyncratic investigation of new roles for so-called technocrats. I adopt this approach because practice does not easily succumb to generalization, since successful practice depends upon good performance on multiple fronts in a particular context. Still, I try to balance the richness of particular cases with general lessons for a broad range of practitioners. The collaborative, joint fact-finding context is not the dominant setting for technical analysis, but it is understudied relative to those of basic research, applied analysis for a single client, and analysis in an advocacy setting. Joint fact-finding is challenging, rewarding work, and I hope that this book begins to give it the focus that it deserves.

ACKNOWLEDGMENTS

I offer profound thanks for the advice and encouragement I received from early readers of the manuscript, including Terri Bookman, Susan Fainstein, Ken Foster, David Guston, David Hassenzahl, Joshua Lederberg, Robert Margolis, and Ned Woodhouse. I thank Rutgers University, Princeton University, AT&T, the National Science Foundation, and the New Jersey Department of Environmental Protection for financial support, and I thank Ellen Cotter for sharing our home with this unruly project.
Part I
Motivations and Concepts
1
Joint Fact-Finding

Joint fact-finding is a practice that has been around as long as the human species itself. Neolithic astronomers, arguing over the placement of rocks at Stonehenge, surely investigated the empirical evidence together: "The sun will rise here." "No, there." "Let's wait and see." The reason for writing this book is that people today are just as argumentative as their forebears, but they more often need help in finding out the facts. Human existence has become complex, and frequently people are not getting the help they need. We now routinely separate the tasks of decision-making and analysis, especially in the public sphere. This division of labor has made it more difficult to use the old way of resolving debates by means of direct, joint investigation. Instead, to resolve important factual disputes, we now rely heavily on institutions such as courts, and procedures such as debate according to rules of evidence. While there are benefits to specialization and institutionalization, there are also costs, including lost opportunities to investigate jointly and then to agree on relevant facts in a practical, informal, efficient manner.

I explore the practice of joint fact-finding in three cases: advising the U.S. Congress, setting environmental priorities, and performing regional utility planning. These cases test the hypothesis that analysts must work differently than normal when working in a joint fact-finding context. They also show whether analysts need extra skills or new methods, and how procedural and institutional factors matter. Joint fact-finding is a loaded phrase implying that facts exist and merely need to be found, but readers should treat it simply as the commonplace term for the activities described in the cases.

THE EXPERT

A few years ago, Massachusetts Institute of Technology (MIT) economist and Nobel laureate Robert Solow granted an interview on the topic of offering expert
advice in a bureaucratic setting.1 Tall, thin, friendly, and reflective, Solow was a witty and well-intentioned fellow. He based his comments on a lifetime of relevant experience: a stint with the Council of Economic Advisors to President Kennedy, dozens of years of interactions with members of the U.S. Congress and other influential public and private decision makers, and a decade in the bully pulpit of Nobel-dom. Recall that Solow earned his academic reputation with pathbreaking research in the 1950s demonstrating that much economic growth is a function of technical progress. Solow offered several prescriptions for analysts advising decision makers. Not once did Solow mention the familiar but problematic "facts/values" dichotomy. This particular division of labor between experts and their clients was not a central part of his experience. Here I summarize Solow's advice, which portrays a frankly communicative view of the expert's role.

Explain the rationale. An expert should offer a modicum of the reasoning behind recommendations, and not merely announce what is the "right" answer. Solow told the story of fellow economic advisor James Tobin getting called by a panicked President Kennedy, who said, "You told me not to worry about U.S. gold reserves fleeing overseas. But why wasn't I supposed to worry?" Tobin then had to explain the rationale, in lay terms.

Become a good story teller. It is unreasonable to expect an executive—or anyone else outside your field of expertise—to understand the delicate subtleties of your discipline's analytical framework. A wholly deductive, or theory-based, explanation will be incomprehensible and unpersuasive. A wholly inductive, or evidence-based, argument may be comprehensible but will not provide generalizable insights. Solow recommended a middle-ground approach that relied on illustrative anecdotes. In the same way that Aesop's fables teach generalizable lessons, so an expert's story should convey personally meaningful insights to a decision maker. Often a good metaphor is key, as long as the user doesn't overstep its reasonable limits.

Catch others' mistakes. An important part of the confidential expert's job is to debunk misconceptions. In economics, a field about which everybody thinks they know something, this task is crucial. Solow reported spending many hours in government meetings correcting this or that sub-cabinet official's misunderstanding of how markets really work. Experience is sometimes the worst teacher when it comes to understanding the dynamics of large, decentralized systems. Correcting misconceptions before they spread keeps the organization out of trouble.

Don't lie or exaggerate. Experts do not serve their own or their clients' best long-term interests by deliberately misstating their understanding of the truth. Strategic gains from so doing are in most cases remarkably short-lived, according to Solow. A real expert also knows the boundaries of his or her expertise, and the appropriate uses of particular conceptual models and disciplinary tools. Hard as it may be to say, sometimes "I don't know" is the best answer. An interesting, if ironic, demonstration of Solow's sensitivity to this issue took place immediately after this interview. Solow had been asked to lecture to a very diverse audience about the economics of environmental
sustainability. He was careful to preface his remarks with many caveats about the limits of the economic perspective and the assumptions underpinning his views. Yet this humility did him little good during the question-and-answer session, when unsympathetic audience members focused on those very limits and assumptions. This strategy worked well one-on-one, but not in a large group—an important point.

Organize your peers. An expert community does society a great service when it collectively clarifies what is and is not known about a particular subject. For example, scientists worldwide have worked together to prepare state-of-the-science assessments for the Intergovernmental Panel on Climate Change.2 These assessments spell out what is known with certainty, what is known with high probability, what is speculative, and what is not at all understood. Individual experts rarely have the credibility or strength of client trust to specify the level of confidence in such a clear manner; an expert consensus does. But watch out for groupthink!

Consider institutional factors. Solow concurred that the Congressional Budget Office (CBO) and the recently defunct U.S. Congressional Office of Technology Assessment (OTA) both gained a reputation for providing relatively professional, balanced analysis. In this regard they are quite unlike the relatively partisan President's Council of Economic Advisors and Office of Science and Technology Policy. The institutions these experts serve affect the degree to which the experts become politicized. Neutrality is a key survival strategy for Congressional agencies, whereas partisanship is a more effective survival strategy in the executive branch.3

Value leadership. According to Solow, Alice Rivlin's highly professional hand at the helm of the CBO cemented a professional ethic, a loyalty first to the discipline of economics and only second to the client for the work. Some insiders say that Jack Gibbons imposed a similar ethic at OTA, and the agency died shortly after he appeared to adopt a partisan stance by moving to the Clinton White House.

Like Solow, many other scientists, engineers, social scientists, planners, and technical analysts enjoy deserved recognition for their expertise. Their special knowledge and analytical skills give them a privileged position in society, often at the ear of a decision maker. Some analysts have the freedom to produce fundamentally new knowledge, whereas others perform applied analysis and offer advice to specific clients. The word "analyst" broadly denotes a highly trained person who works professionally at analytical tasks and thus enjoys elite rather than lay status.

A few additional brief definitions are due the reader. A client is the person who directly employs an analyst. A decision maker is a person with political legitimacy and power who makes decisions, but who may not be the analyst's direct client. Authority implies high status derived from the possession of broadly accepted credentials or broadly recognized wisdom. Objectivity denotes a balanced, nonpartisan perspective and analysis using impersonal criteria. Neutrality denotes balance and nonpartisanship in actions rather than analysis. Framing assumptions are the unwritten portions of mental models we each carry
around in our heads to explain how the world works and to orient ourselves. Normative content represents the value-laden aspect of any statement, that is, the portion due more to assumed standards of behavior ("ought") than to observed evidence ("is"). Chapters 2 and 3 expand on these definitions.

THE CLIENT(S)

The archetypal expert is the wise confidante, the advisor who has the ear of an executive decision maker. This expert serves the decision maker and measures much of his or her personal success in terms of access to and influence over that executive. The fame of Archimedes, for example, was established by his intellectual feats in the service of King Hiero of Syracuse during the third century B.C. His royal client provided problems to solve, ranging from inventing water pumps and war machines to establishing the purity of gold metal in a crown. The latter problem led Archimedes to develop his famous principle that a floating body displaces its own weight in a liquid, which prompted his equally famous shout of "Eureka!" Archimedes' treatises abstracted interesting science from the problem-solving activities, but both his influence and his income depended on that connection with the royal family.

Occasionally this confidential relationship has revealed a dark side, as in the case of the magnetic but ignorant and debauched Grigori Rasputin, advisor to the last Tsarina of Russia. His ability to relieve the suffering of the Tsarina's hemophiliac heir Alexis gave him great influence in the royal household, which he used for personal gain and to further the ends of his corrupt accomplices. His actions unwittingly hastened the collapse of the monarchy and set the stage for the 1917 revolution. His infamy derived from that influential relationship with a decision maker.

For bad and for good, decision makers have always needed experts, and experts have always needed decision makers. But the relationship between expert and decision maker, rationality and power, is not symmetrical, even in democratic societies. "Rationality is such a weak form of power that democracy built [solely] on rationality will be weak too."4 On one hand, rationality, the luxury of analysis, thrives best under the protection of stable power. On the other hand, power creates its own reality, and freely substitutes rationalization for rationality. Like most luxuries, analysis is dispensable.

Policy analysis textbooks frequently urge the budding analyst to identify his or her client and to figure out how to make that person happy: "policy analysis is client-oriented advice."5 "Learn to advocate the positions of others."6 A "policy analyst is a producer of arguments, more similar to a lawyer . . . than to an engineer or scientist."7 Often, the client is simply the analyst's boss in a hierarchical organization. Thus, Robert Solow actually worked for Walter Heller, who in turn forwarded recommendations to the treasury secretary and thence to President Kennedy. This client-focused perspective, often shaded with delicate nuances,8 dominates the literature and the practice of policy analysis. Yet it is not the typical situation in cases of joint fact-finding. The analysts examined in
this book are unusual because they try to serve multiple decision makers who may represent conflicting organizations or interests. The policy analysis literature is relatively sparse when it comes to studies of joint fact-finding, but I will draw on its many other insights at several points in this book. I will also tap bodies of research in decision science and social studies of science.

The consensus-building community9 has given the most thought to joint fact-finding in recent years, because they have seen that joint fact-finding has enjoyed increasing use in firms, local planning boards, regulatory agencies, courtrooms, legislative chambers, and international treaty negotiations. Given this breadth of practice, it is reasonable to employ the term broadly and to define joint fact-finding as any process in which parties with different interests work together to develop a shared information base for making decisions. Joint fact-finding specifically attempts to develop a shared perspective that will allow reasonably intelligent coping moves that can win sufficient political support.10 Multiparty decision processes of many types thus may include joint fact-finding. It is a crucial component of participatory management, communicative planning, and discursive democracy.

THE SETTINGS

Courtrooms and legislatures typically make use of expert knowledge by competitive rather than cooperative means. The advocacy approach, presenting thesis and antithesis in the hope that there will be a subsequent synthesis, has many strengths. Lindblom argues that "partisanship is both necessary and helpful to social problem solving" because self-interested individuals ensure that the best arguments are presented; and further, because society's "commonalities arise from the efforts of diverse persons to solve their own narrowly conceived problems."11 Legal advocacy has done remarkably well as a mechanism for resolving disputes, and yet it remains unsatisfying not only for those who lose cases, but also for those who must participate in deciding what are the facts.

Clearly, the advocacy approach also has important weaknesses. It underutilizes the talents and time of many participants in the process. It forces parties to adopt strategic modes of communication. In a courtroom it presupposes a zero-sum game rather than the possibility of mutual gains. It inappropriately constrains the range of solutions, so that many outcomes remind us of Solomon's decision to cut the disputed baby in half.

The legal process is just one of several social processes we use to construct facts and make decisions. Institutional and procedural idiosyncrasies profoundly affect the construction and use of knowledge. Different social processes—legal, political, scientific, others—will produce different versions of the facts, and different decisions.12 The courtroom is a special place because power relations must play out within strict procedural constraints, and legal procedures give particular weight to the force of the better argument. How does rationality fare in a more highly politicized context?
Expertise in an Adversarial Setting
Ozawa notes that we employ a very limited repertoire of methods for handling expert input in adversarial forums.13 Some conventional methods are designed to elicit information and others to settle explicit disagreement. Information-eliciting procedures (Figure 1.1) include public hearings, written comments to decision makers, letters to the editor, and similar activities that clarify the positions of stakeholders on an issue. Scientific evidence or expert opinions often bolster the position statements. Decision makers sometimes employ dispute resolution procedures (Figure 1.2) designed to help them sort through the competing claims of advocacy science. They use expert review panels, special masters, National Academy of Sciences task forces, and similar "neutral" sources of expertise to arbitrate technical disputes.

Experts, whether advocates or neutrals, often contribute relatively little to public decision-making. Ozawa explains the reasons why both conventional methods fail:

First, the technical basis of scientific disagreement remains hidden, both from the decision maker and, possibly, the competing stakeholders. Second, by failing to integrate the consideration of scientific and political aspects of a policy issue, the political interests that drive participation by stakeholding groups are left unaddressed. Finally, the role cast for the scientist raises concerns about credibility that cannot be adequately put to rest.14

Technical analysts often seem to disagree about the established "facts," because of miscommunication, differences in the design of inquiries, errors in the inquiry, and differing interpretations of findings.15 Decision makers, major stakeholders, and the public cannot distinguish between superficial disagreements caused by the factors above, substantive disagreements, and the intentional use of distorting rhetorical devices. As long as technical knowledge is introduced into the process in an adversarial form, then the non-experts will have difficulty in distinguishing reliable facts from uncertainties, and factual questions from value questions.

Interests and technical information are often mixed together in public debates, and parties have trouble distinguishing what others really care about. Often a technical argument is merely a device to advance a particular interest. Also, when the controversy stems from an obviously unbalanced incidence of costs and benefits among stakeholders, then further information may reduce the chance of agreement. Technical experts, being human, inevitably mix factual and value judgments together. "Neutral" expertise is more of an ideal than an accurate characterization of real experience.
Figure 1.1 Communication Flows in Conventional Information-Eliciting Procedures
Source: Ozawa, Connie. Recasting Science: Consensual Procedures in Public Policy Making. Boulder, CO: Westview Press, 1991. Used with permission.

Figure 1.2 Communication Flows in Conventional Dispute Resolution Procedures
Source: Ozawa, Connie. Recasting Science: Consensual Procedures in Public Policy Making. Boulder, CO: Westview Press, 1991. Used with permission.
Joint fact-finding is a procedure that tries to take better advantage of what the expert offers (Figure 1.3). It lays bare the basis for scientific disagreement, lets facts and values remain intertwined, and accepts that strict neutrality is too much to ask of most experts. Does joint fact-finding also make special demands on analysts? Case studies presented in subsequent chapters of this book address this and subsidiary questions. I now establish a rationale for selecting the cases, introduce a framework for evaluating the cases, and give readers a look ahead at the rest of the book.

Figure 1.3 Communication Flows in Joint Fact-Finding Procedures
THE CASE STUDIES

Analysts face multiple difficulties when working in a communicative context. The cases illustrate two classes of problems: (1) the problem of having a group of decision makers rather than an easily identifiable client as the audience for analytical work; and (2) the problem of having to share analytical tasks with non-analysts, including members of the lay public. This suggests a 3 x 2 typology of analytical contexts.

For completeness, Table 1.1 includes two contexts in which analysis is self-directed rather than serving an external decision maker: research and education. I do not discuss them further because the problems of effective disciplinary communication and good teaching are thoroughly discussed elsewhere. The cases in the remaining categories build incrementally upon one another, although they are not directly comparable since their analytical tasks and institutional contexts differ.

I focus first on the "consumer" dimension of sharing: is the audience for analysis an individual or a group of decision makers? When the analyst serves a group of decision makers, the process of constructing knowledge is likely to become more important, the problem of linking facts and values may become more difficult, and communication will play a more central role in determining the impact of the analytical effort. The remainder of Part I (chapters 2 and 3) provides a vocabulary for distinguishing client-oriented advising from joint fact-finding.

Part II (chapters 4 and 5) examines the special challenge that analysts meet when serving a group of decision-makers rather than an individual executive (whether directly or indirectly through a bureaucratic chain of command). Essentially a postmortem of the U.S. Congressional Office of Technology Assessment (OTA), this case study describes how a classic decentralized decision-making body provided itself with technical advice from 1972 to 1995. I show that the institutional context, procedural arrangements, and individuals involved in each analytical effort affected its outcome, and I discuss the special characteristics of their approach to formal analysis. This case also provides a vehicle for reflecting on institutional issues. OTA was notable for soliciting external input to help set the scope of each analysis. It asked broadly trained staffers to convene multidisciplinary workshops, synthesize findings, and seek broad external review of results. It worked with less success to overcome communication gaps between analysts and decision makers, by producing short summaries and verbal presentations to complement their main product, the large written report. Its analysts interacted with elite stakeholders, but not with the general public.

There is also a "producer" dimension of sharing: who is involved in the analysis? In some cases it is analysts only, but in others there is mixed participation from technical analysts, professional stakeholders, and members of the lay public. Concerns about constructing knowledge, connecting facts and values, and communicating may also surface in a mixed context. Does this dimension of sharing challenge analysts in the same way or in different ways as the first dimension?
Table 1.1 Typology of Communicative Contexts for Analysts

Producers of analysis: Technically Knowledgeable People Only
- Nondecision Makers as consumers: Standard problems of disciplinary communication (Research)
- Individual Client as consumer: Standard problems of giving good advice and being an effective advocate (Client-oriented Advising)
- Multiple Clients as consumers: Special problem of working for multiple bosses (Elite joint fact-finding; Part II: Advising the U.S. Congress)

Producers of analysis: Mixed Participation
- Nondecision Makers as consumers: Standard problems of good teaching (Education)
- Individual Client as consumer: Special problem of working with non-analysts on analysis (Mixed participation in analysis; Part III: Comparing Environmental Risks)
- Multiple Clients as consumers: Special problems of working for multiple bosses AND working with non-analysts on analysis (Participatory joint fact-finding; Part IV: Regional Utility Planning)
The special challenge of working with non-analysts on analysis for a single executive is the subject of Part III (chapters 6 through 11). It compares a set of comparative environmental risk projects performed by the U.S. Environmental Protection Agency and several U.S. states. These projects had similar technical objectives—environmental risk comparisons to inform agency priorities—yet they employed a variety of approaches ranging from experts-only involvement to broad public participation. Participants in analyses included members of the
lay public, political stakeholders, officials, and technical analysts. It shows that the successful projects depend on more than expert input, and that there are more and less effective ways to incorporate public participation. This set of cases identifies additional elements of an alternative analytical approach. In Part III, I also take the opportunity to reflect on procedural issues. The more successful comparative risk projects invited outside participation (typically both by professional stakeholders and the general public) at an early stage. Direct involvement of non-analysts in the analysis also helped overcome communication problems. However, analysts had to accommodate these participants in various ways: by eliminating jargon, relaxing methodological rigor, and accepting direction.

What happens when the context involves both multiple decision makers and mixed participation in the analytical effort? Part IV (chapters 12 through 15) delivers both challenges at once: an audience that is a group of independent decision makers, and an analytical effort involving a variety of technical experts, public officials, professional stakeholders, and members of the lay public. It describes joint fact-finding on electric utility planning in New England, and shows how analytical support was provided to a diverse group of decision makers representing six states. This case shows the challenges of marrying analysis to process, and of constructively broadening the scope of analysis. It illustrates how analytical approaches used in this context may need to differ from those employed by experts working for a single executive. It shows what happens when an analytical project is explicitly designed for its communicative context. Readers should be forewarned that I played a participant-observer role in this project. This part allows me to reflect on methodological issues. In this case, analysts again invited outside stakeholders to help them set the project scope, but they went further and asked for help in choosing assumptions, evaluating results, and making normative decisions. Participation of various types also minimized communications problems, but the analysts again went further and designed improved methods for analyzing and presenting data tailored to this communicative context.

These cases have revelatory value16 because they illustrate innovative analytical efforts. Each case examines a different aspect of real-life analytical context and illustrates key facets of effective joint fact-finding practice.

EVALUATION FRAMEWORK

The case studies in this book test my claim that joint fact-finding demands special things of analysts: special personal skills and analytical methods, and a strong appreciation of procedural and institutional issues. They show how analysts behaved, and with what results. The cases show which skills and methods are helpful, and which procedural and institutional issues are relevant. Each case study also evaluates the analytical effort's success. If interesting features are visible, but the analysis was a failure, then others will not want to emulate its approach. But what is "success?" In program evaluations it is traditional to compare the program's implementation to its stated objectives, by
asking whether it is successful in its own terms.17 However, this narrow approach fails to ask larger questions, such as: Are those stated objectives relevant to the problem situation? Are those objectives aligned with society's broader interests? Is the current social order legitimate?18 Thus, many evaluators now adopt participatory approaches that provide more meaningful, if less quantifiable, feedback. In preparing the case studies, I followed this second approach and interacted with key participants. I interviewed major players, and surveyed participants in several of the studies.

Evaluating an advisory enterprise is different in important ways from evaluating a program. The product of the enterprise is advice, and measuring the attributes of this output is particularly difficult. Therefore, the evaluation should also consider inputs (people, methods) and processes (procedures and institutions). The evaluation approach employed here examines the analytical effort's full range of inputs, processes, and outputs. Inputs include resources, participants, methods, and data. Processes include institutional structures and procedures that govern such things as quality control and eligibility to participate. Outputs include the products of the enterprise—facts generated, solutions proposed, and conclusions drawn.

A special problem of science-dominated policy niches is that parties such as scientists, policy makers, and interest groups may each apply different evaluation criteria within such a framework. A critical evaluation therefore needs to focus on a reduced number of overarching criteria that capture key concerns. Clark and Majone identify four criteria for "scientific" policy analysis: adequacy, value, effectiveness, and legitimacy, which I have adopted when evaluating my cases:19

• Adequacy deals with competence-related questions such as the origin and reliability of data, appropriateness of statistical tests, validity, and replicability.
• Value has an internal component (is the research both scientifically important and technically feasible?), an external component (is the research relevant to public policy?), and a personal component (is the research personally significant to the investigator?).
• Effectiveness (visible impact) should not be measured with reference to a particular policy debate, where its impact will usually be invisible, but rather with reference to the shaping of the policy agenda.20
• Legitimacy in this context may have two sources: a status-based, numinous, "authoritative" legitimacy such as that enjoyed by god-kings, charismatic leaders, and respected scientists, or a consent-based "civil" legitimacy that is granted to experts by the public in return for following constitutional rules and open, democratic procedures. Weber first made these distinctions.21

These criteria provide an operational basis for defining success, as an analytical effort that is adequate, valuable, effective, and legitimate. The criteria are rich enough to support nuanced evaluations of efforts that, realistically, may have enjoyed only mixed success, performing well in some ways and poorly in others. Since my primary goal is to understand what analysts need to do in order
to succeed at joint fact-finding, this ability to measure success along multiple dimensions is crucial.
THE PUNCHLINE

Part V (chapters 16 and 17) ties the experience of the case studies to the theoretical expectations about joint fact-finding analysis. It confirms that effective analysis will be self-consciously communicative, that nonstandard techniques may be needed, and that the various dimensions of communicative practice can be taught and learned. By way of a preview, the cases suggest that for analysis to be successful in a context of shared decision-making and mixed participation, it must do the following:

• enjoy the support of those with decision-making power
• marry the analytical work to the decision process
• share information widely
• reason inductively by arguing from observations instead of theories
• actively manage normative content in analysis
• broaden the analytical scope
• persuasively translate the analytical results
• perform cross-disciplinary review of results
The remainder of the book shows in detail why these are the key practical elements of joint fact-finding, and builds a multifaceted argument that joint fact-finding as an analytical specialty deserves broad recognition and thoughtful practice.
NOTES

1. An earlier version of this interview appeared in my column "Giving Expert Advice," IEEE Technology & Society Magazine, 17, 2 (Summer 1998): 5-6.
2. Their findings are summarized in Climate Change 2001, reports arising from the Third Assessment Report of the Intergovernmental Panel on Climate Change, in three volumes on the scientific basis; impacts, adaptation, and vulnerability; and mitigation (New York: Cambridge University Press, 2001).
3. Bruce Bimber, The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment (Albany, NY: SUNY Press, 1996), p. 7.
4. Bent Flyvbjerg (Steven Sampson, trans.), Rationality and Power: Democracy in Practice (Chicago: University of Chicago Press, 1998), p. 234.
5. David L. Weimer and Aidan R. Vining, Policy Analysis: Concepts and Practice, 2nd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1992), p. 1. A similar statement appears in Eugene Bardach, A Practical Guide to Policy Analysis: The Eightfold Path to More Effective Problem Solving (New York: Chatham House, 2000), p. xiii.
6. Carl V. Patton and David S. Sawicki, Basic Methods of Policy Analysis and Planning, 2nd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1993), p. 15.
7. Giandomenico Majone, Evidence, Argument, and Persuasion in the Policy Process (New Haven: Yale University Press, 1989), p. 21, cited in Patton and Sawicki, Basic Methods of Policy Analysis and Planning, p. 15.
8. Arnold J. Meltsner, Policy Analysts in the Bureaucracy (Berkeley: University of California Press, 1976).
9. The consensus-building community analyzes, practices, and advocates for greater use of what used to be called alternative dispute resolution techniques, most often featuring assisted negotiation using facilitators or mediators. Their recent, exhaustive handbook includes a chapter on joint fact-finding that offers advice regarding how and when to hire technical experts. It offers less guidance for the technical experts on how to perform joint fact-finding, however. Lawrence Susskind, Sarah McKearnan, and Jennifer Thomas-Larmer, eds., The Consensus Building Handbook: A Comprehensive Guide to Reaching Agreement (Thousand Oaks, CA: Sage Publications, 1999).
10. See Charles E. Lindblom and Edward J. Woodhouse, The Policy Making Process, 3rd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1993), esp. ch. 3 ("The Potential Intelligence of Democracy").
11. Charles E. Lindblom, Inquiry and Change (New Haven: Yale University Press, 1990), pp. 54-55.
12. The institutions of science and law engage in particularly visible conflicts over important epistemological issues such as standards of evidence. For a detailed social science perspective on science in the courtroom, see Sheila Jasanoff, Science at the Bar: Law, Science, and Technology in America (Cambridge, MA: Harvard University Press, 1995). For a detailed natural science perspective, see Kenneth R. Foster and Peter W. Huber, Judging Science: Scientific Knowledge and the Federal Courts (Cambridge, MA: MIT Press, 1997).
13. Connie Ozawa, Recasting Science: Consensual Procedures in Public Policy Making (Boulder, CO: Westview Press, 1991), pp. 28-32.
14. Ibid., p. 33.
15. Connie Ozawa and Lawrence Susskind, "Mediating Science-Intensive Policy Disputes," Journal of Policy Analysis and Management 5, 1 (1985): 23. See also Dorothy Nelkin, Technological Decisions and Democracy (Beverly Hills, CA: Sage Publications, 1977), p. 83.
16. Case studies provide a way to answer "how" and "why" questions about analytical efforts, rather than limiting our inquiry to the "who," "what," "when," and "where" of archival methods. See Robert Yin, Case Study Research: Design and Methods, 2nd ed. (Beverly Hills, CA: Sage Publications, 1994).
17. Earl Babbie, The Practice of Social Research, 8th ed. (Belmont, CA: Wadsworth Publishing Co., 1998), pp. 333-355.
18. Frank Fischer, Evaluating Public Policy (Chicago: Nelson-Hall Publishers, 1995), p. 18.
19. William C. Clark and Giandomenico Majone, "The Critical Appraisal of Scientific Inquiries with Policy Implications," Science, Technology, and Human Values 10, 3 (Summer 1985): 6-19. For a useful extension of this evaluation framework, see David H. Guston, "Critical Appraisal in Science and Technology Policy Analysis: The Example of Science, the Endless Frontier," Policy Sciences 30 (1997): 233-255.
20. Carol H. Weiss with Michael J. Bucuvalas, Social Science Research and Decision-Making (New York: Columbia University Press, 1980), pp. 248-276.
21. Max Weber, The Theory of Social and Economic Organization (New York: Free Press, 1922; 1957).
2 Analytical Angst

Several years ago an economist at the Tennessee Valley Authority (TVA) issued a plea to his colleagues for help. This large federal corporation was carrying a huge debt load as a result of a failed gamble on nuclear power generation, and it had lost credibility with many in Congress, in the Tennessee Valley region, and even within its employee base. Management saw a Congressional order to create a long-term plan—the Energy Vision 2020—as an opportunity to set things right. This economist had been given the job of constructing an investment-planning model that everyone would believe.

Technically, this analyst had to figure out what to include in the model. Choices included hundreds of demographic, economic, technological, climatic, and political variables influencing the demand for electricity, plus hundreds more affecting the supply of electricity, given that the gap between supply and demand represented the needed investment in electricity-generating resources. The analyst also had to decide whether to optimize electricity costs, debt burden, environmental impacts, reliability of the power system, or some combination of objectives, all under conditions of extreme uncertainty. Which modeling specification was the most appropriate simplification of reality?

The TVA analyst couldn't answer his technical questions until he grappled with several communicative challenges. Management decision makers had great expectations for this model: not only would it reveal the solution to the financial crisis, but it would also placate Congress, make local environmentalists happy, and position TVA for electricity sector deregulation. The analyst needed to quell unrealistic expectations while still being responsive. A requirement for public participation meant that the modeling process also needed to be comprehensible to nonexperts. In addition, the engineers, economists, and other specialists working with him on this project often had no idea what people outside their own discipline were talking about. Worse, they sometimes had directly conflicting interests—both the financial and the environmental folks
wanted to halt all nuclear power plant construction, while the nuclear engineers would lose their jobs if that happened.

This Ph.D. economist was being asked to do joint fact-finding, although that's not what his management called it. But clearly, parties with different interests were supposed to work together to develop a shared information base for making difficult decisions. He told his colleagues, in paraphrase: "Nothing I learned in graduate school prepared me for this—what should I do?"1

BIG PROBLEMS FOR ANALYSTS

For analysts, two major problems simply come with the territory. These are the unavoidable technical problem of simplifying reality, and the equally unavoidable problem of communicating about analytical work.

Modeling Reality

Some decisions are important enough to justify significant prior investments in better information. Trained analysts—planners, engineers, scientists, and others—employ sophisticated tools to answer the relevant questions. Researchers also routinely give expert advice in public settings. For example, the U.S. Congress may want to understand the likely magnitude and incidence of impacts from implementing economy-wide income tax or health care reforms. On a local scale, a municipal government may want to select the best site for a new school, community center, or prison. Likewise, a firm making a significant investment in new production capacity will want to optimize its choice. Analysts work to support such decisions in an atmosphere of messy reality, in a context featuring controversy, uncertainty, technical complexity, and multiple parties who influence or participate in the decision-making process.

Yet analysts must simplify reality to make a formal investigation tractable. At an academic meeting in 1993, Harvard professor William Alonso elicited knowing chuckles by describing how he fills a blackboard with the equations of his formal model for a postulated economic behavior, and then tells his students never, never to believe the results of the model. Modeling constructs may assume certainty, adopt the view of a single omniscient decision maker, and only approximately capture the behavior of complex systems. The imperfect match between analytical details and the factors most important to decision-making is frustrating, dangerous, and yet unavoidable. This well-known mismatch is the essence of the analyst's technical challenge.

Communicating

Many analysts have expertise but those in charge ignore them. Others deliver advice but find that decision makers don't understand what they are talking about. The division of labor between analysts and decision makers creates one set of communication problems. Public participation imposes additional difficulties. The division of analytical labor among disciplinary specialists further increases the communication problems.
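Before turning to the facts-and-values problem, it may help to make the simplification problem concrete. The sketch below is purely illustrative: a toy version of the kind of supply-demand-gap calculation the TVA analyst faced, with invented scenario weights, costs, and emission factors standing in for the hundreds of variables and objectives a real planning model would carry.

```python
# Illustrative sketch only; all names and numbers are invented.
# A real investment-planning model would include hundreds of demand- and
# supply-side variables and a far richer treatment of uncertainty.

scenarios = [
    {"name": "low growth",  "demand_gwh": 155_000, "probability": 0.3},
    {"name": "base case",   "demand_gwh": 170_000, "probability": 0.5},
    {"name": "high growth", "demand_gwh": 185_000, "probability": 0.2},
]
existing_supply_gwh = 160_000

# Candidate resource plans, scored on two of the many possible objectives.
plans = {
    "gas turbines":         {"cost_k_per_gwh": 45.0, "tons_so2_per_gwh": 0.1},
    "coal repowering":      {"cost_k_per_gwh": 38.0, "tons_so2_per_gwh": 0.9},
    "demand-side measures": {"cost_k_per_gwh": 30.0, "tons_so2_per_gwh": 0.0},
}

def evaluate(plan):
    """Expected cost ($ thousands) and SO2 (tons) of closing the gap across scenarios."""
    exp_cost = exp_so2 = 0.0
    for s in scenarios:
        gap_gwh = max(0.0, s["demand_gwh"] - existing_supply_gwh)  # needed new supply
        exp_cost += s["probability"] * gap_gwh * plan["cost_k_per_gwh"]
        exp_so2 += s["probability"] * gap_gwh * plan["tons_so2_per_gwh"]
    return exp_cost, exp_so2

for name, plan in plans.items():
    cost, so2 = evaluate(plan)
    print(f"{name:22s} expected cost ${cost:12,.0f}k   expected SO2 {so2:8,.0f} tons")
```

Even this toy forces the choices that made the real job hard: which objectives to report, how to weight the scenarios, and whose values decide whether cost or emissions should dominate.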
Most of us have heard at some point in our lives that "facts and values are the respective domains of analysts and decision makers." This axiom has been battered in recent years by forces as diverse as populism, advocacy science, and social constructivism, each of which this book will discuss.2 The notion that analysts develop neutral information and then throw it over the wall for decision makers to act on seems quaint and overly simplistic to most practicing analysts. For any interesting problem, analysts and decision makers instead need to interact frequently and to respond to each other's every move. They must do so because facts and values are so often commingled. Yet neither analysts nor decision makers are very good at this interaction. Outside of public policy schools, too few analysts receive formal training intended to help them understand the context of their work. The peculiarities of the institutions that employ economists, planners, engineers, and scientists, and of the procedures governing their interactions with decision makers, are discussed in gripe sessions at the water cooler but rarely in the classroom. Likewise, many decision makers misunderstand what analysts do, and harbor unrealistic hopes regarding the power of formal analysis to improve decisions. These problems exist whether decision makers are senior executives or members of the lay public, although each makes different demands on analysts. Disciplinary specialization builds additional barriers to successful communication. An economist brings a different worldview to the problem of global climate change than does an atmospheric scientist, for example, making communication among analysts an imposing, frustrating challenge. These three types of mutual ignorance—between analysts and decision makers, between analysts and the general public, and among analysts with different specialties—form the essence of the analyst's communicative challenge. Need for Improvements Of various analytical challenges that exist, serving a shared decision-making context may be the least understood. When several parties must decide jointly on a course of action, or when outside groups hold a veto over the primary actor's decisions, then analysis becomes more than a technical optimization exercise. At a minimum, analysis becomes a strategic exercise, in which one actor seeks to develop steering capacity to manage the uncertainties introduced by having multiple, independent decision makers. Sometimes analysis can become a communicative exercise that encourages a consensus among the parties around a course of action. Likewise, the division of labor between analyst and decision maker is often arbitrary and sometimes inappropriate. While technical analysts bring valuable expertise to their analytical tasks, they hold no monopoly on relevant information.3 Whether that information has to do with the values of the decision makers or unusual local conditions known only through personal experience in a particular context, formal analysis will often become better if a broad range of people participate. In practical terms, sometimes the best way to find out what people want and value is to ask them.4
Procedural innovations are also being tried for many types of decision-making. Some policy makers have sought more participatory forms of governing. Others have emphasized the duty of leaders to engage in a broad public discourse on the issues of the day. Many business leaders have adopted customer-oriented, holistic management techniques emphasizing teamwork, quality, and interaction. Analytical innovations must accompany these changes in process. Some analysis must become self-consciously communicative.

ROLES FOR ANALYSTS

The "science wars" of the 1990s were just one manifestation of a longstanding intellectual debate on the meaning of expertise. The prevailing modernist view accepts and values expertise, and gives analysts a niche of their own. A more critical view denies analysts a privileged position in public decision-making. Analysts are responding in divergent ways to this critical challenge.

Traditional View of the Public's Experts

Not everyone is qualified to give advice, and public decision makers therefore seek out analysts with strong credentials that signal exceptional qualifications. Analysts acquire degrees, certifications, references, and experience to confirm that they are qualified. While most analysts claim links to the scientific enterprise, they are also aware that democracy provides the context for much of their work. Both science and liberal democracy presume rationality. This post-Enlightenment, modernist view is rather exclusive: madmen, children, and incompetents are not welcome in either enterprise.5 Science and democracy thereby establish and preserve their authority, although they have quite different domains.

Using a spectrum metaphor, Price arrays the exclusive domains of scientists, professionals, administrators, and executives along a continuum that connects truth to power.6 According to Price's model, the proper way for truth to relate to power is for new insights to slowly move from scientists to professionals to administrators to the decision-making arena. During this process, the insight becomes accepted conventional wisdom, and part of the foundation of good decisions. This linear model is attractive because it clearly defines the responsibilities of all actors, suggests the natural pathway along which information generated by experts should flow, and implies rationality in the organization of human activity. In short, analysts aren't decision makers and decision makers aren't analysts in the traditional model. The division of labor protects the authority of each, and rationality is the glue that holds them together.

The Constructivist Critique

Wildavsky wryly notes that "the purpose of analysis is to connect knowledge with power, not ignorance with weakness."7 Yet Wildavsky claims that the aggressive policy analysts of the 1960s overreached in their attempts at
comprehensive policy planning, even as governments found that they had little influence over public problems rooted in individual behavior. Reflecting the hubris of the times, there were unrealistic hopes that "macro-macho" analysis8 could address a broad array of social and economic problems as effectively as it had addressed military problems during the 1940s and 1950s. In reaction to these embarrassments, policy analysis has since become timid, emphasizing incremental rather than radical change, small solutions to small problems, and decentralized decision-making. Policy analysis is also no longer primarily a governmental activity. Policy shops of many types sprouted in the United States following World War II, and "as the variety of perspectives represented on the think tank scene increases and the advice they give diverges, a cacophony is rising on some issues . . . . Each side now brandishes the latest analytic findings generated by its own unit."9 Competing claims nullify one another. Other observers make even broader claims, alleging that the authority of science has weakened in recent years.10 Whether or not that is so, it is common today for experts to find themselves competing with others—professional stakeholders, the lay public—for the attention of decision makers. Three factors are especially important in explaining the unhappy relationship between experts and decision makers. The constructivist view of expertise, which adopts elements of a postmodern11 perspective, highlights these: the inadequacy of the slow transmission of insights from truth to power,12 the socially constructed nature of knowledge,13 and the interactive nature of real decision making.14 By giving science a fallible, human face, we concurrently give the impression that there are many truths, many publics, and many sciences. This cacophony weakens the authority of traditional experts, but it also has inclusive possibilities. "Lay" and "local" expertise may gain validity.15 The very real fact that individuals may wear more than one hat—scientist and citizen, for example— can be recognized. Yet how can the measured voice of scientific reason be heard in the tumult? Reactions to the Critique Practicing analysts have not been oblivious to the scholarly debate. Policy analysts are probably the most sensitized group of professional advisors to issues of appropriate roles and value orientations. Durning and Osuna show that practicing policy analysts group themselves into five distinct clusters, evocatively labeled objective technicians, client counselors, issue activists, ambivalent issue activists, and client helpers.16 These ideal-types vary by degree of value neutrality, participation in the politics of decision-making, keeping their personal opinions in check, letting client interests dictate analytical actions, and valuing analytic integrity over particular outcomes. Not all policy analysts adhere to the ideals of science, clearly, but there is evidently a residual desire to be objective. The "objective technician" ideal-type will be the default analyst throughout this book, although other species will appear from time to time. The most common response by both analysts and decision makers to the social constructivist critique of separated truth and power
has indeed been to ignore the critics,17 although some analysts actively fight against deconstruction18 even as others actively embrace it,19 while some decision makers simply turn their backs on expertise.20
Looking over the decades and across disciplines, one observes a cycle.
Analysts working in public settings seem to cycle between hubris and timidity under the influence of three factors:
• The temptation to overreach disciplinary boundaries, as engineering systems analysts did in the 1960s when attempting to transfer methods appropriate for building an intercontinental ballistic missile to rebuilding urban areas.21
• The pursuit of substantive rationality ("the optimal solution") at the expense of procedural rationality ("the legitimate decision"), as seen in some current applications of risk-benefit analysis to environmental policy questions. Chapter 11 returns to this issue.
• The seductive force of reductionism, which encourages analysts to study only what can be measured, leads decision makers to manage only what can be measured, and eventually deprives both parties of a holistic view of the forest for having focused so intently on the trees.22
Rather than cycling endlessly between analytical hubris and timidity, we clearly need to redefine the problem as one of appropriate balance. There is plenty of room for a middle path. An analyst can reflexively internalize the lessons learned to date.23 An analyst can explicitly acknowledge the potential for bias, bounded rationality, and other limitations. This type of analyst will work self-consciously—but not timidly—to earn public credibility and trust. I define this humble analyst as one who accepts that there is sometimes legitimate skepticism regarding authoritative knowledge claims and who acts accordingly. Joint fact-finding is a humble mode of analysis. THE CONTEXT OF ANALYSIS The conditions under which analytical work takes place affect its planning, execution, and outcome. Key contextual dimensions include the decision type and structure, and the play of interests and power. Decision Type and Structure As the world grows more complex we must make more formal decisions more often. We make decisions both as individuals and with other people. Multiparty decision-making may take place within groups, organizations, and society as a whole, and serious problems affect each context.24 At every level, rational and effective decision-making is difficult. Decision scientists seek to remedy these problems by first describing how fallible individuals, groups, organizations, and societies currently make decisions. Armed with an empirical baseline, decision scientists then offer prescriptions based on informed normative theory. Empirical research such as that by Mintzberg et al describes the steps found in many multiparty decision-making processes.25 Figure 2.1 depicts three main stages: identification of the problem, development of potential solutions, and
selection of a preferred solution. While most significant decisions map well onto the steps outlined in this model, they vary tremendously in their degrees of complexity, number of iterations required to reach closure, and points of blockage or failure.26 A decision process rarely follows this neat sequence of steps straight through; more often the process will revisit earlier steps several times on the way to a final decision. Figure 2.1 Stages in Decision Making
Sources: Derived from Janssen (1992) and Mintzberg, Raisinghani, and Theoret (1976).
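The stage names in Figure 2.1 and the looping behavior described above can be read as a simple control structure. The sketch below is one plausible, hedged rendering rather than a prescribed algorithm; the three callables stand in for whatever identification, development, and selection activities a real process involves.

```python
# Illustrative sketch: the three stages of Figure 2.1 as an iterative loop.
# Real processes revisit earlier stages many times before reaching closure.

def decide(identify, develop, select, max_rounds=10):
    """identify() frames the problem, develop(problem) proposes options, and
    select(problem, options) returns a choice or None to signal another pass."""
    problem = identify()
    for _ in range(max_rounds):
        options = develop(problem)
        choice = select(problem, options)
        if choice is not None:
            return choice        # closure reached
        problem = identify()     # blockage: revisit the problem definition
    return None                  # no closure within the allotted rounds

# Toy usage: a siting choice that simply picks the lowest-cost option.
result = decide(
    identify=lambda: "site a new community center",
    develop=lambda problem: [("site A", 3.2), ("site B", 2.7)],
    select=lambda problem, options: min(options, key=lambda o: o[1]),
)
print(result)  # ('site B', 2.7)
```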
One can categorize decisions along several dimensions. The decision-making entity can be an individual or some larger social unit; and such units may be formal or informal, public or private, with hierarchical or egalitarian structures. The nature of the decision may be reversible or irreversible, large or small, stand-alone or sequential; there may be spillovers such that the decision affects actors not party to it, the impacts may be uncertain, and the decision may involve tradeoffs among multiple criteria. The impact of an analytical contribution depends in part on how that information enters the decision-making process. New information can change the parameters of public debate, reduce or transform uncertainty, suggest new choices, and clarify a decision's impacts. Tools such as decision support systems help to structure the knowledge base for decisions. These contributions can also make decisions more difficult when they clearly identify winners and losers or
mask a debate about values as one about facts. Since analysts typically develop information and wield tools on behalf of others, shared decisions are particularly challenging. For example, teams will have several members who need to understand the analytical work. Advisory groups will have diverse representatives who may not share similar world views. Public forums will feature eclectic mixtures of opinionated, perhaps angry, people who want to talk rather than listen. Formal bodies such as legislatures may use new knowledge strategically in pursuit of partisan goals.

Interests and Power

The term "stakeholder" has become an inescapable part of the analyst's vocabulary. It is shorthand for the interested party, any individual or group affected by or desiring influence over a decision. Here I follow standard practice and award the label to parties who hold legitimate and self-interested stakes in the outcome of a decision.27 But this definition leaves room for controversy about who should count. It is useful to distinguish "professional" stakeholders (paid advocates and lobbyists, elected or appointed spokespersons) from others, such as neighbors and employees, who lack organizational resources or official recognition. These types play very different roles and have divergent analytical needs in shared decision-making processes.

The decision context clearly will dictate some of the analyst's activities. Power relations are a key part of this context. Rational analysis, including the joint fact-finding variety, is a fragile endeavor that flourishes only when those with power permit it to do so. Analysis activities are most feasible when power relations among decision makers are stable, and they become moot when decision-making degenerates into naked aggression, whether political, economic, or physical. The institution of democracy is only a few centuries old, and the notion that the "force of the better argument" should prevail in public debates has particularly shallow roots in the human experience. Much more deeply rooted is our visceral awareness that might makes right. When Bacon proclaims that knowledge is power, he is making more of a normative than a positive statement. Machiavelli portrays a much harsher reality: "We must distinguish between . . . those who to achieve their purpose can force the issue and those who must use persuasion. In the second case, they always come to grief."28 Analysts need to know their place, not to resign themselves to grief but to carve out meaningful roles.

Lim offers suggestions about how to read the context meaningfully.29 While his insights are directed to planners, they apply broadly. Lim notes that a decision-making context encompasses both means and ends, and that these may be clearly or poorly specified. Means include both instruments of action and resources to support action. Is there consensus or controversy about means and ends? Are they identified with confidence and rigor? Are they realistic? The analyst's job description depends on how well identified each element is in the particular decision context. If elements are poorly identified then the analyst may have multiple functions, some nontechnical. Additional functions may include mediation to resolve controversy, advocacy to establish goals, and
entrepreneurship to develop resources and tools to support action. This implies that the analyst's skills ought to transcend technical competence and include critical thinking, moral reasoning, and effective interpersonal relations. The good news is that each of these skills can be learned, and they improve with formal training and frequent practice.
Supporting the Analyst

Buddhists pragmatically seek the "middle way" by performing appropriate action, thought, speech, and so on throughout their lives. Recasting technical expertise so that it is useful in a joint fact-finding context requires a similar balancing act involving improvements at several levels, as follows (think of these as my secondary hypotheses):
1. Joint fact-finding needs appropriate people who have technical expertise plus good interpersonal skills and strong moral reasoning capabilities. This does not mean that all analysts must become better rounded, but rather that successful joint fact-finding requires such people.
2. Joint fact-finding needs appropriate methods that provide empirical checks on theory, foster understanding of underlying phenomena, and allow a broad analytical scope. This does not mean that all reductionistic, discipline-based methods should be abandoned, but rather that successful joint fact-finding analysis is decidedly synthetic.30
3. Joint fact-finding needs appropriate processes that reconcile alternative constructions of scientific knowledge by the disciplines, rely on cross-disciplinary and stakeholder review in addition to peer review, and promote interaction and informed public participation. This does not mean that all normal science should be phased out, but rather that boundary-crossing is a necessary part of successful joint fact-finding.31
4. Joint fact-finding needs appropriate institutions that do a good job of social construction, are decentralized to accommodate diversity, and encourage a balanced form of expertise useful to that diverse clientele. This does not mean that all institutions should perform less-partisan analysis, because partisan efforts clearly have value, but rather that institutions performing successful joint fact-finding will construct knowledge in this other way.32
NOTES 1. See chapter 14 for an answer. One can think of similar stories in many areas of public and private decision-making. Illustrative examples include the planner working for a developer who seeks a zoning variance from a municipal planning board; the recycling program designer who asks citizens to sort their trash; the engineer who evaluates proposed manufacturing process changes in a firm practicing Total Quality Management; the economist who calculates the incidence of a proposed change in the income tax code for members of a legislature; and the atmospheric scientist who estimates the impacts of global climate change for treaty signatories. 2. Some definitions: Populism is a political philosophy focused on the needs of the common people, and it often carries anti-elite and anti-intellectual overtones. Advocacy science is technical work performed by scientists in support of a political agenda. Social constructivists believe that knowledge, like other human achievements, is created within socially mediated contexts; thus scientific knowledge, for example, is socially
constructed in institutions such as universities according to procedures established by groups of people. 3. Charles E. Lindblom, Inquiry and Change (New Haven: Yale University Press, 1990), pp. 157-174. 4. Harold A. Feiveson, Frank W. Sinden, and Robert H. Socolow, eds., Boundaries of Analysis: An Inquiry into the Tocks Island Dam Controversy (Cambridge, MA: Ballinger, 1976), pp. 36-39. 5. David H. Guston, "The Essential Tension in Science and Democracy," Social Epistemology 7, 1 (1993): 3-23. 6. Don K. Price, "The Spectrum from Truth to Power," The Scientific Estate (Cambridge, MA: Belknap Press, 1965), pp. 121-122. 7. Aaron Wildavsky, Speaking Truth to Power: The Art and Craft of Policy Analysis, rev. ed. (New Brunswick, NJ: Transaction Publishers, 1987), p. 9. 8. Wildavsky, Speaking Truth to Power, p. xxv. 9. Carol H. Weiss, "Helping Government Think: Functions and Consequences of Policy Analysis Organization," in Carol H. Weiss, ed., Organizations for Policy Analysis: Helping Government Think (Newbury Park, CA: Sage Publications, 1992), p. 15. 10. See, for example, Daniel R. Sarewitz, Frontiers of Illusion: Science, Technology and the Politics of Progress (Philadelphia: Temple University Press, 1996); or Sheila Jasanoff, "American Exceptionalism and the Political Acknowledgement of Risk," Daedalus 119 (1991): 61-81; Mary Douglas and Aaron Wildavsky, Risk and Culture (Berkeley: University of California Press, 1982). Harvey Brooks was one of the first to make this claim, in 1971. 11. Briefly, by postmodern I mean a view that is skeptical of certain aspects of the modern intellectual approach, specifically its claims that absolute or positive knowledge about society is obtainable and that there is a clear directionality or progress in our social evolution. A postmodern view is likely to be critical of the status quo, skeptical of authoritative knowledge claims, and supportive of alternative interpretations—in short, it will encourage multiple voices to speak on any matter. 12. There are severe limits to the deliberative translation from truth to power. Time pressure, for example, may force decision makers to act before a scientific consensus is reached, transmitted to professional practice, and accepted as conventional wisdom by administrators and the public. Institutional structure can hinder the flow of relevant information to decision makers. An agency's mandate can likewise allow it to ignore crucial data that fall outside its official purview. Many important decisions revolve around the implications of cutting-edge science for which no consensus or standard body of knowledge yet exists. See Sheila Jasanoff, The Fifth Branch: Science Advisors as Policymakers (Cambridge, MA: Harvard University Press, 1990), p. 78. 13. Expertise, like most other human characteristics, is socially mediated, according to Jasanoff, The Fifth Branch, pp. 12-15. Experts acquire credentials and status in a social context—a university, a laboratory, a firm—which affects both what they learn and whose respect they enjoy. Compounding this is the fact that knowledge, the stuff experts have to offer, is socially constructed in those same institutions. This adds a disturbing sense of relativism to the concepts of expertise and knowledge. Worse, rewards within each discipline tend to accrue to narrow specialists, whereas breadth of knowledge provides the stronger basis for expert advising.
Evidence of institutional inertia, in which funding persists for unproductive established fields while new fields languish, increases public skepticism and opens the enterprise to charges of performing "ironic" science. See John Horgan, The End of Science (Reading, MA: Addison-Wesley, 1996). 14. Reality is more interactive than Price's linear model suggests. See Robert M. White, "Introduction," in Myron F. Ulman, ed., Keeping Pace with Science and
Engineering: Case Studies in Environmental Regulation (Washington, DC: National Academy Press, 1993), pp. 1-7. A network analogy might be more appropriate than that of a spectrum. While it is true that research science informs the professions and their codes of practice, one also sees regulatory science performed to guide administrators directly, and administrators in turn develop regulatory standards that in some ways substitute for codes of practice. Advocacy science likewise attempts to influence decision makers directly, rather than counting on indirect transmission of information through professionals and administrators. Professionals develop technology that opens new scientific vistas, and decision makers set funding priorities that dictate the direction of much scientific effort. Finally, the public is in many of these loops, and is indeed many "publics" with different perceptions and biases. 15. Brian Wynne, "Sheepfarming after Chernobyl: A Case Study," Environment 31, 2 (1991): 10-15, 33-39. 16. Dan Durning and Will Osuna, "Policy Analysts' Roles and Value Orientations: An Empirical Investigation Using Q Methodology," Journal of Policy Analysis and Management 13, 4 (1994): 629-657. For an introduction to the literature on roles, see David L. Weimer and Aidan R. Vining, Policy Analysis: Concepts and Practice, 2nd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1992), pp. 16-19. 17. President Clinton's boosterism on the editorial page of Science (279: 1111 "Catalyzing Scientific Progress," February 20, 1998) captures the persistence of the modern spirit. Most scientists pay no attention to assertions that their public credibility is in decline. The modern scientific enterprise is in many ways intact. Only when scientists overreach themselves analytically or demonstrate social irresponsibility by fraud, waste, or other misconduct do others express concern. Decision makers likewise react primarily to the individual symptoms of breakdown rather than looking for deeper causes. See Frederick Grinnell, The Scientific Attitude, 2nd ed. (New York: Guilford Press, 1992). 18. Some scientists actively fight against deconstruction. This backlash against the postmodern critique is evident in the literature. A notorious example is Paul R. Gross, Norman Levitt and Martin W. Lewis, eds., The Flight from Science and Reason (New York: New York Academy of Sciences, 1996). The backlash can also be seen in the official organs of some professional societies and scientific organizations. Examples include the American Physical Society (e.g., Bernard Ortiz de Montellano, "Post Modern Culturalism and Scientific Illiteracy," APS News, January 1998), the American Association for the Advancement of Science (e.g., letters to the editor in Science 276: 1953a-1957a, June 27, 1997 on deconstructing science), and Sigma Xi (e.g., Robert Frosch, "Defending Science and Technology," American Scientist 84: 514, [November-December 1998]). Underlying these efforts is a desire to clarify who really has expertise, to re-assert "one" rationality. 19. A few analysts have chosen to embrace constructivist approaches. They emphasize the social side of science, and seek inclusively to construct knowledge. Examples of their activities include community research-oriented "science shops" attached to universities, technology assessment by "consensus conferences" of lay people with access to experts, and broad efforts in science education. For a comprehensive argument favoring inclusive approaches to knowledge construction, see Richard E.
Sclove, Democracy and Technology (New York: Guilford Press, 1995). For an evaluation of one such approach, see David H. Guston, "Evaluating the First U.S. Consensus Conference: The Impact of the Citizens' Panel on 'Telecommunications and the Future of Democracy,'" Science, Technology, and Human Values 24, 4 (1999): 451-482. 20. Populism, with its trashing of elites and its appeals to direct democracy, is a recurring theme in American political life. Surfacing in venues such as the 104th U.S. Congress, populism represents a powerful response by the public and some decision makers to disillusionment with expertise. Populists devalue expertise and then proceed to
make public decisions without it. They purportedly rely instead on common sense and an unarticulated notion of the will and wisdom of "the people." See Warren E. Leary, "Congress' Science Agency Prepares to Close Its Doors," New York Times, September 24, 1995. 21. Thomas P. Hughes, Rescuing Prometheus (New York: Pantheon Books, 1998), pp. 166-195. 22. James C. Scott, Seeing Like a State (New Haven: Yale University Press, 1998), pp. 11-22. 23. Donald A. Schon, The Reflective Practitioner: How Professionals Think in Action (New York: Basic Books, 1983). 24. Paul R. Kleindorfer, Howard C. Kunreuther, and Paul J. H. Schoemaker, Decision Sciences: An Integrative Perspective (Cambridge: Cambridge University Press, 1993). 25. H. Mintzberg, D. Raisinghani, and A. Theoret, "The Structure of Unstructured Decision Processes," Administrative Science Quarterly 21 (1976): 246-275. 26. Ronald Janssen, Multiobjective Decision Support for Environmental Management (Dordrecht: Kluwer Academic Publishers, 1992). 27. The common definition has strayed from its etymological roots as notions of a definable public good have been replaced by an interests-based view of public affairs. The stakeholder used to be the party who was trusted to safeguard the collective pot of wagered money during gambling; today it is anyone with a vested interest in that pot. See Mary R. English, "Stakeholders: Whose Interests? At What Sacrifice?" Energy, Environment and Resources Center working paper (Knoxville: University of Tennessee, 1998). 28. This characterization of power, and the citations of Bacon and Machiavelli, are from Bent Flyvbjerg (Steven Sampson trans.), Rationality and Power: Democracy in Practice (Chicago: University of Chicago Press, 1998), pp. 2, 131. 29. Gil-Chin Lim, "Toward a Synthesis of Contemporary Planning Theories," Journal of Planning Education and Research 5, 2 (1986): 75-85. 30. Stephen Kline, Conceptual Foundations of Multi-Disciplinary Thinking (Stanford, CA: Stanford University Press, 1995). 31. Brian Wynne, "Risk and Social Learning: Reification to Engagement," in Sheldon Krimsky & Dominic Golding, eds., Social Theories of Risk (Westport, CT: Praeger, 1992), pp. 275-297. 32. Edward Woodhouse and Dean Nieusma, "When Expert Advice Works, and When It Does Not," IEEE Technology and Society Magazine 16, 1 (Spring 1997): 23-29.
3 Fundamental Choices

By exploring the role of analysis from the analyst's practically grounded perspective, I hope to encourage individual analysts to reflect on their context and professional choices. Yet social theorists and philosophers of science have covered much of the same ground at a far higher altitude. Engaging with the conceptual discourse here will help us interpret the case studies.

COMPETING CONCEPTIONS

There is an ongoing and epic struggle between two models of how society should use knowledge. To introduce the debate I use Lindblom as a foil, because he has so effectively synthesized this literature.1

Two Models of Social Learning

The older model, with roots extending back at least to Plato, is the scientifically guided society. Bacon, Descartes, Condorcet, Bentham, Marx, and many modern systems analysts all believe that a good society takes advantage of scientific knowledge to make progress, that the masses are incapable of rigorous scientific investigations, and that society necessarily depends on a trained scientific elite for guidance. Whether called "a rule by a comprehensive knowledge of social facts," "the conscious, deliberate organization of society," "scientific socialism," or the "depoliticization of social problem solving," this model holds that there are right answers and optimal solutions to problems, and that only an expert elite can find them.2

The alternative model is the self-guided society. Polybius, Hume, Rousseau, Hayek, and many modern sociologists give experts a supporting role in the drama, but claim that ordinary individuals and their representatives are the stars of the show. As in chapter 2, they "appreciate the way in which Rome's accomplishments grew from experience or practice rather than design"; show
"skepticism regarding the capacity of government to rule wisely"; exhibit a "penchant for a competition of ideas"; maintain a "commitment to the market system instead of to central planning for fear the central planners cannot achieve a sufficient scientific competence"; and indulge in "bouts of hostility to elites."3 In 18th century France, democrats and scientists were allied together in the pursuit of reason and in the battle against superstition and hereditary authority. However, this alliance has since collapsed, potentially allowing the emergence of a self-guiding society.4 The key casualty of this collapse has been the loss of a privileged political voice for authoritative expertise: "there is a world of difference between a political process in which people honestly try to understand how the world looks from different vantage points, and one in which people claim from the start that their vantage point is the right one."5 Reason and inquiry prosper, while authority—whether aristocratic or meritocratic—withers. Proponents of the self-guiding model perceive lay inquiry to operate in an impaired, inconclusive probing mode that never reaches closure. Since social science is not only performed by impaired, fallible, yet adaptable humans, but also studies them, it must remain inconclusive to a far greater degree than natural science. Social science therefore should never substitute for lay inquiry, but rather supplement it. In addition, there is evidence that ordinary people find practical ways to manage problems that social science cannot even characterize accurately, because people can change their behaviors or their expectations. A corollary feature is that optimality becomes hard to define, since both options and objectives are malleable. Descartes' quest for "a single system of knowledge . . . eventually ran head-on into the articulation of the fact-value distinction and shrank to become an aspiration."6 In place of an ideal destination, scholars substituted a thoughtful journey, valuing vigorous, free inquiry as an end in itself. Social problem-solving is about where to go next on a long trip through an uncharted landscape. Lacking the hope of scientific closure, members of society must instead become adept at learning from their errors and their experiences. Beyond Caricatures The approach to analysis outlined in this book is clearly consonant with the self-guiding model. It accepts a supporting rather than a starring role for professional scientific inquiry. Yet the case studies will not fully support Lindblom's artificial dichotomy or his caricatures of professional inquiry. First, while Lindblom (1990) cogently critiques social science, he largely ignores the role of natural science in social problem solving. There is in fact a spectrum of scientific fields that enjoy varying degrees of closure in relation to specific social problems. Thus there is room for substantive rationality along with procedural rationality. Second, while he shows how unlikely it is that ambitious, comprehensive analyses can succeed,7 he misses the corresponding weaknesses of overly narrow, reductionistic analyses. To keep the edifice of scientific knowledge from collapsing, the tall, narrow pillars of disciplinary science in fact need
cross-bracing at periodic intervals. He misses the integrative benefits of attempts at holism. Finally, when he critiques a scientifically guided society, he also tars all of science with the same broad brush. In fact, although his favorite targets— systems analysis, economics, and city planning—are ambitious in their scope, they usually remain one step removed from actual decision-making. He therefore understates their value, when held at arms-length from power, for articulating visions of the future and for exploring scenarios; that is, for serving as low-cost experiments. Developing Standard Practices The very notion that we engage in social problem-solving is itself controversial. Problem-solving is really problem succession, Wildavsky asserts: "Cut off one and another sprouts."8 He seems to ask analysts to swear a Hippocratic oath to above all do no harm. This view appears once again to put process over substance, to describe how bureaucrats perceive the world, as in: Problems may come and go, but bureaucratic analysts and their institutions endure. Yet in fact the idea is more subversive, and it echoes the analyst's first problem as mentioned in chapter 2. There I observed that analysts must always simplify reality to make modeling tractable, and that such simplification exacts a toll on the relevance of analysis. By identifying the costs of simplification, we can begin to unpack the practical implications of the problem-succession idea. First, analysis can never predict all of the impacts of a decision, so that unintended consequences must result from every action. Second, analysis inevitably takes a snapshot of reality. The three-way dynamics of our evolving scientific understanding of problems, their technological solutions, and our social aspirations ensure that reality immediately departs from every analytical model. Thus, implementation issues become difficult to separate from analysis issues. Wildavsky's prescriptions for analysts working in a context of problem succession anticipate mine in some respects: Be empirical.10 Decentralize, try experiments, and learn from experience. Find ways to make evidence matter, even to polarized audiences.12 Don't merely help society achieve given objectives, but help also with their formulation.13 Become "intellectually chastened—more variegated, more self-critical, and more aware of . . . social relations."14 Successful joint fact-finding must incorporate these tasks, while also addressing the second challenge posed in chapter 2, that of successful communication. If progress consists mostly of developing standard practices that free up human attention to focus on new problems,15 then analysts play a central role in helping society develop new routines based on new knowledge. Perhaps the analyst's key role is to help lay people with varied perspectives agree on what constitutes knowledge. This brings us face-to-face with the concept of legitimacy.16 While Weber and his students have thoroughly explored the
conceptual foundations of legitimacy, the practical job of developing reliable recipes for legitimacy remains mostly ahead of us. A CONCEPTUAL ARGUMENT FOR COMMUNICATIVE ANALYSIS To help analysts reflect on the implications of the shared decision-making context, this section develops the conceptual basis for a humble—but not timid—approach to joint fact-finding analysis. My objective is to show that this alternative approach has a coherent conceptual foundation, one that compares favorably with the theories underlying conventional analysis. This section is relatively demanding of the reader, but it is short and worth the effort. It raises general issues that appear repeatedly in the case studies. I offer a preview of those issues here. Analysts produce knowledge in a social context that influences how they perform their work, who legitimately participates in this activity, and which facts gain general acceptance. Since knowledge production has a social context, it is sometimes difficult to decompose decision-making problems into distinct fact and value components. Without a clear facts-values dichotomy, the rationale for a strict division of labor between analysts and decision makers weakens. This implies a need for analysts and decision makers to interact frequently, and thereby places a premium on successful communication. Yet we know that communication is difficult. Some communications problems are unintentional, as when an analyst is a poor public speaker or when society's organizational or cultural divisions impede the flow of information. Other communications problems are intentional, as when an individual lies, or when a political institution propagandizes. Whether enlightened self-interest or concern for others motivates analysts, I argue that they need to understand communications problems and know how to overcome them. The case studies will illustrate both the problems and potential solutions. Knowledge Is Socially Constructed Analytical tools are "scientific" in that they claim to have a rational basis, build upon a body of accepted theory, and aspire to meet standards of validity and reliability. Analysts are to some degree a part of the scientific enterprise; they follow scientific methods and norms, and produce knowledge. The strengths and weaknesses of analysis are in large measure those of science. Kuhn describes the roles of self-referential scientific communities in the social construction of knowledge using the "paradigm" concept.17 The broad strokes of the portrait he paints of science show a Darwinian succession of dominant theories, archetypes, reigning until discredited by countervailing evidence and displaced by competing theories having better explanatory power. Yet the details in Kuhn's picture consist of normal science, which is in fact the primary activity of the scientific enterprise. This involves living within a paradigm and testing incremental hypotheses suggested by its core norms. Without a consensus among early astronomers on the layout of the solar system, theories of celestial mechanics could never have been elaborated nor would
observations attempting to validate those theories have been shared. The focused communication and accumulation of evidence within a research community are what makes possible the occasional revolutionary insight. Hence we have evolved disciplines that reduce the range of phenomena each community considers, to allow greater focus and more rapid learning.

Normal science is clearly a social endeavor. A community of scientists share core beliefs—the basic norms of their discipline—and a common specialized language to allow efficient exchange of information. Research communities are bound together in distinctly social ways: aspiring scientists go to school and learn unique languages and myths from their elders; later they attend conferences, correspond, work within hierarchical organizations, and help allocate resources among their peers according to community standards of merit. The standards of evidence required in scientific communication are also self-referential: investigators report research findings in specified peer-reviewed journals with familiar formats using accepted methods and applying standard statistical criteria.

The scientific community is susceptible to social problems just like any other social community. For example, during the repressive Soviet era the politically connected biologist Lysenko successfully enforced an orthodoxy on his peers that for many years sidetracked progress in genetics research within the USSR.18 Even in democratic societies, power relations color all aspects of our political and economic lives, including life in the scientific community. Most research funding in western democracies, for example, has attached strings that link science to military or commercial objectives; little funding is available to study the problems of the poor. Political decision makers prohibit certain types of scientific activities altogether. They also harness individual self-interest to drive innovation by creating and defending patents. Social factors instead may differentially encourage professional success. For example, about half of the U.S. Nobel prize winners from 1901 to 1972 had teachers who were themselves Nobel prize winners. This "lineage" phenomenon may have been related both to the mutual attraction between talented people and to the social aspects of recognizing achievement (being "in the network"). Success in American science showed a strong correlation with pedigree, with five universities (that together produced fewer than twenty percent of the nation's science Ph.D.s) producing half of its Nobel laureates and National Academy of Science members.19

In spite of social pathologies, the scientific enterprise has been remarkably successful at advancing the frontiers of knowledge. Much credit goes to the distinctly social practice of peer review. This unique practice may be motivated by shared norms of the sort that Merton suggested for the scientific community.20 Others suggest that a less lofty shared norm of self-interest is adequate to explain the peer review process. Jasanoff skeptically presents peer review as a "social compact created and sustained by the self-centered communal needs of modern science . . . facilitated by the emergence of a professional scientific community concerned with upholding the interests of its members in recognition, authority, and above all, dependable knowledge."21 In
short, by policing themselves, professional scientists gain reliable sources of knowledge and avoid being regulated externally by other forces in society. Peer review ensures those objectives, whether lofty or utilitarian, better than other possible social arrangements.

Scientific fact or "truth" is thus a relative and provisional concept, referring to the current consensus among a community of researchers. This consensus may be stable and widespread, as in the case of the Copernican model of the solar system, or it may be fleeting and local, as was the case for cold fusion. Temporary truths, however, are not what decision makers want from scientists. Even worse, the most interesting public and private decisions—medical, environmental, economic, and others—may involve cutting-edge science that lies far from the core truths of our current paradigms, and about which there is not yet a consensus. The scientific enterprise does not naturally produce information useful to lay decision makers; rather, the scientific enterprise produces knowledge for internal consumption.

It follows that knowledge ought to be constructed differently for the decision-making context than it is for one's peers. The ideal behavioral norms of pure science are unlikely to apply in situations where tough decisions are made; parties may be biased, proprietary, self-interested, or credulous. Analysts—applied scientists—therefore must be able to function in domains where facts and values are virtually inseparable. Analysts need a nuanced view of rationality that helps them sort through competing claims about knowledge.

Rationality Is Multifaceted

One way to characterize the division of labor between decision makers and analysts is that decision makers decide upon reasonable decision rules, while analysts strive to apply those rules rationally. "Reasonable" decision rules are internally consistent and are the outcome of moral argumentation. "Rational" application is logical, valid, reliable, and empirically tested. Analysts get into trouble when they adopt decision rules independently of decision makers. Apparently irrational decisions may simply be based on different decision rules than those assumed by the analysts. Analysts are better off engaging decision makers in explicit discussions about the reasonableness of decision rules. For example, is a "net social benefit" decision rule used to justify the displacement of urban poor for a highway project reasonable? Secondarily, is the method used for estimating the costs and benefits rational? A rational analysis is of no use if it is based on an unreasonable decision rule.

Rational facts and reasonable values are interdependent. Consider a situation in which drinking water supplies are disrupted, leaving a community to rely on bottled water. An analyst for a chain of drugstores might assume that the company's goal is to maximize profits, and thus suggest tripling the cost of bottled water. If the actual goal is to maintain good public relations, the company owners might instead prefer to reduce prices. The decision to reduce price is "irrational" only when the "reasonable" decision rule is not understood.
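A minimal sketch of the bottled-water example, using invented demand figures, shows how making the decision rule an explicit parameter changes which price looks "rational"; neither the numbers nor the functions come from the book.

```python
# Illustrative only: the same invented demand data yields different "rational"
# prices depending on which decision rule the owners consider reasonable.

BASE_PRICE, BASE_DAILY_SALES = 1.00, 500   # invented pre-disruption figures

def expected_daily_sales(price):
    # Toy linear demand curve: sales fall as price rises (invented elasticity).
    return max(0.0, BASE_DAILY_SALES * (1.8 - 0.8 * price / BASE_PRICE))

def profit(price, unit_cost=0.40):
    return (price - unit_cost) * expected_daily_sales(price)

def goodwill(price):
    # Crude proxy: customers served, penalized for prices above the familiar level.
    return expected_daily_sales(price) - 200 * max(0.0, price - BASE_PRICE)

candidate_prices = [0.80, 1.00, 1.50, 2.00, 3.00]
for rule_name, score in [("maximize profit", profit), ("maintain goodwill", goodwill)]:
    best = max(candidate_prices, key=score)
    print(f"{rule_name}: charge ${best:.2f}")
# The profit rule recommends raising the price; the goodwill rule recommends lowering it.
```

The toy numbers are beside the point; what matters is that neither recommendation can be judged irrational until the owners' decision rule is on the table.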
There are conflicting views about what constitutes a reasonable decision rule. Philosophers commonly distinguish decision rules based on consequences (or "consequentialist") and rules based on a sense of obligation to others (or "deontological"), but there are many variations. Pragmatism defines the meaning of an action as no more or less than the sum of its consequences; neoclassical utilitarianism seeks the greatest good for the individual; Benthamite utilitarianism seeks the greatest good for the greatest number; the Kantian categorical imperative directs us to treat ourselves and others as ends in themselves, and not only as means; Rawls urges consideration of distributive justice, and so on.22 Given the many possibilities, analysts should ask what is reasonable. To illustrate how the choice of decision rule affects analytical work, I introduce two arguments that approach the facts-values dichotomy from very different starting points. Ulrich explains that philosophers working under the banner of critical theory, a particularly productive line of recent inquiry, have found it useful to distinguish two types of reason: theoretical and practical.23 Theoretical reason helps us judge the empirical validity of theories, our hypotheses; this is the rationality of pure science. Practical reason helps us judge the normative validity of practices and actions, our choices; this is the rationality of decision-making. Only by integrating these two types of reason can we develop a basis for linking facts and values, analysis and decision-making.

Both types of reason face the same problem: there is no logical principle allowing us to universalize, or generalize, our findings. Thus induction, the process of developing theory from empirical observations, cannot let us define absolute natural laws, but only best explanations, or temporary truths. These in turn provide our only basis for deduction, the process of drawing conclusions about particular phenomena from a set of (socially accepted) general principles of theoretical reason. Likewise, the practical process of making decisions based on subjective impressions cannot let us define absolute moral laws, but only good decision rules, or norms. Yet these two concepts of reason offer modest bases for "relatively rational" action. Later I briefly sketch how each approach can be used to link facts and values together, thereby creating an integrated rationality.

Theoretical reason serves as the starting point for linking facts and values using consequentialist rationality. From among various consequentialist doctrines, I focus here only on utilitarianism because it underlies most scientific, economic, and engineering analysis.24 In this view, rational argument will ideally cycle between theorizing and empirical testing. Scientists know that it is useful to test theory empirically, to quell useless conjecture and to redirect work in more promising directions. They know that the best hypotheses are those that can be tested, because in that way, rational argument advances knowledge within the scientific community. Rational action, in turn, is defined by utilitarians as useful action. A decision to increase the price of bottled water might meet Weber's criteria for utilitarian decisions when there is a convergence of purposes (increase profits) and values
(profits are the main goal of a company), plus adequate means in regard to these purposes (people are willing to pay more for water).25 This utilitarian notion of rationality is success oriented and assigns instrumental rather than intrinsic value to the means employed.26 In more extreme forms, it lets the ends justify the means.
Theoretical reason effectively assumes that the empirical validity of actions can be tested, so that the job of rational analysis is to identify optimal actions. In non-social situations it provides a basis for optimization, or what analysts call "the science of the best."27 In social situations it provides a basis for self-interested strategic action. In fact, maximizing utility (self-interest) becomes the normative decision rule. Variants of this rule range from the blinkered self-interest of Adam Smith's homo economicus to the enlightened self-interest typical of many real people.
Practical reason instead serves as the starting point for linking facts and values together using obligation-based rationality. I focus on the communicative approach (described later), which became well known in the 1970s and has roots in critical theory that lead back to Kant's categorical imperative.28 This doctrine underlies some current work in political science and sociology, and is especially prevalent in practical fields such as planning and conflict resolution. The focus of practical reason is on the morality of social norms (normative validity), as measured by consensus and social acceptance. Thus, there is a widespread acceptance of a company's right to make profits on bottled water sales, but most people will condemn the company's behavior when it steps over the line into profiteering.
Practical reasoning is not limited to decision-making. It can also be used to test the validity of scientific work, of facts. Indeed, interpreters of Jürgen Habermas argue that purely objective knowledge is an impossibility because it is socially constructed, as discussed earlier.29 They also argue that any purely instrumental ("optimizing") rationality enforced in a social context must come at the expense of both individual liberty and democracy.30 Liberty suffers because individuals are coerced into serving a dubiously defined public good. Democracy suffers because individuals relinquish decision-making power to the optimizers. Discourse becomes thinner and alienation sets in.
The solution is to advocate a new kind of rationality, a new scientific method, and a new definition of knowledge. Habermas offers a basis for this in his communications theory of society.31 This theory emphasizes that communication is an essential part of social, political, and economic relations, and that it can be done well or poorly. If communications are distorted, as they almost always are, then problems arise. One group may communicate with another in an incomprehensible, insincere, illegitimate, or false manner. For example, technical specialists may write poorly. Experts may exclude wide participation in a debate by using jargon. Bureaucrats may cloak selfish motives in "the public interest." Religious leaders may use their pulpits for secular goals. Companies may misrepresent the findings of their product safety studies. In the case of the water shortage and accompanying bottled-water profiteering, we could compare the company's
actual communications to an "ideal speech situation" featuring comprehensibility, sincerity, legitimacy, and truth, critically examining the extent to which communication has been distorted.
A communications theory of society allows us to develop a more courteous basis for linking facts and values, a different definition of rationality. Communicative rationality has two interacting components: truth and rightness, which are analogous to rationality and reasonableness. To paraphrase: In making truth claims (the problem of theoretical reason) we ask each other for agreement on the material conditions, the facts, of life. In making rightness claims (the problem of practical reason) we seek agreement on what is correct action. Agreement—the validity of these claims—depends on consensus, and that is appropriately achieved by the force of the better argument, that is, through the use of language. Material facts and appropriate actions are inextricably linked; they are the moments of a comprehensive rationality.32
This is no instrumental, absolutist rationality; it is one founded on successful communication and mutual understanding. It sounds frighteningly context-dependent. Indeed, its advocates are sometimes accused of cultural relativism and similar crimes. Along with many others, I do not fully accept the normative agenda often associated with the communicative turn.33 Communicative theory can, however, describe how people actually interact when making complex decisions, which should interest even strict utilitarians. Decisions that seem "irrational" from a technical point of view may be completely rational from a nontechnical vantage. For example, as Slovic points out, public concern about disposing of nuclear waste may have little to do with the technical criteria for waste disposal, and much to do with whether the public trusts those experts.34 In this case, as with many others, "rationality claims can no longer be established 'monologically' but only 'dialogically' in a discourse."35 In other words, utilitarianism sometimes seems frighteningly independent of its context. The fact that a hurricane has contaminated the public water supply should matter to the company as it sets the price of bottled water.
The preceding discussion suggests that both utilitarian and communicative approaches can legitimately apply to problems of linking scientific facts and decision-making values. It is useful to consider how actions or outcomes might differ for a given situation depending on which approach dominates. Jantsch defines three types of action:36
• Instrumental action: The objectives are to rationalize the use of scarce resources, to optimize production under cost pressures, and to satisfy a criterion of efficiency. To test the validity of scientific facts, one satisfies oneself on the empirical evidence. To test the validity of decision-making values, one checks to see that the decision maker's utility has increased.
• Strategic action: The objectives are to rationalize a set of steering principles for managing complexity and uncertainty, to strategize about change induced partly by the actions of others, and to satisfy a criterion of effectiveness. To test the validity of scientific facts, one again satisfies oneself on the empirical evidence. To test the validity of decision-making values, one checks to see that the decision maker's expected utility has increased.
• Communicative action: The objectives are to rationalize a set of norms or collective preferences for managing conflict, to seek consensus and increase the potential for mutual understanding, and to satisfy a criterion of ethical behavior. To test the validity of scientific facts, one seeks consensus on the empirical evidence. To test the validity of decision-making values, one checks to see that the decision was consensual.
Approaches are most likely to differ as a function of how social the context is.37 For simplicity, we can imagine two contexts: nonsocial (with a single decision maker) and social (with multiple decision makers). In a nonsocial context, such as a private decision that has no spillovers, everything reduces to simple instrumental actions. There are no other actors either to strategize against or with whom to seek consensus. Thus, in linking facts and values, one must only satisfy oneself about the facts, and use a simple test of decision-making values: increased satisfaction.
However, in the social, or multiparty, context there is divergence. Utilitarians will still apply instrumental methods to factual problems, but they will apply strategic methods to value problems. The link between facts and values thus becomes the individual—her combined perceptions and self-interest. This is bad news for analysts expecting that they can always legitimately speak truth to power.
In a social context, consensus-oriented actors instead will apply communicative methods to both problem types. They will choose communicative approaches to factual problems for reasons mentioned in the previous section: science has a social context and facts exist only as provisionally accepted by the scientific community. They will choose communicative approaches to value problems because consensus is their criterion for acceptance. With the communicative approach, the link between facts and values becomes the community—its members' shared perceptions and interests. This is bad news for analysts who cannot communicate.
In a social context, utilitarians and consensus-oriented actors will bring very different motivations to the task of linking facts and values together. Yet their choice of actions will frequently overlap because actions are nested and interdependent. One cannot manage complexity without also managing scarcity, and one cannot manage conflict without also managing complexity and scarcity. Thus communicative action is built upon a foundation of instrumental and strategic actions. Conflict can be a source of uncertainty and can lead to scarcity, and vice versa. Thus communicative action may be a good strategic or instrumental choice, just as management of scarcity or uncertainty may defuse conflict.
The foregoing suggests that both utilitarians and consensus seekers should learn about instrumental, strategic, and communicative actions. Since instrumental and strategic techniques are well known, the remainder of this discussion focuses on the communicative approach. It is important at this point to distinguish between the problems of conflict and miscommunication. Our task
is relatively modest: to avoid unnecessary misunderstanding and needless conflict. This is a narrower goal than building consensus, but it allows these lessons to apply to a broader range of joint fact-finding situations.

Communication Is Difficult

Norms of pragmatic communication are to speak comprehensibly, sincerely, legitimately, and truthfully.38 We follow these norms often enough to expect them from each other, and our society suffers when we fail to deliver. This puts a great burden on analysts. Their expertise gives them both important information and the potential to severely distort acts of communication. When communicating within their own discipline, problems may not arise, but when interacting with others—their bosses, the public, experts from different fields—communicative incompetence can lead to disaster. Knowledge is not knowledge until it is communicated, and the scientific method is not credible until it is understood. After all, "separating what you are talking about from how you are talking is as problematic as separating fact from value."39
Critical theorists point out that some distortions of communication are unintentional, while others are discretionary. Also, some distortions are tied to individual behavior while others are a function of the systemic context.40 These two dimensions provide a four-way typology of communicative distortions. Locating a context in one of the four resulting categories is the first step toward diagnosing, and then solving, communicative problems.
Unintentional and Individual: Some communicative distortions are easier to fix than others. Familiar prescriptions can reduce the distortions in this category. For example, better training in public speaking, report writing, and experimental or model documentation can improve the personal communicative ability of an analyst. Better analytical techniques can screen out spurious data.
Unintentional and Social: The structural distortions in this category require structural solutions. Thus, miscommunication between scientists and lawyers, for example, may be mitigated by cross-training a set of people to do forensic and regulatory science, appointing scientific advisors to the court, creating special institutions such as public utilities commissions which have expert adjudicators, using broadly accepted tools such as those employed in statistical analysis, or by using easily comprehended tools such as graphical data visualization. Inter-organizational miscommunication may be reduced with measures such as interagency committees and standardized reporting requirements.
Intentional and Individual: Interpersonal manipulation problems seem less amenable to simple, universal solutions because the underlying norms become relevant: utilitarians accept strategic actions such as lying and bluffing that offend a communicative sensibility. Yet utilitarians acknowledge that collusion is the only way out of a prisoners' dilemma,41 and successful collusion depends on successful communication. Thus in competitive-cooperative (mixed motive, positive sum) situations where conflict has significant potential costs, both strategic and communicative actors will want improved communication. They
will disagree in purely competitive (zero-sum) situations. Solutions may include information sharing, truthful statements, and integrative bargaining that seeks to ensure win-win outcomes.
Intentional and Social: The fourth category is the most problematic. Communicative distortions serving goals of structural legitimation require subversive remedies. Thus, whistleblowers may leak classified information, consumer advocates may refute advertisers' claims, and disadvantaged communities may document biases built into the current power structure. Analysts become advocates, and their actions become distinctly political.
This quick tour of communicative distortions shows that the analytical context matters, expertise has a fragile value, analysts need to choose their battles carefully, and communication is ultimately a personal responsibility. Analysts need to understand where and how the information they produce will be used, if they want to foster more successful communication. These are personal decisions to engage in comprehensible, sincere, legitimate, and truthful communication, or to accept the consequences of not doing so. The decisions are more difficult in some circumstances than others. If one is employed by, for example, a chemical company, and one's job is to argue that none of its products are toxic, then adopting Habermasian norms may be unrealistic and personally costly. The nature of our political and economic systems is such that many analysts find themselves in similar positions, where they dare not act subversively. However, the descriptive insights about communications retain value even then.
In sum, a communications theory of society has enormous normative and descriptive power, and it speaks directly to the concerns of analysts. From a simple premise—that communication is important—one can predict many social outcomes and prescribe practical changes in what analysts should do. As a basis for thinking about the roles of analysts and decision makers, and for understanding the links between facts and values, the communicative approach works at least as well as its consequentialist competitors. Communicative action also has strategic value measured in utilitarian terms. For these reasons I contend that communicative analysis must be done by more of us.

TAKING THE MIDDLE PATH

My argument in this chapter emphasizes that analysts rarely get to work alone, that working with others is not easy but can be made easier, and that our analytical methods ought to accommodate the reality of the communicative context, without necessarily adopting the norms of communicative action. This argument applies in spades to joint fact-finding. In closing I link these observations to the notion of an analytical "middle path" introduced in chapter 2.
Analysts working at the frontiers of knowledge must often take heroic measures to make progress. In a pure research context, all that the heroic analyst places at risk is her reputation. However, in an applied context where the analyst attempts to influence decision-making, the stakes can be much higher.
Lives or fortunes may be on the line. Analysts need to worry about overreaching from heroism into foolhardiness.
A communicative approach to analysis need not be a timid approach. Indeed, analysts who fully engage in a work context of shared decision-making may be less likely to indulge in reactive thinking and focus on small problems than traditional disengaged analysts. Other parties will force them to broaden their scope, consider a greater range of mental models, and otherwise expand their ambitions. However, I believe that a communicative approach to analysis must be a humble approach. By that I mean that the analyst must be willing to listen, to explain, and to tolerate diverse views. The advance of knowledge depends on a successful dialogue, and a willingness to explore worldviews that differ from one's own. This middle path of humility is not an easy path, of course, as the case studies in subsequent chapters suggest.
NOTES

1. Charles E. Lindblom, Inquiry and Change (New Haven: Yale University Press, 1990), pp. 213-230.
2. Ibid., pp. 214-215.
3. Ibid., pp. 214-216.
4. Ibid., pp. 228-229.
5. Deborah Stone, Policy Paradox: The Art of Political Decision Making, 2nd ed. (New York: W.W. Norton, 1997), p. 380.
6. Lindblom, Inquiry and Change, pp. 216-225.
7. Lindblom, Inquiry and Change, pp. 225-226.
8. Aaron Wildavsky, Speaking Truth to Power: The Art and Craft of Policy Analysis, rev. ed. (New Brunswick, NJ: Transaction Publishers, 1987), p. 2.
9. J. Clarence Davies, "Environmental Regulation and Technical Change: Overview and Observations," in Myron F. Uman, ed., Keeping Pace with Science and Engineering (Washington, DC: National Academy Press, 1993), pp. 251-262.
10. Wildavsky, Speaking Truth to Power, p. xxxii.
11. Ibid., p. xxix.
12. Ibid., p. xxxvii.
13. Ibid., p. xxxvii.
14. Ibid., p. xxxviii.
15. Alfred North Whitehead, quoted in Lindblom, Inquiry and Change, pp. 223-224.
16. Stuart Hill, Democratic Values and Technological Choices (Stanford: Stanford University Press, 1992), pp. 158-162.
17. Thomas S. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962; 2nd ed., enl. 1970), pp. 1-23.
18. Mainstream (Mendelian) genetics research was banned, from 1948 until Lysenko lost influence in 1964, in favor of Lamarckian theories that dovetailed better with Marxism. Dissenters were deprived of funding, denied jobs, and even sent to prison camps. See Loren R. Graham, Science and Philosophy in the Soviet Union (New York: Alfred A. Knopf, 1972). See also Frederick Grinnell, The Scientific Attitude, 2nd ed. (New York: Guilford Press, 1992), pp. 144-145.
19. H. Zuckerman, Scientific Elite: Nobel Laureates in the United States (New York: Macmillan, 1977). Referenced in Grinnell, The Scientific Attitude, pp. 55-56, 71.
20. These include universalism (evaluation using impersonal, "objective" criteria), communality (knowledge as a "public" good not owned by individuals), disinterestedness (lack of personal incentives for research outcomes), and organized skepticism (requirement for empirical and logical evidence to support claims). See Robert K. Merton, "The Normative Structure of Science," reprinted in Merton's The Sociology of Science (Chicago: University of Chicago Press, 1973). Referenced in Sheila Jasanoff, The Fifth Branch: Science Advisors as Policymakers (Cambridge, MA: Harvard University Press, 1990), pp. 62-63.
21. Jasanoff, The Fifth Branch, pp. 62-64.
22. Deborah Johnson provides a succinct, accessible introduction to these philosophical perspectives in her Computer Ethics (Englewood Cliffs, NJ: Prentice-Hall, 1985), pp. 6-21.
23. Werner Ulrich, "Systems Thinking, Systems Practice, and Practical Philosophy: A Program of Research," Systems Practice 1, 2 (1988): 137-163.
24. By utilitarian, I mean a philosophical doctrine that values usefulness above all.
25. Max Weber, The Protestant Ethic and the Spirit of Capitalism (London: Allen & Unwin, 1930, initially published in German in 1905).
26. M. Horkheimer, Zur Kritik der instrumentellen Vernunft (Frankfurt: Fischer, 1967). Referenced in Ulrich, "Systems Thinking," p. 142.
27. R. W. Pike, Optimization for Engineering Systems (New York: Van Nostrand Reinhold, 1986), p. 1.
28. Kant "set forth a categorical imperative always to have a 'good will,' to treat persons as ends in themselves, never merely as means to arbitrary ends," according to David Miller, ed., The Blackwell Encyclopaedia of Political Thought (Oxford: Basil Blackwell, 1991). More intuitively, imagine a kindergarten teacher telling a student "how would you like it if Bobby did that to you?"
29. George C. Hemmens and Bruce Stiftel, "Sources for the Renewal of Planning Theory," APA Journal 46, 3 (July 1980): 341-345. Referenced in that valuable review were two works by Habermas: Jürgen Habermas, trans. Jeremy J. Shapiro, Knowledge and Human Interests (Boston: Beacon Press, 1971). Also Jürgen Habermas, trans. John Viertel, Theory and Practice (Boston: Beacon Press, 1973). See also John Forester, "Critical Theory and Planning Practice," APA Journal 46, 3 (July 1980): 275-286; John Forester, Critical Theory and Public Life (Cambridge, MA: MIT Press, 1985); and Thomas McCarthy, The Critical Theory of Jürgen Habermas (Cambridge, MA: MIT Press, 1978).
30. Jürgen Habermas, trans. Thomas McCarthy, Legitimation Crisis (Boston: Beacon Press, 1975). Referenced in Hemmens and Stiftel, "Sources for the Renewal," pp. 341-345.
31. Jürgen Habermas, trans. Thomas McCarthy, Communication and the Evolution of Society (Boston: Beacon Press, 1979). Referenced in Hemmens and Stiftel, "Sources for the Renewal," pp. 341-345.
32. Hemmens and Stiftel, "Sources for the Renewal," pp. 341-345.
33. A good sampling of critical comments is found in a symposium on the limits to communicative planning theory published in the Journal of Planning Education and Research 19, 4 (Summer 2000): 331-378.
34. Paul Slovic, "Perceived Risk, Trust and Democracy," Risk Analysis 13, 6 (1993): 675-682.
35. Ulrich, "Systems Thinking," pp. 137-163.
36. Erich Jantsch, Design for Evolution (New York: Braziller, 1975), p. 209 ff. Referenced in Ulrich, "Systems Thinking," p. 147.
37. Jürgen Habermas, Theorie des kommunikativen Handelns (Frankfurt: Suhrkamp, 1981), p. 384. Referenced in Ulrich, "Systems Thinking," pp. 146-147.
38. John Forester, "Critical Theory and Planning Practice," APA Journal 46, 3 (July 1980): 275-286.
39. Hemmens and Stiftel, "Sources for the Renewal," pp. 341-345.
40. John Forester, "Planning in the Face of Power," APA Journal 48, 1 (Winter 1982): 67-80.
41. The prisoners' dilemma is a staple of game theory, which originated in John von Neumann's and Oskar Morgenstern's Theory of Games and Economic Behavior (Princeton, NJ: Princeton University Press, 1944). A prisoners' dilemma is a mixed-motive game in which two players have incentives both to compete and collude, and the payoff is higher if they succeed in colluding.
Part II Serving Multiple Decision Makers
4 Assessing Technology for the U.S. Congress

Partisans don't mind spilling blood, as the 104th Congress demonstrated in 1995. Republicans led by Newt Gingrich axed the U.S. Congressional Office of Technology Assessment (OTA) and offered a stinging slap in the face to experts nationwide. In 1996, there was a conference to capture lessons about how OTA performed policy analysis before memories of the agency faded. Several generations of analysts attended, and they mourned the agency's loss, recalled key events, and pondered lessons for the future. Email and letters came from many others—Nobelists, Congressmen, successful academics, minor bureaucrats, secretaries—who knew the agency and had something to say about it. This case study is based in part on that conference and its aftermath.1
The OTA case spectacularly illustrates the challenges of serving multiple decision makers and of synthesizing insights from multiple disciplines. The case shows that one solution to the analyst's technical problem (appropriately modeling reality) is to invite participants in the debate to help scope the analytical work. It demonstrates that the analyst's resulting communicative problems are underappreciated and indeed quite difficult. It shows the fragility of analytical "truth"-speaking activities in a context of partisan power struggles. The story of OTA—a policy analysis institution that nominally had six hundred bosses—reveals much about the potentials and limits of joint fact-finding.
This institution had a good run, from a hesitant start in 1972 to its spectacular death at the end of 1995. OTA produced almost 750 detailed assessments, at a rate of a few dozen per year. With 140 analysts and a $20 million annual budget, it was not a large enterprise by Washington's standards. It typically sought to adopt a neutral stance while contributing substantive information on highly politicized issues.
OTA leased a small suite of offices on Capitol Hill that looked more like a set of graduate student carrels than the headquarters of a government agency. Here
one found lively, intelligent, casually dressed people typing eagerly on computer keyboards, sifting purposefully through mounds of paper on the floor near their desks, and radiating a sense of belief in their work. These passionate intellectuals looked like they belonged in a vibrant academic center instead of an anteroom off the corridors of power. Many of them went back to academia after a belligerent Congress shuttered the agency.

OTA'S ORIGINS

A first wave of technology assessment enthusiasm crested in the late 1960s and early 1970s as an adjunct to the critical science movement. Primack and von Hippel describe how these critical voices within science began calling for preassessment before committing society to innovations such as supersonic transport and nuclear weapons.2 Activists such as Rachel Carson, Ralph Nader, and Barry Commoner gave policy momentum to the observation that the fruits of technological progress were not all good. In addition, research budgets were rapidly rising, and science- and technology-related legislative initiatives were on the increase. Von Hippel and others of his generation suggest that the conduct of the Viet Nam war led to Congressional distrust of Executive branch expertise. U.S. policy entrepreneurs, led by Connecticut Democrat Emilio Daddario, therefore conceived of the OTA, and created it in 1972. During its first few years the agency was perceived to be highly politicized, and this nearly allowed skeptics to destroy it. Later the agency adopted clearer trappings of a balanced, bipartisan mode of operation that allowed it to flourish.3
The craft of policy analysis matured during this same period into a teachable set of skills rooted in microeconomics, organizational analysis, and politics. Policy analysis, a New Deal innovation, was a frankly modern enterprise, an attempt to apply scientific methods to problems in public affairs. The technology assessment movement also applied the methods of science, but did so in an unmistakably postmodern (meaning critical and polyphonous) style. In pursuit of mainstream credibility, OTA strove to define its brand of technology assessment as a niche specialty within the broader field of policy analysis.5 The agency resolved the resulting modern-postmodern tension by marrying methods of rational policy analysis to an inclusive, integrative assessment process.

The OTA Assessment Process

By the late 1980s, OTA had developed a distinctive approach to technology assessment, which included the following steps:6 (1) prerequest conversations with Congressional committee members and staff; (2) formal requests for studies from Congressional committee chairs; (3) a proposal by OTA staff framing the study; (4) approval by OTA's Technology Assessment Board (TAB), a bipartisan governing body made up of six Senators and six Representatives, equally divided by party; (5) selection of an advisory panel; (6) a project plan, data collection
and analysis involving the advisory panel, workshops, contractor reports, staff research, and interim briefings for Congressional clients; (7) a draft report; (8) broad review, first in-house and then external, followed by revision and approval by the TAB; (9) release of the full report, a summary, solicitation of press coverage, and policy actions such as Congressional hearings and briefings.

AN ILLUSTRATIVE STUDY

A look at the workings of one OTA study will let us see how the agency's study process unfolded and will reveal its strengths and weaknesses. This section examines the inputs, procedures, and outputs of Preparing for an Uncertain Climate, an influential study published in 1993. It won praise from experts and garnered a Notable Government Document award from the American Library Association.

The Climate Study

A prominent atmospheric scientist often dwells in conversation on the theme of "what's wrong with those economists?" He enjoys talking to them about the policy implications of what he studies, but he puzzles about the disconnect that allows so many economists not to worry as much as he does about global climate change. Although the scientist felt this disconnect during his Advisory Panel service for OTA's Preparing for an Uncertain Climate report, the report itself actually did a good job of bridging the disciplinary divide. How did the study process work, and why was it successful?
Preparing for an Uncertain Climate was a large report, covering two volumes, published in October 1993. It was the second OTA report on the issue of climate change. The first, entitled Changing by Degrees: Steps to Reduce Greenhouse Gases, was published in 1991. Both OTA efforts clearly lagged behind the cutting edge of climate change science by several years. The energy policy community had been debating the merits of climate policy alternatives since the late 1970s. OTA's first effort (Changing by Degrees, 1991) was if anything a response to the policy-oriented multiyear scientific consensus-building exercise of the Intergovernmental Panel on Climate Change (IPCC), which released its findings in 1990. OTA's second effort (Preparing for an Uncertain Climate, 1993) was a more deliberative follow-up. Interest in the topic continues to rise as subsequent IPCC assessments and formal treaty negotiations occur. This chronology highlights the OTA's role as a careful, authoritative packager of extant scientific information (rather than a purveyor of cutting-edge research). The time elapsed between the request for an OTA report and its final delivery was typically eighteen to twenty-four months.
While the 1991 report focused on steps to reduce the buildup of greenhouse gases in the atmosphere, the 1993 effort evaluated the problem of making such decisions under conditions of uncertainty. This reframing of the climate change debate represented (1) an acceptance of political reality, in the sense that climate change skeptics were dominant in Congress; and (2) a broadening of the
analytical effort to include additional disciplinary perspectives. The study sought specifically to answer three key questions: What is at risk over what time frames? How can we best plan for an uncertain climate? Will we have answers when we need them?
To answer these questions properly, the analysts needed to look in detail at different natural and human systems: coastal areas, water resources, agriculture, wetlands, preserved lands, and forests. The study also needed to examine available sources of information and the legal context. One result was that the seven-member in-house project staff was supplemented with a large number of consultants, some twenty-five in all. The outside contractors contributed a great deal of technical information to the study, whereas the in-house staff synthesized this work into a coherent, comprehensible narrative. A twenty-one-person advisory panel oversaw the study. The staff organized eight workshops involving an additional 154 people on the specialized topics listed at the beginning of this paragraph, with each workshop having between fourteen and thirty-one participants (the median was twenty-two).

Advisory Panel for the Climate Study

Overseeing the study was a distinguished advisory panel headed by a university-based policy analyst. Eleven of the twenty-one panel members were from academia, five were from nonuniversity research institutions, two worked in state government, two were from the business world, and one was from an environmental advocacy group. The breakdown by discipline included six natural scientists, five economists, four political scientists/public policy analysts, two lawyers, two engineers, one psychologist, and one philosopher.
A dozen members of the advisory panel and eighty-three other participants in that study responded to a survey about their experience as study participants.7 When asked how adequate the range of stakeholder perspectives was on this OTA study, most respondents thought that it was better than adequate. The adequacy of the range of disciplinary perspectives was also rated better than adequate. Consistency checks asking "who was there?" and "who was inappropriately missing?" produced similarly positive results. This fits OTA's reputation for obtaining balanced advice.
When asked how well the respondent's own concerns were incorporated into the study, answers were positive for stakeholder concerns, and were even more positive for disciplinary insights. The only exceptions were economists, who mostly felt that the report underrepresented them along both dimensions. This is likely to reflect their late arrival on the climate change scene, which continues to be dominated by natural scientists.
When asked with whom they interacted most closely during the assessment process, responding advisory group members had an interesting bias. The environmentalist interacted with other group members more than was expected, based on relative numbers, and the business people interacted less than was expected. Links were especially tight between academics, nonuniversity
researchers, and the environmentalist. Academics interacted more among themselves than they did with any other class of stakeholder. Interactions sorted by discipline suggested that those most prone to interact with others from their same discipline were the economists.
Thus, presence on the advisory panel did not automatically translate into representation, and interactions between experts with different perspectives were not uniform. In other words, ensuring diversity was not enough; process managers needed to encourage actual interactions. The process as a whole ensured that they did so by providing interested parties with multiple opportunities to comment, first when scoping the study, then when reviewing the draft product, and finally in public once the study was published.

Staff on the Climate Study

The professional staff on the Climate study included the project director, two senior analysts, two analysts, and two junior analysts. Three other OTA analysts made minor contributions. An in-house secretary and a professional editor devoted much time to the project. Twenty-five consultants wrote draft chapters and prepared background analysis: thirteen were academics, five were independent consultants, two worked at national laboratories, two worked for think tanks, two worked for large consulting firms, and one worked for state government.
The project director and her superiors were practically the only staff members involved in the early stages of the project. This included prerequest conversations with Congressional committee members and staff, eliciting the formal request for a study from several Congressional committee chairs, developing an OTA staff proposal framing the study, and acquiring approval from the bipartisan governing body (TAB). Once the proposal was approved, other staff members joined the project. Their first task was to assemble the advisory panel mentioned earlier. This was a time-consuming process, because the team, mainly the project director, had to identify the desired mix of disciplinary and stakeholder perspectives, identify potential candidates for each slot, and then feel out each candidate's suitability and availability. Project directors were taught to avoid demagogues and "unreasonable" candidates who would not work well in a diverse group.
The professional staff prepared draft materials for the advisory panel to review, and the project director provided the first of a series of interim briefings for the project's Congressional clients. The advisory panel played an active role in helping to scope the study, by commenting on the draft project plan, recommending data collection and analysis efforts, suggesting specific research tasks for staff analysts and contractors, and identifying potential workshop participants and contractors. Subsequently, the staff and consultants conducted the workshops, performed the research, and wrote a draft report. The staff performed or supervised all of the writing, which spanned a period of many months.
The advisory panel met for a brief mid-project update from the staff. Minor suggestions were offered at this point. When the draft report was complete, the advisory panel met one last time to comment on the report's adequacy and to ensure that the external review process had been wide-ranging and thorough. Finally, the TAB reviewed and approved the report, paving the way for public release of the full report, a summary, a press briefing, and Congressional briefings, about two years after the initial request for the study had been issued.

EVALUATION OF ADEQUACY, VALUE, EFFECTIVENESS, AND LEGITIMACY

Evaluating the Climate study in terms of chapter 1's criteria for good science policy analysis proved revealing. The results reported here were broadly corroborated across six OTA studies on topics ranging from terrorism to teaching. A survey measured technical success of a study along the following dimensions:
• Comprehensiveness (addresses all key issues, acknowledges diverse perspectives)
• Credibility (is believable and authoritative)
• Objectiveness (is balanced and nonpartisan)
• Readability (has good writing and editing, no jargon)
• Timeliness (published in time to be useful to legislators or policy actors)
• Up-to-date technical content (captures current technical consensus on issue)
Study participants believed that stakeholder and cross-disciplinary review were important determinants of the credibility of the OTA studies, but that other factors, notably the staff and consultants performing the study, had the strongest influence on readability, technical content, comprehensiveness, and other ingredients of success.

Adequacy

Former employee Christopher Hill observes that "the need for a process and staff to sift and winnow technical information and judgement for Congress was recognized more than three decades ago. . . . [OTA] did so with distinction."8 Respondents confirm this; they were uniformly enthusiastic about the readability, comprehensiveness, and up-to-date technical content of OTA studies. Perceptions of the timeliness of reports varied somewhat, but many respondents felt that this dimension was difficult for them to evaluate. Objectiveness and credibility ratings also were high.
According to survey respondents, the key determinants of up-to-date technical content in the Climate study were the membership of the advisory panel, particular analysts involved, and disciplinary composition of the analytical team. Contractors from outside government played an important role in improving technical adequacy.
The key determinants of the Climate study's comprehensiveness were the membership of the advisory panel, particular analysts involved, and the OTA project director. Other factors frequently mentioned included the disciplinary composition and diversity of the study team and their analytical framework.
OTA reports typically enjoyed good scientific adequacy, and the Climate study was no exception. Adequate technical content seems tied to the individuals involved as project directors, analysts, contractors, or advisors, with the analytical framework and the disciplinary composition of the team also playing roles. Adequate comprehensiveness seems tied to these same factors, plus disciplinary diversity. The advisory panel, which performs both scoping and review, and represents both stakeholder and disciplinary perspectives, seems to be an important determinant of adequacy. However, the review function apparently must operate both within and across disciplinary boundaries.

Value

Former House Science Committee chairman Walker is blunt: "As a consumer of OTA's products for many years, I can tell you that it definitely was not my primary source of information on the issues at hand."9 Note that during most of Walker's tenure in Congress he was in the minority party and not a member of the leadership.
OTA advisory panel members on the Climate study perceived the most important audience for the reports to be the chairs, members, and staff of the Congressional committees requesting the work. Least important were the general public and Congress's rank-and-file membership. This matches Whiteman's observation that information transfers in Congress take place unevenly, within small networks of interested parties clustered around specific issues.10 The low priority accorded the general public conflicts with our ex post understanding of the agency's great social value in providing vetted technical information as a public good.11
Most respondents apparently believed that the Climate study had successfully reached experts in the field and, to a lesser extent, political decision makers. Respondents also consistently believed that they had failed to communicate their findings to the general public. Personal considerations of value seemed only modest. Few respondents felt that they had learned much of significance about the topic, the agency, or the players in the policy debate, although this varied across studies.
Much of the value of OTA's reports seems misplaced. Committees received valuable products as intended, but the Congressional rank-and-file did not. Expert communities and the general public unintentionally received value. Study participants experienced moderate personal benefits.

Effectiveness

"Scientists and engineers typically eschew direct involvement in politics—many find the seemingly irrational and personality-driven political process to be distasteful and out of keeping with their own professional norms," says Hill.12 Is
marginalization the inevitable result? Indeed, advisory panel members showed a surprising ignorance regarding the impacts of their reports. Most respondents had not tracked or participated in the communication of the results to policy makers and the public. In fact, one advisory panel member said that "our role was to get the input facts straight" and nothing more. This was clearly a wasted opportunity, given that panel members had authority because they were technically knowledgeable, as well as civil legitimacy because some of them were stakeholders. The professional stakeholders often had personal contacts among the "six hundred bosses" that could have been exploited to improve the success of communications efforts.
By contrast, contacts with OTA project directors showed that they had a much clearer understanding of their reports' effects. Typical effects included a Congressional hearing to release a report, press coverage of the release, and testimony by OTA staffers before Congress. On rare occasions a report would provoke more concrete political action.
Advisory panel members in the Climate study believed that overall success was a function of the OTA project director involved, the report's topic, and the membership of the advisory group. Secondary but still important were the analytical framework employed, the stakeholder and disciplinary diversity of the study participants, and the particular analysts involved.
Respondents looked beyond the Climate report and its sisters to consider the effectiveness of OTA as an institution and the plausibility of various explanations for the OTA's demise. Explanations they considered to be most plausible centered on the political context of the 104th Congress (typical images they invoked were "witch hunt" and "sacrificial lamb") rather than any lack of effectiveness at the agency. However, the downside of a strategy of serving primarily the committee chairs also appears—OTA was nearly invisible to the rank-and-file, and committee chairs changed when the political party in control changed.
Measuring effectiveness was problematic, but several signals suggest a failure to connect with a key portion of OTA's constituency. A confidential e-mail message from a distinguished scientist (Nobelist, former university president, OTA Advisory Council member) provides confirmation:

I concur with most of your assessment, including the quizzical response we all have to how "effective" OTA was. Most commonly expressed concern: a thorough and balanced study took much longer than the legislative attention span. And how can you get much public attention without the energy of intense advocacy?

Representative Walker seems to agree: "OTA's main drawback was that it did not deal with the issues on a legislative rhythm."13 But his real emphasis is on the pluralistic rather than scientific worldview prevalent in Congress. Fully 224 members of the 104th Congress had law degrees, whereas only seventeen had science degrees (chiefly in engineering and medicine), making interests and partisanship more central to the legislative culture:
While it is always beneficial to have a neutral source . . . able to distill information concisely and accurately, it is often the parties who have the most interest in a particular issue, either for or against, who are best able to explain their respective positions.14

Facts are not insignificant, but they are viewed strategically: "the most effective communicators are those who are able to back up their positions with solid facts."15 Hill, the technical analyst, counters by reiterating the weaknesses of advocacy science:

Representative Walker is correct when he says that those who have an interest in an issue have a strong motivation to provide Congress with information in support of their positions. What he leaves out is that more than information is involved. Special interests on all sides have an additional incentive—to frame their interpretations of the implications of their positions for the future in a light that is most favorable to their own interests, not the public's interests. Since the future is always uncertain (even the judgments of technical experts about the future are uncertain!), special interests can exploit the uncertainties in the scientific and technical understandings that are essential to a good decision.16

Effectiveness is the quintessential utilitarian measure of success and, as such, those from the two ends of Price's truth-power spectrum interpreted OTA's demise differently. Hill, defender of truth, warned that barbarian horsemen had breached the gates, while Walker, victorious partisan, argued that Congress had put a slow-moving nag out to pasture.

Legitimacy

Hill claims that "OTA enjoyed relatively widespread acceptance and acclaim from both sides of the aisle in Congress during most of its life"17 and study participants confirm this view. OTA nemesis Walker also agrees that the reports were "comprehensive and sound."18
Objectiveness and credibility are important dimensions of legitimacy that could as well be tied to authoritative as to civil sources. In prominent efforts such as the Climate study, participants attributed credit for objectiveness primarily to the OTA project director and the advisory panel. Also important were other authority-based sources of objectiveness: analytical framework and particular analysts involved. Civil sources of objectiveness such as stakeholder diversity were less influential.
The story for credibility was almost identical. The most important determinants of credibility were the advisory panel and the project director. Also important were the particular analysts involved and their disciplinary diversity. Again, authority-based sources of credibility outweighed civil sources. Yet former OTA staffers noted that in run-of-the-mill studies, the advisory panel members were less stellar and it was the stakeholder representation that gave such studies their legitimacy. The panel structure that ensured stakeholder representation and civil legitimacy was taken for granted in the prominent
studies. To examine this claim, I briefly consider a run-of-the-mill study entitled U.S. Oil Import Vulnerability: The Technical Replacement Capacity, published in late 1991.
The Vulnerability study was commissioned in anticipation of the Gulf War with Iraq, and its purpose was to assess U.S. vulnerability to long-term oil supply disruptions. Unlike the Climate study, this one won no awards. It found its way onto half as many library shelves as the Climate study, earned a fraction of the press coverage, and went largely unnoticed by experts in the field. Vulnerability was technically adequate because it borrowed heavily from existing studies performed by the U.S. Department of Energy. It had value only to its Congressional clients, but it demonstrated effectiveness by influencing the Energy Policy Act of 1992 (P.L. 102-486) and several predecessor bills.19 With the exception of a pair of Stanford professors who served as consultants, the participants in this study were not highly credentialed and their names offered little authoritativeness to the study. The study's legitimacy therefore depended on its balance of perspectives. Participants in the scoping workshop included one representative each from the oil, gas, and electricity industries, one advocate each for solar energy and energy efficiency, two industrial energy consumers, two energy sector forecasters, two federal government representatives, and one state government representative. Reviewers of the study were similarly diverse. The resulting study was ultimately useful, but it offered little new information to experts in the field. This illustrative outcome was typical of OTA's approach, which ensured that studies met relatively high standards for competency and balance before being released.
Respondents awarded OTA reports much legitimacy. In terms of inputs to the study process, the balance of stakeholder perspectives established civil legitimacy, but important dimensions of legitimacy seemed to have status-based sources. In sum, typical OTA products satisfied the criteria of adequacy, value, and legitimacy, but evidence of effectiveness was less convincing.

OTA IN ITS PRIME

Many current participants in the technology assessment field began their professional lives with OTA already in existence. In the late 1980s, this cohort produced a second wave of literature on technology assessment that studied actual practices of OTA and its younger counterparts outside Washington. The primary empirical lesson of the mature OTA experience was that a carefully honed strategy of neutrality was necessary at this Congressional support agency: its key elements included a bipartisan governing board, and presentation of options rather than policy recommendations.20
A series of reports by the Carnegie Commission on Science, Technology, and Government examined the vital signs of the Congressional support agencies as of 1990. The prognosis for OTA in terms of product quality and credibility was excellent, but problems were identified. A key dilemma was that the Congress
had a strong oral rather than written tradition, with the corollary that personal contacts were more important than written reports as sources of information.21 For OTA, whose primary products were large written reports, this meant "excellent analyses by OTA are not used as fully as would be desirable."22 In other words, OTA personnel were not fully successful as communicative analysts. Analyst-to-analyst communication was successful, although limited across disciplines and stakeholder groups, but analyst-to-decision-maker communication left much to be desired. OTA should have changed its reporting format (in fact, the agency began to develop shorter reports, interim briefings, and other communications innovations following the Carnegie Commission's findings, but never finished this transformation due to its sudden death in 1995).
In confirmation of Whiteman's finding that technical information flows in small networks within Congress,23 the present study also indicates that information flows were uneven, so that legislators who were merely attentive to an issue were much less likely to have seen a relevant OTA report than those who were active on that issue.
Attempts to measure the impacts of OTA's work have found that much of its effect has been indirect. OTA narrowed the range of acceptable definitions and structured the terms of debate in areas of controversy. Once OTA had "worked over" a subject, it was harder for special interests to advance nonsensical arguments. This particular contribution of OTA is now missing from U.S. Congressional debates. OTA reports were rarely cited directly in legislation, but often influenced the legislative agenda and the crafting of particular bills.24 This is a typical path of influence for research and analysis, which is said to enter and change the terms of the policy debate rather than directly influencing decisions.25 Yet work whose influence is hard to measure is also hard to defend when the climate for analysis changes. A former Carnegie Commission staffer penned this ominous warning in 1993:

The American polity, as it pertains to science and technology, is bound to the usual and full dynamic behavior of ecosystems. In natural systems the pattern is one of slowly increasing organization or connectedness accompanied by a gradual accumulation of capital. Stability initially increases, but the system becomes so over-connected that rapid change is triggered. Fires, storms, pests, and senescence release stored capital and a new cycle begins, where the pioneers have many opportunities. . . . The particularly rapid growth of non-governmental organizations during the last decade may locate America's place in the current cycle. The mature ecology, the climax forest, so to speak, should beware. The public and political questioning of science, its integrity and institutions, should not be taken lightly. The landscape will change.26
WHY ME? WHY NOW?

Three years later, in 1996, the landscape changed and OTA disappeared, prompting a third wave of literature on the agency. It initially centered on questions similar to those asked by all victims of violence: "Why me? Why
now?" Defenders offered a range of plausible explanations of the agency's demise.27 It is ironic that one effect of OTA's disappearance was a resurgence of scholarly interest in technology assessment. Other recent writing has instead delivered prescriptive lessons for the future practice of technology assessment.28 These valuable retrospective studies of OTA have suggested the following. Most prominently, OTA's institutional context was an important determinant of its analytical style and access to talent. The diversity and decentralization of the U.S. Congress made balance and neutrality the basic survival strategies at the OTA.29 In addition, OTA's association with this uniquely demanding Congressional client enabled it to attract America's best and brightest as participants in its studies, ensuring that its reports were broadly read.30 Three categories of technology assessment were practiced at OTA: "first, a high-level process of problem-framing and stakeholder participation at which OTA excelled; second, a tactical or ground-level collection of research methods deployed in a (mostly) appropriate and very pragmatic fashion; and third, a midlevel or strategic assessment methodology that OTA sought on many occasions but ultimately failed to grasp."31 It is the first category of practice, involving stakeholder and expert participation, on which this chapter has focused. A former staffer32 has specified which aspects of the OTA approach he believed were essential to successfully practice technology assessment, and I concur: 1. 2. 3. 4. 5. 6.
the improved access to technical talent resulting from having Congress as the client; the multidisciplinary teams to ensure adequate breadth; the small size of the analytical teams which minimized the number of interfaces across which to lose information; the advisory panels for review which enforced intellectual honesty by ensuring exposure to varied, representative perspectives on an issue; the need to go out and talk to people; and the practice of offering only policy options, not recommendations, to prevent the agency from becoming ideologically pigeonholed. Nonessential aspects of the OTA approach included:
1. 2. 3.
having a broad range of programs, given that demand for studies greatly exceeded their ability to produce them; having long, slow reports comprehensively covering interests, facts, and perspectives, given that the political process only noticed interests; and having more than two advisory panel meetings, one for scoping and one for reviewing, given that intermediate meetings were almost always a waste.
CONCLUSIONS REGARDING REVIEW MECHANISMS

Policy analysts hope that their craft has some scientific aspects. Economics, political science, and organizational analysis claim to be social sciences, a
contested and self-conscious label that at least admits of scientific aspirations. Public policy schools, journals, and professional societies seek to enforce Merton's scientific norms, including communal sharing of results, universal evaluation criteria, no personal stake in findings, and organized skepticism. Practitioners find these norms diluted on the job. Still, the scientific ideal remains. Political decision makers, being above all else partisans, may still welcome policy analysis with scientific validity and reliability because it provides good intelligence. Political decision makers inevitably view policy analysis strategically, but they prefer it to have high rather than low quality. On occasion, political decision makers have supported increased professionalization of policy analysts in the hope of improving the quality of the product. In this pursuit of scientific ideals, policy analysts and their clients have relied surprisingly rarely on external review processes, in spite of frequent calls for peer review of policy analysis.33 Review is the fundamental self-policing mechanism of science, valued because it ensures the reliability of knowledge and secures the autonomy of science. Decision makers have been mostly unwilling to trade autonomy for reliability in policy analysis. Thus, externally reviewed policy analysis has developed in only a few niches to date. One such niche has been in science and technology policy, where specialized knowledge and complex phenomena make reliable, valid policy analysis difficult, and where many analysts have already been socialized in the review-intensive culture of science. Interest in review processes now seems to be increasing in other policy domains. For example, the U.S. Environmental Protection Agency (USEPA) is implementing wider review processes for its regulatory science activities, and the National Research Council (NRC), which has traditionally employed external peer review in preparing its studies, is now struggling with the question of how to integrate stakeholder review as well. Peer review occupies a central position in the social structure of science. It is a decentralized quality control mechanism that makes the "marketplace for ideas" work successfully, because it separates accepted knowledge from speculation and talented scientists from incompetents. The effectiveness of peer review depends in part on shared scientific norms such as those listed by Merton and criticized by Jasanoff.34 Just as real markets have imperfections, so does real peer review. While review clearly brings overall benefits, policy analysts need to apply it appropriately in their domain. In particular, scientific experience offers no guidance on the role of stakeholder (as opposed to peer) review. The distinction is not the level of relevant technical knowledge: professional stakeholders typically know a great deal about their fields. The important distinction is that professional stakeholders are interested parties who often have (or desire to have) influential links to the decision makers for whom the analysis is being prepared. Should stakeholders still participate in review processes? Merton's ideal scientists would say "no" (stakeholders would contaminate the review process) while Jasanoff s skeptical observers of science would say "yes" (peer
review itself is contaminated, so why not?). The way forward depends on the distinction between authoritative and civil legitimacy. If these two sources of legitimacy compete with one another, then Merton's ideal scientists are right. If authoritative and civil legitimacy are instead complementary, then Jasanoff's skeptics win the day. Only empirical evidence can clarify the matter. OTA is one of the few policy analysis institutions that built its reputation in part upon extensive review processes, so I now return to the case study. As analysts at other agencies increase their use of external review processes for policy analysis, they can benefit from the lessons OTA's legacy offers for innovators:

• The external review process employed by OTA is a key contributor to its record of successful studies. However, a good procedural framework alone is not enough, because success still depends on talented study participants, analysts, and managers. Having a prestigious client, such as the U.S. Congress in OTA's case, ensures access to superior resources and talent.
• The special contributions of the external review process are to bolster the study's adequacy and legitimacy, with stakeholder review contributing to adequacy and civil legitimacy, and expert review contributing to adequacy and authority-based legitimacy.
• Effective external review starts at the scoping stage of the analysis; it does not merely critique work already produced.
• Putting specialized experts in the same room together is no guarantee that cross-specialty communications will actually occur, because of uneven interpersonal affinities. For successful externally reviewed policy analysis, participants therefore need both technical expertise and interpersonal skills.
• To increase effectiveness, agencies ought to find ways to involve external reviewers in selling the product, since otherwise tradeoffs between effectiveness and legitimacy seem likely, as happened at OTA.
• An important OTA contribution was to provide Congress with a way to tap private sector expertise through outside contractors. Congressional agencies such as the Congressional Research Service, General Accounting Office, and Congressional Budget Office rarely use contractors despite the fact that on any given subject the vast majority of knowledge resides outside government.35
More broadly, it seems that the two sources of legitimacy, civil and status based, may be complements rather than substitutes. However, the clear tradeoff between political autonomy and scientific validity will force an enterprise to choose its legitimation strategy. An NRC study, for example, will logically pursue autonomy and authoritative legitimacy while sacrificing civil legitimacy, knowing that its analysis is several steps removed from actual policy-making. USEPA's nonautonomous rulemakers, on the other hand, must seek both kinds of legitimacy because they are making policy. A survey respondent comments, "that's exactly why the EPA has a problem. They tend to get into the trap of liking the 'science' that supports their previous policy decisions." This respondent feels that the art of making civil and authoritative legitimacy
complementary has eluded USEPA, although the next case studies suggest that the agency is working on the problem.
NOTES 1. My research strategy included reviewing the published study products, developing independent measures of the study's influence, convening a meeting of experts in the field of science and technology policy to discuss the agency and its products, interviewing former OTA staff members, and surveying participants in the study process. The work was funded by internal grants from various comers of Princeton University. The meeting is documented in Clinton J. Andrews, ed., Technical Expertise and Public Decisions: Proceedings of the 1996 International Symposium on Technology and Society (Piscataway, NJ: Institute of Electrical and Electronics Engineers, June 1996). 2. Joel Primack and Frank von Hippel, Advice and Dissent: Scientists in the Political Arena (New York: Basic Books, 1974). 3. Gregory C. Kunkle, "New Challenge or the Past Revisited? The Office of Technology Assessment in Historical Context," Technology in Society 17, 2 (1995): 175— 196. 4. Critics of modem policy analysis and technology assessment pointed out the limits to instrumental rationality, the biases resulting from interlinked knowledge and power, the importance of an open analysis process, and the need for experts to respect and communicate with the public. In recent years they have also highlighted the important effects that institutional arrangements have on outcomes, the dangers of disciplinary over-claims, the value of alternative analysis methods, and the broad range of alternative processes for developing and legitimating policy analysis. See Laurence H. Tribe, "Technology Assessment and the Fourth Discontinuity: The Limits of Instrumental Rationality," Southern California Law Review 46 (1973): 617-660; Jurgen Habermas, trans. Jeremy J. Shapiro, Knowledge and Human Interests (Boston: Beacon Press, 1971); Harold P. Green, "The Limitations of Technology Assessment," American Bar Association Journal 69 (1983); Harry Otway, "Experts, Risk Communication, and Democracy," Risk Analysis 1, 2 (1987): 125-129; Sheila Jasanoff, The Fifth Branch: Science Advisors as Policymakers (Cambridge, MA: Harvard University Press, 1990), pp. 231-232; Nathan Keyfitz, "Contradictions between Disciplines and their Influence on Public Policy," Federation of American Scientists Public Interest Report 48, 3 (May/June 1995): 1-10; Emery Roe, Narrative Policy Analysis: Theory and Practice (Durham, NC: Duke University Press, 1994), pp. 2-4; and Frank Fischer, Evaluating Public Policy (Chicago: Nelson-Hall Publishers, 1995), pp. 17-24. 5. John H. Gibbons and Holly L. Gwin, "Technology and Governance: The Development of the Office of Technology Assessment," in Michael E. Kraft and Norman J. Vig, eds., Technology and Politics (Durham, NC: Duke University Press, 1988), pp. 98-122. 6. U.S. Congress, Office of Technology Assessment (OTA), OTA Role & Function, pamphlet (Washington, DC: OTA, 1995). Now available at the OTA archive web site: . 7. The survey is documented in Clinton J. Andrews, "Roles of Stakeholder and Peer Review at the OTA," Technology & Society at a Time of Sweeping Change: Proceedings of the 1997 International Symposium on Technology and Society (Piscataway, NJ: Institute of Electrical and Electronics Engineers, June 1997). Further details are available from the author.
8. Christopher T. Hill, "Science, Technology, and the U.S. Congress: What Should Be Their Relationship?" IEEE Technology & Society Magazine 16, 1 (Spring 1997): 9. 9. Robert S. Walker, "The Quest for Knowledge versus the Quest for Votes," IEEE Technology & Society Magazine 16, 1 (Spring 1997): 4, 6-7. 10. David Whiteman, Communication in Congress: Members, Staff, and the Search for Information (Lawrence: University Press of Kansas, 1995). 11. Christopher T. Hill, "The Congressional Office of Technology Assessment: A Retrospective and Prospects for the Post-OTA World," in Clinton J. Andrews, ed., Technical Expertise and Public Decisions: Proceedings of the 1996 International Symposium on Technology and Society (Piscataway, NJ: Institute of Electrical and Electronics Engineers, June 1996), pp. 4-12. 12. Hill, "Science, Technology, and the U.S. Congress," p. 8. 13. Walker, "The Quest for Knowledge," p. 7. 14. Ibid., p. 7. 15. Ibid. 16. Hill, "Science, Technology, and the U.S. Congress," p. 9. 17. Ibid. 18. Walker, "The Quest for Knowledge," p. 7. 19. Roger Herdman (OTA Director), testimony before the U.S. Congress Senate Appropriations Committee, May 26, 1995. 20. Technology Review, "How John Gibbons Runs through Political Minefields: Life at the OTA," (October 1988): 47-51. 21. Bruce Bimber, "Congressional Support Agency Products and Services for Science and Technology Issues: A Survey of Congressional Staff Attitudes about the Work of CBO, CRS, GAO and OTA," report prepared for the Carnegie Commission on Science, Technology, and Government (New York: Carnegie Corporation of New York, 1990). 22. Rodney W. Nichols, "Vital Signs OK: On the Future Directions of the Office of Technology Assessment, U.S. Congress," report prepared for the Carnegie Commission on Science, Technology, and Government (New York: Carnegie Corporation of New York, 1990). 23. Whiteman, Communication in Congress. 24. Herdman, testimony before the U.S. Congress. See also Robert M. Margolis, "Losing Ground: The Demise of the Office of Technology Assessment and the Role of Experts in Congressional Decision-making," in Clinton J. Andrews, ed., Technical Expertise and Public Decisions: Proceedings of the 1996 International Symposium on Technology and Society (Piscataway, NJ: Institute of Electrical and Electronics Engineers, June 1996), pp. 36-44. 25. Judith E. Innes, Knowledge and Public Policy: The Search for Meaningful Indicators, 2nd exp. ed. (New Brunswick, NJ: Transaction Publishers, 1990). 26. Jesse H. Ausubel, "The Organizational Ecology of Science Advice in America," European Review 1, 3 (1993): 249-261. 27. Examples include: George E. Brown, Jr., "OTA: Victim of Irrationality" Environment (December 1995); William C. Clark, "Environmental Stupidity," Environment (December 1995); Joseph F. Coates, "Technology Assessment: Here Today, Gone Tomorrow," Technological Forecasting and Social Change 49 (1995): 321-323; Amo Houghton, "In Memoriam: The Office of Technology Assessment, 1972-95," Congressional Record 141, 153 (September 28, 1995): E 1868; Warren E. Leary, "Congress's Science Agency Prepares to Close Its Doors," New York Times (September 24, 1995); M. Granger Morgan, "Death by Congressional Ignorance," Pittsburgh Post-
Gazette (August 2, 1995); Fred W. Weingarten, "Obituary for an Agency," Communications of the ACM 38, 9 (September 1995): 29-32. 28. For a good summary, see the special issue of Technological Forecasting and Social Change (54 [2-3], 1997) on technology assessment, co-edited by David Guston and Bruce Bimber. 29. Bruce Bimber, The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment (Albany, NY: SUNY Press, 1996), pp. 12-24. 30. Hill, "The Congressional Office of Technology Assessment." 31. Fred B. Wood, "Lessons in Technology Assessment: Methodology and Management at OTA," Technological Forecasting and Social Change 54 (1997). 32. David Jensen, based on my notes of his comments during a panel presentation. For abstract see p. 263 in Clinton J. Andrews, ed., Technical Expertise and Public Decisions: Proceedings of the 1996 International Symposium on Technology and Society (Piscataway, NJ: Institute of Electrical and Electronics Engineers, June 1996). 33. For arguments favoring peer review of policy analysis see Serge Taylor, Making Bureaucracies Think (Stanford, CA: Stanford University Press, 1984), pp. 3-37; or Frank von Hippel, Citizen Scientist (New York: Simon & Schuster, 1991), pp. 1-51. 34. Sheila Jasanoff, The Fifth Branch: Science Advisors as Policymakers (Cambridge, MA: Harvard University Press, 1990), pp. 64-76. 35. Terry Davies suggested this point to me.
5 Institutional Factors Affecting Analysts I have suggested that analysts need appropriate institutions within which to practice their craft. All such institutions should do a good job of constructing knowledge; and some should also accommodate diverse approaches to this task, preserve the autonomy of analysts, and encourage their neutrality. Few existing institutions now employ the joint fact-finding approach. Whether or not the situation changes in the future, analysts can only increase their effectiveness by becoming more aware of their institutional context. This chapter reflects on a few key factors identified by the OTA case, but it is by no means an exhaustive examination of institutional issues. Chapter 2 observes that the adversarial approach used in courts systematically constructs knowledge, but that it tightly constrains the range of acceptable approaches to that task and inhibits neutrality. For example, it is the disputing parties who typically hire any expert witnesses involved in a case, so no one expects the experts to contribute in a balanced fashion. Only recently have judges received encouragement from the U.S. Supreme Court to hire their own technical advisors for complicated cases.1 Even then, the role of the "special master" is usually limited to that of gatekeeper or quality controller rather than provider of new information.2 Most experiments with joint fact-finding and cooperative approaches to knowledge construction have been in the alternative dispute resolution arena that augments judicial decision-making.3 If that is the situation in judicial institutions, what happens in legislative and administrative contexts where rules of evidence are less clear? POWER Power is the most significant element of context for analysis. Regardless of institutional setting, such as branch of government, practicing analysts are more likely to encounter the harsh world of Machiavelli and Nietzsche than the ideal
world of Bacon and Rawls. The brutal reality is that, even in western democracies, much decision-making boils down to a simple contest of wills. Many technical analysts are blind to the overriding importance of power relationships. The Enlightenment project of which they are a part pursues ideals of freedom, justice, and truth, and many analysts assume that what they have experienced at the "truth" end of Price's spectrum applies in general. Flyvbjerg better captures reality when he suggests: "Power has a rationality that rationality does not know, but rationality does not have a power that power does not know."4 Relations between analysts and decision makers are more likely to exist when there is political stability, therefore the power of rationality is embedded in stable power relations rather than in confrontations.5 Analysts have a better chance of influencing decision-making when major power struggles are not going on, as the OTA case confirms. POLITICIZATION Public policy analysts produce work that is subject to scrutiny by political partisans, interest groups, and other actors in the policy-making process. Analysts from executive branch agencies such as the U.S. President's Council of Economic Advisors, Council on Environmental Quality, or Office of Science and Technology Policy have conflicting institutional incentives. On the one hand they are asked to provide accurate information and realistic advice to the President, yet on the other they are asked to defend the President's policies during partisan wrangling. According to one theory, such expertise inevitably becomes politicized, because only those analysts demonstrating greater loyalty to boss than loyalty to profession survive the first Presidential crisis.6 At a minimum, there is a significant degree of self-selection by the experts, so conservative economists, for example, will join a Republican administration whereas liberal economists will join a Democratic administration. Analysts drawn from the ranks of the Civil Service can temper the tendency toward politicization. This strategy has worked well among analysts with strong professional or disciplinary ties and in agencies with research mandates. However, professional bureaucrats typically have much less access to and clout with the political decision makers. Executive agency analysts may also have mixed motives when their testimony can influence their agency's budget. One solution to the problem of inappropriate institutions has been ad hoc reliance on academia. Academics can bring great technical expertise to bear on public questions, although their training rarely equips them to share their knowledge in a political environment. Nevertheless, academics are comfortable with the ideal norms of scientific behavior and with the practice of peer review. The U.S. National Research Council (NRC, serving the National Academies of Science and Engineering and the Institute of Medicine) provides a relatively autonomous institutional vehicle for delivering expert advice to the national government. It assembles teams of volunteers, many of them academics, to carry out specific studies requested by governmental decision makers, and it subjects
the products to rigorous peer review. It less often seeks stakeholder review of its products, on grounds that the study's scientific credibility could be compromised. For ad hoc analysis, this institution is appropriate in many ways. Yet critics complain that that NRC studies suffer from weaknesses similar to those of academia, having especially an elite orientation and a conservative bias.7 Also, because it relies on volunteers, the NRC complex is not equipped to perform regular policy analysis, only special studies. OTA has been labeled as a "boundary" organization.8 Such an organization carves out a neutral space for performing technical analysis on an ongoing basis by balancing the relative influence of competing interests. Another example is the Health Effects Institute, which is cofunded by the automobile industry and the U.S. Environmental Protection Agency to perform toxicological research that directly informs regulatory decision-making. Since funding comes in equal portions from both parties to the regulatory debate, research findings are more readily accepted. Successful boundary organizations meet three criteria: (1) they provide the opportunity to create objects and standardized packages (such as patents, reports, or model research agreements) that are useful to both parties; (2) they involve the participation of actors from both sides of the boundary, as well as professionals who mediate between them; and (3) they exist in an institutional space that is distinct from, yet maintains direct ties to both sides. OTA performed better on the second and third criteria than on the first. LEARNING FROM OTA The post-mortem of OTA in chapter 4 shows the importance of institutional context in determining analytical process and style. Having six hundred bosses (of which only a dozen or so paid close attention) forced analysts to carefully manage the value content of their work, to strive for balance, and to seek wide review of products by both experts and stakeholders. Participants in OTA studies responded to our survey with powerful expressions of warmth, concern, and goodwill for the now-defunct OTA and its products. Without exception, respondents had good things to say about the agency, and hoped that in the future we would once again have a way to generate credible assessments. This may be nothing more than a response bias, but the same message, whether scribbled in marginal comments or in page-long answers to open-ended questions, came from Republicans and Democrats, liberals and conservatives, and all stakeholder and disciplinary perspectives. The agency is sorely missed by the expert community, which knows what it has lost. One respondent summarized the 104th Congress's populist fervor that doomed OTA as follows: "Indeed, there were loud howls of protest from the 'intellectual' community; all the more reason for a public execution to show who was in charge." "Power makes stupid," says Nietzsche, "politics devours all seriousness for really intellectual things."9 Technical analysis is only a second-order concern during times of political upheaval. The Gingrich revolution and its "Contract with America" put Republicans in the leadership positions in Congress for the
first time in decades. One casualty of the destabilized power relations was OTA. A well-known atmospheric scientist said in his survey response: What I find frightening about this case is that I would do very little different [at OTA]. OTA represented honesty, neutrality, objectivity, and scientific/technical accuracy. I submit that what happened (my kindest hypothesis here) was the "French Revolution effect"; the innocent get beheaded at the same rate as the guilty. To me, it serves as a sobering example of what can happen when demagoguery (of any stripe) runs unchecked. This scientist's lament is a poignant reminder that knowledge is a very weak form of power, and that successful joint fact-finding is a fragile bloom that perishes quickly in a hostile environment. Two entries shared from the diary of another distinguished scientist add political color: October 26, 1995, 8 am, Breakfast at Capitol, Members Dining Room. Attending are House Speaker Gingrich, Congressmen Walker (Chair, House Science Committee) and Houghton (longtime defender of OTA), plus [several other leading scientists]. I thought Gingrich made it pretty clear that he was not unhappy to see the demise of OTA, because he thought he could secure good scientific advice on his own, and use it at his political discretion: i.e., all of the virtues of disinterestedness and plurality of input made its work less manageable from his perspective. As later events made it clear he was seeking a radical change in the balance of executive and legislative power, verging to a parliamentary system where the executive might be paralyzed by a vote of no confidence, this all comes of a piece. He also implied that the OTA staff was tainted by too long a period of Democratic control of the appointments. March 4, 1996. I (and doubtless many thousands of others) received Gingrich's "nomination" to a Republican National Committee "Chairman's Advisory Board" implying that the best way my advice would be heard would be if accompanied by a contribution of $5000. OTA's important role as a careful and somewhat conservative synthesizer of scientific information for broad public policy-making purposes has been taken for granted in the literature on the agency. Yet among study participants this seemed surprisingly poorly understood. Survey respondents thought that they had contributed to extremely client-focused studies, and attached little importance to the "public good" aspects of their work. For example, respondents failed to cite the frequent use of OTA studies by non-U.S. technology assessment agencies. 10 This dissonance is troubling, and suggests a communicative failure by the agency.
CONCLUSIONS

The OTA case confirms that analysts working in a context with multiple decision makers did things differently. OTA analysts dealt primarily with elite players in the public policy arena: invitations to participate in these studies were considered prestigious and were rarely declined.
At OTA, they quite successfully handled the analyst's technical challenge of appropriate simplification by seeking help from an external advisory panel in scoping the work, and by offering a range of alternatives instead of a favored recommendation to their clients. They handled the analyst's communicative challenges less successfully, by recruiting reviewers from various disciplines and stakeholder perspectives, and by offering, in their final years, a variety of products beyond their large, written reports. OTA missed various communications opportunities by failing until quite late to match their communicative style (mostly written) to that of their clients (mostly oral). OTA's distinctive process also failed to exploit the communications network represented by the agency's stakeholder advisors, those well-known experts and advocates who scoped and reviewed the studies. Most important, the OTA story demonstrates the important political context of expert advice. Technical policy analysis thrives when there are stable power relations, and withers when stability is disrupted. Joint fact-finding may be a relatively hardy flower compared to partisan analysis, but it still can be crushed by the boisterous partisanship that periodically romps through western democracies.
NOTES

1. Jocelyn Kaiser, "Project Offers Judges Neutral Technical Advice," Science 284 (4 June 1999): 1600.
2. Sheila Jasanoff, Science at the Bar: Law, Science, and Technology in America (Cambridge, MA: Harvard University Press, 1995), p. 66.
3. Lawrence Susskind, Sarah McKearnan, and Jennifer Thomas-Larmer, eds., The Consensus Building Handbook: A Comprehensive Guide to Reaching Agreement (Thousand Oaks, CA: Sage Publications, 1999).
4. Bent Flyvbjerg, trans. Steven Sampson, Rationality and Power: Democracy in Practice (Chicago: University of Chicago Press, 1998), p. 2.
5. Ibid., pp. 232-234.
6. Bruce Bimber, The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment (Albany, NY: SUNY Press, 1996), pp. 12-24.
7. Andrew Lawler, "Is the NRC Ready for Reform?" Science 276 (May 9, 1997): 900-904.
8. David H. Guston, Between Politics and Science: Assuring the Integrity and Productivity of Research (New York: Cambridge University Press, 2000).
9. Friedrich Nietzsche, Twilight of the Idols (Harmondsworth: Penguin, 1968), p. 60. Quoted in Flyvbjerg, Rationality and Power, p. 230.
10. David Guston and Bruce Bimber, coeds., Technological Forecasting and Social Change 54 (2-3), 1997.
Part III Mixed Participation in Analysis
6 Comparing Environmental Risks Casual visitors to Montpelier in the late 1990s would not have guessed that this smallest of state capitals was home to the Green Mountain Institute for Environmental Democracy (GMIED). An enterprise with a name this evocative could exist only in Vermont, where it consulted to state and federal environmental protection agencies around the nation. Their small group of bright, dedicated professionals in risk assessment, consensus-building, law, and public policy probably spent more time in airports than in their offices, and they were the preeminent experts on comparative risk. This multicase study is based on a raid of their archives.1 OTA's study process explained much of the success it had, given its peculiar institutional context. This case explores the role of process in more detail with a review of comparative risk, a marriage of process and analysis that has been used by government at the local, state, and federal levels to inform environmental regulatory priorities in the United States. The key variable distinguishing comparative risk projects from one another is the degree of participation in the analytical work by nonanalysts such as officials, professional stakeholders, and members of the lay public. Comparative risk projects performed by the U.S. Environmental Protection Agency (USEPA) and the States of Washington, California, and Minnesota illustrate two procedural points. First, institutional context constrains process choices whereas the analytical task does not; indeed, the procedural context should frame the analysis. Second, the cases also make clear that lay participation in analysis is more than just feasible; it is often helpful. They show that participation helps to resolve the analyst's technical problem of appropriate simplification, while at the same time increasing the analyst's communicative burden.
COMPARATIVE RISK We have many environmental problems, some more important and some less so. Government is one of the key actors trying to improve the natural environment, and since it has limited resources there is a need to focus more effort on solving some problems rather than others. Environmental laws have evolved in response to public concerns expressed in various ways over many decades. These laws have been implemented in a political context that is subject to the vagaries of budgeting processes, shifting ideologies, and improving scientific information. Many political actors now express a desire to "rationalize" environmental regulation at the local, state, and national levels. They desire variously to make regulations more cost-effective, target emerging rather than diminishing threats, and integrate a fragmented policy domain. There is a perceived need for good scientific information to inform the process of setting environmental priorities. Risk is technically defined as the likelihood that an adverse consequence occurs, times the magnitude of that consequence, the product of a probability and a magnitude. Risk comparisons take place on a normalized basis, typically the average probability of a single death. For example, the average annual risk of death by lightning is about one chance in two million. Given multiple hazards, risk provides a common metric for comparisons. Readers may be familiar with risk "ladder" tables which list hazards in order of increasing probability of death. For example, a policeman is more likely to die in the line of duty than a fireman, and an average person is more likely to die in an automobile accident than from exposure to environmental hazards.2 Yet risk comparison is really a multidimensional task. First, environmental hazards affect not only human health, but also ecosystems and socioeconomic activity. Second, risk estimates are uncertain; some are unknown and others are not precisely predictable. Third, the incidence of risks is variable because individuals and their circumstances differ; for example, risks faced by urban and rural communities differ from one another and from population averages. Further, comparisons could evaluate many dimensions: Local or widespread impacts? Acute or chronic effects? New or familiar hazards? Voluntary or involuntary exposures? Some dimensions of comparison lie well outside the domain of technical risk assessment. Credible environmental risk comparisons are likely to require more than a simple risk ladder approach. They will need the richness of thoughtful discussion, careful analysis, and common sense. The defining feature of a comparative risk project is the use of scientifically vetted risk assessment data to rank the various risks to human health and safety, quality of life, and ecosystems. Following this ranking activity, some projects then develop risk management strategies intended to inform policy decisions. Comparative risk is an approach intended to introduce better science into the inherently political process of environmental priority-setting. Proponents of comparative risk claim that they do not seek to displace a democratic process of deciding what is important. Instead, they see comparative risk as a
complementary approach that provides useful information to political decision makers. Many U.S. states have now carried out comparative risk projects, as have USEPA and some localities. Designers of comparative risk projects have blended four ingredients in various proportions: experts, officials, professional stakeholders, and the general public. Table 6.1 shows that many recipes have been tried to date.3

Table 6.1
U.S. and State-Level Comparative Risk Projects

Project | Dates | Experts | Stakeholders | Officials | Public
USEPA | 1987, 1990 | High | Low | High | Low
Washington | 1988-90 | High | High | High | High
Vermont | 1988-91 | Medium | Low | Low | High
Colorado | 1988-90 | High | Medium | Medium | Medium
Louisiana | 1990-91 | Medium | Medium | High | Medium
Hawaii | 1990-92 | High | Medium | Medium | Medium
Maine | 1990-96 | High | Medium | High | High
Michigan | 1991-92 | High | Low | Medium | Low
California | 1992-94 | High | Medium | High | Low
New Hampshire | 1993-97 | High | Medium | High | Medium
Florida | 1993-95 | High | High | Low | Low
Kentucky | 1993-95 | High | High | High | Medium
Tennessee | 1993-96 | High | Low | High | Low
Alaska | 1993-95 | High | Low | Medium | High
Texas | 1993-96 | High | Medium | High | Low
Mississippi | 1994-97 | High | High | High | Low
Ohio | 1994-96 | Medium | Medium | Medium | High
Nebraska | 1994-99 | High | High | Low | Low
North Dakota | 1994-97 | Medium | High | High | Medium
Utah | 1994-95 | Medium | High | High | Low
District of Columbia | 1996 | High | Low | Medium | Low
New York | 1996-2001 | High | Medium | High | Low
Minnesota | 1996-98 | Medium | High | Medium |
Iowa | 1996-98 | High | Medium | High |
Arizona | 1993-95 | High | Medium | High |
New Jersey | 1998-2002 | High | High | Medium |
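The risk arithmetic described earlier in this chapter, a probability multiplied by a magnitude and normalized to an average annual chance of death, can be illustrated with a small computational sketch. The hazard names and figures below are rough illustrative values chosen for the example, not findings from any of the projects in Table 6.1.

```python
# Risk normalized to the average annual probability of death per person.
# The death counts and population are rough, illustrative orders of
# magnitude, not data from the text or from any comparative risk project.
annual_deaths = {
    "lightning": 150,              # roughly 1 chance in two million per year
    "motor vehicle crashes": 40_000,
}
population = 300_000_000

# A simple risk "ladder": hazards listed in order of increasing probability.
risk_ladder = sorted(
    ((hazard, deaths / population) for hazard, deaths in annual_deaths.items()),
    key=lambda item: item[1],
)

for hazard, p in risk_ladder:
    print(f"{hazard}: about 1 in {round(1 / p):,} per year")
```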
While the analytical tasks and the types of expertise needed to synthesize the risk assessment data are well defined, projects have employed a variety of processes. For example, the first USEPA project in 1987 used in-house talent to
perform the entire job. By contrast, Minnesota hired a "citizens jury" of randomly selected lay people to rank environmental risks with assistance from experts. By comparing comparative risk projects we can better understand the roles of process and lay participation in determining outcomes.4 In chronological order, I now present brief case studies of the USEPA, Washington, California, and Minnesota comparative risk projects to show more completely how analysis and process intertwine.
NOTES 1. In 1997, I was asked to help run New Jersey's comparative risk project after teaching a comparative risk module in a public policy curriculum for several years. The invitation to practice what I teach provided the opportunity to perform this study. The findings here are being borne out in my own state's experience. 2. Stephen G. Breyer, Breaking the Vicious Circle: Toward Effective Risk Regulation (Cambridge, MA: Harvard University Press, 1993). 3. Table 6.1 was based on a telephone survey of comparative risk project participants in the fifty U.S. states that we performed in 1998. 4. I evaluated the comparative risk projects qualitatively but systematically. Adequacy was evaluated in ordinal terms (low, medium, high) by reading the project's final report and technical appendices and comparing the final report to reports from other projects. Value—internal, external, and personal—was evaluated for each project by interviewing selected agency personnel, risk assessment experts, and participants, respectively. Effectiveness was evaluated by searching for press reports, legislation, and other visible evidence of the project's impact. Legitimacy was evaluated on an input basis in terms of reported participation by experts and others, on a procedural basis in terms of reported openness and transparency, and on an output basis in terms of whether the final report was officially accepted. Thus I was able to consider substance, process, and participation. I relied on a growing evaluation literature in comparative risk to corroborate my findings.
7 USEPA's Unfinished Business USEPA performed the first comparative risk project and has helped fund most of the others.1 The first USEPA comparative risk project had more in common with OTA's studies than with the state-level comparative risk projects. The USEPA project involved only technically knowledgeable participants, spoke to official decision makers, and excluded the general public, much as OTA studies did. The context for this project included a securely Republican White House, a securely Democratic Congress, and a growing acceptance of risk assessment and costbenefit analysis in regulatory circles. USEPA was created in 1970 by executive order, not by legislation. Its responsibilities have grown haphazardly in response to Congressional concern about specific problems. The Clean Air Act has given it responsibility for administering a variety of air pollution control activities; the Clean Water Act has extended its responsibilities into another medium; the Resource Conservation and Recovery Act has added solid and hazardous wastes to USEPA's purview; and other legislation has tacked on additional responsibilities. Resources have flowed to USEPA's legislatively mandated programs, and not generically to USEPA. Without organic enabling legislation, USEPA administrators have had difficulty shifting resources across programs to target the most pressing current environmental problems.2 Agency executives since Ruckleshaus in 1983 had felt a need to examine this issue, and in 1986, Administrator Lee Thomas asked his staff to evaluate whether the agency's resources were being allocated to those problems that posed the greatest risks.3 By February of 1987, USEPA had an answer. The findings and the method employed attracted wide attention. The agency itself revisited the study, as did numerous outside parties in conferences, articles, and books. The study launched a cottage industry that a decade later still supported consultancies such as
GMIED in Vermont. What was the structure of the USEPA project, how did they execute it, and did it succeed? PROJECT STRUCTURE Over a nine-month period, eighty-two USEPA staffers and one consultant devoted countless hours to evaluating the relative risks of thirty-one environmental issues such as acid precipitation, global warming, and oil spills. Although it was an internal agency study, it represented a significant investment of senior staff resources. A five-person team of project leaders, assisted by an eight-person project staff, supervised four work groups. Each group included a mix of senior agency managers and technical experts. A few people were members of more than one group. The cancer risk work group included twenty-five members, most with health science backgrounds, and it was charged with evaluating only the cancer risks associated with the thirty-one environmental issues. The noncancer health effects work group had twenty-three members, also mostly health scientists, who assessed both acute and chronic effects of several types: cardiovascular, developmental, hematopietic, immunological, kidney, liver, mutagenic, neurotoxic/behavioral, reproductive, respiratory, and other. The ecological effects work group had nineteen members, mostly biologists, who assessed impacts on natural ecosystems (salt water, fresh water, terrestrial, and avian systems, both plant and animal) resulting from habitat modification and environmental pollution. The welfare effects work group had twelve members, mostly economists, who analyzed a variety of damages to property, goods, services, or activities to which a monetary value can often be assigned, for example, crops, fisheries, tourism, buildings, and visibility. The project leaders defined the scope of the project for the work groups, specifying the thirty-one issues in advance. Some scientists were unhappy because they did not organize the list by source, pollutant, or receptor, but rather by the way in which laws were written and programs organized. Thus some double counting resulted between, for example, health risks of hazardous waste sites and drinking of contaminated water. As a work-around, leaders instructed the work groups to take account of inter-media transfers and secondary effects by following pollutants from cradle to grave. They also directed the work groups to focus on risks present now, those residual risks left after decades of regulation that represented USEPA's "unfinished business." ANALYSIS PROCESS USEPA managers asked scientists to use both quantitative data and expert judgment to provide advice despite data gaps and uncertainty. The results thus were not meant to be scientifically reproducible, but rather to represent informed judgment. Many scientists felt uncomfortable with this request because
speculative judgment—getting ahead of your data—is frowned upon in the professional scientific culture. Project leaders also directed participants to follow a common set of analytical guidelines to enhance the comparability of the results. These included using uniform assumptions about emissions, exposure, and dose-response relationships; measuring effects from current problems whenever they occur without time discounting; and considering risks to both the total population and to the most exposed individual. All of the work groups proceeded through three basic steps, as follows:

1. Establish a basis for comparisons, that is, agree on an overall conceptual approach for comparing risks among the problem areas, including a common denominator such as numbers of cases (for cancer risks) and dollars (for welfare effects). This proved difficult for the noncancer health risk work group, which found a need to track three measures in parallel: severity of health endpoints caused by the substance, population exposed to the substance, and potency of the substance at expected exposure levels. The group then combined these scores in various ways to produce relative risk rankings, and ensured the robustness of the rankings with extensive sensitivity analysis (a sketch of this kind of score combination follows this list). The ecological risks work group had even more trouble. It finally solicited advice from outside academic experts before settling on criteria that included intensity of impact, scale of impact, ecosystem recovery rate, potential for control, and uncertainty regarding impacts, all qualitatively weighed.

2. Develop standardized data, that is, accumulate and organize existing data on risks for each of the thirty-one problem areas, typically in the form of summary sheets. This task was relatively easy for the cancer risk work group because comparable data sets and standardized analytical strategies had been in use for many years, although significant data gaps remained. By contrast, the noncancer health effects and ecological effects work groups had little comparable data and no widely accepted methodology; hence they had a more difficult time. The welfare effects work group enjoyed accepted methods but suffered from particularly significant data gaps.

3. Use informed judgment, that is, combine the data from the summary sheets with the judgment of work group members to produce a relative ranking of the thirty-one problem areas. The scientists approached this "unscientific" task with trepidation and each of the four work groups adopted a different ranking strategy. In no case were the risk rankings mechanical; instead they were based on partial quantitative information, qualitative impressions, and informed judgment. Cancer risks were ranked in order from one to twenty-six, with three problems unranked for lack of data, and two unranked because they involved double counting of effects already considered elsewhere. Noncancer health risks were ranked using only three categories—low, medium, and high—and six problems were unranked for lack of data. Ecological risks were ranked into six levels, with nine problems left unranked because they posed no ecological risk or involved double counting. Welfare effects were ranked in order from one to twenty-three, with the remaining eight problems left unranked in a category called "minor effects."
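Unfinished Business does not spell out the exact rules the noncancer work group used to combine its three measures, so the following Python sketch is only an illustration of the general technique described in step 1: ordinal scores for severity, exposed population, and potency are merged with a weighted sum, and the weights are then varied to check whether the resulting rank order is robust. The problem names and scores are hypothetical.

```python
from itertools import product

# Hypothetical ordinal scores (1 = low, 2 = medium, 3 = high) for a few
# illustrative problem areas; these are not the work group's actual data.
scores = {
    "problem A": {"severity": 3, "population": 2, "potency": 2},
    "problem B": {"severity": 2, "population": 3, "potency": 1},
    "problem C": {"severity": 1, "population": 1, "potency": 3},
}

def rank(weights):
    """Rank problem areas by a weighted sum of their three ordinal scores."""
    totals = {
        name: sum(weights[measure] * value for measure, value in s.items())
        for name, s in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Sensitivity analysis: try many weightings and count how often each problem
# area comes out on top.  A ranking that survives most weightings is robust.
first_place = {name: 0 for name in scores}
trials = 0
for w_sev, w_pop, w_pot in product([1, 2, 3], repeat=3):
    order = rank({"severity": w_sev, "population": w_pop, "potency": w_pot})
    first_place[order[0]] += 1
    trials += 1

for name, wins in first_place.items():
    print(f"{name}: ranked first under {wins} of {trials} weightings")
```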
The project leaders made no effort to integrate the four separate rankings produced by the work groups into a single prioritized list. Instead, Unfinished Business summarized the results as follows:

• No problems rank relatively high in all four types of risk, or relatively low in all four. Whether an environmental problem appears large or not depends critically on the type of adverse effect with which one is concerned.
• Problems that rank relatively high in three of four risk types, or at least medium in all four include: criteria air pollutants; stratospheric ozone depletion; pesticide residues on food; and other pesticide risks (runoff and air deposition of pesticides).
• Problems that rank relatively high in cancer and non-cancer health risks but low in ecological and welfare risks include: hazardous air pollutants; indoor radon; indoor air pollution other than radon; pesticide application; exposure to consumer products; and worker exposures to chemicals.
• Problems that rank relatively high in ecological and welfare risks, but low in both health risks include: global warming; point and non-point sources of surface water pollution; and physical alteration of aquatic habitats (including estuaries and wetlands) and mining waste.
• Areas related to ground water consistently rank medium or low.4

The report observed that these priorities were quite different from USEPA's current budget priorities and from public opinion polls (which were highly correlated with one another; see Table 7.1). The report, released in February 1987, made no recommendations for specific policy changes. Instead it closed with a call for public debate about environmental priorities and for research to fill gaps in the data.

REACTIONS

Public debate ensued, but it was less about priorities and more about the prioritization approach. The Unfinished Business report generated much interest in the comparative risk paradigm, and USEPA funded similar exercises within their regional offices and in state governments, where four pilot projects were launched in 1988.5 Following the Reagan-Bush transition that installed William Reilly as USEPA Administrator in 1989, the agency took a second look at its own project. The agency's Science Advisory Board (SAB), made up of eminent outsiders, reviewed the risk ranking in Unfinished Business at Reilly's request and largely endorsed both the method and results.6 The SAB's own report Reducing Risk confirmed that risk should be a consideration when developing agency priorities, and that the informed judgment of agency scientists was an appropriate basis for ranking risks.7 Both reports stressed the need to consider other factors in addition to residual risk when developing budget priorities for the agency. However, neither report took that next step, preferring to leave such delicate judgments in the hands of agency managers and the U.S. Congress.
Table 7.1
USEPA Comparative Risk Project Ranking

Threat | Cancer Risk | Noncancer Risk | Ecological Risk | Welfare Risk | Public Perception
Criteria air pollutants from mobile and stationary sources | M | H | H | H | 4
Hazardous air pollutants | H | H | M | L | 4
Other air pollutants | NR | NR | NR | H | NR
Radon—indoor air only | H | M | NR | L | NR
Other indoor air pollutants | H | H | NR | NR | 10
Radiation—other than radon | H | M | L | NR | 13
Stratospheric ozone depleters | H | M | H | H | NR
CO2 and global warming | NR | NR | H | H | 14
Direct point source discharges to surface water | L | L | H | H | 2
Indirect point source discharges to surface water | L | M | H | H | 2
Nonpoint source discharges to surface water | M | M | H | H | 2
Contaminated sludge (municipal, scrubber sludge) | M | L | M | L | NR
To estuaries, coastal waters, and oceans from all sources | NR | M | H | H | NR
To wetlands from all sources | NR | L | H | M | NR
From drinking water at tap | H | H | NR | L | 9
Active hazardous waste sites | H | L | L | M | 1
Inactive haz. waste sites | H | L | M | M | 1
Non-hazardous waste sites—municipal | M | M | M | M | NR
Non-hazardous waste sites—industrial | H | M | M | L | NR
Mining waste (includes oil and gas extraction wastes) | M | L | H | L | 12
Accidental releases—toxics (includes all media) | L | H | M | L | 3
Accidents—Oil spills | L | NR | M | L | 5
Releases from storage tanks | M | L | L | L | NR
Other groundwater contamin. | M | NR | M | NR | NR
Pesticide residues on foods eaten by humans and wildlife | H | H | H | NR | 7
Application of pesticides | H | H | NR | NR | 8
Other pesticide risks | H | M | H | M | 8
New toxic chemicals | H | NR | NR | NR | NR
Genetically altered materials | NR | NR | NR | M | 11
Consumer product exposure | H | H | NR | NR | 10
Worker chemical exposures | H | H | NR | NR | 6

Note: H = high risk, M = medium risk, L = low risk, NR = not ranked; public perceptions ranged ordinally from 1 = highest risk to 14 = lowest risk.
Source: U.S. Environmental Protection Agency (USEPA), Unfinished Business: A Comparative Assessment of Environmental Problems (Washington, DC: USEPA, 1987).
Alice Rivlin, at a 1992 conference that critically examined the USEPA comparative risk projects, observed that the rational analysis of public policy has a long tradition, but it also has a tendency toward elitism. "The public gets the impression, often correctly, that the experts are talking mumbo-jumbo, and don't care about them or their concerns, their fears, and their priorities." A situation of mutual contempt often results, as "the pundits think the political process is not working because it pays too much attention to public opinion, and the public apparently thinks the political process is not working because no one is paying attention to them."8 Jonathan Lash, cochair of the SAB committee that produced Reducing Risk, countered that comparative risk was not "a nostrum to quell the effects of public ignorance and to prevent the contamination of the domain of experts," but rather "a superb vehicle for the integration of science and public values in environmental policymaking."9 The main substantive effect of the first USEPA comparative risk project was to elevate the importance of indoor air pollution, especially radon, as an environmental problem. Beyond this, the USEPA comparative risk projects have not directly influenced many specific legislative or regulatory decisions, although they have inspired risk-based planning exercises within the agency. Budget priorities at the margin also may be shifting toward higher-risk problems.10 Agency managers have taken to heart the need to launch a public debate over environmental priorities. However, this need has often been phrased in the managerial language of the SAB, which called for education of the public by experts. According to Reducing Risk, "this dichotomy between public perception and professional understanding of environmental risk presents an enormous challenge to a pluralistic, democratic country."11 One explanation offered for this divergence was that "unlike 'experts,' lay people do not define risk simply in terms of the expected number of deaths. Other attributes such as controllability, voluntariness, and equity are important."12 Hornstein suggests that experts-only comparative risk projects like those of USEPA are very likely to suffer from microprocess failures including unrepresentative small groups, unequal access for special interests, and the inseparability of facts and values in expert decision-making.13 He also suggests that there may be macroprocess failures such as attempting to substitute science for politics, the potential for distortion by special interests, and an excessively marginalist rationality that damps out "republican moments" of inspired political action. Currently widespread public opposition to legislative proposals requiring risk-benefit analysis seems to give weight to Hornstein's arguments. An economist from Rivlin's agency, however, expresses concern that more participatory approaches will merely replace "a scientific elite with some other elite group, such as middle-class environmentalists, congressional staff, or industrial engineers."14 In 1997 the SAB started performing another round of comparative risk analysis at the request of the Clinton administration. This effort relied more heavily on outside experts than the 1987 project, but it did not seek broad public
participation. The project never delivered substantive findings. Its final product was instead a primer on how to do comparative risk assessment. EVALUATION The experts-only approach to comparative risk practiced by the USEPA has served well for sparking debate about environmental priorities within the policy community. The reports have fairly high adequacy and value as syntheses of informed judgment about the implications of regulatory science. Some methodological concerns have been expressed, as critics have called the analyses "quick and dirty"15 and have pointed out "major data deficiencies."16 Yet the same critics "basically agreed with the sentiments expressed"17 and suggested that "even a flawed quantitative priority-setting strategy would be an improvement over any nonquantitative alternative."18 The USEPA reports have high effectiveness measured in symbolic terms, but few direct impacts on policy, as one would expect of internal studies.19 Measured in terms of the press coverage, critical commentaries, and conferences they provoked, the projects were very effective.20 At the 1992 conference, Tracy Meehan, then a USEPA manager, characterized the value of the reports as showing that policy making could be reasonably improved, which thereby created an obligation to do so. "By treating all risks as equal, we squander our resources and perpetuate a fraud on the public." The USEPA projects enjoyed substantial authority due to the involvement of nationally recognized experts on the SAB. They were "our most reliable compass in a turbulent sea of siren songs," according to the USEPA administrator.21 As an agency activity performed at the direct personal request of the administrator, these projects also enjoyed high official (pluralistic) civil legitimacy.22 However, they had low grassroots civil legitimacy because there was little public participation. Greenpeace charged that "EPA's risk assessment is nothing more than a political agenda with a thin veneer of science over it."23 Only in the most recent study did professional stakeholders become involved. The USEPA projects have been friendly to analysts because every participant is a traditional expert. In the 1987 project, the participants sidestepped crossdisciplinary communications problems by creating four single-discipline work groups. The project leaders likewise limited their synthesis efforts across the four lists to a simple sorting approach. Instead of creating an overall ranking, they grouped together risks that ranked consistently low or high, or that received inconsistent rankings. The least comfortable part for most analysts was the exercise of informed judgment when extrapolating beyond the available data. Thus, USEPA failed to follow the OTA model of multidisciplinary groups and wide review processes for improving expert-to-expert communication. However, by limiting the extent of their synthesis, the project leaders showed a humility similar to that practiced at OTA, where reports listed alternatives but not recommendations.
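The simple sorting approach described above can be pictured with a short sketch: given ratings in the four risk categories, problems are grouped into those ranking consistently high, consistently low, or inconsistently. The ratings below are illustrative placeholders rather than rows from Table 7.1, and the grouping thresholds are one plausible reading of the report's approach, not its actual procedure.

```python
# Ratings in the four risk categories: H (high), M (medium), L (low),
# NR (not ranked).  These entries are illustrative placeholders.
ratings = {
    "problem A": {"cancer": "M",  "noncancer": "H",  "ecological": "H", "welfare": "H"},
    "problem B": {"cancer": "H",  "noncancer": "H",  "ecological": "L", "welfare": "L"},
    "problem C": {"cancer": "NR", "noncancer": "NR", "ecological": "H", "welfare": "H"},
    "problem D": {"cancer": "L",  "noncancer": "L",  "ecological": "L", "welfare": "M"},
}

def count(problem, level):
    """How many of the four categories carry the given rating."""
    return sum(1 for rating in ratings[problem].values() if rating == level)

broadly_high = [p for p in ratings if count(p, "H") >= 3]
broadly_low = [p for p in ratings if count(p, "H") == 0 and count(p, "M") <= 1]
inconsistent = [p for p in ratings if p not in broadly_high and p not in broadly_low]

print("Consistently high:", broadly_high)   # problem A
print("Consistently low:", broadly_low)     # problem D
print("Mixed rankings:", inconsistent)      # problems B and C
```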
My own view is that the USEPA project provided an inadequate technical and procedural basis for changing regulatory priorities. But it offered an integrative view of the nation's environmental problems that had never been seen before. It began to show us the forest instead of individual trees, and forced difficult questions into the public debate: Why should we care about environmental problems? How should we set priorities? How should we balance fairness and efficiency criteria? What tasks should be allocated to the federal government rather than to the states? Why do we know so little about most environmental problems?
NOTES

1. The USEPA case is based on a review of the report entitled Unfinished Business, plus a review of the critical literature cited in chapter 7 and discussions with participants in the project. See U.S. Environmental Protection Agency (USEPA), Office of Policy Analysis, Office of Policy, Planning, and Evaluation, Unfinished Business: A Comparative Assessment of Environmental Problems, overview report and technical appendices (Washington, DC: USEPA, February 1987).
2. Comments by F. Henry Habicht, then-USEPA deputy administrator, at a conference in Annapolis, MD in November 1992. Summarized in Resources for the Future, Conference Synopsis: Setting National Environmental Priorities (Washington, DC: Resources for the Future, February 1993), pp. 2-3.
3. Charles W. Kent and Frederick W. Allen, "An Overview of Risk-Based Priority Setting at EPA," in Adam M. Finkel and Dominic Golding, eds., Worst Things First? The Debate over Risk-Based National Environmental Priorities (Washington, DC: Resources for the Future, 1994), pp. 47-68.
4. USEPA, Unfinished Business.
5. Kenneth Jones and others, "Comparative risk," Section 13.2, Handbook of Odors and VOC Control, draft manuscript available from the Green Mountain Institute for Environmental Democracy (Montpelier, VT, 1997), 45 pp.
6. Kent and Allen, "Risk-Based Priority Setting," pp. 47-68.
7. U.S. Environmental Protection Agency (USEPA), Science Advisory Board, Reducing Risk (Washington, DC: USEPA, 1990).
8. Comments by Alice Rivlin, then-deputy director of the U.S. Office of Management and Budget, at a conference in Annapolis, MD in November 1992. Summarized in Resources for the Future, Conference Synopsis, p. 2.
9. Comments by Jonathan Lash, then-president of the World Resources Institute, at a conference in Annapolis, MD in November 1992. Summarized in Resources for the Future, Conference Synopsis, p. 3.
10. Kent and Allen, "Risk-Based Priority Setting," pp. 47-68.
11. USEPA, Reducing Risk.
12. Comments by M. Granger Morgan, chairman of the Department of Engineering and Public Policy at Carnegie Mellon University, at a conference in Annapolis, MD in November 1992. Summarized in Resources for the Future, Conference Synopsis, p. 4.
13. Donald T. Hornstein, "Paradigms, Processes and Politics: Risk and Regulatory Design," in Adam M. Finkel and Dominic Golding, eds., Worst Things First? The Debate over Risk-Based National Environmental Priorities (Washington, DC: Resources for the Future, 1994), pp. 147-165.
14. Comments by Richard Belzer, then an economist at the U.S. Office of Management and Budget, at a conference in Annapolis, MD in November 1992. Summarized in Resources for the Future, Conference Synopsis, p. 2.
15. Comments by M. Granger Morgan, chairman of the Department of Engineering and Public Policy at Carnegie Mellon University, at a conference in Annapolis, MD in November 1992. Summarized in Resources for the Future, Conference Synopsis, p. 4.
16. Comments by Dale Hattis, Center for Technology, Environment and Development at Clark University, at a conference in Annapolis, MD in November 1992. Summarized in Resources for the Future, Conference Synopsis, p. 4.
17. Ibid.
18. Comments by M. Granger Morgan, chairman of the Department of Engineering and Public Policy at Carnegie Mellon University, at a conference in Annapolis, MD in November 1992. Summarized in Resources for the Future, Conference Synopsis, p. 4.
19. Kenneth Jones, "A Retrospective on Ten Years of Comparative Risk," report prepared for the American Industrial Health Council by the Green Mountain Institute for Environmental Democracy (Montpelier, VT, 1997).
20. In addition to the literature and conference cited in this chapter, the USEPA comparative risk projects claimed a special issue of EPA Journal 19, 1 (January/February/March 1993). They were discussed in forums as diverse as the Philadelphia Inquirer (May 19, 1991, 4-F), Utne Reader (September/October 1991, 22), Public Administration Review (January/February 1990, 82-90), and Columbia Law Review (92: 562-633, 1992).
21. William K. Reilly, "Aiming Before We Shoot: The Quiet Revolution in Environmental Policy," address by the Administrator, USEPA, at the National Press Club, September 26, 1990, Washington, DC, Office of Communications and Public Affairs (A-107), 20Z-1011.
22. Kent and Allen, "Risk-Based Priority Setting," pp. 47-68.
23. Joseph Thornton, analyst for Greenpeace, quoted in Mark Jaffe, "EPA Ranks Hazards, and Draws Some Criticism," The Philadelphia Inquirer (May 19, 1991), p. 4F.
8
Washington's Environment 2010

Advocates of the comparative risk paradigm relish telling the story of the Washington State comparative risk project. It is the highlight of their standard stump speech, useful for encouraging weary project participants embroiled in the intractable details of implementing comparative risk in various states. Based on the Washington project, proponents say: "It can be done, and you can have an impact!"

Following the 1987 release of its Unfinished Business report, USEPA sponsored pilot comparative risk projects in four U.S. states, one of which was Washington. The Washington Environment 2010 project was by far the most successful of the pilots, earning high scores for adequacy, value, effectiveness, and legitimacy.1 Unlike the national projects, this one sought public participation in many forms. It also sought policy priorities and not mere risk rankings.

PROJECT STRUCTURE

The policy entrepreneur behind the Washington project was Christine Gregoire, the strong and politically savvy director of the State's Department of Ecology. She rode this high-visibility project into the State Attorney General's office and a subsequent gubernatorial candidacy. Washington had a Democratic governor and a split state legislature during this period, with Democrats controlling the lower house and Republicans clinging to a one-vote margin in the upper house. The state went for Reagan in both 1980 and 1984, but supported Democrats in 1988 and 1992.

The project was launched in 1988 with $225,000 in USEPA seed funding plus $486,000 from the state. In addition, more than $600,000 in government staff time was allocated to the project.2 It was nominally directed by an interagency steering committee that included the leaders of nineteen state agencies and two
federal agency representatives, and, indeed, these officials participated actively in the project.3 However, the Department of Ecology took the lead, and some say that they "ran away with the project."4 Technical expertise resided in a Technical Advisory Committee made up of twenty-six state employees aided by outside consultants and a project staff. These experts provided input to a Public Advisory Committee (PAC) made up of thirty-four prominent Washingtonians. The PAC had roughly balanced representation (one third each) from business interests, environmental interests, and elected officials at the state and local levels.5 PAC members, characterized officially as "citizens," but informally as classic moderate "stakeholder types," were nominated by the steering committee and appointed by Governor Gardner. Later in the project, an Action Strategies Analysis Committee composed of twenty-eight agency planners was created to translate the findings into realistic policy options. There was broad public participation throughout the project: in addition to the PAC there were education efforts, a major public summit meeting, and many smaller town meetings around the state.

ANALYSIS PROCESS

The PAC created the official risk rankings based on a combination of expert advice, public input, and political judgment. Yet it was the interagency steering committee that really launched the process. In collaboration with advisors from USEPA, the steering committee commissioned a set of technical papers on Washington's human health, ecosystem, and economic risks. They implicitly set the initial scope of the process so that it echoed the USEPA effort of a year earlier.

The technical committee was granted enough resources to develop high quality technical papers on specific problems within the three categories of health, ecosystem, and economics. The papers provided four types of valuable information: (1) they characterized the risk, (2) they reported on trends over time, (3) they described management options related to each problem, and (4) they estimated how easy or difficult it would be to reduce the risk. Individuals wrote the papers and technical subcommittees approved them. The technical committee went on to rank the problems, but this ranking was never published. Instead, the technical committee took the communicative step of polishing the papers to make them easily readable, and then delivered the papers to the PAC.

Members of the PAC were overwhelmed by the technical data contained in the issue papers. Technical committee members therefore spent a great deal of time working through the data with the PAC. These tutorial sessions allowed the PAC members to understand but not master the technical topics. The interactions with experts provided PAC members with a shared knowledge base that improved the quality of the subsequent priority-setting exercise. PAC members became better able to interpret the "facts" from their own—and others'—points of view. Participants reported that the education process added tangible value, producing a different outcome than would have been the case if they had merely voted as
interest group representatives. They became representatives in the Burkean sense,6 serving as trustees rather than delegates, acting in what they perceived to be the best interests of their constituents, but now with superior knowledge of environmental risks. As one PAC member said, "We had to be leaders."7

The PAC released in late 1989 a preliminary ranking of problems based on multiple criteria (human health risk, ecological risk, economic risk, risk trend, manageability of the threat, and personal judgment). They split twenty-three threats into five priority levels, which truly represented policy priorities because they included both risk assessment and management criteria (see Table 8.1).

The project next involved the broader public. The PAC and steering committee convened a large public environmental summit in November 1989 to discuss their progress to date. They shared the preliminary policy priorities and background information in this two-day forum, while also eliciting feedback from the six hundred public participants. Twelve town meetings conducted around the state subsequently involved an additional one thousand people in the discussions. Public participants at each meeting were asked to add items of concern and to vote on their own list of priorities. These lists typically differed from that of the PAC in ranking indoor air pollution and radon quite low, but agreed in endorsing outdoor air pollution as the top priority.

State staff people assigned to an Action Strategies Analysis Committee boiled down the three hundred-odd action ideas generated at the public meetings to twelve clusters of options, and assigned a subcommittee to each cluster. The staffers, mostly ex-technical committee members, evaluated the political, technical, and institutional feasibility of each idea, as well as its risk reduction potential. They reported their findings to the PAC and steering committee at a two-day retreat in April 1990. The analysts were quickly handed back a set of action priorities.

The analysts developed a draft action agenda during May 1990, which the PAC took to another round of town meetings for comments. Final comments were incorporated at a June 1990 meeting of the PAC and steering committee, and a report was published in July 1990. The report, Toward 2010: An Environmental Action Agenda, reprised the PAC's ranked list of environmental threats and added a new list of "challenges." The twelve challenges were not ranked, but rather grouped into broad categories to organize the proposals contained in the action agenda. Included among the challenges were the PAC's top-ranked environmental threats plus the public's additions, minus indoor air pollution. The summary of the action agenda was broadly distributed, but the full report went out to only 350 people, mostly longtime project participants.

A follow-up publication appeared in October 1990, entitled A Citizen's Guide to Washington's Environment. It explained the state's environmental problems and recommended actions that individuals could take. Oddly, this report lopped off the bottom tier of the PAC's ranking altogether, leaving only eight threats on the list of "final priorities."
Table 8.1
Washington State Comparative Risk Project Ranking

Environmental Threat

Priority Level 1
  Ambient air pollution
  Point source discharges to water
  Nonpoint source discharges to water

Priority Level 2
  Drinking water contamination
  Uncontrolled hazardous waste sites
  Wetlands loss/degradation
  Nonchemical impacts on forest lands
  Nonchemical impacts on agricultural lands

Priority Level 3
  Indoor air pollution
  Hydrologic disruptions
  Global warming and ozone depletion
  Regulated hazardous waste sites
  Nonhazardous waste sites
  Nonchemical impacts on recreational lands
  Pesticides (those not covered elsewhere)

Priority Level 4
  Indoor radon
  Radioactive releases
  Acid deposition
  Sudden and accidental releases
  Nonchemical impacts on range lands

Priority Level 5
  Nonionizing radiation
  Materials storage
  Litter

Source: Washington State Department of Ecology (WDOE), Toward 2010: An Environmental Action Agenda (Seattle, WA: WDOE, July 1990).
EVALUATION

Washington's comparative risk project immediately proved its effectiveness.8 Following the lead of the governor, the agencies, and the diverse interests represented on the PAC, the state legislature passed a landmark Clean Air Act in 1991. Other legislation was also credited to the consensus built by the comparative risk project, including a Growth Management Act (1990-1), a Transportation Demand Management Act (1991), a State Energy Strategy
(1991), and bills encouraging recycling, water conservation, and non-point source pollution prevention (all 1991-2). In addition, Executive Order 90-06 implemented the action agenda within the agencies, and budget requests starting in Fiscal Year 1992 reflected the agenda's priorities at the margin. As a follow-up, the Department of Ecology began reporting environmental indicators that allowed better tracking of progress toward environmental goals in the state. Indoor smoking restrictions also gained impetus from the project, as did a landmark 1998 legal settlement in which tobacco companies compensated U.S. states for the health costs of smoking-related illnesses (this was also led by then-Washington Attorney General Gregoire).9 The scope of state agency involvement facilitated the implementation of this agenda. Many officials had their hands in the creation of the product and were familiar with it.10

The project mixed risk ranking and risk management criteria in inconsistent ways when setting policy priorities, so that it was impossible to sort out the relative importance of problem severity and ease of solution. Thus, by some ideal measure, the project lacked transparency and was methodologically impure. However, this methodological compromise improved the project's short-term political effectiveness by delivering a timely action agenda as its product.11 The USEPA projects never got that far.

In the view of some PAC members, the effectiveness of the Washington project was diminished when the agencies took back control of the results from the PAC following the release of the 1990 final report. These participants could have helped to "sell" the results in a variety of public forums; instead they were sent home and the agency professionals took charge. Even PAC members who were public officials, such as the city manager of Yakima, were "allowed to fade away" because the administration was "reluctant to have an officially sanctioned 'loose cannon' out expressing opinions and making suggestions that may require serious responses."12

The project provided substantial value to policy makers and participants, but less to the technical community. Policy makers were asked to implement a virtual political consensus, a rare gift. Direct project participants learned a great deal about their issues and the perspectives of other stakeholders. This benefit did not, however, transfer to nonparticipants who had representatives on the project, because nonparticipants did not get to interact with the technical experts. Representatives learned enough to understand the technical issues but not to explain the science to others. Technical experts had the satisfaction of an attentive audience, although they did not learn much new science during the process.

The Washington comparative risk project had good technical adequacy, based on the high-quality background papers that were written by the agency staff participating in the technical committees and on their briefings. However, critics have pointed out that (1) the project was methodologically compromised as described earlier; and (2) the project's scope was inadequate because it avoided several of the most contentious environmental issues facing the state, including logging, spotted owls, and water quantity problems. Insiders say that these issues
were avoided because other forums were already in place for them, but skeptics say that the comparative risk project generated a successful consensus precisely because it avoided the difficult issues. "If you're going to do something meaningful in the environmental arena it requires pissing off some people," said one PAC member.13

This outcome raises the question of legitimacy. Project inputs delivered substantial initial legitimacy, since experts, officials, stakeholders, and the lay public all participated. The process further enhanced the project's legitimacy, since it was mostly open, transparent, and responsive to public input. An observer later noted: "They work with the same data, and a common set of objectives and problem definitions, so the interaction compels explicit discussion of the meaning and significance of the data and the values that each participant brings to the process. It is an interaction from which everyone learns."14 Project outputs were officially accepted by the governor and the legislature, which further builds the case for legitimacy. The decision to scope the project narrowly and thereby reduce controversy diminished the project's comprehensiveness but not its legitimacy. According to another PAC member, "if you can't build a mansion, you build a house you can support."15

In sum, the Washington project showed that it was feasible to couple substantively rational methods of risk assessment and priority-setting with procedurally rational official representation and public participation. In so doing, the expert's job expanded to include translation and education activities, and some methodological rigor and comprehensiveness were sacrificed for pragmatic reasons.
NOTES

1. The Washington case is based on a review of the reports produced, interviews with selected participants, and the critical literature. Washington State Department of Ecology, Environment 2010 project reports including The State of the Environment Report (November 1989), Toward 2010: An Environmental Action Agenda (July 1990), and The 1991 State of the Environment Report (July 1992). Key contributions to the critical literature on Washington's project include Kenneth Jones, "A Retrospective on Ten Years of Comparative Risk," report prepared for the American Industrial Health Council by the Green Mountain Institute for Environmental Democracy (Montpelier, VT, 1997); David L. Feldman, "Environmental Priority-setting through Comparative Risk Assessment," Association for Public Policy Analysis and Management Annual Research Conference, Pittsburgh, PA, November 2, 1996, 34 pp.; Margot Dick and Marian Slaughter, "Final Report: An Analysis of Nine Comparative Risk Project Budgets," prepared for the Western Center for Comparative Risk by Ross & Associates (Boulder, CO, January 1994); Richard Minard and Ken Jones with Christopher Paterson, "State Comparative Risk Projects: A Force for Change," Green Mountain Institute for Environmental Democracy (formerly NCCR) Issue Paper #8 (Montpelier, VT, 1993), 83 pp.
2. Dick and Slaughter, "Final Report," 0-1.
3. Personal communication with Dee Peace Ragsdale, Washington State Department of Ecology, May 17, 1999.
4. Minard, Jones, and Paterson, "State Comparative Risk Projects."
5. Washington State Department of Ecology, Environment 2010: The State of the Environment Report (Seattle, WA, 1989), p. 7.
6. Edmund Burke, "The English Constitutional System," in Hannah Pitkin, ed., Representation (New York: Atherton Press, 1969), p. 175.
7. Nan Henrickson as quoted in Minard, Jones, and Paterson, "State Comparative Risk Projects," p. 25.
8. Washington State Department of Ecology, Environment 2010: The 1991 State of the Environment Report (Seattle, WA, 1992), p. 1.
9. Betty Holcomb, "The tobacco slayer," Good Housekeeping 229 (July): 27-28.
10. Personal communication with Dee Peace Ragsdale, Washington State Department of Ecology, May 17, 1999.
11. Minard, Jones, and Paterson, "State Comparative Risk Projects," p. 15.
12. Ibid., p. 25.
13. Bob Nichols as quoted in Minard, Jones, and Paterson, "State Comparative Risk Projects," p. 31.
14. Jonathan Lash as quoted in Christopher J. Paterson and Richard N. L. Andrews, "Values and Comparative Risk Assessment," chap. 15 in the Handbook for Environmental Risk Decisionmaking (Boca Raton, FL: CRC Press, 1996), p. 218.
15. Fred Souter as quoted in Minard, Jones, and Paterson, "State Comparative Risk Projects," p. 31.
9
California's Toward the 21st Century

The infamous California State comparative risk project started in 1992, well after the conclusion of the successful Washington State project. Pete Wilson, a Republican, was governor, and he was engaged in constant warfare with a Democratically controlled state legislature. Control of the legislature would be split in 1994, when the Republicans picked up a one-vote margin in the lower house while Democrats retained control of the upper house. Politics in California is a blood sport fueled by wealth and ideological extremes, with neither party dominating over time in national elections. The draft report of the California comparative risk project was released about six months before the 1994 election and became one of its early casualties.1

PROJECT STRUCTURE AND PROCESS

This ambitious, underfunded, doomed project received $150,000 in seed money from USEPA and matched it with an equal amount of state funds and large allocations of state staff time plus volunteer efforts.2 The California project was managed by an interagency cooperative representing thirty different state bodies, supported by a project staff of fifteen people from California's Environmental Protection Agency (Cal-EPA). Stakeholder input arrived via a thirty-eight-member Statewide Community Advisory Committee (SCAC), and six technical committees provided expert input.

Three of the technical committees focused on standard comparative risk topics: human health, ecological risk, and social welfare. These committees were charged with technically constructing a basis for comparing environmental risks. Three additional committees were later added to the standard recipe: environmental justice, economic perspectives, and education. These committees essentially
served to deconstruct the risk comparisons made by the first three committees and the SCAC. The six technical committees took general scoping directions from the SCAC, developed information and recommendations, and returned these to the SCAC, which compiled a draft report with help from the project staff. This draft was entitled Toward the 21st Century: Planning for the Protection of California's Environment, and it was released for public comments in May 1994. Governor Wilson distanced himself from the draft report following adverse press coverage of the social welfare topic. The report was never finalized and it has had no policy impact. What went wrong? I briefly discuss each committee's activities in order to answer this question.

In scoping the comparative risk project, the SCAC developed an important innovation: it distinguished between categories that previous projects had mixed together. It developed three lists that represented distinct ways of organizing the spectrum of environmental issues. List I considered environmental releases to media (water, air, land) by sources (e.g., for water: industrial, municipal, nonpoint source releases to surface water, releases to ground water). List II included environmental stressors (specific materials) such as asbestos, particulate matter, and pesticides. List III included potential human threats to environmental integrity such as agricultural practices, transportation systems, and water management. However, only List II was used consistently by all three technical risk assessment committees. It thus became the basis for comparing across dimensions of human health, ecological risk, and social welfare (see Table 9.1).

HUMAN HEALTH

The Human Health Committee included fifty experts from within and outside state government. Detailed results were delivered to the Advisory Committee, which did not modify the rankings but merely accepted them. Individual experts prepared threat-specific reports following a standard format, which was only briefly reviewed by the committee (they claimed no full peer review took place). Together the expert committee members ranked the comparative risks, creating first an absolute ranking, and then a comparative (relative) ranking. Experts performed cancer and noncancer health risk assessments for each threat, and relied on indicator chemicals to represent the thousands of chemicals actually out there. Rankings were Low, Medium, High, Insufficient Data to Rank, Not a Problem, and Not Ranked or covered under other topic areas.

The Human Health Committee's approach had several strengths: members held honest discussions of assumptions, scientific uncertainty and data gaps. They also developed useful summaries of indicator chemicals' effects, made explicit estimates of the percent of each topic area that was analyzed, and reported explicit levels of confidence by which to bound the assigned rank for each problem. In response to concerns expressed by the environmental equity committee, the Human Health Committee provided information on
subpopulations (identifiable susceptible/sensitive populations), and highly exposed subpopulations (defined by activity or demographics or geography) rather than a simplistic focus on population aggregates. Finally, the Human Health Committee made explicit recommendations regarding the comparative risk rankings for problems. A reviewer said, "To its credit, the committee consistently uses toxicological concepts. . . . Whether one agrees or disagrees with the conclusions, one can see the committee's reasoning and weigh it accordingly."3

The Human Health Committee's approach also suffered from certain weaknesses, the most important of which was that it examined only a few indicator chemicals for each category of threat. In addition, it maintained an unrealistic zero-threshold assumption for cancer risks and it performed a simplistic equity analysis. However, in sum, the Human Health Committee's work was more technically proficient than most other state-level comparative risk projects.

ECOLOGICAL RISK

The Ecological Risk Committee included eighteen members from various California institutions. Experts on this committee developed a multistage model of cause-and-effect (activity→stressor→medium→receptor→effect) for ecological risks. For each of one hundred-plus exposure pathways connecting effects back to activities, a committee member prepared an evaluation sheet. The sheet described the pathway elements (e.g., land disposal→solid waste→land→terrestrial populations and communities→altered animal behavior, injury, and mortality during feeding at waste sites), assigned a ranking score, provided references, and discussed remaining data needs. Ranking scores were based on four criteria, each with a five-point range. The criteria were intensity, extent, reversibility, and uncertainty. Based on scores (the range was five to twenty) all exposure pathways were then ranked as Low, Medium, or High. Next, the full committee discussed the rankings in comparative terms, and revised the ranks. The committee then aggregated the exposure pathway findings back to create activity- or threat-level rankings. For each aggregate threat the committee prepared a summary sheet that defined the threat, and summarized its history and its current condition. The committee then translated the aggregate threat rankings into categories comparable to the other committees, but felt that not all of the translations were equally successful. Finally they mentioned some unfinished business: completing an exhaustive activity-by-stressor matrix rather than the sparse one used; improving the operationalization of the "intensity" and "extent" criteria in the rankings; evaluating recreation impacts; and acquiring better data on some exposure pathways.

Strengths of the Ecological Risk Committee's approach included a clear multicriteria ranking system, a credible cause-and-effect model, and a useful detail sheet on each exposure pathway. Their approach was very transparent. Weaknesses of their approach included a lack of quantitative comparisons, lots of data gaps, and a forced marriage of bottom-up and top-down categorizations. This effort represented a pragmatic analytical compromise.
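The committee's published materials do not spell out its numerical cutoffs, but the basic score-and-bin logic it used is easy to make concrete. The short Python sketch below is only an illustration of that logic: the example ratings and the Low/Medium/High cutoffs are assumptions of mine for demonstration purposes, not values taken from the California report.

```python
# Minimal sketch of the Ecological Risk Committee's score-and-bin pathway ranking,
# as described above. The example ratings and the Low/Medium/High cutoffs below
# are illustrative assumptions, not figures from the project report.

CRITERIA = ("intensity", "extent", "reversibility", "uncertainty")

def pathway_score(ratings: dict) -> int:
    """Sum the four criterion ratings for one exposure pathway."""
    return sum(ratings[c] for c in CRITERIA)

def bin_score(score: int, low_max: int = 9, medium_max: int = 14) -> str:
    """Translate a summed score into a qualitative rank (cutoffs assumed)."""
    if score <= low_max:
        return "Low"
    if score <= medium_max:
        return "Medium"
    return "High"

# Example: the land-disposal pathway mentioned in the text, with made-up ratings.
example = {"intensity": 3, "extent": 4, "reversibility": 2, "uncertainty": 3}
print(bin_score(pathway_score(example)))  # -> "Medium" under these assumptions
```

In the actual project, of course, the full committee revisited and revised these mechanically generated ranks in discussion, a step that no formula can stand in for.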
Table 9.1
California Comparative Risk Project Ranking
(Entries give Human Health Impact / Ecological Health Impact / Social Welfare Impact)

Alteration of acidity, salinity, or hardness of water: Low / Medium / Low
Alteration of aquatic habitats: NR / High / High
Alteration of terrestrial habitats: NR / High / High
Asbestos: NR / NR / Medium
Carbon monoxide: Medium / NR / Low
Environmental tobacco smoke: High / NR / High
Greenhouse gases: NR / Medium / High
Inorganics: High / High / Medium
Lead: Medium / Medium / High
Microbiological contaminants: Medium / Low / Medium
Nonnative organisms: NR / High / Medium
Oil and petroleum products: NR / Medium / Medium
Ozone: High / High / High
Particulate matter: High / Low / High
Persistent organochlorines: High / Medium / Medium
Pesticides—agricultural use: Medium / Medium / High
Pesticides—nonagricultural use: Medium / Medium / High
Radionuclides: High / NR / High
Radon: High / NR / Medium
SOx and NOx: Low / High / Medium
Stratospheric ozone depletors: NR / NR / High
Thermal pollution: NR / NR / Low
Total suspended solids, biological oxygen demand, or nutrients in water: Low / Medium / Low
Volatile organics: High / Low / High

Note: NR = Not ranked
Source: California Comparative Risk Project (CCRP), Toward the 21st Century: Planning for the Protection of California's Environment (Berkeley, CA: California Public Health Foundation, 1994), p. 32.
SOCIAL WELFARE

The Social Welfare Committee was relatively small, consisting of twelve members from industry, academia, and government. The group lacked adequate stability of membership, time, resources, and (in many reviewers' eyes) expertise to do their job thoroughly. The expertise, at least, should have been readily available in California. The group developed a simple (Low, Medium, High) qualitative ranking system for each List II topic; however, it represented an aggregation of several rankings along different criteria. For each criterion a number of measures were applied: number of people affected, number of people exposed, severity of impact, irreversibility, involuntariness, potential for catastrophe, lack of detectability. The criteria included environmental and aesthetic well-being, economic well-being, physical well-being, peace of mind, future well-being, equity of impact, and community well-being. Summing across these criteria gave the overall ranking for each List II topic such as asbestos, carbon monoxide, and radon. A similar exercise was applied to the List III threats.

The Social Welfare Committee's approach provided broad coverage of social perspectives and concerns, and used a clear and simple ranking technique. It had an interesting emphasis on noneconomic factors. However, this approach also revealed a foolhardy level of ambition, evident in the failed attempt to be comprehensive. The analysis therefore ended up shallow and diffuse. The committee failed to quantify many things that could have been quantified (e.g., income changes), lacked coherent conceptual underpinnings, and double counted impacts covered elsewhere (e.g., physical well-being). It seemed that the wrong people ended up on the Social Welfare Committee—perhaps they were all that were left after attrition. The committee lacked skills or resources to take advantage of what the disciplinary perspectives of economics, political science, sociology, and psychology had to offer. The discussions of available metrics thus seemed uninformed.

A sympathetic reviewer commented that in contrast to the human health and ecological committees, "the committee assessing risks to social welfare appears to have had little expertise in this area or even an awareness of the substantial literature on it."4 An environmentalist supportive of the overall report said, "There are some squishy areas that can make you a bit queasy. No doubt about that."5 Hostile reviewers from the business community harshly decried:

    Death by environmentalism. . . . It doesn't take a genius to realize that such ill-defined concerns as "anxiety about the future" can apply to any project or development. . . . A promise to appoint regulators who will rely on science rather than fear would do wonders for our "peace of mind."6

The draft report was leaked just prior to its official release, and headlines in the Los Angeles Times especially poked fun at the committee's "peace-of-mind" measure. The Times characterized comparative risk as "a new and controversial method of evaluating environmental risk that downplays the traditional role of
science and takes into account people's values, opinions, fears, and anxieties."7 This distortion led to the political downfall of the California project when Governor Wilson disowned it following the public furor.

THE DECONSTRUCTORS

The overall structure of the California project had two components: the comparative risk ranking component described above, and the critique-of-comparative-risk component. One set of committees was constructing the risk ranking while another set was deconstructing it. No wonder it was politically stillborn! This made the report interesting to read, but it also made readers discount many of the findings. I now discuss the critiquing committees.

The sixteen-member Environmental Equity Committee (the key deconstructors) included primarily academics and activists rather than Cal-EPA staffers. The Environmental Equity Committee made a very important innovation in comparative risk practice by insisting that the risk assessors discuss not only the total population risk but also the most at-risk subpopulations. According to a Cal-EPA manager, the committee told Californians to "look at farm worker communities exposed to high levels of pesticides, children who are more vulnerable to respiratory diseases, or subsistence fishermen who may ingest dangerously high levels of mercury because they eat more fish." The technical committees did so, and this made the report much more informative. They successfully added an equity criterion to a debate that had been focused mainly on an efficiency criterion. Yet much of the Environmental Equity Committee's report served only to weaken the credibility of the risk assessment paradigm.

The Economic Perspectives Committee included fourteen members from a variety of perspectives: state government, industry, academia, and environmentalists. The Economic Perspectives Committee put together a useful summary of the economic concepts relevant to comparative risk, plus a moderate amount of data documenting the costs of environmental regulation. This report was technically accurate but was disconnected from the rest of the comparative risk effort. Its insights probably should have been included with the Social Welfare Committee's, rather than standing on their own (and in opposition to that other committee's thrust). A commenter noted that "Business groups have exploited (and exacerbated) these conflicts in their attacks on the report."8

The Education Committee was made up of twenty-two people from state government, academia, industry, and various civic groups. The Education Committee put together a valuable report on the status of environmental education in California. It reviewed curricula, sponsored roundtables in different parts of the state, and recommended a more sophisticated use of public participation by state government. The Education Committee offered useful, noncontroversial recommendations that have, to some extent, survived being dragged down with the rest of the California comparative risk project. Nonetheless, even the Education Committee felt the wrath of the business community: "if the kids aren't scared yet [from having environmental problems
damage their view of the world as a safe and nurturing place, as measured by the social welfare committee], they will be once educators adopt the report's plan to insert its unsubstantiated dogma in public-school curricula."9

Overall, the California comparative risk project appears to have gotten away from its sponsors. The technical committees led the process while the SCAC merely rubber-stamped the results without much reflection, or at least without a sufficient investment of time in discussing and understanding the technical recommendations. Interaction between the technical committees and the SCAC was inadequate. The report reads more "like a patchwork of conflicting agendas than a consensus document."10 Nevertheless, in the areas of human health and ecology, "the technical analyses are among the most comprehensive yet conducted and utilized assessment methodologies that went beyond those of most other projects."11

Yet the quality of the work done by the technical committees was uneven: the Human Health Committee did a good job, the Ecological Committee also did a good job but didn't stick to the planned reporting format, and the Social Welfare Committee did a transparent but weak job that was characterized by an otherwise sympathetic reviewer as a "totally superficial and inept treatment." The Environmental Equity Committee actively disputed the appropriateness of the comparative risk paradigm because of its efficiency emphasis, even as the economics committee endorsed it. The Education Committee made useful policy recommendations that stood independent of the overall report. The California project was parsed into too many pieces that were never reintegrated. Limited resources may have contributed to the project's problems. California's budget was far less than Washington's, even though California is a much larger and more complex entity.

EVALUATION

The California experience suggests that the public advisory committee should be in charge and that the technical committees should be clearly subservient to it in the project hierarchy. Such an arrangement might enhance quality control and ensure more usable results. A humbler stance on the part of the analysts could achieve the same end. The actual outcome left participants unhappy because they wasted so many hours on a project that went nowhere.

The California comparative risk project was ineffective because it alienated its client and lacked an alternative audience. "The Governor has taken the position that the report will scare businesses out of California," observed one of its authors.12 While the state's political leadership did not value its findings, the project had value for many of its participants, who learned a great deal about environmental education and environmental policy-making in California. It was also valuable for the larger comparative risk community because of its technical innovations and its thoughtful deconstruction of the comparative risk paradigm. The social welfare portion of the project report was inadequate, but the human health, ecological risk, and education chapters were superb. Finally, the report
lacked legitimacy: to its detriment it relied on small groups of self-selected experts and failed to establish requisite checks and balances. One reviewer commented: "Paradoxically, procedures that rely on consensus award veto power to the most determined stakeholders (be they representatives of business, local areas, or other narrowly defined interests), who can hold group consensus hostage to their own demands."13

The California project lacked the procedural rationality to make up for its lapses in substantive rationality. The analysts failed in their technical challenge to appropriately simplify reality. The analysts also failed in their three communicative challenges: among experts in different disciplines, between experts and decision makers, and between experts and the general public. Above all, their context failed them—the project's leaders were unable to prevent the heat of an impending election from derailing the effort. Like OTA, this project was killed off as a by-product of partisan sport. Today, samizdat copies of the draft California report sit in prized spots on the bookshelves of comparative risk analysts everywhere, except within the walls of Cal-EPA.
NOTES

1. The California case is based on a review of the draft report, the critical literature it generated, press reports, and a presentation by one of its principals. California Comparative Risk Project (CCRP), Toward the 21st Century: Planning for the Protection of California's Environment (Berkeley, CA: California Public Health Foundation, 1994). Kenneth Jones, "A Retrospective on Ten Years of Comparative Risk," report prepared for the American Industrial Health Council by the Green Mountain Institute for Environmental Democracy (Montpelier, VT, 1997). See also Richard N. L. Andrews, "Report on Reports: Toward the 21st Century: Planning for the Protection of California's Environment," Environment 37, 4 (May 1995): 25-28; Richard Stone, "Peace of Mind Pollution?" Garbage (Fall 1994): 10-13; Investor's Business Daily, "Editorial: Death By Environmentalism" (June 14, 1994); Frank Clifford, "Cal/EPA's Newest Hazard: Risks to Peace of Mind," Los Angeles Times (June 11, 1994): A1, A24; Carol Henry, Presentation to the Green and Gold Task Force of the New Jersey Department of Environmental Protection (Trenton, NJ, January 1998).
2. Margot Dick and Marian Slaughter, "Final Report: An Analysis of Nine Comparative Risk Project Budgets," prepared for the Western Center for Comparative Risk by Ross & Associates (Boulder, CO, January 1994).
3. Andrews, "Report on Reports," p. 26.
4. Ibid., p. 27.
5. Lawrie Mott, Nat. Res. Defense Council, quoted in Clifford, "Cal/EPA," p. A25.
6. Investor's Business Daily, "Editorial."
7. Clifford, "Cal/EPA's Newest Hazard," p. A1.
8. Andrews, "Report on Reports," p. 27.
9. Investor's Business Daily, "Editorial."
10. Andrews, "Report on Reports," p. 27.
11. Jones, "A Retrospective," p. A6.
12. William Pease quoted in Richard Stone, "Peace of Mind."
13. Andrews, "Report on Reports," p. 28.
10
Minnesota's Risk-Based Environmental Priorities

The story of the Minnesota project is happier, if less dramatic, than that of California. It is a tale of small events, regular people, and shared values.1

PROJECT STRUCTURE

The Minnesota Pollution Control Agency (MPCA) ran a comparative risk assessment as an adjunct to a strategic planning exercise begun in 1994. The project's unique feature was its thoughtful and parsimonious use of the public's time; it truly treated participation as a scarce resource. The project ran on three distinct tracks: (1) an in-house process involving agency staff, (2) a highly structured "Citizens Jury" process for eliciting general public perceptions, and (3) a workshop for the agency's professional stakeholders.

The comparative risk project began in 1996, with a $100,000 grant from USEPA. Having learned from California and other states the pitfalls of setting an over-ambitious agenda given a modest budget, Minnesota planned to finish its project in eighteen months instead of the two to five years typical of other state projects.2 It succeeded in doing so. The political climate was stable, with a multiterm Republican governor and solid Democratic control of both houses of the state legislature.

During 1996, a team of eighteen agency staff defined criteria for evaluating risks, selected twelve broad issue areas lying within the agency's regulatory mandate, and assessed for each the risks to human health, ecosystems, and quality of life. Issue areas included hazardous waste flows, storage tanks, abandoned hazardous waste storage sites, sources of air pollution (mobile, area, and industrial), septic tanks, nonpoint water pollution sources, waste water treatment plant discharges, animal feedlots, spills and other environmental emergencies, and solid waste management.
ANALYSIS PROCESS

In performing the technical analyses, team members relied on existing, sometimes inadequate, data, which forced the analysts to exercise qualitative professional judgment rather than perform formal risk assessments. This team, which represented all major environmental offices in the agency, then systematically ranked the relative seriousness of the effects associated with the issues. Questions guiding this effort included: What is the issue/problem? What impacts/effects come from this? Who or what is affected, and how? How many people/ecosystems are affected? How often are they affected? How serious is the impact? How certain is the information? The team also prepared a concise summary of issues for sharing with the general public.4

The agency contracted with the creators of the trademarked Citizens Jury approach to implement the public track of the comparative risk project. This process was developed by the Jefferson Center for Democratic Processes in Minneapolis, and previously had been applied to a variety of public policy issues such as transportation planning, facility siting, and crime prevention. For each jury, the organizers would recruit a group of "average" Minnesotans using a stratified sampling philosophy and employing a telephone survey approach with random-digit dialing to reduce sampling bias. They selected eighteen jurors for the comparative risk project, and paid them for their time.

The citizens met for five days. During the first four days, the MPCA staff presented information on the twelve environmental issues, and other expert witnesses provided alternative viewpoints on each issue. Each citizen was asked to rank the issues several times every day: once in the morning, again after presentations by agency experts, and one more time after internal deliberations among citizens. Consensus was not required, but citizens were asked to explain their reasoning. On most days, citizens divided into three subgroups to study particular issues more intensively for a portion of the day, after which they would report back to the group as a whole. On the final day, the citizens worked in small groups to rank the issues, and the rankings were then compiled to create an overall ranking.5

Next, the agency's professional stakeholders were given an opportunity to assert their priorities. Individuals having regular contact with the agency (regulated parties, environmental advocates, other governmental agencies) were invited to a three-and-one-half-day workshop. Again, the twelve issues were presented by agency staff, discussed by the group, ranked individually, and then ranked by the group. However, no expert witnesses from outside the agency were provided, because participants were already familiar with the issues, and were assumed to be capable of performing their own cross-examinations.6

The final step in the comparative risk project was a comparison of the three ranking exercises (see Table 10.1). Professional stakeholders and MPCA's staff had roughly similar rankings, whereas the public (Citizens Jury) ranking was quite different. Broadly speaking, the stakeholders and staff assigned lower ranks to long-regulated risks (e.g., solid waste, storage tanks) and higher ranks to newer issues (e.g., nonpoint sources of water pollution, animal feedlots).
Citizens emphasized quality-of-life and highly publicized issues (e.g., air pollution, spills and environmental emergencies) more than did the other two groups.7 This outcome matches speculation in the risk communication literature that citizens tend to evaluate risks holistically, that they do not think about environmental risks separately from other hazards, and that a social amplification phenomenon can occur via press coverage.8

Table 10.1
Minnesota Comparative Risk Project Ranking
(Entries give ordinal ranks assigned by the Citizens Jury / Stakeholders / MPCA Staff)

Area sources of pollution: 6 / 4 / 4
Feedlots: 9 / 3 / 5
Hazardous wastes: 4 / 10 / 10
Industrial sources of air pollution: 1 / 6 / 2
Mobile sources of air pollution: 2 / 2 / 1
Nonpoint sources of pollution: 8 / 1 / 3
Septic tanks: 12 / 5 / 7
Solid waste: 10 / 11 / 8
Spills and environmental emergencies: 3 / 9 / 11
Storage tanks: 11 / 12 / 12
Superfund (contaminated sites): 5 / 7 / 9
Wastewater treatment: 7 / 8 / 6

Note: Ordinal ranking from 1 = most serious risk to 12 = least serious risk.
Source: Minnesota Pollution Control Agency (MPCA), Environmental Planning Unit, Risk-Based Environmental Priorities Project Final Report (Minneapolis, MN: MPCA, September 1997).
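The Minnesota project stopped at the side-by-side comparison shown in Table 10.1, but a standard rank-correlation statistic offers one quick way to quantify the pattern described in the text. The Python sketch below computes Spearman's rho directly from the table's ranks; the calculation is mine rather than the MPCA's, and it simply confirms the qualitative observation that the stakeholder and staff rankings track each other far more closely than either tracks the Citizens Jury.

```python
# Spearman rank correlation between the three rankings in Table 10.1.
# Each list gives the ranks in the table's row order, from "Area sources
# of pollution" through "Wastewater treatment." All three are complete
# orderings of the same twelve issues with no ties, so the closed form
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) applies directly.

citizens_jury = [6, 9, 4, 1, 2, 8, 12, 10, 3, 11, 5, 7]
stakeholders  = [4, 3, 10, 6, 2, 1, 5, 11, 9, 12, 7, 8]
mpca_staff    = [4, 5, 10, 2, 1, 3, 7, 8, 11, 12, 9, 6]

def spearman(a, b):
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(f"stakeholders vs. staff: {spearman(stakeholders, mpca_staff):.2f}")    # ~0.83
print(f"jury vs. stakeholders:  {spearman(citizens_jury, stakeholders):.2f}")  # ~0.15
print(f"jury vs. staff:         {spearman(citizens_jury, mpca_staff):.2f}")    # ~0.32
```

Values near 1 indicate nearly identical orderings; the jury's ordering correlates only weakly with either professional ranking, consistent with the chapter's point about lay versus expert framings of risk.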
EVALUATION

What did the agency learn from the project? The project managers identified four items:9 (1) significant data gaps existed; (2) the uneven levels of information made the risk comparisons less meaningful; (3) communication skills were lacking among agency staff tasked to explain issues to the citizen jurors; and (4) many participants failed to grasp the key concept of residual risk,10 leading to confusion during the ranking process.

Minnesota's speedy, segregated, low-budget comparative risk process was innovative but made little impact. The press ignored it, and so did the sponsoring agency to a large extent. It was effective primarily in focusing agency attention on public communication issues.11 The citizen jurors and participating agency staff found the experience valuable, but its larger value seems to have been limited to identifying data gaps. The project's technical adequacy was quite low,
and agency personnel felt that the half-hour presentation allotted to each issue formed a poor basis for informed decision-making by the citizens.12 Organizers reported that the Citizens Jury process, which previously had handled only single policy issues, was "stretched to the limits" by the comprehensive risk comparison task.13

The good news was that the structure of the project addressed each type of legitimacy discussed in this book: agency experts provided some status-based legitimacy; the stakeholder workshop and the involvement of agency officials guaranteed some pluralistic civil legitimacy; and the Citizens Jury arguably provided a measure of grassroots legitimacy. The Citizens Jury also stewarded the public participants' limited time by carefully planning the interactions and paying those who agreed to serve as jurors. Yet the artificial, highly managed process weakened the project's claim to grassroots legitimacy.

In sum, Minnesota invested little, and made efficient use of public input, but received very modest returns. Other states have not emulated Minnesota's approach. This project introduced Minnesota's analysts to their technical and communicative problems, but the analysts did not have the opportunity to solve the problems during the project. However, the MPCA has now developed an interest in its communicative challenges.14

NOTES

1. The Minnesota case is based on project documents, third-party evaluations, and discussion with the project manager. Minnesota Pollution Control Agency (MPCA), Environmental Planning Unit, Risk-Based Environmental Priorities Project Final Report (Minneapolis, MN: MPCA, September 1997), 11 pp. Green Mountain Institute for Environmental Democracy (GMIED), "MPCA Citizens Jury Bridges the Gap," Synergy 1, 2 (November/December 1996): 3-5 (Montpelier, VT: GMIED). See also Kenneth Jones, "A Retrospective on Ten Years of Comparative Risk," report prepared for the American Industrial Health Council by the Green Mountain Institute for Environmental Democracy (Montpelier, VT, 1997). Paul Schmiechen, presentation to the Green and Gold Task Force of the New Jersey Department of Environmental Protection (Trenton, NJ, January 1998).
2. MPCA, Risk-Based Environmental Priorities, p. 2.
3. Ibid., p. 2.
4. Ibid., p. 3.
5. Jefferson Center, Report on the Citizens Jury on Risk-Based Environmental Priorities (Minneapolis, MN: Jefferson Center, 1996).
6. Ibid., pp. 3-4.
7. Ibid., pp. 4-9.
8. For a survey of this literature and a full discussion of these points, see Sheldon Krimsky and Dominic Golding, eds., Social Theories of Risk (Westport, CT: Praeger, 1992).
9. MPCA, Risk-Based Environmental Priorities Project, pp. A9-11.
10. Recall that residual risk is the "unfinished business" left after the effects of existing regulations have been taken into account. For example, given our existing drinking water treatment system that includes filtration and chlorination, what residual risk to the public health remains?
11. Paul Schmiechen, presentation to the Green and Gold Task Force of the New Jersey Department of Environmental Protection.
12. GMIED, "MPCA Citizens Jury," p. 4.
13. Ibid., p. 4.
14. As an illustration of the agency's interest, the MPCA released a request for proposals in 1998 soliciting help in developing watershed-planning programs that marry technical analysis and broad public participation.
11
Procedural Factors Affecting Analysts

Most technical work collects dust instead of kudos. Much analysis never properly enters the process of making decisions. It may appear too early or late, contain inappropriate information, or lack legitimacy. Onto the shelf it goes! Competing analyses may cancel one another out so that both end up on the shelf, a valid but annoying result. Other work fails to enter the long informal process of fact-finding and negotiation that precedes a formal decision, and therefore it may be ignored. How should analysis blend with process if the analysis is to be effective? In a decision process involving lots of public participation, how should analysts share the work?

AGENDA-SETTING

The comparative risk cases are examples of agenda-setting processes in public policy. Initial steps in political decision-making are typically invisible to observers. These include developing an awareness of a decision arena, such as a town council or a federal court, and gaining access to its agenda-setting process, explained next.1 Agenda-setting is that early stage in the decision-making process during which an issue becomes identified as a priority, one recognized and diagnosed by political decision makers. Issues gain prominence on the public policy agenda with help from eloquent leaders, disruptive crises, potent symbols, and patient policy entrepreneurs, as I now discuss in the context of three popular models.

Process models of policy-making identify triggering devices and initiators that create issues. Progression on systemic agendas (gaining press coverage, for example) and then institutional agendas (such as getting a bill introduced in Congress) is a function of issue characteristics, relevant publics, and affected institutions.2 Illustratively, water pollution was initially a local concern of public health officials, but it slowly overwhelmed localities and eventually the national
government found it necessary to step in with financing for sewage treatment and regulations for polluters. Symbolic models emphasize the transformation of perception that must occur for an issue to gain policy prominence. Problems must acquire symbolic ownership, causality, and assigned responsibility; a conventional wisdom of accepted "facts;" and other elements of a good story or dramatic structure.3 For example, unregulated local garbage dumps were replaced by regional, federally regulated solid waste landfills following highly visible catastrophes at dumps, reports of Mafia control over waste collection, moral condemnation of the existing system (disposal should be a last resort after reducing, reusing, recycling), and other transformative events. Another way to describe this process is with the image of distinctly evolving streams of problems, solutions, and political factors that periodically converge to create a policy window.4 This moment of opportunity may be seized by a policy entrepreneur to set the agenda. The sulfur dioxide emissions trading program in the Clean Air Act Amendments of 1990, for example, was an idea of great academic interest that languished on the shelf until the composition of the U.S. Congress changed, a deregulatory spirit swept Washington, environmentalism became mainstream, and the political and economic clout of utility companies shrank. Suddenly the legislation gained favor, passed both houses of Congress, and was signed into law, before scientists had even completed the major study of acid precipitation that was to justify it. All three models proclaim the value of a catalyst—a triggering device, symbol, or opportunity—in spurring political activity. For example, the popular books Silent Spring and Unsafe at Any Speed provoked political action on problems (pesticide pollution, automobile safety) that otherwise might have simmered along for years without governmental attention.5 The Chernobyl nuclear accident and the Exxon Valdez oil spill were catalytic accidents that aroused public concern, while the fall of the Berlin wall and the election of the 104th Congress were catalytic events that have left a mark on public policy. As these and other agenda-setting models suggest, the political process of translating knowledge into action is not smooth and steady. Instead, the process moves fitfully and in response to specific people and events. The efficacy of a comparative risk project will thus depend on its circumstances as much as on its design. However, a poorly designed project will almost always fail, so that only well-designed projects have a chance of succeeding. What are the elements of a well-designed project? I suggest that they are substance, process, and participation. SUBSTANCE, PROCESS, AND PARTICIPATION Simon makes a distinction between substantive and procedural rationality in decision-making that enriches the discussion of rationality begun in chapter 3.6 He measures substantive rationality in terms of optimal outcomes: was this the "best" decision, judged by criteria such as efficiency, fairness, and wisdom? Simon measures procedural rationality in terms of optimal process: were the
stages in decision-making reasoned, transparent, and legitimate? The concept of substantive rationality is convergent in many respects with theoretical reason and the goal of status-based legitimacy, while procedural rationality converges nicely with the concept of practical reason and the goal of civil legitimacy. The process of political agenda-setting does not guarantee substantively optimal outcomes. Indeed, it is problematic even to define what might be such an outcome. Edwards and colleagues argue that the very concept of an optimal decision is flawed, because we operate with incomplete knowledge of the potential solutions and their impacts, both before and after making the decision.7 Further, one job of a political system is to reconcile diverse preferences; policy decisions therefore must inevitably represent compromise for some parties. Nevertheless, while it may be difficult to define ideal outcomes, it is easier to spot inferior outcomes. Some policy decisions appear to lack substantive rationality because they perform poorly relative to equally costly alternatives. An example is the decision by a regulator with a fixed budget to target a low-risk hazard (such as a mildly contaminated site) while ignoring one imposing a greater risk to human health (such as indoor air pollution). While risk reduction may not be the only goal of environmental policies, it is certainly a primary aim. Concern about inferior outcomes provides the impetus for many comparative risk projects. However, by pursuing substantive rationality we may force a tradeoff with diminishing procedural rationality. Should environmental policy decisions be entrusted solely into the hands of a few expert risk assessors? Or should the people and their elected representatives have the final say? In liberal democracies, political agenda-setting as described earlier enjoys a certain legitimacy. It has a degree of procedural rationality that few citizens would give up merely to reduce risks more cost-effectively. U.S. citizens rarely cede substantial discretionary power to experts; instead, they prefer that their environmental officials operate in a transparent manner, following rigid, statutory requirements.8 In the risk arena, scientists are called to go beyond mere factual reporting and to exercise informed expert judgment, a task that blends facts and values. Comparative risk projects thus are likely to need elements of both substantive and procedural rationality if they are to influence public policy. The need to balance substantive and procedural rationality in decision-making recalls the argument in the OTA case (chapters 4-5) that legitimacy has both status-based and civil sources. Authoritative (expert) contributions can improve substantive rationality by bolstering the knowledge base of policy decisions. It makes sense for the most knowledgeable people to make certain decisions. Political systems therefore delegate many routine decisions to technocrats to enhance the substantive rationality of policy outcomes. For example, an expert, appointed board at the Federal Reserve Bank rather than the U.S. Congress or President periodically adjusts the prime rate of interest, a key macroeconomic policy lever. Scientists at the USEPA and its state counterparts likewise translate broad legislation into detailed regulations for environmental protection. Proponents of greater substantive rationality in environmental policy typically
call for more delegation to experts and less political influence.9
Policy decisions in liberal democracies also need civil legitimacy, pursued by means of a rational process. However, procedural rationality has many variations. For instance, in the direct democracy of a New England town meeting, procedural rationality means that every interested party may be heard, that debate follows rules of decorum, that everyone gets one vote, and that a majority of votes decides the issue at hand. In representative government, procedural rationality involves a constitution to set rules and structure, fair elections of representatives, formally operated deliberative bodies, appropriate delegations of power to appointed bureaucrats, and checks and balances to prevent abuses. Bureaucrats and other appointees may in turn solicit direct public participation to enhance the civil legitimacy of decisions. Officials use public participation for several purposes: to test the political efficacy of proposed alternatives, improve the knowledge base for decision-making, develop new ideas, co-opt alienated parties into the mainstream, share or delegate responsibilities, or build support for a proposed policy. Mechanisms for participation cover a spectrum including the following:10
• One-way outward (persuasive advertising, education, newsletters)
• One-way inward (surveys, focus groups)
• Simple information exchange (public hearings, hotlines)
• Consultation (advisory groups, citizens juries)
• Joint decision-making (public-private partnership)
• Delegation (contracting with community group for implementation)
• Self-determination (local bootstrapping)
An agency's choice among these mechanisms is important and ideally reflects its goals for inviting participation. Often a combination of mechanisms is best. The choice also implicitly reflects the agency's definition of "the public." Some agencies solicit stakeholder involvement defined narrowly to include only politically active persons with a major stake in the policy decision, such as industry lobbyists and environmental advocates. These professional stakeholders are few in number and may also bring substantive expertise to the table. Other agencies define stakeholders more broadly and include any party potentially affected by the policy decision, such as neighbors and employees. Agencies can alternatively specify that participation should be open to the general public, or that a representative sample be involved. Pragmatists argue that agencies should expect relatively little of the general public,11 but few say that the general public should be actively excluded from participation. Participation can even provide agencies with a means of "constructing" citizens, by managing the terms of civic engagement.12 Yet recent experience suggests that agencies sometimes fail to recognize that participation is a scarce resource,13 and as a result, complaints of stakeholder fatigue are increasing.14 Agencies and analysts are under increasing pressure to use their opportunities for public interaction wisely. A public agency using public participation faces a paradox: it may create a new source of civil legitimacy that competes with its existing mandate. Public
officials already have the legitimacy of government behind them and operate within carefully constructed systems of checks and balances. When should appointees of elected officials define the public interest, and when should that job be delegated to a survey, focus group, or advisory body? Surveys, for example, can suffer from biases, while focus groups and advisory bodies are vulnerable to charges that they are unrepresentative. Participation does not automatically enhance civil legitimacy. Proponents of greater procedural rationality in environmental policy typically call for more lay participation and less expert influence.15 Yet this is actually a three-way debate among factions that may not even speak the same language.16 One faction uses managerial language to promote substantive rationality by involving experts to a greater degree in policy decisions. Another faction uses pluralistic language to argue for more procedural rationality, which can be promoted by increasing the involvement of public officials and major stakeholders. A third faction uses grassroots, communitarian language to demonstrate that it trusts neither the experts nor the politicians and professional stakeholders on the grounds that all are members of traditional elites; their call for greater procedural rationality emphasizes greater direct lay participation, delegation of power, and self-determination. These three factions often speak past one another, leading to impaired communication and policy paralysis. In short, the contributions of various actors to rationality and legitimacy are unclear. Do technical experts enhance only substantive rationality (and numinous, status-based legitimacy)? Do public officials enhance both substantive and procedural rationality (and carry both status-based and civil legitimacy)? Do major stakeholders enhance only procedural rationality (and civil legitimacy from a pluralistic perspective)? Does public participation enhance only procedural rationality (and civil legitimacy from a grassroots perspective)? Or does expanded participation also expand the knowledge base, as Wynne suggests?17 Worse, are there tradeoffs between substantive and procedural rationality, and between pluralistic and popular processes? Policy practitioners have struggled to develop recipes—to design processes— that successfully blend expert, official, key stakeholder, and lay contributions. Environmental policy makers using the comparative risk paradigm have been at the forefront of this effort, experimenting intensively over the last decade with alternative approaches. LESSONS LEARNED The OTA case showed that institutional context may strongly influence analytical decisions and study processes. The comparative risk cases make it clear that institutional context constrains but does not determine process choices. The case studies of state-level comparative risk projects show that remarkably diverse processes have been used. Yet the processes were not all equally successful. The case studies suggest the following lessons: 1.
Projects should pursue both substantive and procedural rationality. Adequacy, value, effectiveness, and legitimacy depend on balancing these goals, not focusing on one or the other. Comparative risk projects with weak technical analyses were ridiculed. Those conducted behind closed doors without a public mandate were ignored. Factors such as resources, leadership, and timing also affected outcomes.
2. As a corollary, status-based and civil sources of legitimacy are more often complements than substitutes. Comparative risk projects lacking one or the other may fail unless they enjoy a context that sequentially corrects the imbalance.
3. Technical experts bring only status-based legitimacy to comparative risk projects.
4. Officials and stakeholders bring both status-based and civil legitimacy to comparative risk projects. They offer expertise as well as credibility when they speak as representatives of broader interests.
5. Civil legitimacy has both pluralistic and grassroots interpretations. Involving professional stakeholders and government officials brings some legitimacy to projects, and greatly strengthens effectiveness, value, and adequacy. However, direct, unscripted public involvement can further bolster legitimacy. Direct participation demonstrates that the process is open, transparent, and probably not a conspiracy among elites.
6. Lay public participation primarily offers civil legitimacy to state and federal comparative risk projects. Observers say that states "have made significant efforts to involve the public in all phases of their projects, not so much to generate better technical estimates as to broaden public legitimacy for risk priorities."18 In local comparative risk projects, not discussed here, members of the general public also contribute expert knowledge of local circumstances.19 A good process design can minimize tradeoffs between expert and lay public involvement, and treat both as the scarce resources that they are.
7. An ideal recipe for creating a comparative risk project apparently involves balancing the participation of experts, officials, professional stakeholders, and the public. These ingredients also need to be properly mixed. In other words, within modestly flexible bounds process really does matter.
8. The California case and critical comments by environmentalists regarding other projects demonstrate that the analytical approach itself can be controversial. One environmentalist goes so far as to say that "any effort to rank risks is merely an environmental equivalent of Sophie's Choice: 'Which child will you hand over to the Nazis?' "20 Comparative risk is not the only paradigm that could be applied to the problem of developing an environmental management strategy.21 Alternatives that emphasize equity over efficiency, or prevention over control, or rely on indicators, benchmarks, or multicriteria decision analysis, among others, could equally well have been employed. This suggests that procedural context should help frame analytical choices.
9. Power struggles between competing interests can easily kill comparative risk projects regardless of their design. The findings on OTA apply equally well to comparative risk—rationalistic analysis thrives under stable power relations, and otherwise withers. It is not that a pluralistic polity fails to accept the intrusion of technical experts into agenda-setting; they are accepted. Instead the lesson is more fundamental: It is fruitless to follow communicative norms when everyone else is playing a strategic game. Plus, the contingent framings underlying a particular analytical framework are less subject to scrutiny and criticism under stable conditions.
All of the comparative risk case studies support the argument that analysts need broad-based skills. Solving the analyst's technical problem of appropriate simplification depends on the analyst's discretion, plus early input from outside parties. Solving the analyst's communicative problems requires processes (such as meetings and reviews) to bridge disciplinary boundaries, plus good writing and speaking skills for interactions with decision makers and the lay public. Reading the political context is a fundamental survival skill, not an entertaining pastime. Technical, interpersonal, and critical/ethical talents contribute in equal measure to analytical success in the realm of comparative risk.
CONCLUSION The comparative risk experience confirms that lay participation in analysis is feasible and desirable, without being a guarantee of success. This review supports conceptual arguments made in earlier chapters about the need to consider interpersonal, institutional, and procedural issues when performing technical work. It also demonstrates that analysts must make context-relevant analytical choices.
NOTES 1. John Gaventa, Power and Powerlessness: Quiescence and Rebellion in an Appalachian Valley (Urbana: University of Illinois Press, 1982). 2. Roger W. Cobb and Charles D. Elder, Participation in American Politics: The Dynamics of Agenda Building (Baltimore: Johns Hopkins University Press, 1983). 3. Joseph Gusfield, The Culture of Public Problems (Chicago: University of Chicago Press, 1981), pp. 1-21. 4. John W. Kingdon, Agendas, Alternatives, and Public Policies (Boston: Little Brown, 1984). 5. Rachel Carson, Silent Spring (Boston: Houghton Mifflin, 1962). Ralph Nader, Unsafe at Any Speed (New York: Pocket Books, 1966). 6. Herbert A. Simon, Administrative Behavior: A Study of Decision-making Processes in Administrative Organizations (New York: Harper & Row, 1976). 7. W. Edwards, I. Kiss, G. Majone, and M. Toda, "What Constitutes a Good Decision?" Acta Psychologica 56 (1984): 5-27. 8. Clinton J. Andrews, "Policies to Encourage Clean Technology," in Robert H. Socolow, Clinton J. Andrews, Frans Berkhout, and Valerie M. Thomas, eds., Industrial Ecology and Global Change (Cambridge: Cambridge University Press, 1994), pp. 405-422. 9. Stephen G. Breyer makes this argument in Breaking the Vicious Circle: Toward Effective Risk Regulation (Cambridge, MA: Harvard University Press, 1993), pp. 70-81. 10. Federal Environmental Assessment Review Office (FEARO), Public Involvement: Planning and Implementing Public Involvement Programs, prepared by Praxis, Inc. for FEARO (Hull, Quebec, Canada, 1988). 11. Seymour Mandelbaum, "Stakeholders and Citizens," paper presented at the Annual Conference of the Association of Collegiate Schools of Planning (Pasadena, CA, November 4, 1998). 12. Mark B. Brown, "The Civic Shaping of Technology: California's Electric Vehicle Program," Science, Technology, & Human Values (forthcoming).
13. Alec I. Gershberg, quoted in Lisa J. Servon, "The Intersection of Social Capital and Identity: Thoughts on Closure, Participation, and Access to Resources," paper presented at a conference on Civic Participation and Civil Society (Bellagio, Italy, April 6-10, 1999). 14. Susan S. Fainstein, "New Directions in Planning Theory," Urban Affairs Review 35, 4 (March 2000): 451-478. 15. One such voice is that of Richard E. Sclove, Democracy and Technology (New York: Guilford Press, 1995), pp. 216-219. 16. Bruce A. Williams and Albert R. Matheny, Democracy, Dialogue, and Environmental Disputes: The Contested Languages of Social Regulation (New Haven: Yale University Press, 1995). 17. Brian Wynne, "Sheepfarming after Chernobyl: A Case Study," Environment 31, 2 (1991): 10-15, 33-39. 18. Christopher J. Paterson and Richard N. L. Andrews, "Procedural and Substantive Fairness in Risk Decisions: Comparative Risk Assessment Procedures," Policy Studies Journal 23, 1 (1995): 86. 19. Ralph M. Perhac, Jr., "Comparative Risk Assessment: Where Does the Public Fit In?" Science, Technology, and Human Values 23, 2 (Spring 1998): 221-241, esp. 237. For detailed descriptions of several local projects see Edward Delhagen and Joanne Dea, Comparative Risk at the Local Level: Lessons from the Road (Boulder, CO: Western Center for Environmental Decision-making, 1996). 20. Comments by Mary O'Brien, professor of environmental studies at the University of Montana, at a conference in Annapolis, MD, in November 1992. Summarized in Resources for the Future, Conference Synopsis: Setting National Environmental Priorities (Washington, DC: Resources for the Future, February 1993), p. 3. 21. Adam M. Finkel and Dominic Golding, "Alternative Paradigms: Comparative Risk Is Not the Only Model," EPA Journal (January/February/March 1993): 50-52.
Part IV Multiple Decision Makers and Mixed Participation in Analysis
12 Analyzing New England's Electricity Alternatives Utility engineers measure their success in terms of phone calls—no calls is good news, many calls means trouble. For much of 1987, the phone was ringing off the hook at the New England Power Pool (NEPOOL). This regional coordinator served dozens of utility companies regulated by six different states, and it was having trouble balancing electricity supply and demand. Surging economic growth had caused the demand for electricity to soar, yet disagreements among the many parties in this highly regulated industry were preventing timely resource planning decisions. Positional conflict, uncertainty about the future, and massive complexity conspired to produce planning paralysis. A university-based joint fact-finding effort helped the New England region address severe planning problems in its electricity sector from 1987 through 1996. It was a planning realm involving arcane technical concepts that were not the usual stuff of public debates. It was a context with multiple decision makers, involving an analytical effort with mixed participation of professional and lay actors. Analysts confronted both the technical challenge of appropriate simplification and the full range of communicative problems. Here I provide an overview of the project and its development. By way of a disclaimer, readers should note that I played a participant-observer role in the initial stages of this project.1
THE NEW ENGLAND PROJECT The analytical realm of the New England Project (NEP) was long-term investment planning, also known as "resource planning" or "least-cost planning." The NEP asked what investments (in alternatives such as power plants or energy conservation programs) utility companies should make to ensure that the supply of electricity could meet its demand. Analytical tools from engineering systems analysis, operations research, and microeconomics were used to answer this question.
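To give readers unfamiliar with the field a feel for what such least-cost comparisons involve, the following sketch works through a toy screening calculation in Python. It is not the NEP's model: the options, costs, lifetimes, discount rate, and capacity need are invented for illustration, and a real resource plan would also weigh reliability, emissions, and uncertainty, as described later in this chapter and the next.

```python
# Illustrative least-cost screening exercise (hypothetical numbers, not NEP data).
# Each option is annualized so that capital and operating costs can be compared
# per kilowatt-year of capacity need.

def annualized_capital(cost_per_kw, rate, life_years):
    """Convert an overnight capital cost ($/kW) to an annual charge ($/kW-yr)."""
    crf = rate / (1 - (1 + rate) ** -life_years)  # capital recovery factor
    return cost_per_kw * crf

# name: (overnight capital $/kW, O&M plus fuel $/kW-yr, lifetime in years) -- all invented
options = {
    "gas combined cycle": (700.0, 180.0, 25),
    "coal steam plant": (1300.0, 150.0, 35),
    "demand-side conservation program": (400.0, 20.0, 15),
}

DISCOUNT_RATE = 0.08   # hypothetical real discount rate
need_mw = 1500.0       # hypothetical projected shortfall in peak capacity

costs = {}
for name, (capital, operating, life) in options.items():
    costs[name] = annualized_capital(capital, DISCOUNT_RATE, life) + operating

for name, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    total_m = cost * need_mw * 1000 / 1e6  # $/kW-yr times kW, expressed in $ million per year
    print(f"{name:35s} {cost:7.1f} $/kW-yr  (~${total_m:6.1f}M/yr for {need_mw:.0f} MW)")
```

Even this crude comparison shows why the framing of assumptions matters: change the discount rate or the assumed lifetime of a conservation program, and the ranking of options can shift.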
NEP CHRONOLOGY The NEP began in 1987, when New Hampshire Governor John Sununu visited the Massachusetts Institute of Technology (MIT) and asked that academics get more directly involved in the regional electricity debate. What Sununu had in mind was publicity—a dose of numinous legitimacy—for the beleaguered, unfinished Seabrook nuclear power plant. What Sununu got instead was a small group of faculty, staff, and students who offered a more complex recipe for legitimacy. They proposed a joint fact-finding exercise that would explore how different electricity options fared across a variety of visions of an uncertain future, as measured along multiple criteria. An analysis team would interact periodically with an advisory group of regional decision makers; together, these two groups would investigate scenarios. Results would then be broadly shared around the region. This modeling effort was encouraged by several of the major policy players in New England, including the Conservation Law Foundation, the Massachusetts Department of Public Utilities (DPU), and the New England Power Pool (NEPOOL), some of whom had experience with previous efforts of this sort in the region. To test the viability of this analyst-decision maker interaction, NEPOOL organized a small advisory group in January 1988. The analysis team was initially supported by internal funding from the MIT Energy Lab. Monthly advisory group meetings began in March 1988. Initially, the analysis team presented work that demonstrated its electricity sector modeling capabilities and revealed its analytical assumptions about power planning issues. The analysis team subsequently modified these in response to advisory group members' suggestions. At each meeting, the advisory group reviewed the previous month's analytical work, and drew lessons for public policy. Then, advisory group members offered their further concerns. During the year, the NEP participants explored a variety of topics. At the request of advisory group members, the analysis team also made presentations to various interested parties such as the Massachusetts AFL-CIO Energy Policy Committee, consulting firms, and gas utilities. In January 1989, NEPOOL member utilities and other interested parties formed a consortium to fund the MIT analysis team. At its first meeting in April 1989, the project sponsors and the university reached agreement on the structure of the project (advisory group and analysis team) and the composition of the project's advisory group (representation by interested stakeholders, flexible but by invitation only). At subsequent meetings, the advisory group expanded to include representatives of about twenty stake-holding groups, specifically DPU commissioners from several states, environmental spokespeople, business and labor leaders, utilities, and members of the New England Governors' Conference Power Planning Committee. The NEP spent most of 1989 designing and testing both its analytical tools and its interactive planning process. Especially helpful in this regard was a November 1989 workshop on regulatory reform convened by the MIT-Harvard Public Disputes Program. Some sixty-five public utilities commissioners and electric utility executives played a role-playing game based on the New England situation, and during their debriefing they specified the characteristics of analysis
techniques and planning processes they thought would be most helpful. I summarize their findings later in this chapter. Advisory group meetings in April, June, August, and November of 1989 and February 1990 transformed the NEP from an interesting concept into a successful project. The April meeting established an initial scope for the project. The June meeting more narrowly specified issues of concern, as well as the measures for evaluating progress on those issues. The August meeting reviewed the analysis team's proposed set of scenarios and accompanying modeling tools, and found both wanting. The November meeting examined preliminary modeling results, found them credible, and expanded the range of scenarios to be modeled. The February 1990 meeting reviewed additional results, and found them so interesting that they urged the analysis team to share the findings broadly. The NEP devoted the remainder of 1990 to a "road show" in which analysis team members interacted with dozens of groups around the six-state New England region. These face-to-face interactions disseminated the NEP's findings widely, allowed nonanalysts to question the assumptions embedded in the modeling effort, sparked suggestions for further investigations, and opened channels of communication among factions in the regional policy debate. The analysis team delivered new modeling results based on the road show to the advisory group at each of its December 1990 and April 1991 meetings. These findings were then taken on the road and also submitted for publication. A major rescoping exercise took place during the November 1991 advisory group meeting to ensure that the NEP addressed new issues. The analysis team, advisory group, and broader public continued to interact following this pattern for several years, with advisory group meetings taking place about every six months. The last NEP advisory group meeting took place in June 1996, when the project ended. Major topics addressed over the course of the project included:
• reliability of the region's electric system,
• environmental impacts of electricity generation,
• roles for energy conservation and renewable energy technologies,
• reuse of existing downtown power plant sites,
• vulnerability of the power system to exogenous shocks,
• regional implementation of the 1990 Clean Air Act Amendments,
• reliance by the region on electricity imports from Canada,
• decommissioning of the region's nuclear power plants, and
• potential impacts of electric vehicles on the regional power system.
NEP PROJECT DESIGN The NEP worked to achieve meaningful long-term public participation by arranging iterative interchanges. In these, an advisory group of interested parties, representing the range of opinions and perspectives relevant to the project, explored the tradeoffs between different project options, using information
supplied by the analysis team. Once the advisory group accepted a set of results, they were broadly shared. Both the advisory group and the analysis team had specific roles to play within the framework. The responsibilities of the advisory group members in this process were to: (1) identify issues and concerns; (2) invent strategies and identify options; (3) accept or reject modeling approaches and assumptions; (4) express the concerns of their constituencies in discussions about the tradeoffs among options; (5) work creatively toward a consensus on the choice of favored sets of options; and (6) carry the results back to their own domains for use in making decisions. The responsibilities of the analysis team in this process were to: (1) assemble data and models; (2) articulate assumptions, methods, and results clearly; (3) respond to the interests, queries, and proposals of advisory group members; (4) assist the advisory group in inventing better options and packaging them into coordinated strategies; (5) assist the advisory group in moving toward a shared understanding of problems, options, and system interactions; and (6) share accepted results widely, thereby both provoking public debate and improving its informational basis. The NEP analysts embraced the communicative challenges of their context by creating a highly interactive project structure. Regular interactions among the advisory group, the analysis team, and the broader public created an iterative exploration process. The act of exploring many possible solutions in a systematic and open-minded way helped diverse factions converge on a shared understanding of how the power system worked. This advisory group/analysis team/outreach arrangement, using methods explicitly designed for conducting tradeoff analysis, was designed to produce outcomes that were more efficient, equitable, stable, and wise than those resulting from traditional plans produced by analysts working in relative isolation. I follow the evolution of one topic that the advisory group explored early on as an example of this process. At a meeting, a businessman (a senior vice president of Raytheon, Inc.) recalling the recent power outages asked: "How much would it cost to improve the reliability of the region's electric power system?" Following that meeting, the analysis team gathered data and made operational a capacity-expansion and production-costing model of the New England power system. The analysis team also pondered, what does electric reliability mean to a businessman? To help the businessman understand the technical results, the analysis team developed "Danger Hours," a measure of the number of hours each year industrial customers could expect emergency interruptions to occur. They then modeled a 5 percent increase in the system's reserve margin (its excess of supply over demand) across a variety of possible future conditions. They found that this strategy gave an average reduction in Danger Hours of 60 percent, for an average cost increase of only 0.5 percent, with an average 6 percent decrease in the emissions of key air pollutants. The analysis team presented these results at the next meeting, and the advisory group was inspired to suggest several additional reliability-related options. These included a change in the generation mix used to reach the higher reserve margin,
a change in the fuel mix used for new capacity, and the replacement of all power plants over forty years old with new units. After that meeting, the analysis team modeled these different alternatives, and found that: (1) most had only minor impacts on the cost of electricity; (2) the performance of a few alternatives seemed particularly volatile given uncertainty about future conditions; (3) certain choices had much better environmental performance than others; and (4) it was easy to spot the inferior (less reliable, more expensive, dirtier) alternatives. The advisory group discussed the small set of superior reliability-related alternatives at its next meeting, and the replacement of old power plants became their preferred option. The low cost of the reliability improvement surprised the advisory group, as did the consistent environmental gains. It appeared that the business interests could get a cheap reliability increase while environmental interests could get a cheap emissions reduction. Everyone at the table seemed happy to have found a possible win-win outcome through this iterative process. This uplifting story became the basis for an outreach effort. Analysis team members shared their results in a humble, experimentalist's mode, asking people around the region: What's wrong with our results? Can you shoot any holes in our surprising findings? The NEP continued to enjoy increasing advisory group membership as its analytical results were disseminated, its findings became accepted, and its recommendations saw implementation. The analysis team operated in a trust-building manner which, as we saw in the better comparative risk projects, sought to balance civil and authority-based sources of legitimacy. NOTE 1. I was a member of the New England Project (NEP) analysis team from 1987 to 1991. This was a project of the MIT Energy Laboratory. Other founding members of the team included Carl Bespolka, Stephen Connors, Fred Schweppe, Richard Tabors, and David White. The NEP was patterned after a previous effort led by Schweppe and White for the Consolidated Edison Company of New York. See C. Luce, "An Energy Strategy for the 1980s," report published by Consolidated Edison Company of New York in 1980. The case study is based on participant observation, interviews with key participants, and a review of project documents. Analysis Group for Regional Electricity Alternatives (AGREA), Energy Laboratory, Massachusetts Institute of Technology, Cambridge, MA. February 1990 Presentation to Advisory Group for the New England Project; November 1989 Presentation to Advisory Group for the New England Project; April 1989 Presentation to Advisory Group for the New England Project; Advisory Group Meeting background presentation materials, February 1-April 24, 1989. Much of this research was documented in my dissertation and hence the details are not repeated here. See Clinton J. Andrews, "Improving the Analytics of Open Planning Processes," Ph.D. Diss., Department of Urban Studies and Planning (Massachusetts Institute of Technology, 1990).
13 Analysis in Context The New England region hosted the birth of the industrial revolution in America, and its myriad towns and villages have grown into a tightly interconnected megalopolis. The building stock and infrastructure are the oldest in the nation, and they are increasingly inadequate for the needs of the current population. Yet, in many areas, there is very little room available for expanding these services. The politically active population has challenged all encroachments on the crowded local landscape. New England consists of some of the smallest states in the nation. Indeed, the political jurisdictions are, in many cases, smaller than the electric company service territories. Most of the electricity in the region (90%) is generated by private, investor-owned companies.1 The region's major electric utilities thus face regulation by two or more states, as well as the Federal Energy Regulatory Commission. The regional power pool, then called New England Power Pool (NEPOOL), offers true economies of scale to its member utilities by dispatching the various power plants more economically, to accommodate regional rather than utility-specific, minute-by-minute electricity demands.2 With indigenous resources (mainly hydroelectric dams) providing less than 6 percent of the power consumed in New England, the region has relied heavily on imports of hydroelectric energy from Canada.3 Compared with the rest of the nation, New England has acquired an atypical generation mix, depending heavily upon oil, nuclear power, and imported electricity. New England also has used relatively less coal and, until recently, less natural gas for electric power generation.4 PLANNING PARALYSIS In New England, the 1980s were marked by a dramatically decreasing reserve margin between the supply and demand for electric power, leading to crises during peak demand periods on hot summer afternoons.5 However, stakeholders
in the regional electricity debate differed widely on the choices to be made to mitigate these problems. Environmentalists, regulators, and utilities were polarized regarding the technological path to be followed. Environmentalists advocated energy conservation, regulators encouraged independent power production, and utilities relied on foreign power purchases in lieu of financially risky traditional power plant investments. This situation led to a state of paralysis in the mid-1980s when no positive action seemed to be taken in any direction. By 1988, the situation had deteriorated to the point that the lights started to go out. Utilities had to take emergency actions, such as reducing the voltage of the power grid, more than thirty times that year—breaking all previous annual records.6 This situation did not develop overnight. Icons such as the Seabrook nuclear power plant and the Hydro-Quebec transmission lines littered the political landscape of the region, demonstrating a long history of disagreements about the right way to ensure adequate electric service. The ill-fated Seabrook project, for example, left the boardroom of Public Service of New Hampshire (PSNH, one of the region's electric utilities) in about 1970, and it was approved by New Hampshire state regulators in 1972. Only in 1976, when construction began, did the general public become aware of the project, which was expected to cost about one billion dollars and reach completion within a decade. Environmentalists disliked the plant's location, adjacent to a beautiful wetlands area, and near one of the most popular beaches in New England. Worse, the oil price shocks of the 1970s reduced the demand for electricity, and the 1979 incident at Three Mile Island galvanized public opposition to nuclear technology. Other parties began to second-guess the utility's apparently unilateral decision. Yet PSNH appeared to believe that the best way to overcome opposition was with momentum, and it refused to engage in debate with its critics. This proved to be an extremely costly tactic, because when the plant was halfway built, pressure from both regulators and the financial community forced one of its two units to be mothballed, and an investment of hundreds of millions of dollars could not be recovered. Further, the legal challenges to bringing the remainder of the plant on line succeeded in causing years of delays, during which the plant sat idle and unproductive, and capital costs soared. The plant did not produce power until 1990. The final cost of the plant was approximately 6.35 billion dollars, of which only a small fraction could be recovered from electric ratepayers. These losses ultimately bankrupted PSNH. 7 Similar problems plagued many aspects of New England's electric power debate during the 1980s. In Massachusetts between 1976 and 1987, environmental and consumer interests ensured that not a single power plant proposal survived the siting process.8 Yet conservation, the option favored by environmentalists, was largely ignored by the utilities. No one got their favorite option, and everyone was unhappy with the reliability problems resulting from this deadlock. More complex strategies, consisting of packages of diverse options, were simply not explored in the public debate, even though they represented a way out of this dilemma.
Utility planners during this period lost credibility as forecasted demand for electricity failed to materialize, and "optimal" investments proved to be costly and inflexible. Relative to other sectors, electric power planners had a wellestablished analytical tradition. Sophisticated econometric models provided forecasts, and detailed engineering-economic models identified optimal investment choices. Yet the analysts running these models did not help utility managers to manage uncertainty or defuse controversy. They made ever-moreelaborate point forecasts and employed increasingly complex algorithms that optimized, assuming perfect foresight, for a single cost minimization criterion while ignoring environmental spillovers. And as far as the public was concerned, the analysts did all of this in a manner that was incomprehensible to the lay person. Two generations after the invention of decision analysis and two centuries after the publication of Bayes' theorem9 for thinking systematically about uncertainty, only a few electric power planners were formally analyzing strategies for managing uncertainty. Granted, that any of them were doing so placed them ahead of their peers. None of them, however, were making any effort to deal with controversy, assuming it to be someone else's problem, a difficulty of process not analysis. RESPONSES TO PARALYSIS The New England power-planning debate took place in a context of extremely decentralized decision-making, with six state public utility commissions, seventy-odd utility companies, and opportunities for other parties to intervene in the planning. Why couldn't procedural innovations alone take care of planning paralysis? Did analysis really have to change too? A continuation of the New England story illustrates that there was plenty analysts could and ought to have done differently. Starting in 1987, a number of concerned parties tried to break the impasse in the policy debate over New England's electricity future. One group, thinking that face-to-face discussion could find common ground, organized a series of clandestine breakfast meetings between utility executives and environmentalists.10 These went nowhere, because the issues were too complex to discuss without analytical support. A number of other groups performed careful analyses of particular problems, but this work merely solidified existing positions in the debate. The Chamber of Commerce demonstrated that electricity shortages could stifle economic growth;11 the environmental community demonstrated that conservation alone could meet all future electric service needs;12 and the nuclear lobby demonstrated the unequivocal need for the Seabrook nuclear plant.13 This kind of analysis was not helping. Three other efforts were more effective. One was a white paper developed by the Federal Reserve Bank of Boston, a relatively disinterested and credible source, which showed that all parties shared responsibility for the failed policies of the past.14 Another group, the "Collaborative," found a narrow area of common interest—pilot energy conservation projects—and developed a cooperative process for designing model programs. With regulatory
encouragement that essentially removed financial risk from the process, the technical representatives of utilities and environmental organizations were able to work together to get programs started.15 The third successful effort—the New England Project (NEP)—looked at longer-term issues, and developed a forum for exploring the region's electricity future with support from a university-based analysis team. Participants in this forum included many of the key stakeholders16 in the policy debate, and they met periodically with the analysts, alternately scoping studies and reviewing results. This interactive planning process examined thousands of scenarios for the future of New England's electric service industry, and over time demonstrated impartiality and gained credibility. The interactive planning process helped to forge a regional consensus around a multipronged approach that combined demand- and supply-side options into a coherent long-term strategy. And it did so using modeling techniques that were different in several important respects from those that had previously failed the industry. The NEP's analytical innovations explicitly acknowledged a context of unproductive conflict, uncertainty, complexity, and dispersed decision-making power. These forced analysts and decision makers to interact, and required the development of skills unfamiliar to most engineers and economists. The innovations took advantage of improvements in computing technology. Above all, the innovators acknowledged that analysis involved value judgments, so the analysts employed a frank strategy of managing normative content (keeping track of those judgments, avoiding reliance on those that were controversial) and building credibility. These innovations improved the outcome of the planning effort. These three efforts helped to resolve New England's late 1980s electricity debate and made it easier for the region to cope with subsequent crises—an excess of supply over demand resulting from the severe economic recession of the early 1990s, followed by regulatory reforms to spur competition at the end of the millennium. The Fed's analysis had credibility and impartiality going for it. The Collaborative and the NEP emphasized cooperation and informationsharing, with the NEP featuring a distinctive approach to analysis. Where did the NEP's approach come from? The organizers of the NEP were practical, imaginative folks with a proclivity for boundary-crossing. Although working in a university setting, they followed the engineering tradition of applied science research, and engaged as much in topical as in disciplinary debates. When the crisis in New England's electricity sector appeared, these researchers looked outside their home field of systems analysis for relevant ideas. They found useful insights in the literature on principled negotiation and consensus-building. Concepts from this literature helped them design the NEP, and guided their behavior when serving on the NEP's analysis team. Specifically, the designers of the NEP learned that paralysis can result when a planning process that depends on some level of consensus fails to achieve it. Yet achieving consensus, or even acquiescence, may be quite difficult. It depends on finding innovative solutions to difficult planning problems, and doing so in a credible manner. Adversarial processes for the planning, approval, and
implementation of large projects, plus massive uncertainty and technical complexity, are barriers that often deter our inventiveness. They combine to throttle constructive debate on many important planning issues. New England's electric power-planning woes clearly demonstrate each of these points. FACTORS STIFLING CREATIVITY Creativity is a widely acknowledged—but rarely emphasized—quality of planning efforts that can let us find better ways to achieve our stated goals. It can help to increase the efficiency with which we, as a society, use resources to meet our economic needs. It may also allow us to expand the range of social goals that may be accommodated. Without it, planning remains a zero-sum game where stakeholders bicker about how to slice up a fixed-size pie.17 In New England, stifled creativity led to a constant battle over priorities—should they produce reliable electricity or reduce environmental impacts or reduce costs to the consumer? The battle over priorities played out in the political arena where power traditionally trumps rational debate. Yet the balance of power had changed during the 1970s and 1980s so that environmentalists were able to block utility company initiatives. Strong environmental laws and sympathetic regulators counterbalanced the traditional coalition of business and utility interests, so that no group dominated. The positional, unproductive conflict that often occurs in our litigious society is one factor that can stifle creativity. A planning decision made by the developing party may be later contested by other parties, with the ultimate judgment on the project's merits being made in a courtroom or hearing chamber. Adversarial interactions and win-lose outcomes encourage the use of information as a weapon rather than as a tool, and cause stakeholders to posture from extreme positions rather than look for mutually beneficial middle ground.18 In New England during the 1980s, various groups in the electric power debate carved out extreme positions—conservation-only versus nuclear-only, for example—which led to policy paralysis and no activity of any type to ensure the future adequacy of electric service. Uncertainty can also hinder constructive debate. It increases the perceived risks of making decisions about long-lived investments, exposing decision makers to second-guessing by other concerned parties.19 It also makes discussions about the merits of planning options more diffuse,20 in part because parties expect the future to unfold in different ways. Expectations for economic growth, fuel prices, capital costs, and technology availability were among the uncertainties in New England's electric power debate. Differing expectations for future growth in electricity demand were a fundamental part of the controversy over the need for new electric power investments in the region. Complexity can confound efforts to think creatively, because some planning problems cannot be simplified to the point where the merits of each option are intuitively obvious. Indeed, planning options can produce a variety of counterintuitive impacts as the result of the interactions among complex economic, technological, and ecological systems. An example from the NEP illustrates this
point. Because of New England's aged electricity-generating equipment, investments only in end-use energy conservation (a zero-emissions technology) would lead to worse cumulative sulfur dioxide, nitrogen oxides, and particulate emissions over a twenty-year period than would a balanced strategy that also includes new but highly efficient power plants (which produce emissions). Because fewer of the system's old, dirty, inefficient plants are retired using a conservation-only strategy, the overall impacts are the opposite of what the technology-specific characteristics would indicate.21 Overcoming These Barriers Positional conflict, uncertainty, and complexity are barriers that can stifle creativity and prevent the development of consensus in any planning process. The NEP's analysis team believed that the region needed a planning process rooted in democracy, but capable of incorporating new technical knowledge. To promote constructive debate, they opened up the planning process and sought to address the concerns of all stakeholders in the debate by including them in the joint fact-finding exercise. As we saw in the comparative risk cases, public participation is widely held to be a good thing. Justifications for opening up planning processes to include wider public participation and consensusbuilding activities are various and persuasive. Participation as "policy" is a value judgment that opening up the planning process is desirable in and of itself. Participation as "strategy" implies its acceptance as a means for achieving other ends. Participation as "communication" suggests that improved information flows lead to better planning decisions. Participation as "therapy" can co-opt alienated groups into the mainstream. Participation as "conflict resolution" may (or may not) lead to reduced tensions and stable outcomes in controversial planning decisions.22 Advocates of alternative dispute resolution argue that because of the adversarial nature of formal public utility commission hearings, they typically result in zero-sum, win-lose outcomes. In contrast, informal negotiated efforts can result in positive-sum (all gain) outcomes.23 The alternative dispute resolution literature also argues for a change in negotiating strategies. Rather than using the "hard," positional bargaining approach found in a courtroom or used car lot, negotiators should employ a principled, integrative (but not "soft") approach. Elements of the integrative approach include the following.24 Encourage safe brainstorming of ideas by separating the tasks of "creating" and "claiming" options. Improve upon initial ideas by "shuttling between the specific and the general," and seek to identify hybrid plans that everyone prefers relative to what was originally on the table. Explore tradeoffs among options in ways that allow stakeholders to estimate and update their best alternative to a negotiated agreement (BATNA)25 in a low-risk setting. Such strategies depend on an open planning process. However, if the public is included in only a perfunctory way, then the benefit of this exchange will be minimal.26 Involvement by interested parties in the planning process becomes much more meaningful and fruitful if these parties are made an intrinsic part of the analytical effort.27
To better understand the challenges of implementing a more open approach to planning in the technically complex and highly uncertain electric power context, members of the NEP analysis team conducted an experiment. The experiment took place in 1989 as part of an Institute on Reforming Electric Utility Rate Setting, Rulemaking, and Least-Cost Planning, sponsored by the MIT-Harvard Public Disputes Program, a project of the Program on Negotiation at Harvard Law School. More than sixty senior public utility regulators and power company executives from across the United States and Canada participated. Part of the meeting was devoted to playing and then discussing a negotiated electric utility resource-planning game written for the occasion. The game's instructions established a context similar to that faced in New England, including impending shortages, a limited set of technological options, complexity, massive uncertainty, and controversy. Following ten plays of the game, these decision makers completed a questionnaire eliciting their thoughts on how to provide analytical support to New England's open planning process.28 Survey results suggested that electric utility open planning processes shared some things in common with other types of negotiations. These included concerns about the conflicting interests of the parties, polarization around different options, the value of mediation, and the usefulness of both standard negotiating strategies and dispute resolution techniques. However, the technical complexity of the subject appeared to add another layer of concerns. Obstacles such as conflicting technical information, disparate values for intangibles, and uncertainty about the future created special demands on the negotiating process. Technical proficiency as well as training in dispute resolution techniques appeared to have value in this context, and objective criteria for framing the discussion seemed harder to formulate. Even among well-meaning senior decision makers engaged in a simulation, rational discussions were difficult, and a consensus outcome was not guaranteed. Advocates of principled negotiation recommended strategies of information sharing and joint fact-finding that allowed decision makers to discuss the normative aspects of scientific and technical work face-to-face, thereby avoiding the false facts-values dichotomy.29 The success of such an approach still depended on the specific participants, but at least they would be "principals," that is, directly interested parties, instead of "agents" working more or less on behalf of those parties. Yet such efforts needed support structures to ensure efficient use of the principals' time, to analyze options and uncertainties carefully, and to evaluate outcomes in a progressive manner that led to the identification of better alternatives.30 While it was easy to argue that joint fact-finding should take place in this technically complex policy dispute, specifying precisely how to do this was difficult. New England's electric power system was so complex that accurately simulating the overall regional performance of a proposed class of power plants was beyond the capabilities of most of the stakeholders in the debate, including many of the electric utility companies. The system was so sensitive to uncertainty that slightly different input assumptions changed the relative attractiveness of the various planning options. Indeed, when the sixty participants in the negotiated least-cost electricity-planning simulation were
asked what the characteristics of the ideal analytical method to support their efforts would be, their answers did not resemble any currently available computer package. The participants wanted the following:

• Time demands of the analytical effort on decision makers (not analysts) should be minimal. The participants did not think that decision makers could afford to become involved in time-consuming interactions.
• Analytical assumptions should be based on expert judgment, not group decision-making. The participants worried that, like other participatory political processes, open planning could turn into lowest common denominator planning. They saw a value to expertise that ought to be preserved in whatever new planning approach was developed.
• Impacts should be measured using multiple criteria, not an aggregate measure such as dollars. The participants appeared to believe that aggregate measures were too opaque for use in a negotiating context where parties would not always trust numbers produced by analysts for their use.
• Analysis should explicate tradeoffs among options, and not merely identify optimal choices. Black-box models that produced a single "right" answer would not be credible to parties who a priori disliked that answer. Since stakeholders would value various impacts differently, they were likely to have different definitions of "optimal."
• Analysis should consider many possible scenarios, and not just carefully define the most probable or base case. The great interest in multiple scenarios showed how important uncertainty had become as a planning concern in the electric power context.
• Mediation should be technically substantive, and not limited only to process issues. A strong majority of the respondents were concerned enough about the technical complexity of electric power planning that they preferred a mediator with technical knowledge over one who could only enhance the process.
• Both neutrality and technical expertise appeared to be important criteria to consider when choosing the analysis team, so that independent consultants and academics were strongly preferred over regulators or regulated parties.

When asked what the major tasks of the analysts should be, the participants in the game mentioned the need to identify issues of primary importance to the planning effort, and to get the parties to share information with one another in ways that revealed their preferences, for generic attributes if not for specific options. Another major job of the analysts was to invent packages of options for the group to discuss. Equally important was the job of helping the parties to find agreement. Steps in this process included analyzing shared information to find acceptable options, uncovering common ground beneath conflicting statements, and finding ways to sort through or prioritize options that would be acceptable to all parties.
In short, the participants in the negotiated least-cost planning simulation felt that analysis should provide them with enough useful information to understand the context and implications of decisions without making unreasonable time demands. The value of technical knowledge needed to be recognized and preserved in this process. The widespread preference for a technically substantive mediator underscored this criterion. The special needs of a consensus-building process recommended the use of multiple measures for
evaluating impacts, and careful exploration of the tradeoffs among options. Uncertainty about the course of future events suggested a need to analyze many possible scenarios. Clearly, these decision makers wanted help from technically talented analysts also equipped with good communications skills. The NEP analysis team took these messages to heart and developed a scenario-based multiattribute tradeoff analysis framework, described later.

THE NEP'S STRUCTURE

The NEP analysis team developed a project structure based on conceptual insights from the literature on alternative dispute resolution, as well as practical insights generated during the role-playing game. Both the process and the role of analysis in the NEP differed from currently accepted practice. How did the NEP attempt to prevent positional conflict from stifling constructive debate? It attempted to open up the planning process. It used a procedural framework for principled negotiation that was supported by analysis team members doubling as technically trained facilitators. The NEP attempted to invite all stakeholders in the debate to participate, to ensure that their concerns were adequately addressed. How did the NEP attempt to deal with uncertainty? It explored a range of possible future outcomes during the planning process, and tested the robustness and flexibility of planning options against them. Of course, since the range of possible future events was quite subjective, this had to be done in a way that provided useful information to parties with widely differing expectations of the future. What about technical complexity? The NEP had to aspire to a high level of technical knowledge, without diluting it during the process of addressing the problems of controversy and uncertainty. Analytical strength, acknowledging both the systemic context and the pervasiveness of uncertainty, had to be meshed with the procedural strengths of assisted negotiation. The analysis team had to serve, communicate with, and mediate between members of the advisory group. The basic premise of the approach developed and tested in the NEP was that better strategies could be found if all the parties to the electric power planning debate—utilities, environmentalists, regulators, and customers—worked together at the earliest stages of project planning. The NEP analysis team supported this planning group in its efforts (see Figure 13.1). The planning effort was set up as a public exercise in public policy analysis. Recall the argument in chapter 3 for communicative action geared toward mutual understanding rather than traditional, self-interested strategic action. The NEP analytical effort was conceived as just such a two-way street: the analysis team needed to produce information of value to the assembled stakeholders, and at the same time elicit the stakeholders' preferences on a variety of topics for later use. The interactions between the NEP analysis team and the planning or advisory group of stakeholders were structured so that four steps carried the group through a full iteration of the planning process, with consensus representing the desired final outcome.31 These steps are listed in Table 13.1.
Figure 13.1 The New England Project's Open Planning Approach
Table 13.1 Procedural Framework for the New England Project

1. Identify issues to focus the analysis on, and attributes by which to compare the performance of different options.
2. Develop scenarios examining the performance of combinations of options (strategies) across a variety of uncertain future events (futures).
3. Explore system behavior by simulating the performance of scenarios, which allows participants to observe the tradeoffs between strategies along various dimensions of performance, for a variety of possible uncertain future conditions. Develop better strategies based on this information. Repeat until diminishing returns set in.
4. Seek consensus on a favored strategy. Observe which strategies interest each party, what uncertainties concern them, and how they weigh the various attributes relative to one another. Adjust strategies to increase the potential for consensus based on this information. Repeat as necessary.
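As a rough illustration of steps 2 and 3, the sketch below (in Python) enumerates scenarios by crossing a handful of candidate strategies with a handful of uncertain futures, and records a vector of attribute values for each combination. The strategy labels, future labels, attribute names, and the simulate() placeholder are hypothetical stand-ins for the NEP's much richer option sets and simulation models; this is a schematic of the bookkeeping, not the project's actual method.

    from itertools import product

    # Hypothetical option and uncertainty sets; the real NEP sets were far larger.
    strategies = ["conservation-heavy", "repower existing plants", "new gas capacity"]
    futures = ["high demand growth", "low demand growth", "high fuel prices"]

    def simulate(strategy, future):
        # Placeholder for the production-costing and emissions models.
        # Returns one value per attribute agreed on in step 1; dummy numbers
        # keep the sketch runnable.
        return {"cost": float(len(strategy)), "so2": float(len(future)), "unserved_energy": 0.0}

    scenarios = []
    for strategy, future in product(strategies, futures):
        record = {"strategy": strategy, "future": future}
        record.update(simulate(strategy, future))
        scenarios.append(record)

    # Step 3: participants inspect these multiattribute results to see how each
    # strategy trades off cost, emissions, and reliability across the futures,
    # then propose revised strategies and iterate (step 4 seeks consensus).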
NOTES 1. New England Power Pool (NEPOOL), "New England Power Pool," brochure (West Springfield, MA: NEPOOL Inc., 1984), p. 16. 2. U.S. Department of Energy, Energy Information Administration (EIA), Electric Power Annual 1988, DOE/EIA-0348(88) (Washington, DC: U.S. Government Printing Office, 1990), Tables 10 and 19. 3. NEPOOL, NEPOOL Forecast Report of Capacity, Energy, Loads and Transmission 1989-2004 (CELT Report) (West Springfield, MA: NEPOOL, Inc., 1989), p. 3. 4 Analysis Group for Regional Electricity Alternatives (AGREA), Energy Laboratory, Massachusetts Institute of Technology, Cambridge, MA. Advisory Group Meeting background presentation materials, February 1—April 24, 1989. See also Electric Council of New England (ECNE), Electric Utility Industry in New England 1987 Statistical Tables (Bedford, MA: ECNE, 1988), pp. 11-12 (updated with data from NEPOOL CELT Report. 5 AGREA, presentation materials. 6 Clinton J. Andrews, Stephen R. Connors, Daniel Greenberg, Warren Schenler, Richard D. Tabors, David White, and Kristen Wulfsberg, "Assessing the Tradeoffs between Environment, Cost and Reliability: Developing a Coordinated Strategy to Ensure New England's Electricity Supply," Proceedings of the New England Environmental Exposition (Boston, MA, 1990). 7 Boston Globe, articles on March 2, 1990, pp. 1, 12; March 4, 1990, pp. 39, 41; and November 14, 1990, pp. 25-27. 8 Massachusetts, Commonwealth of, Energy Facilities Siting Council (MA EFSC). a 1990 review of their Decisions and Orders, vols. 1-16. 9. Bayes' theorem provides a basis for modifying a priori (before-the-fact) probabilities regarding a set of events based on the occurrence of actual events, thereby producing updated a posteriori (after-the-fact) probability estimates. Systems analysts refer to Bayesian updating as a shorthand for the process of revising expectations about an uncertain future, given new information. 10. Common Ground (1989), correspondence and draft proposals, Boston, MA. 11. Chamber of Commerce of Greater Boston (1988), Results of a Study on the Impact of Electricity Shortages on Massachusetts Businesses (plus raw data from the study), prepared by Pathfinder Research Group, Boxborough, MA; Findings of the Greater Boston Chamber of Commerce Executive Electric Energy Study, prepared by Pathfinder Research Group, Boxborough, MA; Energy, Environment and the Economy Bulletin, vol. 1, nos. 1-4. 12. Boston Globe (1987), "Shadows Haunt Region's Energy Future/No Consensus on Meeting Energy Needs," two parts, November 22-23, p. 1. 13. Boston Globe (1988), "Dukakis Is Faulted on Energy," June 17, p. 85; and advertisement, May 31, p. 51. 14. Yolanda Henderson, R. Kopcke, G. Houlihan, and N. Inman, "Planning for New England's Electricity Requirements," New England Economic Review (Jan/Feb 1988): 330. 15. R. Russell, "The Power Brokers," The Amicus Journal (Winter 1989): 31-35. 16. Based on their "stopping power." 17. R. Fisher and W. Ury, Getting to Yes: Negotiating Agreement without Giving In (New York: Viking/Penguin, 1981), pp. 73, 101-111. 18. Lawrence Susskind and John Cruikshank, Breaking the Impasse (New York: Basic Books, 1987), pp. 78, 95-150. 19. W. Lough and K. White, "A Technology Assessment Methodology for Electric Utility Planning in the United States," Technological Forecasting and Social Change 34 (1988): 54.
20. Paul Slovic, Baruch Fischhoff, and S. Lichtenstein, "Rating the Risks: The Structure of Expert and Lay Perceptions," Environment 21 (1979): 141-166. 21. Stephen R. Connors and Clinton J. Andrews, "System-wide Evaluation of Efficiency Improvements: Reducing Local, Regional and Global Environmental Impacts," in J. Tester et al., eds., Energy and the Environment in the 21st Century (Cambridge, MA: MIT Press, 1991). 22. N. Wengert, "Citizen Participation: Practice in Search of a Theory," in A. Utton, ed., Natural Resources for a Democratic Society (Boulder, CO: Westview Press, 1976), pp. 1-40. 23. Susskind and Cruikshank, Breaking the Impasse, pp. 32, 78, 95-150. 24. This approach was popularized by Fisher and Ury, Getting to Yes, pp. 73, 101-111. 25. The BATNA is an important component of the principled negotiation approach because it tells the negotiator when to leave the table and pursue her interests in a different forum. 26. Harold A. Feiveson, Frank W. Sinden, and Robert H. Socolow, eds., Boundaries of Analysis: An Inquiry into the Tocks Island Dam Controversy (Cambridge, MA: Ballinger, 1976), pp. 36-39. 27. C. Holling, Adaptive Environmental Assessment and Management (New York: John Wiley & Sons, 1978), pp. 1-5, 20. 28. For details see Clinton J. Andrews, "Improving the Analytics of Open Planning Processes." In the decade since the game was created, it has been played dozens of times in both professional and classroom settings. The outcomes reported here are representative of the long-term experience with this simulation tool. 29. Connie Ozawa and Lawrence Susskind, "Mediating Science-Intensive Policy Disputes," Journal of Policy Analysis and Management 5, 1 (1985): 23. 30. Lawrence Bacow and Michael Wheeler, Environmental Dispute Resolution (New York: Plenum Press, 1984), pp. 158-184. See also Feiveson, Sinden, and Socolow, eds., Boundaries of Analysis, pp. 36-39. 31. Clinton J. Andrews, Stephen R. Connors, Daniel Greenberg, Warren Schenler, Richard D. Tabors, David White, and Kristen Wulfsberg, "Assessing the Tradeoffs between Environment, Cost and Reliability: Developing a Coordinated Strategy to Ensure New England's Electricity Supply," Proceedings of the New England Environmental Exposition (Boston, MA, 1990). See also Stephen R. Connors, Richard D. Tabors, and David White, "Tradeoff Analysis for Electric Power Planning in New England: A Methodology for Dealing with Uncertain Futures," ORSA/TIMS Transactions (1989).
14 Evaluating NEP The New England Project (NEP) provided the opportunity to test a new analysis framework in practice, and to experiment with several aspects of the open planning concept. How well did it perform against criteria of adequacy, value, effectiveness, and legitimacy? ADEQUACY The project enjoyed a reputation for competent technical analysis. The analytical approach allowed the project to be unusually comprehensive in scope. Access to industry data permitted up-to-date technical content. Close involvement of many technically knowledgeable parties in setting the scope, endorsing assumptions, and interpreting results ensured that errors were caught and corrected. Analysis team members had backgrounds in electrical engineering, mechanical engineering, operations research, regional economics, and regional planning. Yet the range of disciplinary perspectives represented on the analysis team was its major technical weakness: engineering and operations research were overrepresented, while economics and other social sciences were underrepresented. Thus, economic phenomena such as price response (price elasticity of demand) were modeled relatively crudely, while engineering phenomena such as the utilization rates of power plants (regional dispatching order) were modeled in elaborate detail. According to advisory group members, this did not reduce the adequacy of the results so much as it reduced the range of questions the analysis team was capable of addressing. Aside from this problem, the New England Project enjoyed substantial adequacy.
VALUE There was good evidence that the New England Project provided internal, external, and personal considerations of value. It created internal value for the planning research and practice communities by generating peer-reviewed publications about the project and its methodological innovations.1 The dissemination of these methods into other contexts (see below) confirmed that the planning field had gained something valuable. The project created external value by contributing directly to a revitalization of the energy policy dialogue in New England. This in turn led to policy decisions and on-the-ground actions that helped keep the lights on. The project's long-running financial support by a consortium of regional interests confirmed that these players saw value in the activity.2 As a side benefit, the project created personal value for its different classes of participants, including the students,3 faculty, and staff.4 Advisory group members gained both substantive knowledge and valuable political connections as a result of participating.5 In sum, the New England Project provided significant value. EFFECTIVENESS The New England Project was spectacularly effective in reorienting the stalled regional policy debate away from polarization around single options such as "conservation only" or "nuclear only," moving it toward consideration of multicomponent strategies. The project was quite successful at improving the quality of the energy policy debate in this region: key players began to speak with increased sophistication about managing risk and balancing the region's portfolio of electricity resources. It legitimated certain technical options such as the "repowering" of existing power plants with updated hardware and different fuels. These successes were empirically visible, measured in terms of speaking invitations for members of the analysis team, press coverage of major findings, references to the project's work in official documents, policy decisions that seemed to accord with the project's findings, and hundreds of millions of dollars in technology investments by the utilities in resources favored by the project's analysis.6 Yet it never succeeded as a forum for direct negotiations on policy matters. No consensus documents signed by all parties emerged from this forum; instead, it remained an informal forum where ideas could be safely exchanged and evaluated, with formal decision-making happening elsewhere. Given that there were significant institutional barriers to the creation of a new formal forum (such as sunshine laws and anticronyism statutes), this failing was not troubling. Official decision makers participated in the project, and they incorporated its findings directly into their various decisions as they saw fit. Overall, the New England Project enjoyed a substantial level of effectiveness. LEGITIMACY Once fully established, this university-based project enjoyed substantial statusbased legitimacy due to its host institution, analysts, and participants. This
legitimacy was not an entitlement, but instead had to be earned over time. Analysts had to demonstrate competence, responsibility, and evenhandedness. They had to attract regional leaders to this forum. They largely succeeded on both of these counts. The civil legitimacy of the project likewise had to be earned. The membership of the advisory group had to show balance, discourse had to be civil, analytical work had to be responsive to participants' concerns, and meetings had to be open to the public. Public outreach, with conference presentations, regional road shows, and press contact, was also necessary. The project seemed to perform adequately in these areas as well. The major threat to legitimacy was the project's funding, which came primarily from a consortium of the region's utility companies, albeit with encouragement from their regulators. Thus, nonparticipants automatically placed the project in the "black hat" camp with industry rather than the "white hat" camp with environmentalists. The project had to renew its legitimacy with environmental advocates on every new issue. Legitimacy remained a constant challenge for the project, but by continuing to work at it, the project largely continued to enjoy it. The New England Project lasted from 1987 to 1996, during which it analyzed tens of thousands of scenarios, provided neutral ground for decision makers to meet, trained a new cohort of experts, built a political consensus adequate to implement several realistic packages of technical options, and developed and disseminated several analytical innovations. It was relatively long-lived as an applied university research project, and its alumni continue to contribute to the sector's policy debates. Funding for the project ended, appropriately, when the restructuring of the U.S. electricity sector commenced.7 This approach has since been applied to a variety of public and private sector efforts involving decentralized decision-making, and its development continues.8 OTHER EFFORTS The NEP was just one of many interesting communicative experiments in the field of energy planning taking place during the last decade of the twentieth century. This section summarizes a few other illustrative efforts. Customer Participation Commonwealth Electric Company (COM/Electric), then an electric utility located in Massachusetts, involved its customers in a 1990 resource-planning process modeled on the NEP. An analysis team made up of company planners and NEP analysts interacted with four Consumer Advisory Groups representing COM/Electric's service territories. Although these meetings took place within the span of March to May of 1990, the COM/Electric analysis team spent the entire preceding year developing its modeling framework and testing it on a "safe" internal audience of the company's managers. The practical focus of the exercise was to ask customers how they would make tradeoffs between lower electricity prices and improved environmental performance. Its broader purposes were to engage customers in a dialogue over the relative merits of planning alternatives and to improve the credibility of the company's planning process.
This project demonstrated that members of the lay public could successfully participate in scenario-based multiple-attribute tradeoff analysis.9 Employee Engagement From 1993 to 1996, the Tennessee Valley Authority (TVA) conducted the "TVA Vision 2020" process, which developed a long-term, twenty-five-year investment plan for the agency at the request of its new chairman, appointed by President Clinton.10 TVA's key problem was a large debt burden associated with an ambitious program to build nuclear power plants, coupled with an inability to make those plants operational.11 The TVA project followed the analytical and procedural model established by the NEP. As in the COM/Electric case, TVA tested its analytical apparatus on internal audiences before inviting participation from outsiders. However, in the TVA case, the launching of the external advisory group in June 1994 was almost anticlimactic. The difficult battles, such as arguing over realistic ranges of uncertainty for nuclear power plant-operating performance, had already been fought within fifteen internal committees participating in the analytical work. Thus initial optimistic assumptions, favored by nuclear engineers (whose jobs depended on an attractive nuclear option), were overturned by the agency's board of directors in favor of industry-standard numbers. The exercise gave the new chairman of the agency leverage with his own staff to reduce TVA's nuclear power commitment. It became a way to turn his large, slow ship around. The external advisory group members remained largely unaware of this implicit and internally political goal of the project. This left a bitter taste in some mouths: "We worked together for a year and then they ignored us when writing up the plan," complained one external participant. With the exception of an explicit shift away from nuclear power technology, the final plan focused more on investment risk management strategies than it did on specific future technology choices. Yet the project helped to establish a culture of financial risk management within the agency. Probabilistic Analysis The Northwest Power Planning Council, supported by the Bonneville Power Administration, developed an elaborate interactive planning framework and put great efforts into making their sophisticated planning methods comprehensible to the general public.12 They served the governors of Idaho, Montana, Oregon, Washington, numerous Native American Tribes, and federal interests, working in a truly decentralized decision-making context. They were particularly successful at portraying probabilistic simulations of investment alternatives' performance in an easy-to-understand format that made modeling assumptions visible, showed how uncertainty affected outcomes, and portrayed tradeoffs among criteria such as electricity costs and protection of salmon habitat. With official status and an annual budget13 exceeding that of the New England Project by more than an order of magnitude, this effort set the standard for integrated electric resource planning in the mid-1980s through the mid-1990s.
Multimethod Analysis BC Gas, a natural gas utility in British Columbia, Canada, involved the public in a demand-side resource-planning effort from 1994 to 1995. Seattle City Light, an electric utility, engaged in a similar exercise at about the same time. Demand-side resources included energy-efficient lighting subsidies, home energy conservation programs, commercial and industrial retrofits, load management and load control programs, and other efforts to manage the demand for energy. The special feature of these projects was experimentation with multiple methods of preference elicitation. Different approaches, such as holistic assessment of alternatives, multicriteria weighting schemes, and multicriteria tradeoff methods, gave inconsistent results. Thus, any one technique alone would provide poor guidance for decision makers. These projects asked stakeholder groups to apply several of the methods mentioned above in order to triangulate on more reliable, valid preference maps. This multimethod approach was effective at building public confidence in each company's demand-side resource plan while also diminishing the likelihood of analytical errors.14 Multicriteria Accounting Publicly traded corporations must disclose accounting data to potential investors. Since environmental liabilities may affect the firm's risk profile, companies are now required to disclose their environmental performance. Their accounting departments are therefore developing corporate environmental management systems and releasing selected data to external audiences. While most provide no more than footnotes in the annual report, a few firms have developed elaborate corporate environmental reports and have made them accessible on the web.15 Firms such as Public Service Electric and Gas (New Jersey) and Pacific Gas and Electric (California) report their performance along multiple environmental criteria ranging from air pollution emissions to environmental penalties and fines.16 Decision Support By adding a well-designed user interface to a computer model, analysts can turn it into a decision support system for use directly by decision makers and other interested parties. This removes the analyst as middleman and allows users to perform intensive "what-if" analysis. In the early 1990s, the U.S. Department of Energy's Energy Information Administration (EIA) created the National Energy Modeling System (NEMS) in this spirit. While serving primarily as the forecasting tool that underlies the agency's Annual Energy Outlook, NEMS is also available to any party desiring to experiment with alternative energy policy scenarios. This complex set of interlinked, sector-specific modules only partially succeeds as a decision support system, because it remains difficult to use. However, it is well documented and freely available. The EIA web site is one of many that are beginning to provide public access to analysis software.17
These analytical efforts were a few among many trying to adapt to the communicative context in the energy sector. Similar innovations took place in a range of fields from urban planning18 to global climate change policy.19
CONCLUSIONS Some pop psychologists claim that analysts choose their careers as a kind of refuge. 20 Analysts are shy or lack social skills, and find the precision of quantitative methods reassuring in an otherwise messy, worrisome world. That characterization is insulting, but it precisely highlights the often absent talents needed to succeed in a communicative context. Sometimes analysts must become outgoing, entrepreneurial, and adaptive; and on such occasions they are uniquely positioned to get warring factions talking again. The NEP and similar experiments in opening up technically complex planning processes had many successful aspects. The experiments demonstrated that meaningful public participation in major project-planning was feasible, and that a consensus-seeking stakeholder group enjoying appropriate analytical support could induce progress in a deadlocked policy debate. The experiments confirmed that with focused effort, a humble mindset, and a balance of substantive and procedural rationality, analysts can overcome both technical and communicative challenges.
NOTES 1. Illustrative publications include: Stephen Connors and Clinton Andrews, "System-wide Evaluation of Efficiency Improvements: Reducing Local, Regional and Global Environmental Impacts," in J. Tester, et al., eds., Energy and the Environment in the 21st Century (Cambridge, MA: MIT Press, 1991); Clinton Andrews and Stephen Connors, "Existing Capacity—The Key to Reducing Emissions," Energy Systems and Policy 15 (1992): 211-235; Clinton Andrews, "The Marginality of Regulating Marginal Investments," Energy Policy 20, 5 (May 1992): 450-463; Clinton Andrews, "Sorting out a Consensus: Analysis in Support of Multi-party Decisions," Environment and Planning B: Planning and Design 19, 2 (Spring 1992): 189-204; Richard Tabors, Stephen Connors, Carl Bespolka, David White, and Clinton Andrews, "A Framework for Integrated Resource Planning: The Role of Natural Gas Fired Generation in New England," IEEE Transactions on Power Systems 4, 3 (1989): 1010-1016. 2. A consortium of nine New England electric power companies funded the New England Project at a level averaging $150k/year from 1989 to 1995. The U.S. National Renewable Energy Laboratory provided an additional $115k/year in funding during 1993 and 1994. 3. Most of the graduate students who worked on the project ended up finding related jobs upon graduation. In their first jobs upon graduating, six alumni of the New England Project stayed in academia; four joined consulting firms specializing in public utility issues; and one each worked at an electric utility, an electric power industry equipment manufacturer, a state public utilities commission, and the Federal Energy Regulatory Commission. 4. The associated faculty and research staff saw their research and consulting opportunities increase as a result of their links to this project. Illustrative clients outside New England included Hydro Quebec, Tennessee Valley Authority, and the Hawaii Public Utilities Commission.
5. State regulators, utility executives, environmental advocates, and others took advantage of the neutral ground provided by the NEP for schmoozing, discussing substantive issues away from the limelight, and coordinating regional policy positions. One advisory group member became president of the National Association of Regulatory Utility Commissioners; another became assistant secretary of energy of the United States. 6. As a result of the NEP, analysis team members received speaking invitations for the New England Governors' Conference Annual Meeting, U.S. Department of Energy Annual Electricity Forum, North American Electric Reliability Council Board of Trustees Meeting, and similar nonacademic bodies. Both the trade press (e.g., Electricity Daily) and newspapers (e.g., the Boston Globe) covered the NEP. Regulators also referred to the NEP findings (e.g., Massachusetts Department of Public Utilities; Connecticut Board of Public Utility Control). New England's first repowering of an existing downtown power plant (Manchester Street Station, Providence, RI) was approved in the aftermath of a supportive NEP analysis. 7. The electricity sector is undergoing deregulation in response to federal legislation (Energy Policy Act of 1992, Public Utilities Regulatory Policies Act of 1978), industry maturity (a densely interconnected transmission grid that reduces transaction costs between producers and consumers), and new technologies (especially ubiquitous computing and telecommunications, and gas turbine combined cycle power plants). Regulators are encouraging competition among generators and a separation of the generation, transmission, distribution, and end-use services functions of the vertically integrated monopoly franchise. Direct competition for retail customers is underway in some regions including New England. This is shifting the planning focus away from long-term issues to short-term concerns; and marketplace indicators rather than comprehensive engineering-economic models are beginning to guide investment and risk management decisions. Decentralized economic decision-making is beginning to substitute for decentralized political decision-making in the electric utility sector. See Clinton J. Andrews, ed., Regulating Regional Power Systems (Westport, CT: Quorum Books, 1995), pp. 3-26. 8. Analysts from the New England Project participated in both the COM/Electric and TVA projects described under "Other Efforts." A direct descendent of the NEP is the group of SESAMS (Strategic Electric Sector Assessment Methodology under Sustainability Conditions) projects of the Alliance for Global Sustainability, made up of researchers from the Massachusetts Institute of Technology, Swiss Federal Institute of Technology, and University of Tokyo. See, for example, Stephen R. Connors and Warren W. Schenler, "Climate Change and Competition—On a collision course?" Proceedings of the 60th American Power Conference, Volume I (Chicago, IL: April 14-16, 1998), pp. 17-22. The approach has also been applied to solid waste management issues. See Clinton J. Andrews and Stephen Decter, "Beyond Trashtalk," Working Paper No. 2, Edward J. Bloustein School of Planning and Public Policy (New Brunswick, NJ: Rutgers University, April 1997). 9.
Analysis Group for Regional Electricity Alternatives (AGREA), Energy Laboratory, Massachusetts Institute of Technology, Cambridge, MA: Proposal to COM/Electric to provide integrated resource planning assistance, January 1989; Presentation to the Internal Advisory Group for the COM/Electric Open Planning Project; May 1989; 1st Presentation to the Advisory Groups for the COM/Electric Open Planning Project, March 1990; 2nd Presentation to the Advisory Groups for the COM/Electric Open Planning Project, April 1990; 3rd Presentation to the Advisory Groups for the COM/Electric Open Planning Project, May 1990. 10. Tennessee Valley Authority (TVA), Energy Vision 2020 (Chattanooga, TN: TVA, 1996).
11. Allan G. Pulsipher, "Comment on the Tennessee Valley Authority Case," in Clinton J. Andrews, ed., Regulating Regional Power Systems (Westport, CT: Quorum Books, 1995), pp. 321-332. 12. The planning framework, analytical tools, and results are summarized in Northwest Power Planning Council (NPPC), Fourth Northwest Conservation and Power Plan (Portland, OR: 1998). 13. The budget from 1981 to 1999 has averaged $7 million annually, and expenditures have been a few percent below that. Of that, about $1.5 million annually has gone to electric power planning (excluding associated administrative, public affairs, and legal functions). With restructuring in the electricity industry, the electric power portion of the budget has shrunk by about 25 percent since 1997. See Northwest Power Planning Council, 1999 Budget (Portland, OR: NPPC, 1998). 14. Benjamin F. Hobbs and Graham T. F. Horn, "Building Public Confidence in Energy Planning: A Multimethod MCDM Approach to Demand-side Planning at BC Gas," Energy Policy 25, 3 (1997): 357-375. For a wide-ranging review of energy sector projects, see Benjamin F. Hobbs and Peter Meier, Energy Decisions & The Environment: A Guide to the Use of Multicriteria Methods, International Series in Operations Research and Management Science (Norwell, MA: Kluwer Academic Publishers, 2002). 15. Marc J. Epstein, Measuring Corporate Environmental Performance (Chicago: Robert D. Irwin, 1996), pp. 106-144. 16. Public Service Electric and Gas Company, 2000 Environmental Statistics (Newark, NJ). Downloadable at . Pacific Gas and Electric Company, 2000 Environmental Report (San Francisco). Downloadable at . 17. For a summary of the model see U.S. Department of Energy, Energy Information Administration (EIA), National Energy Modeling System: An Overview (Washington, DC: EIA, 1998). A variety of energy sector models are available on the web at . Other governmental agencies also have web-accessible models of various types. 18. Examples include Criterion Planners/Engineers, Inc., INDEX: Software for Community Indicators, brochure (Portland, OR: Criterion, 1999); Evans and Sutherland, Inc., E&S RAPIDsite, software for in-context visualization for land development (1999); Richard E. Klosterman, "The What If? Collaborative Planning Support System," in P. K. Sikdar, S. L. Dhingra, and K. V. Krishna Rao, eds., Proceedings of the 5th International Conference on Computers in Urban Planning and Urban Management, volume 2 (Mumbai: Narosa Publishing House, 1997), pp. 692-702; Wilson Orr, Ugrow: An Urban Growth Model (Prescott, AZ: Sustainability and Global Change Program, Prescott College, 1999); Paul Patnode, The Community Works Planning Support System: Integrating GIS, Virtual Reality, and Impact Assessment (Environmental Simulation Center, Ltd., 1999). 19. Jae Edmonds and John Reilly, The IEA/ORAU Long-Term Global Energy-CO2 Model, Report No. ORNL/CDIC-16 (Oak Ridge, TN: Oak Ridge National Laboratory, 1986). 20. See D. Lutz, "The Personality of Physicists Measured," American Scientist 82 (July-August 1994): 324-325; or Nathaniel Branden, The Six Pillars of Self-Esteem (New York: Bantam, 1994), pp. 191-192.
15 Methodological Factors Affecting Analysts Each analytical tradition has its own conceptual basis, and the attempt to marry analysis to a decision-making process forces us to examine these theoretical foundations. This section links the conceptual underpinnings of formal project evaluation to the practical choices made by the analysis team during the New England Project (NEP). It uses the New England story to illustrate fundamental value judgments that analysts must make, even when working in a communicative context. It examines traditional project-planning methods in just enough detail for readers to perceive how the NEP diverged from that tradition. This is necessarily a simplified snapshot of a sophisticated field, presented without recourse to mathematics. The methods discussed here are not the only approaches available, nor would all of them classify as cutting-edge today. However, they represent fairly the range of choices available when the project took place. This discussion illustrates general points about the links between analysis and process, as well as analysts and decision makers, and they apply equally well to the latest methodological innovations. Below I review the major framing assumptions behind the analytics of public project evaluation: the definitional issues, system boundaries, and ethical principles. Different techniques vary in their applicability to a context of decentralized decision-making under controversy, complexity, and uncertainty. I also discuss how analysts matched methodological choices to a joint fact-finding process.
BASIC THEORY OF PROJECT EVALUATION The standard decision rule used in public project evaluation, also known as cost-benefit analysis, defines the preferred alternative as that which delivers the greatest net benefits. This rule allows the selection of projects that have both winners and losers, as long as the net benefits to society are positive.1 It is an efficient decision rule, but it is not necessarily fair. It applies imperfectly to a contested project in a democratic society, where evidence of an aggregate welfare improvement will not placate those who lose something by its implementation. If there are losers and they have adequate political clout, they will stop the project.2 This was clearly the case in New England's electricity debates. Compensation of losers by the winners, or redesign of the project to have no losers, seems necessary if social consensus on a favored project option is required for successful implementation. In this context, an analysis approach that explores tradeoffs, encourages redesign, and develops compensation packages becomes appropriate.3 Public participation also may be desirable, if controversy or uncertainty is present. Analytical approaches vary in their appropriateness for different levels of project size, spillovers, impacts, and public participation. The choice of approach depends on the project's physical, economic, and political context. Most public projects have multidimensional impacts. An extension to a water supply system, for example, has clear monetary costs; the utility must pay in order to excavate trenches, lay pipes, and install pumps. However, a project also may have environmental impacts if land is flooded to provide a new reservoir, and public health impacts if lead pipe is removed, or if chlorination reduces coliform bacteria counts. Likewise, the quality of service may be affected, as delivery pressures increase, or water clarity improves. Social impacts may also occur, if the extension of service induces local population growth. When evaluating design alternatives, a decision maker must somehow compare impacts across all of these dimensions in order to choose a preferred option. This exercise may be broken into separate steps—measurement, valuation, and aggregation.4 Measuring some impacts is easy. An engineer can quickly and precisely quantify the amount of trenching (cubic yards of earth) required by the water project. Similarly, she can easily quantify the number of acres to be flooded for a new reservoir. However, the clarity of the water leaving a new filtration plant is much less certain; it depends on seasonal changes in water chemistry, acid rain effects, and other factors that are hard to predict. The reduction in neurological damage to children that results from removing lead piping is even more uncertain, being undetectable until years later, and, even then, difficult to quantify. Measurement of direct impacts thus varies in quantifiability, and in the level of uncertainty surrounding the estimates. Valuation of the impacts that have been measured presents an even greater challenge. Of course, some impacts are easy to value, such as the cost of piping and pumps. The markets for these materials work quite well in most countries, and there is a social consensus on their monetary value. However, there may be
much less social consensus on the value of water clarity, or of avoiding highly uncertain levels of neurological damage in children. Intangible impacts, lacking a widely accepted market value, may be valued differently by different people. Aggregation makes the process of deciding among alternatives easier; it compresses the information presented to a decision maker. The degree to which dissimilar attributes may be aggregated depends in part on the analyst's success in valuing impacts using a common currency. For example, it may be possible to value both piping and flooded land in dollar terms, if there is a working market for both (no externalities). Conversely, dollar values of water clarity and neurological damage may not enjoy a social consensus, and thus they may need to be reported separately. Key aspects of multidimensional decision-making thus include the uncertainties surrounding the measurement of estimated impacts, the degree of social consensus accompanying their valuation, and the level of complexity attending the aggregated results.5 The appropriate means and extent of public participation vary with analytical characteristics. If there is widespread agreement about an issue and the analyst recognizes this, then the analysis requires minimal public participation. For example, public hearings to establish prices for commercially available piping and pumps would have little value. Yet where discord or uncertainty exists, then participation will have greater value, as when choosing whether to prioritize a chlorination investment over efforts to remove lead pipe. Large projects with significant spillovers also will benefit from meaningful public participation that adds stability to the decision-making process. ANALYTICAL APPROACHES Analysts interact with decision makers in different ways when using various project evaluation tools. This assessment provides a basis for discussing the enhancements developed for the NEP. The discussion is limited to two major families of techniques: aggregative techniques (variants of cost-benefit analysis) and disaggregative techniques (versions of multiple criteria decision analysis). Table 15.1 summarizes the key strengths and weaknesses of these approaches. Cost-Benefit Analysis Many public and private decision makers like the apparent simplicity and elegance of cost-benefit analysis, which can identify project alternatives that increase aggregate social welfare and thereby promote overall economic efficiency. The cost-benefit analyst seeks to value all impacts in dollar terms, and to aggregate them together into a single benefit-cost ratio. Estimating plausible values of intangible factors is one of the greatest analytical challenges under this approach. For example, when attempting to value water clarity, the analyst may elicit people's willingness to pay (or their willingness to accept compensation) for different levels of turbidity by survey or interview. Assessing the value of reducing neurological damage in children (by replacing lead pipe) is a more heroic endeavor. This could be based on reduced lifetime income, increased
social security burden, parents' willingness to pay to avoid this problem, or many other techniques, each of which may produce a different estimate. Costbenefit analysis can accept public participation in the process of determining the shadow prices for intangible impacts. Surveys, interviews, and advisory panels are ways of eliciting people's opinions on these valuation problems. The key strengths of a cost-benefit approach to project evaluation are that it is systematic (i.e., replicable and teachable), explicit (specific costs and benefits are identified and quantified), directional (the process narrows the field of choice), and progressive (planners are put in a position to create better alternatives next time). The main weaknesses of the cost-benefit analysis approach include: (1) a necessary assumption of social unanimity, such that aggregate social welfare improvement can be used as a decision rule; (2) the possibility of not counting (or miscounting) intangible impacts; and (3) the difficulty of characterizing how options will perform in uncertain future circumstances. Major variants of the aggregative cost-benefit analysis approach include engineering-economic analysis, private cost-benefit analysis, economic costbenefit analysis, and social cost-benefit analysis. Engineering-economic analysis typically measures a project's cost-effectiveness, holding perceived benefits constant while seeking design options that minimize costs. Alternatively, given budget constraints, an analyst may maximize benefits for a fixed cost. Private cost-benefit analysis is a more comprehensive approach used by profit-seeking firms and in public agencies for small projects. This method seeks to identify options that maximize net benefits to the firm or agency, within legal constraints set by society. It assumes that no spillovers exist that will affect outside parties. These noneconomic approaches to cost-benefit analysis are useful when: (1) spillovers are unimportant, (2) all impacts are measurable and can be valued in dollar terms or held constant, (3) goals and standards are clearly defined, and (4) a social consensus surrounds the project. These approaches ignore, or hold fixed, nonquantifiable and intangible attributes in the evaluation process, and they typically discount future costs and benefits at market interest rates. They assume that public participation is unneeded, because analysts are meeting predetermined goals, or complying with publicly accepted standards, for least cost. Economic cost-benefit analysis, the favored approach of public policy makers, measures social benefits broadly. Social cost-benefit analysis adds normative content by weighting the interests of certain groups in the social welfare function, and developing a social discount rate for weighting future impacts. This allows equity concerns to be made part of the project selection criteria.6 Economic and social cost-benefit analysis are extremely useful tools when: (1) one party has a public mandate to make independent decisions or where democratic decision-making and participation are of little concern; (2) most of the important impacts can be valued in dollar terms without sparking controversy, even if the value of intangible items must be estimated; and (3) uncertainties surrounding major impacts are minimal.
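A small numerical sketch, with invented figures, may help fix ideas. The Python fragment below discounts a hypothetical stream of costs and benefits twice, once at an assumed market rate and once at a lower assumed social rate, and reports net benefits and a benefit-cost ratio for each. The cash flows and rates are illustrative only; the point is that the choice of discount rate, itself a normative judgment, changes the answer the analysis gives.

    def present_value(flows, rate):
        # flows[0] occurs now, flows[1] one year out, and so on.
        return sum(f / (1.0 + rate) ** t for t, f in enumerate(flows))

    benefits = [0.0, 40.0, 40.0, 40.0, 40.0]   # hypothetical benefits, millions of dollars
    costs = [100.0, 5.0, 5.0, 5.0, 5.0]        # hypothetical costs, millions of dollars

    for label, rate in [("market rate (8%)", 0.08), ("social rate (3%)", 0.03)]:
        b = present_value(benefits, rate)
        c = present_value(costs, rate)
        print(f"{label}: net benefits = {b - c:.1f}, benefit-cost ratio = {b / c:.2f}")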
Table 15.1 Characteristics of Project Evaluation Techniques

Engineering & Private Cost-Benefit Analysis
  Measurement: interval or ratio data only
  Valuation: market value only
  Aggregation: monetize all impacts
  Participation: no—social unanimity assumed
  Strengths: systematic, explicit, directional, progressive
  Weaknesses: assumes social consensus, ignores nonmonetary impacts, limited uncertainty analysis

Economic & Social Cost-Benefit Analysis
  Measurement: interval or ratio data only
  Valuation: also estimate value of intangibles
  Aggregation: monetize all impacts
  Participation: possible—social unanimity assumed but accepts participation in valuation stage
  Strengths: systematic, explicit, directional, progressive
  Weaknesses: assumes social consensus, ignores nonquantifiable impacts, limited uncertainty analysis

Multiobjective Decision Analysis
  Measurement: ordinal or categorical data ok
  Valuation: ex ante project-specific normalization
  Aggregation: ex ante project-specific aggregation
  Participation: possible—accepts participation in valuation and aggregation stages
  Strengths: assumes conflicting, noncommensurate objectives
  Weaknesses: less transparent, assumes fixed preferences, limited uncertainty analysis

Multiattribute Decision Analysis
  Measurement: ordinal or categorical data ok
  Valuation: ex post project-specific normalization if desired
  Aggregation: ex post project-specific aggregation or use sorting techniques
  Participation: possible—accepts participation in valuation and aggregation stages
  Strengths: assumes noncommensurate objectives, evolving preferences, allows uncertainty analysis
  Weaknesses: more scoping decisions are implicit

Tradeoff Analysis
  Measurement: ordinal or categorical data ok
  Valuation: ex ante or ex post project-specific normalization if desired
  Aggregation: ex ante or ex post project-specific aggregation or use sorting techniques
  Participation: necessary—requires participation in valuation and aggregation stages
  Strengths: assumes noncommensurate objectives, evolving preferences, allows uncertainty analysis
  Weaknesses: vulnerable to group dynamics, time-consuming for participants
Multicriteria Decision Analysis Decision analysis provides an alternative to cost-benefit analysis for making project choices. Decision analysis is a relatively prescriptive paradigm that offers systematic approaches for choosing a course of action in uncertain circumstances. Decision analysis incorporates fewer implicit value judgments than cost-benefit analysis in that it emphasizes analytical strategies over specific decision rules. The focus here is on multicriteria techniques that have been used previously to support decision-making on complex, contentious projects such as electric power or water supply systems. These techniques include optimizing or finding the best alternative given a need to balance multiple objectives, screening alternatives based on multiple attributes, and evaluating tradeoffs among alternative projects. Multiobjective decision-making problems are essentially design problems, in which mathematical programming is used to create a "best," optimal alternative from an extremely large, continuous set of choices, subject to specific design constraints. The analyst must elicit and quantify the multiple, and possibly conflicting objectives of decision maker(s) in order to specify an objective function.7 Optimizing according to the objective function of the decisionmaker, subject to explicit constraints, can lead to the formulation of a well-balanced plan.8 Consideration of the effects of uncertainty is a second-order concern.9 For example, multiobjective analysis could guide a water supply decision maker in balancing the relative allocation of resources between new water filtration facilities, lead pipe removal, and extending service to new customers. All of these objectives are desirable, but budget constraints and quality standards may limit the amount of each to be realized. When evaluating only a limited number of alternatives, multiattribute decision analysis techniques are particularly useful. Given a sophisticated impact analysis capability (e.g., a computerized simulation model), the performance of each project alternative can be revealed in a vector of attribute measures that describes, for example, its cost, environmental impacts, and quality of service impacts. Multiattribute decision-making tools then focus on assisting in the choice process among these existing alternatives, typically using either optimization (identifying the best alternative) or sorting methods (screening out less-preferred alternatives).10 Either way, the analyst needs to learn about the preferences of decision makers: How much relative importance does each one apply to each attribute? 11 Uncertainty can become a primary rather than secondary concern using this technique.12 Since analysts can evaluate only a finite number of alternatives, some preselection must occur that identifies a relatively small number of feasible candidates.13 For example, when the set of water supply options is limited to indivisible "big ticket" items, such as building either a holding reservoir or a storage tank, then multiattribute analysis may be appropriate. Tradeoff analysis is a variant of multicriteria decision analysis that focuses explicitly on accommodating broad public participation. Tradeoff analysis assumes that a group rather than an individual will make the planning decision. Tradeoff techniques are designed for interactive use, in which interested parties
evaluate the multiattribute impacts of project options in an open way. In simple, often graphical terms, they reveal the various tradeoffs implicit in choices among options.15 These include the consideration of uncertainty.16 It is an especially useful technique when controversy surrounds a project proposal, social consensus on appropriate action does not yet exist, all options have not yet been identified, and uncertainty affects the relative attractiveness of alternative designs. For example, if the water supply agency finds the choice between chlorinating water and removing lead pipe to be controversial, it could set up a citizens' advisory group to explore the tradeoffs between these options or to identify possibilities for compensation of losing parties.17 Tradeoff analysis is time consuming and cumbersome, so it enjoys limited application.18 Humble or Foolhardy? I have discussed these methods in order of the increasing communicative challenge they pose for the analyst. Engineering cost-effectiveness analysis is easy in the sense that it requires a very specialized, traditional toolbox. Economic cost-benefit analysis is more of a challenge because the analyst must attempt, somewhat heroically, to value intangible factors. Multiobjective optimization requires the analyst to communicate with and to elicit the preferences of a decision maker. Multiattribute analysis depends on repeated interactions with a decision maker. Tradeoff analysis requires that the analyst work with a group of decision makers or stakeholders. The later methods require the broader toolkit discussed in chapter 2. Cost-benefit analysis and other single-criterion, static, optimization-oriented methods have limited applicability in the design of contentious projects. Such techniques may unnecessarily constrain the range of normative (values-based) choices and place the analyst out in front of the decision makers rather than in sync with them. Given the difficulty of separating facts from values during analysis, analysts need to track the normative content of their work. Analysts also need a systematic basis for choosing methods that keep analysts and decision makers more tightly linked, when such a linkage is necessary. In this way, analysts may distinguish between humble and foolhardy approaches. The basis for choice is to ask the analyst the contingent series of questions shown in Figure 15.1: Are there nonmarket impacts or distortions of the perfect market model that result from this project? Can the impacts be quantified? Is uncertainty irrelevant? Is there a social consensus that all worthwhile options are on the table? Is there a social consensus on the valuation of the impacts? Can all impacts be aggregated to a single net benefit number? Each of these questions flags a point in the evaluation where normative bias may be introduced by the analyst. For example, traditional electric utility planning methods focused only on the profit-making objectives of the company, and outside participation was rarely encouraged. Yet the impacts of the company's decisions were felt broadly. In New England, that narrowly bounded mechanism broke down in the face of
interventions by environmentalists, regulators, and others. It thus became necessary to try a different approach.

Figure 15.1 Flowchart for Tracking Normative Content of Analysis
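Read one way, the contingent questions above amount to a branching screen over methods. The fragment below is only an illustrative encoding of that idea; the particular mapping from answers to suggested methods is a simplification made for exposition, not a reproduction of Figure 15.1.

```python
def flag_normative_steps(nonmarket_impacts, quantifiable, uncertainty_irrelevant,
                         options_consensus, valuation_consensus, aggregable):
    """Illustrative screen: each 'no' answer flags a point where the analyst's
    own normative judgment would otherwise enter the evaluation."""
    flags = []
    if not quantifiable:
        flags.append("impacts resist quantification")
    if not uncertainty_irrelevant:
        flags.append("uncertainty matters to the outcome")
    if not options_consensus:
        flags.append("no consensus that all worthwhile options are on the table")
    if not valuation_consensus:
        flags.append("no consensus on how to value the impacts")
    if not aggregable:
        flags.append("impacts cannot be collapsed to one net-benefit number")

    if not nonmarket_impacts and not flags:
        return "engineering cost-effectiveness or cost-benefit analysis", flags
    if len(flags) <= 1:
        return "multiobjective or multiattribute analysis", flags
    return "interactive tradeoff analysis with stakeholders", flags

method, flags = flag_normative_steps(True, True, False, False, False, False)
print(method)   # -> interactive tradeoff analysis with stakeholders
print(flags)
```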
SCENARIO-BASED MULTIATTRIBUTE TRADEOFF ANALYSIS

This section outlines the scenario-based multiattribute tradeoff analysis technique employed by the NEP to satisfy its special analytical needs. See the literature cited in the notes for further analytical details and illustrative results.

Prior to this project, the elements of the scenario-based multiattribute tradeoff analysis approach had been individually applied to various planning problems, but they had been developed mainly as heuristics, or "techniques that worked"
in specific planning situations. Scenario analysis, for example, became popular as a way to avoid the pitfalls of deterministic planning by exploring a small number of alternative sets of possible future events.19 Likewise, multiattribute evaluation proved useful in cases where valuation and aggregation problems undermined the validity of cost-benefit analysis.20 Tradeoff analysis gained popularity for its ability to provide a "paper trail" for open planning efforts.21 Planners faced with massive uncertainty and controversy also identified several techniques not to use:
• Don't do deterministic planning for a single assumedly certain future.
• Don't econometrically extrapolate future events only from what happened in the past.
• Don't evaluate options without taking their systemic context into account.
• Don't derive an optimal plan inside of a "black box" that obscures value judgments and assumptions from public scrutiny.
• Don't put all of your eggs in one technological basket.
• Don't commit sins of omission; that is, don't ignore plausible alternatives or uncertainty.
Mainstream decision science uses the concept of expected utility to deal with uncertainty. Preferences are assumed to be a function of the value of an outcome, its probability of occurrence, and the decision maker's risk aversion.22 This continuously differentiable function is conceptually pleasing as a decision rule. However, unless the analysis digs below the probabilities to examine the various pathways leading to a particular outcome, very little will be learned about the behavior of the system under study. Creative thinking about better strategies will not be encouraged unless the decision maker learns something about both his or her preferences and the way the system works.

The tradeoff analysis approach was specifically intended to clarify how a complex system works. Various innovators developed techniques that offered piecemeal insights. For example, from the first electric power industry application of tradeoff analysis23 to the present, scatter plots have been used to evaluate the relative performance of different strategies in multiattribute space. Another technique was the use of SMARTE (Simulation, Modeling and Regression, and Tradeoff Evaluation) functions, which developed a straightforward functional relationship between uncertainties, options, and impacts.24 These functions (based on least-squares curve-fitting) described system behavior in a way that clarified relationships and helped in choosing among options, based on a limited set of simulations. RISKMIN was yet another specialized database tool, used for systematically identifying dominant options in a large set of scenarios.25
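As a sketch of the idea behind SMARTE-style functions, the fragment below fits an ordinary least-squares surface to a few invented scenario results. The variables, units, and numbers are assumptions made purely for illustration and are far simpler than the formulations in the work cited in note 24.

```python
import numpy as np

# Invented scenario results: an uncertainty (demand growth, %/yr), an option
# (new gas-fired capacity, MW), and a simulated impact (average price, cents/kWh).
scenarios = np.array([
    # growth, new_gas_mw, price
    [1.0,  500, 6.1],
    [1.0, 2000, 5.8],
    [2.5,  500, 7.4],
    [2.5, 2000, 6.6],
    [4.0,  500, 9.0],
    [4.0, 2000, 7.9],
])

X = np.column_stack([np.ones(len(scenarios)), scenarios[:, 0], scenarios[:, 1]])
y = scenarios[:, 2]

# Least-squares fit: price ~ b0 + b1 * growth + b2 * new_gas_mw
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", coeffs)

# The fitted surface can then be queried for combinations never simulated
# directly, which is what made such response surfaces useful for screening.
growth, new_gas_mw = 3.0, 1200
print("predicted price:", coeffs[0] + growth * coeffs[1] + new_gas_mw * coeffs[2])
```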
These techniques were evidence of the growing recognition among planners that they ought to explore a variety of possible future circumstances, and systematically analyze many strategies for each set of circumstances. The shared intuition was that the best way to begin creating a better future reality was to sample its possibilities in an organized manner. Specifically, the NEP analysts thought of the planning process as a formal intellectual investigation of a complex system's behavior based on a limited set of experimental results. By including a rigorous scoping of the experimental design and careful attention to sample characteristics, quality control, inductive data analysis, and defensible conclusions, the analysis team could improve their chances of obtaining meaningful results. Techniques such as tradeoff curves portrayed in scatter plots, SMARTE functions, and RISKMIN were heuristic expressions of what was really a "future-sampling" approach to systems analysis.

The NEP analysts sampled the future by developing a method for generating and evaluating thousands of scenarios, each differing from its neighbors in only one respect (see Figure 15.2). First, they created a simulation model that reflected realistic behavior of New England's electricity sector over a decades-long planning horizon (see Figure 15.3). Given information on the expected growth in the regional economy and several other exogenous factors, the model calculated electricity demand, power system operations, investments in new power plants, electricity prices, environmental impacts, power system reliability, and other attributes. The model accepted two types of exogenous inputs: uncertainties (uncontrollable factors) and options (controllable factors). By exhaustively combining uncertainty values and options in all possible ways, the NEP analysts created large, symmetric scenario sets. While the "curse of dimensionality" daunted previous generations of modelers, cheaper computing power made this approach increasingly feasible. What was the rationale for this apparently "brute force" approach?
Figure 15.2 Sampling the Future
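A minimal sketch of the "future-sampling" step follows. The uncertainty and option levels here are invented stand-ins; the real NEP scenario sets combined many more factors and fed each combination through the detailed simulation model described above.

```python
from itertools import product

# Hypothetical discrete levels for a few uncertainties and options.
uncertainties = {
    "demand_growth": ["low", "medium", "high"],
    "gas_price":     ["low", "high"],
    "weather":       ["normal", "hot_summer"],
}
options = {
    "repowering":   [False, True],
    "dsm_programs": ["modest", "aggressive"],
}

factors = {**uncertainties, **options}
names = list(factors)

# Exhaustive (full-factorial) combination yields a symmetric scenario set in
# which neighboring scenarios differ in exactly one factor level.
scenario_set = [dict(zip(names, values)) for values in product(*factors.values())]
print(len(scenario_set))     # 3 * 2 * 2 * 2 * 2 = 48 scenarios
print(scenario_set[0])

# Each scenario would then be run through the simulation model, which returns a
# vector of attributes (cost, emissions, reliability, and so on) for later analysis.
```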
Figure 15.3 New England Project Scenario Analysis Process
When designing experiments to test hypotheses, analysts choose between equalized and controlled experimental conditions.26 In a planning context, the emphasis is on controlled experimental conditions, which require that analysts hold all conditions constant except for the one factor being studied. To study
several different factors one would run a number of controlled experiments. Since there are infinite factors to control, the analytical task must be limited contextually. The number and characteristics of the scenarios analyzed depend on the audience for the planning effort, reflecting each participant's greatest hopes and worst fears.

A possible future can be defined as the occurrence of a unique combination of independent uncertain events. Uncertainties may include economic growth rate, fuel price and availability, weather, and the advent of new technologies, for example. For analytical purposes, a possible future can be defined by the set of values assigned to these uncertainties. Thus one possible future in the New England Project would have high economic growth, moderate fuel prices, no fuel shortages, normal weather, and on-time availability of new technologies. Another possible future might differ by only one uncertainty value, say, having low economic growth, but all other uncertainty values unchanged. Likewise, a strategy can be defined as a unique combination of options.

Monte Carlo simulation is one popular method for capturing the effects of uncertainty27 that emphasizes equalized instead of controlled experimental conditions. It assumes a probability distribution for possible future events, and then randomly samples from that distribution to get the uncertainty values that drive the modeling effort. By taking enough samples, the modeling effort can be made to produce a distribution of outcomes that reflects the uncertain inputs. This approach is useful when there is a consensus about the range and distribution of possible future events. However, when there is no consensus on the relative likelihood of possible futures, then a scenario-based approach may be more appropriate.

The NEP analysts preferred to model discrete possible futures rather than a probabilistic continuum. First, as mentioned above, participants learned about the behavior of the system in response to factors (options and uncertainties) that were discretely defined. Second, many people could easily conceptualize specific possible futures composed of independent uncertainties. Third, different stakeholders could focus their attention on different possible futures—those of greatest individual interest. Finally, the relative probabilities of these possible futures could be easily changed during subsequent analysis, using either holistic (one possible future relative to another) or component (one uncertainty value relative to another) assignments.
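For contrast with the exhaustive enumeration sketched earlier, a Monte Carlo treatment draws uncertainty values at random from assumed probability distributions. The distributions below are invented purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000

# Monte Carlo ("equalized conditions"): sample uncertainty values from assumed
# distributions rather than enumerating discrete levels.
demand_growth = rng.normal(loc=2.0, scale=0.8, size=n_samples)      # %/yr
gas_price = rng.lognormal(mean=1.0, sigma=0.3, size=n_samples)      # price index
hot_summer = rng.random(n_samples) < 0.2                            # 20% chance

# Each sampled future would be run through the simulation model, and the spread
# of outcomes would mirror the assumed input distributions. This presumes
# agreement on the distributions, which the preceding discussion suggests was
# absent in the New England debate.
print("mean demand growth:", round(demand_growth.mean(), 2))
print("share of futures with a hot summer:", hot_summer.mean())
```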
Models of complex systems have numerous feedback loops, wherein electricity demand is partially determined by its price even as the price is in part a function of demand, for example. Choosing which feedback loops to close within each scenario, and which to leave open, was an important decision for the NEP analysis team. In other words, which factors should be made endogenous and which should be left exogenous; where should they set their model's boundaries? Key factors influencing this decision were analytical resources and stakeholder perspective, as follows:

• Which feedback loops were they capable of modeling?
• Which feedback loops involved too many uncertainties to be modeled effectively?
• Which feedback relationships were significant enough to bother with?
• Which linkages involved evaluating intangible impacts that might be controversial?
• Which linkages were themselves controversial?
• When modeling scenarios that unfolded over time, did the audience want more to hear about technical strategies or decision rules?
• Which model boundaries contained the range of perspectives of the parties involved?
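The demand-price feedback mentioned above gives a concrete sense of what "closing" a loop means computationally: the endogenous quantities must be made mutually consistent within each scenario. The linear relationships and coefficients below are invented solely to illustrate that step.

```python
# Hypothetical linear relationships: demand falls as price rises, and price
# rises with demand. Closing the loop means finding a mutually consistent pair.
def demand(price_cents):                 # GWh/yr as a function of price
    return 100_000 - 3_000 * price_cents

def price(demand_gwh):                   # cents/kWh as a function of demand
    return 2.0 + 0.00006 * demand_gwh

# Loop left open: price is treated as an exogenous assumption.
exogenous_price = 8.0
open_loop_demand = demand(exogenous_price)

# Loop closed: iterate to a fixed point, making price and demand endogenous.
p = exogenous_price
for _ in range(50):
    p = price(demand(p))

print("open-loop demand (GWh):", open_loop_demand)
print("closed-loop price and demand:", round(p, 2), round(demand(p)))
```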
THE IMPLICATIONS OF THIS APPROACH

A large, symmetrical set of scenarios represented an experimental "sample of the future" that warranted detailed statistical analysis. While the analysis team could not develop confidence intervals relating the sample to what the actual future holds, they could develop sample statistics to describe the performance of options (and strategies) within a complex system, as well as the effects of uncertainty on that system. Thus they used a controlled-conditions experimental design philosophy to create a large number of distinct scenarios, but then applied the equalized-conditions philosophy to the symmetrical set of scenarios, their "sample of the future."

Figure 15.4 Range of Sulfur Dioxide Emissions with and without Repowering (1,080 Scenarios for Each Case)
Standard statistical techniques were fruitfully applied to this sample: exploratory data analysis methods were particularly helpful for understanding the performance of individual option sets, the impacts of specific uncertainties, and the significance of stakeholders' preferences. For example, the pair of box plots in Figure 15.4, in which a New England data set of 2,160 scenarios was divided into two cohorts—with and without repowering of existing power plants—showed that this option could make a dramatic difference in sulfur dioxide pollution emissions over a range of experimental conditions.
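The kind of cohort comparison behind Figure 15.4 is straightforward to reproduce on any scenario data set. In the sketch below the emission values are fabricated stand-ins for the simulated results, since only the mechanics are of interest here.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Stand-ins for simulated annual SO2 emissions (kilotons) in two scenario cohorts.
without_repowering = rng.normal(loc=400, scale=60, size=1080)
with_repowering = rng.normal(loc=250, scale=50, size=1080)

plt.boxplot([without_repowering, with_repowering])
plt.xticks([1, 2], ["without repowering", "with repowering"])
plt.ylabel("SO2 emissions (kilotons per year)")
plt.title("Range of emissions across the scenario sample")
plt.show()
```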
The programming challenge lay in integrating models of subsystems to allow rapid evaluations of large numbers of scenarios. The analytical challenge became one of developing an efficient, effective strategy for mining the data amassed during the multiattribute scenario simulations.

A question that was often asked by experienced negotiators when initially introduced to this approach was: "Why not just do joint model-building, and thus avoid having to run so many scenarios?" Of course, this approach did have elements of joint model-building, in the sense that many joint assumptions were made in building the modeling "engine" that produced the scenario results. But the presence of uncertainty mandated that a wide variety of cases be examined. Creative thinking, or brainstorming, likewise generated many options that deserved to be evaluated. Finally, either new information or turnover in the planning group often caused parties to revise their assumptions about the probabilities of future events.

Using scenario-based multiattribute tradeoff analysis, the NEP became a regional forum devoted to spurring the inventiveness of participants in the planning debate by: (1) exploring the behavior of the electric power system, and (2) understanding the participants' preferences. Formal negotiations continued in other forums, placing this project at a slight remove from actual decision making. Individual decision makers preferred to take the information gained back to their own domains and use it to make better independent decisions. While individual stakeholders continued to disagree about the relative importance of attributes such as cost and environmental impacts, they found common ground in a number of areas. The debate progressed to the point of defining multifaceted strategies instead of polarized single-option solutions. The region successfully overcame its planning paralysis, thanks in part to the NEP.

NOTES

1. The net-benefit decision rule is formally known as the Kaldor-Hicks potential Pareto improvement criterion. It is less stringent but also less fair than the "no losers test" (Pareto criterion) underlying voluntary transactions in an ideal marketplace. See Ed Mishan, Cost-Benefit Analysis (London: Allen & Unwin, 1982), pp. 162-163.
2. The equity theme is well explored in the facility siting literature. See J. Schofield, Cost-Benefit Analysis in Urban and Regional Planning (London: Allen & Unwin, 1987), pp. 2-23; D. Hagman and D. Misczynski, eds., Windfalls for Wipeouts (Chicago: American Society of Planning Officials, 1978); M. O'Hare, L. Bacow, and D. Sanderson, Facility Siting and Public Opposition (New York: Van Nostrand Reinhold, 1983), pp. 67-87.
3. Hagman and Misczynski, Windfalls for Wipeouts; O'Hare, Bacow, and Sanderson, Facility Siting, pp. 61-81.
4. Michael Elliott, "Pulling the Pieces Together: Amalgamation in Environmental Impact Assessment," EIA Review 2, 1 (1981): 11-37.
5. When aggregating a project's multiple impacts to create a succinct basis for making decisions, analysts must choose among alternative approaches. Analysts may (1) attempt
to develop a social welfare function representing society's aggregate best interest, and use that for optimization exercises in lieu of the company's objective function; (2) develop separate objective functions for each stakeholder group, and do iterative optimizations in the hope that they will converge; or (3) abandon mechanical optimization efforts and present the tradeoffs, in multiple attribute terms, directly to the stakeholders so that they may negotiate a preferred solution. For more on (1) see Ralph Keeney and Howard Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs (New York: John Wiley & Sons, 1976), pp. 5-30, 515-548. For more on (2) see Ralph Keeney, "Decision Analysis: An Overview," Operations Research 30, 5 (1982): 803-837. For more on (3) see Connie Ozawa and Lawrence Susskind, "Mediating Science-Intensive Policy Disputes," Journal of Policy Analysis and Management 5, 1 (1985): 23. Also see Howard Raiffa, The Art and Science of Negotiation (Cambridge, MA: Harvard University Press, 1982), pp. 285-287.
6. Schofield, Cost-Benefit Analysis, pp. 2-23.
7. R. Rosenthal, "Concepts, Theory, and Techniques: Principles of Multi-Objective Optimization," Decision Sciences 16 (1985): 134.
8. The analyst typically elicits the decision maker's preferences before the analysis, to allow the formulation of an objective function that collapses multiple and conflicting objectives (via weighting schemes, for example) into a single optimizable function. This function may include multiple dimensions such as cost, environmental impacts, and quality of service, valued according to the decision maker's preferences rather than the shadow-pricing efforts of the analyst. This is a useful approach when the design options are well understood, the decision maker emulates the preferences of his or her constituents, and enough consensus exists to warrant representing people's diverse preferences with a single social welfare function. Conversely, the analyst may generate a large "efficient" set of alternatives, and then later elicit the decision maker's preferences in choosing the "best" of these. Again, this depends on having already identified the entire "efficient" set, and having a decision maker with a strong public mandate, whose preference mapping is widely shared, and whose decision therefore will not be overturned. This technique has significant implicit communicative challenges. However, information has value in decision-making, and preferences may change over time as new information is received. Thus, for some projects, it may be desirable to elicit the decision maker's objectives in an interactive manner, by repeatedly gauging his or her tradeoff preferences as new data become available. An interactive approach allows analysts to identify new options, and lets the decision maker incorporate the viewpoints of other interested parties into the objective function, as the analysis reveals potential project impacts. See C. Hwang, S. Paidy, K. Yoon, and A. Masud, "Mathematical Programming with Multiple Objectives: A Tutorial," Computation and Operations Research 7 (1980): 6.
9. Sensitivity analysis becomes the primary means of considering the effects of uncertainty. The robustness or flexibility of an option in an uncertain future is thus limited to being a second-order rather than a primary decision-making criterion.
Multiobjective optimization approaches also have the disadvantage of being opaque methods that provide an optimal solution (or small set of superior solutions), without illustrating clearly to decision makers and the public what tradeoffs were made while winnowing out the inferior options.
10. Inferior alternatives may be weeded out using a general decision rule such as dominance (superior alternatives perform better across all attributes than inferior alternatives), or more refined rules. See Rosenthal, "Concepts, Theory, and Techniques."
11. To sort through the set of alternatives, the analyst must develop a formal measure of each alternative's performance along each attribute such as cost or environmental
impact. However, measurement does not mean boiling everything down to a common currency such as dollars. Instead it means quantifying to the degree that the data allow. Qualitative data can be nominal (placed in categories such as "yellow" or "blue") or ordinal (placed in a ranking such as "low" or "high"), whereas some quantitative data may be measured on an interval scale (that measures fixed intervals but has an arbitrary starting point, such as temperature in degrees Celsius) or a more versatile ratio scale (that measures fixed intervals from zero, such as the amount of money in your pocket). Nominal data are difficult to use in a multiattribute analysis, but ordinal, interval, and ratio data work well. Once quantified, each attribute then needs to be normalized, so that its score covers a standard range from lowest to highest case. Next it is necessary to develop a weighting function that indicates the relative importance of the various attributes. Only then can one aggregate scores across attributes. If information on the decision maker's preferences is available, the analyst may then attach weights to attributes prior to the sorting process. The sorting may proceed using additive weights across attributes, or hierarchical systems that focus first on one attribute, and then others. See Hwang, Paidy, Yoon, and Masud, "Mathematical Programming."
12. This method can address uncertainty by simulating the performance of options across a range of possible future circumstances, and then using "inter-future" sorts to identify options having a low variation in performance across that range of future circumstances. Such alternatives are "robust" (exhibiting stable performance) or "flexible" (exhibiting adaptability, which in turn promotes stable performance). The final sorting process can then include as additional attributes the alternatives' flexibility or robustness under uncertain conditions.
13. Multiattribute analysis thus initially filters out infeasible alternatives in an implicit rather than explicit manner. Multiattribute analysis is most useful when analysts have a good understanding of the universe of alternatives, and when the alternatives are large enough and distinct enough to prevent the creation of hybrids. Interactive uses of multiattribute analysis allow new options to be introduced into the decision set and allow decision makers to enjoy input from concerned parties as they learn more about project impacts. However, most of these techniques retain an opaqueness that may hinder acceptance of the results if the project politics are contentious.
14. Individual members of the decision-making group may sort through various options according to their own preferences. Preferences may change as additional information is provided. Further, participants may invent new options to include in the next iteration of the analysis. In later stages, graphical aids and simple comparisons can help the decision makers to winnow out options universally identified as inferior. This type of approach is open as opposed to opaque and abstruse. All of the participants would know how the favored option was chosen, and of the tradeoffs involved in that choice. Choices would be made directly by them, rather than by a mathematical programming technique that simulates their preferences. The decisions made are thus more likely to meet with public acceptance and approval (to the degree that all stakeholders were represented in the decision-making group).
See Carl Bespolka, "A Framework for Multiple Attribute Evaluation in Electric Utility Planning," Master's thesis in Operations Research (Massachusetts Institute of Technology, 1989).
15. Analysts characterize the tradeoffs using measures of both an attribute's absolute magnitude and its variability across alternatives. See Stephen R. Connors, "Integrated Resource Planning as a Tool for Technical Mediation: A Case Study in Electric Power Planning," Master's thesis in Technology and Policy (Massachusetts Institute of Technology, 1989).
16. The analysis can likewise address uncertainty, as above, by examining an option's robustness or flexibility across possible future circumstances. See Fred Schweppe and
Hyde Merrill, "Multiple Objective Tradeoff Analysis in Power System Planning," Proceedings of the 7th Power Systems Computation Conference (Lisbon, Portugal: August 1987). See also Clinton J. Andrews, "Evaluating Risk Management Strategies in Resource Planning," IEEE Transactions on Power Systems 10, 1 (1995): 420-426.
17. Explicit analysis of the tradeoffs between options could allow decision makers to develop compensation packages addressing the concerns of those who stand to lose something from project implementation. Equity concerns can thus become an integral part of the analysis and decision-making process. See O'Hare, Bacow, and Sanderson, Facility Siting, pp. 67-87.
18. Indeed, most of the sorting and tradeoff analysis tools beyond simple dominance comparisons have seen limited practical use. Tradeoff analysis may not be an appropriate technique when a significant level of public participation will not improve the efficiency, equity, stability, and wisdom of the planning decision. Small, popular projects with few spillovers may not benefit from the extensive amount of public involvement required for the tradeoff analysis approach to decision-making.
19. Early applications of scenario analysis in the electric power sector include: Al Destribats and J. Malley, "NEESPLAN II 1988: Using New Methodologies to Evaluate Uncertainty," presentation to Strategic Planning Services Committee meeting (New Orleans, LA: Edison Electric Institute, 1988); and Southern California Edison Company (SoCal Edison), Systems Planning and Research, "Planning for Uncertainty: A Case Study," Technological Forecasting and Social Change 33 (1988): 119-148.
20. Keeney and Raiffa, Decisions with Multiple Objectives, pp. 5-30; D. McAllister, Evaluation in Environmental Planning (Cambridge, MA: MIT Press, 1982), p. 140.
21. Clinton J. Andrews, Stephen R. Connors, Daniel Greenberg, Warren Schenler, Richard Tabors, David White, and Kristen Wulfsberg, "Assessing the Tradeoffs between Environment, Cost and Reliability: Developing a Coordinated Strategy to Ensure New England's Electricity Supply," Proceedings of the New England Environmental Exposition (Boston, MA: 1990); Bespolka, "A Framework for Multiple Attribute Evaluation"; Connors, "Integrated Resource Planning."
22. Keeney and Raiffa, Decisions with Multiple Objectives, pp. 6-7.
23. C. Luce, "An Energy Strategy for the 1980s," report (New York: Consolidated Edison Company, 1980).
24. Hyde Merrill, Fred Schweppe, and David White, "Energy Strategy Planning for Electric Utilities Part I, SMARTE Methodology," IEEE Transactions on Power Apparatus and Systems, vol. PAS-101, no. 2 (1982).
25. Electric Power Research Institute (EPRI), RISKMIN: An Approach to Risk Evaluation in Electric Resource Planning, Volume 1: Methodology, EPRI-EL-5851 (Palo Alto, CA: EPRI, 1988).
26. C. Olson and M. Picconi, Statistics for Business Decision Making (Glenview, IL: Scott, Foresman and Co., 1983), pp. 436-438.
27. The first well-executed electric utility sector application of Monte Carlo simulation was by the Northwest Power Planning Council (NPPC), Northwest Conservation and Electric Power Plan, Volume II (Portland, OR: NPPC, 1986), pp. 8-34-8-41.
Part V
The Practice of Joint Fact-Finding
16
Lessons Learned

Preceding chapters have explored the theory and practice of joint fact-finding in substantial detail. This chapter looks across the cases for applicable lessons, organized around the themes introduced at the beginning of the book: the analyst's technical and communicative problems, the complementary sources of legitimacy for analysis, and the impacts of positional conflict, uncertainty, and complexity. I also focus on the personal choices of analysts who voluntarily decide to strive for a sense of balance and to take on interpersonal responsibilities.
THE ANALYST'S TECHNICAL PROBLEM

Modeling reality inevitably involves simplification, and the analyst's central technical problem is to simplify appropriately. Disciplinary traditions offer much guidance in this regard, but they are inadequate as the sole basis for simplification when analysts serve decision makers.

The Office of Technology Assessment (OTA) solved this problem by devoting much effort to the scoping phase of each project. Analysts at OTA engaged in discussions about a study's focus and approach with their Congressional clients and an advisory panel of knowledgeable outsiders, and in workshops attended by a broad range of experts and professional stakeholders. In this way they ensured that they had not missed any crucial issue and that their framework for analyzing each problem was credible.

The better comparative risk projects also adopted a strategy of soliciting broad involvement in scoping discussions. The U.S. Environmental Protection Agency (USEPA) project, done entirely in-house for an audience of agency decision makers, received from those managers a list of environmental issues for ranking and a set of uniform assumptions to guide analysis. The Washington State project instead solicited scoping advice from a group of prominent residents, and analysts interacted frequently with that group as the work progressed. That
stakeholder group extended the outreach effort to the general public. The unsuccessful California project allowed the technical work groups much more leeway to establish work scopes and analytical assumptions, with uneven results and some analytical decisions that lacked broad acceptance. The Minnesota project limited its scope by design to ensure that the project was manageable, but also thereby trivialized the results to a certain extent.

The New England Project (NEP) solicited broad outside involvement at the earliest stages of the project's scoping discussions, and made continuing interactions a hallmark of the project. The analysis team therefore had the luxury of testing the credibility of alternative modeling approaches with a critical audience and adjusting the approach and assumptions as necessary. The scenario-based multiattribute tradeoff analysis framework further mitigated the simplification challenge by letting participants examine whether results were actually sensitive to controversial or uncertain assumptions. The primary limit on project scope was the analysts' limited range of technical knowledge. This derived in part from the project's modest budget, which was inadequate to field a large analysis team.

In sum, successful analysts asked decision makers, interested parties, and even the lay public for help in choosing the appropriate way to simplify reality during analysis. Interactive processes made asking easier, and empirically oriented methods made modeling more transparent.

THE ANALYST'S COMMUNICATIVE PROBLEMS

All of the contexts for analysis are communicative, but the nature of that challenge varies. Basic researchers share information with their peers and students; analysts working for a single boss talk among themselves and with their boss; analysts working with multiple decision makers find themselves in the middle of partisan debates; and analysts interacting with the lay public meet with suspicion, apathy, or incomprehension.

The OTA case (multiple decision makers, expert involvement in the analysis) demonstrated that only a dozen or so of the agency's six hundred bosses typically paid attention to any one of its products. Most policy-relevant information flowed through small, informal networks connecting analysts, Congressional staff, stakeholders, and the elected officials sitting on relevant committees. The large written reports produced by OTA fit poorly with the oral tradition of communication in Congress. Although OTA was careful to recruit influential experts and stakeholders to its advisory panels, when the time came to report the results of a study, the agency rarely took advantage of this influential network's points of access to decision makers. As a result, technically knowledgeable people used OTA's products more often than did the decision makers who paid for them. The broad review process used on OTA studies ensured that some level of communication among technical specialists from different disciplines took place.

The USEPA comparative risk project involved only expert participants, much like the OTA studies. Unlike OTA, USEPA did not attempt much cross-disciplinary communication, and instead created separate discipline-specific
work groups. Managers were left to synthesize findings across groups as best they could.

State-level comparative risk projects (single decision maker, mixed participation in the analysis) attempted more ambitious communications efforts involving professional stakeholders and the general public in addition to technical experts and agency decision makers. The successful Washington project placed major stakeholders and government officials in charge, and arranged for analysts to interact closely with them. The unsuccessful California project gave the technical specialists great autonomy and did not adequately encourage them to interact with interested parties, officials, the public, and each other. The Minnesota project ensured that all parties interacted, but constrained those interactions to the project's detriment by making them brief and highly scripted.

The NEP (multiple decision makers, mixed participation in the analysis) placed analysts, interested parties, and decision makers in face-to-face contact on a regular basis, so that all parties became engaged in the analytical details and review of results. Upon their approval of a body of analytical work, analysts were then directed to share the findings directly with the region's interested parties (both experts and members of the lay public). The NEP's analysts devoted a significant fraction of their time to communications questions, exploring how to present technical concepts intuitively and tailoring their findings to match the interests of distinct audiences.

Looking across cases, it is clear that the communicative responsibilities of the analysts changed—and increased—as the context opened up. Establishing a division of labor between analysts and technical communicators became less feasible, because the topics of communication covered not just results but also analytical inputs and procedures.

COMPLEMENTARY SOURCES OF LEGITIMACY

Of the evaluation criteria used by Clark and Majone, legitimacy provided the richest source of insights from the case studies. Scientific adequacy, considerations of value, and measures of effectiveness varied across cases, but legitimacy stood out as a problematic concept because it seemed orthogonal to the concerns of science and was at the same time so context-dependent.

Numinous or status-based legitimacy—authoritativeness—was the key feature of the OTA studies I evaluated, wherein elite analysts, guided by elite stakeholders, produced studies for elite clients. However, OTA made great efforts also to assure civil legitimacy for its products, by involving stakeholders to a greater extent than was typical in federal analytical shops. Enterprises such as the National Research Council and USEPA are now confronting the need for broad review processes to improve civil legitimacy, and are encountering difficulties. Many of their problems arise from having analysts who function poorly in a highly communicative context, or are unaware of its implications for their work.

The USEPA comparative risk project relied even more than the OTA studies on science-based legitimacy. It did not solicit any public or stakeholder input,
and relied instead on the wisdom of in-house experts and agency managers. The agency's Science Advisory Board, made up of prominent scientists, later reviewed and endorsed the work.

The state-level comparative risk projects, by contrast, developed many trappings of civil legitimacy: stakeholder advisory groups, steering committees made up of public officials, and energetic public participation. Their technical experts were less well known than those involved in federal projects, so their potential authoritativeness was lower. For the state-level projects, civil and status-based legitimacy seemed to have a largely complementary (rather than substitutive) relationship, because the open process gave less well-credentialed analysts the opportunity to demonstrate competence, earn public trust, and learn from others. These projects also demonstrated that civil legitimacy had important pluralistic and communitarian components. Public officials and professional stakeholders provided pluralistic legitimacy, but this did not extend to grassroots advocates. Direct public participation, where used, provided a strongly complementary communitarian source of legitimacy.

The complementary relationship between authoritativeness and civil legitimacy was also clearly visible in the NEP. The project organization reflected three sources of legitimacy: analysis team (authoritative), advisory group (pluralistic), and road show (communitarian). Removing any component would have killed the project. The quality of the information produced by the analysts was a strong function of the critical reviews performed by the other two groups. The highly charged political context made opaque, hard-to-understand modeling by experts untenable.

Considered broadly, it appears that only a nation's top intellectual stars may carry enough "juice" to be the sole source of legitimacy for a study, especially if they can argue that opening up the analytical process to stakeholder or public participation would weaken the analysts' autonomy. However, most analysts do not carry such clout, and for them, a recipe that adds civil sources of legitimacy will greatly strengthen the acceptance of their work. Further, as the British experience with "mad cow" disease suggests, even the illuminati may not command enough respect to legitimize a flawed study of a contentious issue.1

Simon's distinction between two components of rationality, the substantive (what were the findings?) and the procedural (how was the analysis conducted?), suggests an additional lesson. Some fields, especially in the social sciences, lack strong agreement on substantive facts, which gives procedural transparency more centrality. In each of the cases here, substantive knowledge from the natural sciences plays a significant role. Atmospheric physics and chemistry, toxicology, and electrical engineering each have widely accepted substantive bodies of knowledge. For these cases, there is only a limited potential for substituting legitimate procedures in place of substantive knowledge—and it is mostly for downstream topics such as "what should we DO about X?" Substantive rationality enjoys relatively more centrality in these cases than would be true in studies of, say, quality-of-life or organizational behavior. This strengthens my argument that civil legitimacy (procedural rationality, practical reason) has a reinforcing and complementary relationship with numinous legitimacy (substantive rationality, theoretical reason).
The lesson is that the field of inquiry will necessarily constrain the legitimation strategy an analyst chooses.
POSITIONAL CONFLICT

Throughout this book, I have argued that positional conflict hinders certain kinds of knowledge production. The cases illustrate different aspects of this theme and demonstrate possible coping strategies.

OTA minimized the effects of inevitable positional conflict in Congress by acquiring a bipartisan governance structure and avoiding policy recommendations. As an institution (a boundary organization) designed to operate in such an environment, OTA had no transformative ambitions and was content with an information provider's role. But analytical rationality thrives only when power relationships are stable. When a new party swept into power in the 104th Congress, all bets were off, and the fragile bipartisan shield protecting OTA collapsed.

Some of the comparative risk projects had more catalytic aspirations. Most notably, Washington's process built a political consensus and sparked significant legislative action. However, the Washington project achieved consensus by removing controversial topics such as logging and spotted owls from consideration. The California project failed to reach completion partly because it became ensnarled in the always-vicious partisanship of statewide elections.

The NEP did not get under way until policy paralysis became painful for New England during the summers of 1987 and 1988. Evidence of the tight capacity situation then became visible to the public, as emergency operating procedures such as voltage reductions and load curtailments were implemented dozens of times more often than in previous years. The region needed a crisis to precipitate the cooperative planning effort. This is likely to be true in general, and reflects the limited time resources available to those busy volunteers who end up participating in such efforts.2 This factor likely limits the potential for premeditated joint fact-finding. Of course, once the stakeholders understood the need for consensus on a long-term strategy, they supported this ad hoc project financially and politically. Individually, they also expressed disappointment—but no surprise—that the existing institutional arrangements had failed to bring this joint fact-finding effort about.

The crisis induced some regulatory reform,3 but it also produced calls for the creation of a permanent long-term regional planning capability. The governors of the six New England states were loath to give up sovereignty, but they had cooperative mechanisms in place for managing the common problems of their distinctive region. Rather than create a new planning agency, the New England Governors' Conference reinvigorated its relevant standing committees, demanded more useful analysis from the New England Power Pool (NEPOOL), and hoped those existing arrangements would be adequate. Among U.S. multistate electrical regions, only the Pacific Northwest (Idaho, Montana, Oregon, Washington) managed to create a regional planning enterprise, and that was because the federal Bonneville Power Administration dominated that region and subsidized the planning agency.4

The NEP harnessed public participation to evaluate the intangible impacts of planning choices (such as environmental degradation) in a way that contrasted with the standard practice in benefit-cost analysis. To illustrate, consider the experience of a set of technical sessions sponsored by the Massachusetts
Department of Public Utilities (DPU) during 1987-1991, whose goal was to put a price on pollution, or more formally, to develop official valuations of environmental "externalities" or impacts.5 During the technical sessions, participants were asked to estimate the value of avoiding increments of air pollution. Using secret ballots, participants recommended adders ranging from 0.1 percent to 100 percent of the current price of electricity.6 During the next year the parties presented evidence to the DPU justifying adders all across this range, but could not reach a consensus on any value. The DPU was left to weigh the evidence and recommend a course of action that was unavoidably controversial. The DPU took more than two years to rule on this matter, and then it found a need to revisit the ruling almost immediately.

By contrast, the multiattribute approach used in both the NEP and COM/Electric projects avoided this pitfall. All of the parties in the COM/Electric project agreed on the efficacy of two items—increased demand-side management and reserve margin—within the space of three meetings a month apart. Consensus on repowering (building new power plants in the footprints of old ones) built noticeably in the NEP advisory group between two questionnaires administered six months apart. In these projects the time was spent finding areas of agreement on practical choices instead of battling over areas of disagreement in fundamental values. The scarce resource of participation was carefully husbanded because analysts did not waste the nonanalysts' time, and the parties in New England implemented many of the agreed-upon projects.

Advisory group/analysis team interactions within a scenario-based multiattribute tradeoff analysis framework represented a useful approach for joint fact-finding. The arrangement spurred the stakeholders' inventiveness and improved the quality of the options being debated. It helped to shift the debate away from arguments over the choice among different options for the next single investment, toward discussion of attractive long-range, multioption strategies. This framework did not reduce the normative content of the analysis; it merely managed that content better. The normative steps were more clearly flagged, which allowed parties to buy into important parts of the analysis without agreeing on all planning assumptions.

The least valuable part of the NEP was the conflict analysis portion. Its analytical insights lagged rather than preceded the debate in the advisory group. The project showed that efforts to understand the detailed preferences of the stakeholders were not very fruitful while the process of inventing better technical options was still going on. Preference elicitation would presumably have become more valuable once everyone was convinced that the best possible "efficient set" of strategies had been identified. But that threshold was never reached in this project. The NEP clearly improved the quality of the debate and helped to develop a shared recognition of useful next steps, but it did not lead to a formal consensus among the parties on an official plan.
UNCERTAINTY

When the perception of uncertainty affects a public debate, its effects must be publicly explored. The experience of the NEP and its spinoffs supports this assertion. The OTA studies confirm that explicit discussion of uncertainty builds credibility in an elite context. The comparative risk projects illustrate that doing so is difficult when the lay public becomes involved. Various types of incertitude such as (stochastic) uncertainty and simple ignorance require different coping mechanisms.7

The NEP's process of exploring multiple futures for each strategy gave the analysis more credibility and value, but for unexpected reasons. I originally supposed that planning paralysis was a function of controversy over what different parties considered to be the most likely set of future events. However, during these experiments the parties did not want to look at just their personal "high probability" future; instead, they wanted to understand how well strategies performed across a wide range of possible future circumstances. The uncertainty analysis showed that strategies may perform consistently well in one dimension of performance, but not in another. The conflict then reduced to one concerning how different parties valued volatility in one dimension, say, costs, versus another, such as environmental impacts. Conflicting preferences among strategies were due more to different priorities about what dimensions of performance were important than to different expectations of the future. In the exploration of uncertainty, the essence of the debate became much clearer.

Evaluation across a variety of possible future circumstances also helped the participants to better understand different strategies' strengths and weaknesses, and thereby invent better strategies. This spurred the parties' inventiveness, and helped them choose among strategies having distinct types of vulnerability. In the exploration of uncertainty, the debate also became much more constructive.

The recent deregulation of the electricity and gas industries has permitted the creation of markets for risk management. This may reduce the role of regulators and political decision makers in such decisions, and increase the adaptive opportunities of utility companies and other players in the marketplace.8

COMPLEXITY

OTA's function was to translate technical, complex issues for the Congress. Its studies represented major efforts at synthesis, and it typically took OTA eighteen months to write one of these book-length documents, which involved dozens of outside experts and became standard references. Most of the comparative risk projects also represented major synthetic efforts and years of work. As in the OTA studies, a central task for analysts was packaging technical information for easy comprehension. In both cases, expert judgment, coupled with extensive review of the draft products, was the primary means of managing technical complexity.

The intrinsic complexity of the New England electric power system had kept public debate on planning options at a relatively simplistic level for years, as opposing parties put forward polarized single-option solutions. By framing the inquiries in terms of multioption strategies, participants were freed to explore the
mutually beneficial middle ground. Strong analytical support was a prerequisite for this kind of brainstorming. The participants also discovered the value of crafting carefully coordinated strategies, as opposed to simply horse-trading options into a lowest common denominator strategy. In these electric power debates, there were significant and counterintuitive interactions among the options making up the strategies. By identifying these interactions, analysts maintained efficiency (or at least a high level of technical knowledge) in a more equitable planning process.

The projects demonstrated that a lay audience could provide useful input into a technically complex planning process. Nonexpert public participants were able to offer guidance to the professional utility planners regarding important normative decisions. Responsibility for value judgments about framing assumptions and the tradeoffs among costs, environmental impacts, and reliability was fruitfully shared with members of the public.

In the electric utility arena, the companies were the best-equipped entities to provide analytical support to their open planning processes because they had technically trained staff and computing resources. However, the companies were not seen as credible information providers until they proved the even-handedness of their analysis. Initial steps in this process included beginning to model environmental attributes, sharing the responsibility for making assumptions by undertaking joint model-building, and becoming responsive to information requests from nonutility and nongovernment parties, especially environmentalists. While any of these steps could seem to conflict with the short-term strategic business considerations of these corporations, the improved stability in the long term could be well worth it.

Analysts sometimes had to choose between using existing, well-known models that only approximated the analytical needs of the project and developing new models from scratch. Industry-standard models had the advantages of being known and understood, and of being immediately available, often at a low cost. However, industry-standard models often were not designed for scenario generation, and did not calculate important attribute values. In the NEP, COM/Electric, and Tennessee Valley Authority (TVA) projects, analysts augmented existing models to allow planning under uncertainty, to measure multiple attributes, and to run many scenarios. Entities with longer time horizons and larger budgets, such as the Northwest Power Planning Council9 and NEPOOL,10 have constructed their own models from the ground up, allowing more useful features to be implemented. Ad hoc joint fact-finding efforts rarely have had that luxury.

BALANCE

The credibility of the analysts was crucial to the success of the studies evaluated in this book. For technically knowledgeable audiences, they needed to demonstrate both balance and technical competence. For lay audiences they needed to demonstrate even-handedness and an ability to communicate the normative aspects of technical questions.
OTA made balance its credo, and structured its analysis methods and reports accordingly. Comparative risk projects suffered when analysts (or entire technical work groups) appeared partisan. Comments were repeatedly made about the NEP—by New England environmentalists in particular—that the same analysis was less credible when performed by the utilities instead of the university. Yet even the university-based analysis teams had to earn the right to the neutral's role by overcoming a series of obstacles that are likely to arise in other planning debates.11

First, the stakeholders in each debate had to be convinced that a joint fact-finding effort would be worthwhile. The New England analysis team had to depend on internal resources for support during the first year of the effort. Only after useful results were cost-effectively produced did external donors back the project. In general, parties were unprepared to believe that single-option solutions were inefficient, or that complex systems could have counterintuitive behavior, or that all-gain solutions could be found, unless it was demonstrated for them. A "seed" project needed to precede the full effort.

Second, people had to be differentiated from institutions. In the New England case, the university's name (Massachusetts Institute of Technology) suggested technical competence but not neutrality. Indeed, the institution was regarded by some as being in industry's pocket, and its official spokesperson, the president, had advocated for one of the single-option solutions in New England's polarized debate—nuclear power. Members of the analysis team held quite different points of view from the president, but the project participants had to get acquainted with the individual analysts and their work before trust could be established. Both their competence and neutrality had to be demonstrated; only then could they gain personal credibility as neutrals. Institutions cannot advertise neutrality in public disputes; their personnel must earn it. However, institutions could play an important role by improving public access to these individuals (and their time), and by raising the seed money needed to launch dispute resolution efforts.

Third, the composition of the analysis team along several dimensions was important. Technical credibility depended in part on being able to demonstrate expertise in all of the relevant disciplines, for example, electrical and mechanical engineering, chemistry, regional economics, and applied statistics. Neutrality depended in part on being able to show a diversity of views within the analysis team. Analytical biases were more often self-correcting because different analysts held predilections toward each of conservation, combustion technology, solar energy, and nuclear energy.

ROLE OF THE ANALYST

The cases describe analysts playing nontraditional roles. Instead of working invisibly to simply "get the numbers right," they also had important interpersonal responsibilities. OTA project directors had to cajole diverse members of advisory groups and workshops into working constructively together. Comparative risk analysts spent as much effort on communication as on analysis. The NEP analysts developed a mediating rather than a consultative mindset, as the following paragraphs illustrate.
First, during the process of building the models and defining the analytical assumptions on the NEP, the analysts learned that it was always better to share than not to share these responsibilities. Joint assumption-making brought adversaries face-to-face and forced them to carefully define their differences. It helped the parties to better understand what was being modeled, which gave them more confidence in the final results. It also provided a quality control check on the analytical work. In the case where the analysis team modeled demand-side programs without consultation, they were not credible, and had to repeat the analysis under the scrutiny of a demand-side subcommittee. Sharing modeling responsibility took time, but was vital for establishing credibility.

Second, the process of narrowing the scope of the analysis required a consensus among the participants. In choosing which two thousand scenarios to model (out of the infinite number of possible scenarios), some options and uncertainties had to be postponed. However, the stakeholders would mistrust any analysis that ignored their primary concerns; thus the scoping effort required a consensus. The job of facilitating the prioritization process belonged to the analysts, and they relied on discussions, voting, and questionnaires to do this. An important aspect of their work thus consisted of leading a diverse group of strongly and openly committed decision makers to a consensus.

The expert participants in the NEP demanded a role in model-building and assumption-making. The procedural challenge there was one of attentive refereeing. The lay participants had to be coaxed into helping to make assumptions and value judgments. The analysts had to serve as translators of technical jargon in addition to managing the normative content of the work. Much decision-making power was thus concentrated in the analysis team. In the TVA case, analysts had to perform both of the above tasks, by refereeing the internal debate and by assisting those participating in the external process. In general, it was clear that analysts needed both process skills and technical skills in order to succeed in this type of work.

The generation of thousands of scenarios using assumptions agreed upon by consensus, accounting for uncertainty by covering ranges, and measuring multiple attributes to avoid valuation problems, provided too large a data set to be directly shared with the public. The analysis team had to boil the data down to a few defensible "stories" that could be simply explained. The inductive data analysis thus gained some guiding objectives: Could it be established that a widely believed phenomenon actually existed? Could they show convincingly that a counterintuitive result made sense? Mining the data for such stories, and then documenting them for public presentation, became vital steps in the process of exploring system behavior.

Some analysts now specialize in building complex computer models directly with groups during participatory workshops. This approach works well in managerial and administrative contexts where few data are available and models codify experience in a causal structure. Examples include modeling the flow of children needing foster care in New York State, homelessness in New York City, and Medicaid expenditures in Vermont.12 Those engaged in group model-building processes have identified distinct roles for analysts, including facilitator (the person in front of the room eliciting knowledge), modeler/reflector
Lessons Learned
175
(critically evaluates the model being built), process coach (observes and guides facilitator's interventions in group dynamics), recorder (tracks the group's ideas and decisions), and gatekeeper (represents the client's interests). 13 Such efforts work best when analysts with relevant technical knowledge perform all roles with the exception of process coach (for whom less technical knowledge is better). 14 These are indeed analysts with a difference. 15
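To give this kind of group-built causal model a concrete flavor, the minimal sketch below simulates a single stock-and-flow structure of the sort a facilitated workshop might codify. The setting loosely echoes the foster-care example mentioned above, but every name and number in it is a hypothetical placeholder of my own, not data or structure from the New York study cited here.

```python
# Minimal stock-and-flow sketch of a group-built causal model.
# All parameter values are hypothetical placeholders that a workshop
# group would replace with its own estimates.

def simulate_caseload(initial_caseload=1000.0,
                      entry_rate=250.0,      # children entering care per year (assumed)
                      avg_stay_years=2.5,    # average time in care (assumed)
                      years=10):
    """Simulate a caseload stock with a constant inflow and a
    first-order outflow (stock divided by average residence time)."""
    caseload = initial_caseload
    trajectory = [caseload]
    for _ in range(years):
        outflow = caseload / avg_stay_years
        caseload = caseload + entry_rate - outflow
        trajectory.append(caseload)
    return trajectory

if __name__ == "__main__":
    for year, stock in enumerate(simulate_caseload()):
        print(f"year {year:2d}: caseload = {stock:7.1f}")
```

Even a toy structure like this gives a group a shared object to argue about: participants can contest the entry rate or the average stay and immediately see the consequences, which is much of the conversational value such models provide.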
NOTES

1. Helen Gavaghan, "UK Mad Cow Disease: Report Flags Hazards of Risk Assessment," Science 290, 5493 (November 3, 2000): 911-913; and Laura Manuelidis, "Penny Wise, Pound Foolish—A Retrospective," Science 290, 5500 (December 22, 2000): 2257.
2. Servon cites external threats and exclusion from resources as the key reasons for organizing civic groups, and notes that those most in need of resources often have the least leisure time to donate to civic causes. Lisa Servon, "The Intersection of Social Capital and Identity: Thoughts on Closure, Participation, and Access to Resources," paper presented at a conference on Civic Participation and Civil Society (Bellagio, Italy, April 6-10, 1999), p. 6.
3. See, for example, Massachusetts Department of Public Utilities, Revised Order 86-36-G/89-239 (Boston, MA, 1989), pp. 1-100.
4. Richard H. Watson, "Northwest Power Planning Council Case Study," in Clinton J. Andrews, ed., Regulating Regional Power Systems (Westport, CT: Quorum Books, 1995), pp. 187-206.
5. Massachusetts Department of Public Utilities, Order 86-36-F (Boston, MA, 1988), pp. 1-87; Revised Order 86-36-G/89-239 (Boston, MA, 1989), pp. 1-100.
6. Temple, Barker & Sloane (TBS), "Externalities Workshops," workshop materials prepared for the Massachusetts Department of Public Utilities (Lexington, MA: TBS, 1989).
7. Andrew Stirling, "On Science and Precaution in the Management of Technological Risk," report prepared for the European Commission Forward Studies Unit (Brighton, UK: University of Sussex, Science Policy Research Unit, May 1999), downloaded on July 17, 2001 from http://www.sussex.ac.uk/Units/gec/gecko/refs.htm.
8. Deregulation is a structural rather than procedural response to uncertainty, which nonetheless seems likely to improve the transparency of risk management and uncertainty analysis, because these factors will increasingly be priced in competitive markets.
9. Northwest Power Planning Council (NPPC), Northwest Conservation and Electric Power Plan, Volume II (Portland, OR: NPPC, 1986), pp. 8-34 to 8-41.
10. NEPLAN, NEPOOL Electricity Price Forecast for New England (West Springfield, MA: NEPLAN, 1989).
11. A few are mentioned here; for an interesting discussion of others see Jennifer Nash, "The University as Mediator: A New Model for Service," Master's thesis, Department of Urban Studies and Planning (Massachusetts Institute of Technology, 1990).
12. For an overview of group model building activities see the special issue on that subject edited by George P. Richardson in System Dynamics Review (Summer 1997).
13. George P. Richardson and David F. Anderson, "Teamwork in Group Model Building," System Dynamics Review 11, 2 (Summer 1995).
14. Part of the job of the process coach is to prevent analysts from lapsing into jargon, and so a coach without relevant analytical training is actually preferable. Richardson and Anderson, "Teamwork," p. 2.
15. "Engineers with a difference" was the slogan used by the Technology and Policy Program at MIT during the time of the NEP to explain the rationale for cross-training technical experts to serve a bridging role.
17 Elements of Successful Joint Fact-Finding

This book has progressed from an intuition that communicative aspects of context should affect analysis, to a conceptual argument explaining why these aspects of context ought to affect analysis, to a set of case studies demonstrating how they do affect analysis. This concluding chapter draws prescriptive lessons for the future. When analysts serve multiple decision makers or allow broad public participation in analytical work, several elements affect their success. Key elements of the joint fact-finding approach to analysis that appear in the OTA, comparative risk, and NEP cases include the following items.

SUPPORT FROM DECISION MAKERS

Technical analysts work at the pleasure of decision makers, whether they do so in an advocacy or joint fact-finding context. The case studies have provided stark reminders of the asymmetrical relationship between knowledge and power. Knowledge is a weak form of power, and analysts may simply be brushed aside during partisan jousting. The force of the better argument has a chance to carry the day only when those with power allow argumentation. That typically happens during periods of political stability or when warring factions have fought to a standstill. Analysts in an advocacy setting are merely tools to be used or dropped according to the dictates of strategy, and analysts in a joint fact-finding setting should expect to fare no better. OTA carefully developed a bipartisan governance system that protected it for two decades, but that was during a period of relative stability in Congress. OTA failed to gain the support of the new leadership in 1994, and was out of business a year later. Of course, partisan Democratic Congressional staff members lost their jobs even sooner, as soon as their elected bosses left office.
The USEPA and Washington State comparative risk projects had high-level political support and a stable political context, which ensured adequate resources and an attentive audience. The California project lacked strong gubernatorial support and had an unstable political context, with distressing results. The Minnesota project had top-level acquiescence rather than enthusiastic support; as a result it became little more than a footnote. In New England, the energy analysts slowly earned the trust and attention of decision makers after partisans had fought to an even draw. The decision makers' best alternative to joint fact-finding was sufficiently unattractive that they supported the NEP for quite some time. Analytical effort of any type—advocacy or joint fact-finding—is wasted if decision makers are not supportive. Analysts should therefore seek permission from interested parties to perform joint fact-finding activities.
A MARRIAGE OF PROCESS AND ANALYSIS

Successful communication is context dependent, so analysis is likely to need to adapt to the decision-making process it serves. For example, if a decision-making process is designed to be participatory, so that stakeholders interact to scope the problem, search for solutions, evaluate them, and negotiate a choice, then the stages of analysis will match the stages of the interaction.

OTA worked hard to marry analysis to process. Indeed, OTA was known more for having a distinctive study process than for endorsing particular analytical methods. It brought outsiders in early to help scope the analytical effort, and brought them back later to review the findings.

A marriage of process and analysis was also a key feature of the best comparative risk projects. Successful comparative risk projects such as Washington State's carefully structured the technical work around the needs of the broader process. Analysts received scoping guidance from the Public Advisory Committee, prepared issue papers according to guidelines established by the committee, and interacted face-to-face to explain technical details. Less successful projects such as California's had fewer interactions between the technical experts and their overseers. In California, this resulted in uneven technical work, inadequate integration of the project's many pieces, and an unclear policy message.

In the planning arena, a marriage of analysis and process was integral to the success of the New England Project and its spin-offs. In those projects, analysis matched process in a self-conscious way. Analysts elicited potential participants' views about what the role of analysis should be, and then developed methods to match those specifications. The careful linking of process and analysis allowed broad participation in the analytical work, and permitted careful management of the normative content of analysis.

Implicit in the marriage of analysis and process has been a consideration of the institutional context of analytical work. The OTA case portrayed Congress as an institution that placed important constraints on OTA's procedural and analytical choices. Congress forced its analysts to avoid partisanship and direct recommendations, and instead simply to characterize options as fully as possible.
Institutional constraints also influenced comparative risk projects, disrupting their timing, affecting the resources made available, and determining whether recommendations would be implemented. The electricity-planning cases showed what happened when institutional context varied: an approach that worked well in a context of decentralized regional decision-making lost legitimacy (but not efficacy) when applied within a utility company.

SHARED INFORMATION

Shared decisions most likely will need to build upon a shared information base, regardless of the decision makers' underlying norms. Strict utilitarians (who value usefulness above all) will need to share information before they can collude in escaping a prisoners' dilemma to find a win-win solution. Consensus seekers will want to share information out of a sense of personal obligation and mutual respect. Rather than being viewed as a weapon or a strategic asset, information can instead be viewed as a tool for building credibility. The strategic value of most classified or proprietary information is overrated, and in any case it is a short-lived asset. Sharing that information relatively promptly often does no harm and may engender much good will.

OTA worked hard to share information. Of course, as a synthesizer of existing technical information, OTA rarely worried about the strategic use of data. However, it generally sought to share its findings widely, and fought other agencies' attempts to keep findings secret, as when, for example, the Pentagon demanded (successfully) that OTA's influential "Star Wars" report be classified. Even then, OTA published a slightly sanitized, unclassified version of the document.

All comparative risk projects also have shared information as a matter of course. They typically have had a public education goal as well as a priority-setting goal. One of the functions of public involvement has been to improve the credibility of information generated by the project. Comparative risk practitioners clearly have accepted Ulrich's claim that rationality claims are best established "dialogically," not "monologically."1

The electricity-planning projects provided a sterner test of the information-sharing argument. They all required that firms release proprietary data to analysts and other participants. For example, to model the operation of an electric power system, analysts needed sensitive information about power plant efficiencies, fuel prices, and contract terms. In every case, analysts and core participants were granted access to this information; however, in every case these parties had to sign confidentiality agreements with the utility companies. Ironically, in no case did the confidential data reveal any surprises: the numbers nearly always matched industry-standard values. Yet this tension between communicative and strategic uses of information persisted.

It is clear that strategic concerns over information-sharing are not easily allayed, and therefore they require special attention from analysts. Analysts perceived as neutrals or third parties naturally have the best chance of gaining access to certain information. Other approaches can also be employed to encourage information-sharing. Many firms rely on a trusted third party—a trade association—to aggregate individuals' data for public release. Others release normalized information that shows relative changes but not absolute quantities. This approach is popular in environmental management reports issued by firms. Often such information is adequate for joint fact-finding purposes. Today, many firms' legal departments appear incapable of distinguishing between proprietary information that has strategic value, such as the details of an unpatented technological innovation, and information that does not, such as the overall efficiency of a production process.2 Governmental classification powers have also been overused in the past, for example to prevent embarrassment rather than protect national security, although this problem has eased in recent years.3

INDUCTIVE REASONING

Society's division of labor creates boundaries that hinder successful communication and problem-solving. People from different disciplines or backgrounds may bring different mental models of the problem to the table. Therefore, a useful analysis is likely to avoid heroic assumptions, make experimentation easy, and allow people to converge on a shared vision of the truth. An inductive (data-driven, bottom-up) approach may make this easier, whereas a deductive (theory-driven, top-down) approach must first overcome the clash of competing mental models.4

OTA often communicated its findings in an inductive manner. The agency's reports typically included a great deal of empirical data. Where conceptual disputes existed, the reports commented on the strengths and weaknesses of alternative theories. The agency made its reports useful to parties having different mental models of how the world worked.

Comparative risk projects, by contrast, less often adopted an inductive approach. Inadequate data on the full range of environmental risks forced every comparative risk project to turn to theory for help. For example, analysts typically evaluated single indicator chemicals and then extrapolated their findings to thousands of similar toxic chemicals. They also relied frequently on informed judgment rather than explicit empirical analysis when characterizing risks. Analysts were forced into this position by the combination of significant data gaps and a desire for comprehensive coverage of environmental risks.

The electric power-planning projects relied heavily on an inductive approach that distinguished them from traditional power-planning analyses. Where the sector had previously relied on methods from microeconomics, the New England Project and its spin-offs substituted methods based in engineering. The systems characterized in the simulation models were essentially physical systems whose equations had been empirically tested and widely accepted. Most of the normative components of the problem (value judgments), and many of the social phenomena that lacked broadly accepted analytical representations, were treated exogenously in scenarios. Participants then explored the results of the many simulations in an inductive or empirical manner, using decision analysis, graphical analysis, and exploratory data analysis.
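As a rough illustration of this inductive style, the sketch below enumerates scenarios from agreed-upon ranges of assumptions, records several attributes for each, and then applies a simple, nonnormative dominance screen before any value weights enter the picture. The strategy names, assumption ranges, and cost and emissions formulas are invented for illustration only; they do not reproduce the New England Project's models or data.

```python
# Hypothetical sketch: enumerate scenarios over agreed assumption ranges,
# record multiple attributes, then screen out dominated strategies without
# applying value weights. All numbers are illustrative placeholders.
from itertools import product

fuel_prices = [2.0, 3.5, 5.0]          # $/MMBtu: agreed low, mid, high (assumed)
demand_growth = [0.005, 0.015, 0.025]  # annual growth rates (assumed)
strategies = {
    "repower_old_plants": (1.10, 0.60),  # (cost factor, SO2 factor) - invented
    "build_new_gas": (1.25, 0.45),
    "demand_side_mgmt": (1.05, 0.80),
    "import_power": (1.15, 0.85),
}

def evaluate(strategy, fuel, growth):
    """Toy 'simulation' returning two attributes for one scenario."""
    cost_f, so2_f = strategies[strategy]
    cost = 100 * cost_f * (1 + 10 * growth) * (fuel / 3.5)  # cost index
    so2 = 500 * so2_f * (1 + 10 * growth)                   # emissions index
    return {"strategy": strategy, "fuel": fuel, "growth": growth,
            "cost": round(cost, 1), "so2": round(so2, 1)}

scenarios = [evaluate(s, f, g)
             for s, f, g in product(strategies, fuel_prices, demand_growth)]

# Summarize each strategy by its worst case on each attribute (one of many
# possible nonnormative summaries), then drop strategies that are at least
# as bad on every attribute as some alternative.
summary = {s: (max(r["cost"] for r in scenarios if r["strategy"] == s),
               max(r["so2"] for r in scenarios if r["strategy"] == s))
           for s in strategies}

nondominated = [s for s, (c, e) in summary.items()
                if not any(c2 <= c and e2 <= e and (c2, e2) != (c, e)
                           for c2, e2 in summary.values())]

print("Worst-case (cost, SO2) by strategy:", summary)
print("Strategies surviving the dominance screen:", nondominated)
```

The point of such a screen is that it postpones value judgments: participants who weight cost and emissions very differently can still agree on which options no one should prefer, and the remaining tradeoffs can then be examined graphically or with more explicitly normative techniques.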
While cheaper computing power made the New England approach increasingly feasible, it was not without cost. The task of generating and then interpreting thousands of scenarios placed new responsibilities on analysts. They now had to become adept at "meta" modeling, that is, constructing modeling systems and not just models. They also had to become talented and even-handed story-tellers, as they reduced scenario data for public consumption.

A call for an inductive approach may overstate my intentions. Pure empiricism can quickly lead an analyst astray, because there are endless ways to interpret any datum. There are also many ways to characterize or partition what count as data in the first place. The activity needs to be situated within the overall "wheel of scientific discovery" that cycles through theories, hypotheses, observations, and empirical generalizations, that is, from deduction to induction and back.5 My point here is that analysts operating in a communicative context are likely to want to rely relatively more on inductive than on deductive reasoning. A corollary point is that analysts may accomplish more by simply publishing raw data on the web to ensure broad access than by delivering elaborate interpretive analyses to specific clients.

ACTIVELY MANAGED NORMATIVE CONTENT

Decision-making is a normative act that incorporates values, and analysis inevitably includes implicit or explicit value judgments too. Analysts working in a communicative context are likely to serve decision makers with divergent value systems. This requires analysts to recognize and actively manage the normative content of their work, lest they alienate a part of their audience. Illustrative management strategies include postponing value judgments and presenting tradeoffs rather than optima.

OTA reports were distinctive because they characterized policy options rather than making policy recommendations. OTA consciously chose to minimize normative content in its work. Instead, OTA strove to make its work useful to decision makers who represented diverse perspectives and who often did not share the same values. OTA accomplished this task by discussing in substantial detail the tradeoffs among policy alternatives.

Comparative risk projects actively managed their normative content by dividing the labor. Technical work groups were in charge of developing facts, and advisory committees were responsible for making value decisions. Public participation provided a reality check for both activities. However, since value decisions permeated the technical work, much interaction was needed to scope the project, specify evaluation criteria, interpret incommensurate data, and rank the risks. Successful projects saw a great deal of interaction between technical work groups and advisory committees; unsuccessful ones seemed to have little interaction.

The electric power planning projects actively managed normative content using both procedural and methodological innovations. Analysis teams were nominally in charge of developing facts, advisory groups were nominally responsible for value decisions, and frequent interactions took place between these groups. The methods used in these projects relied heavily on tradeoff analysis. The analysts explicitly chose to avoid the normative baggage of aggregative evaluation techniques such as benefit-cost analysis and opted instead for multicriteria analysis. Instead of using optimization techniques pervasively, the analysts sometimes identified superior planning choices using a series of sorting techniques with slowly increasing normative content.

These cases confirm that normative content is never eliminated from the processes of analysis and decision-making, yet it can be managed with good effect. Both procedural and methodological tools appear to be helpful in this regard, but most important is the conscious realization at the beginning of a project that active management of value judgments may be necessary.

CROSS-DISCIPLINARY REVIEW

Disciplinary specialists naturally prefer to apply their own tools to the problem at hand, even though there may be more appropriate—or complementary—tools available elsewhere. Good analysis is likely to make obvious its underlying, paradigmatic "framing" assumptions, and to specify the limits of analytical work. Remedying disciplinary overclaims during the analysis protects professional credibility more effectively than having overclaims exposed in public debate afterward. Analysts working in a communicative context should be especially willing to get help from experts outside their home discipline.

OTA exemplified this, and it was perhaps the agency's greatest strength. It was widely known for its far-reaching review processes. It typically invited comments from elite players representing a range of disciplinary perspectives and a range of stakeholder perspectives.

Likewise, every comparative risk project has asked experts from different disciplines—human health, ecology, and socioeconomic analysis—to evaluate a common set of environmental issues. Further, several have involved professional stakeholders in the analysis, thereby benefiting from the application of their practical expertise to these environmental problems. In the more successful projects, this broad involvement extended from the early scoping questions to review of final documents. In the weaker projects, these disciplinary perspectives were never integrated.

The power-planning projects incorporated wide review into the very structure of their analytical work. Analysis teams were multidisciplinary, and advisory groups, internal and external, typically contained experts from various fields. By opening up modeling assumptions for inspection, and seeking wide involvement in both the scoping and review stages, these projects enjoyed much cross-disciplinary review.

Keyfitz argues that disciplinary overclaims pose one of the greatest threats to analytical credibility.6 I suggested earlier that we need to become good at distinguishing heroic from foolhardy analysis. Broad, cross-disciplinary review helps us achieve that goal, making it an essential part of this alternative approach to analysis.
BROAD ANALYTICAL SCOPE

While controversy often results from parties having divergent interests, it can also result from having defined a problem too narrowly. Shared decisions are likely to involve more possible actions and uncertainties than unilateral decisions are. This is both a problem and an opportunity. Scoping the analysis may become more difficult, but the universe of possible solutions may expand. By reframing a problem to include more of the relevant factors, analysts may find broadly acceptable solutions more easily. In a context of shared decision-making, an analyst may be able to spur creativity by exploring many options, uncertainties, and criteria early in the assessment process.

In most OTA studies analysts looked broadly at questions, and addressed both their technical and socio-economic aspects. Where breadth was infeasible due to resource or page limitations, analysts typically discussed what was not included. In some cases, OTA would perform a linked series of studies to more fully address a major topic.

The comparative risk approach casts a broad net by definition. It is unlike technical risk assessment, which evaluates a single problem in great depth. Instead, it compares across problems. In most projects analysts consider at least the range of problems the sponsoring agency is mandated to address, and some have looked even more broadly. However, the elimination of logging issues from the otherwise successful Washington State comparative risk project reminds us that breadth does not guarantee comprehensiveness. There is also an apparent tradeoff between topical breadth and empirical depth.

The power-planning projects were also distinguished for their breadth. They were part of the movement toward integrated resource planning that developed in the late 1980s and early 1990s. This paradigm sought to include both supply-side options (such as power plants) and demand-side options (such as energy-efficient light bulbs) on a level analytical playing field. These projects went further, however, by also seeking to integrate multiple decision-making criteria such as dollar costs, environmental impacts, and reliability of supply, and by integrating the multiple decision-making perspectives brought by various stakeholders.

Breadth is a relative term, of course, and a broadly scoped electricity planning process may be narrow relative to a broadly scoped social planning process. The point here is really that the narrowness of focus that is so valuable for research productivity is inappropriate in an applied, public context. Given the inevitable tradeoff between breadth and depth, joint fact-finding analysis should consciously and capably tilt toward breadth. Managing that breadth effectively becomes a central analytical task.

TRANSLATED STORIES

Since communication is difficult, it is likely to be recognized as a key part of the analytical process, and not merely as an afterthought. When modeling complex phenomena, analysts may need to translate what they have done into lay language. Good translations will fairly represent the original, will be persuasive, and will be well documented—axes will be labeled. Compelling stories will employ vivid, but appropriate, metaphors, analogies, and synonyms that give the audience many points of access.

OTA failed in a key way with storytelling. OTA reports were well written and accessible documents. As such, they were widely circulated in expert communities outside government. However, they rarely connected with the lay public. More important, they failed to fully accommodate the oral tradition of the U.S. Congress. While OTA project directors would provide oral testimony on reports, they had few interim products that they could share with legislators and their staffs, and they had few informal mechanisms in place for sharing information. Weakness in the area of good storytelling has been identified by critics as the agency's major flaw.

Risk communication lies at the heart of the comparative risk paradigm. Every comparative risk project has made an effort to translate its findings into lay English, and to encourage public debate about its findings. The more successful projects have devoted resources to communications before, during, and after the technical work, thereby demystifying the process and increasing the legitimacy of the outcomes.

The power planning projects were forced into a storytelling mode by the choice of an inductive approach to analysis. Their analysts gained a responsibility to mine the output scenario data for stories to tell other participants in their interactive planning processes. One such story was that repowering old power plants had substantial and consistent environmental benefits (see Figure 15.4). They soon found that story-building was a valuable scientific activity: it helped analysts to detect and correct errors, it suggested new strategies for supplying electric services, it identified important vulnerabilities of the existing power system, and it sparked new modeling efforts to rectify the weaknesses of existing analytical approaches. It also had great communicative value, of course, in ensuring that results were broadly shared and rapidly disseminated.

Solow claimed that the executive's expert needs to be a good storyteller, and our cases confirm that this skill only increases in importance as the analytical context broadens to include more actors. Yet how many economics, engineering, biology, planning, or public policy programs include courses on storytelling, or readings on how to select appropriate analogies? Only a few fields give storytelling the emphasis it deserves.7

These elements of an alternative approach to analysis find justification in the conceptual chapters at the beginning of the book; however, they are drawn from the case studies. Thus, they are not laws or even propositions; they are elements that seem to work: heuristics.

CHOOSING TO BE A JOINT FACT-FINDER

As computing becomes cheaper and more user-friendly, the approach to analysis outlined here will become increasingly feasible. Information dissemination is now quick, cheap, and easy on the web. Simulation modeling is already becoming a packaged desktop tool, so that for some types of decisions, well-designed models and locally relevant data are already accessible.
Spreadsheets and object-oriented programming languages are further democratizing analysis. The graphical representation of quantitative data has become easy, and charting has become an effective means of communication. Yet unless analysts make efforts to understand their context, become attuned to procedural and institutional issues, and develop skills for working with nonanalysts, technical factors will not matter. Analysts will still be ignored, and they will continue to produce work that is late, off target, and expensive. Some analysts must choose to change the way they operate.

Who should adopt this approach? Not everyone is cut out for the job, and not every job supports this approach. Some of us prefer partisanship and advocacy while others seek cooperation and balance. Both types can contribute. Some analytical jobs require a partisan approach, and society benefits from spirited debate. Without the fierce environmentalism of advocates like Rachel Carson, for example, environmental laws might not have come into being. Yet without the mediation efforts of the NEP analysts, New England's lights might have gone out. I find myself acting as a partisan on some issues and cooperating on others. I suspect that many of us share this mix of behaviors.

For this group, the proper question then becomes: When should we choose the joint fact-finding approach? I am not bashful in offering a utilitarian answer: when it works better than instrumental or strategic analysis, as it does quite often. Specifically, I suggest that it works better when there is genuine controversy, technical complexity, massive incertitude, and either political stability or paralysis. Making this choice is difficult because it requires the analyst to step outside her current situation and critically reflect on it. Thus, I ask a great deal of analysts when I urge them to adopt the approach described here when their context requires it, and to be willing to suggest it when others fail to do so.

CONCLUSIONS

This book has relied on case studies to sketch the outlines of the joint fact-finding approach to analysis. While the cases have clarified the nature of this approach, the limits of its applicability have not been defined. The cases have provided some of the reasons why such an approach has sometimes been adopted, and they have characterized some of its elements. Future research could develop a more exhaustive catalog, further evaluate this alternative approach, and, if warranted by evidence of success, precisely codify emerging best practices. Even better would be to experiment explicitly with its various elements, and thereby determine each one's range of applicability. The goal should be less to establish a distinct paradigm for doing analysis than to give analysts additional degrees of freedom, new tools, and a helpful sense of perspective.

Our ways of making technical decisions in democratic societies need improvement. The affordable margin for error has decreased as the mass of humanity, its power to alter the physical environment, and its material expectations each have grown faster than the carrying capacity of the world. The voices of many parties need to be heard in the decision processes that affect
those parties, yet valued technical knowledge should not be supplanted. Participatory decision-making is desirable, but crude horse-trading is not. By exploring the modern practice of joint fact-finding, this book has shown that we can recapture many of the benefits of the ancients' way of resolving arguments by means of a shared investigation of the facts. Along the way, it has identified several issues affecting the overall relationship between analysis and decision-making. The cases have shown that analysts are irrevocably bound to their communicative contexts, and that analysts need to work differently when serving multiple decision makers or sharing the analytical work with others. Joint fact-finding deserves more attention from both researchers and practitioners.
NOTES

1. Werner Ulrich, "Systems Thinking, Systems Practice, and Practical Philosophy: A Program of Research," Systems Practice 1, 2: 137-163.
2. Eric von Hippel, The Sources of Innovation (New York: Oxford University Press, 1988).
3. Frank von Hippel, Citizen Scientist (New York: Simon & Schuster, 1991).
4. The clear exception is in managerial applications where data are lacking and time is of the essence. Then approaches such as system dynamics that rely on causal modeling may work best, provided the model is built using a legitimate process. See J.A.M. Vennix, "Mental Models and Computer Models: The Design and Evaluation of a Computer-Based Learning Environment for Policy Making," Ph.D. diss. (Catholic University of Nijmegen, The Netherlands, 1991).
5. Walter Wallace, The Logic of Science in Sociology (New York: Aldine de Gruyter, 1971). Cited in Earl Babbie, The Practice of Social Research, 8th ed. (Belmont, CA: Wadsworth, 1998), p. 59.
6. Nathan Keyfitz, "Contradictions between Disciplines and Their Influence on Public Policy," Federation of American Scientists Public Interest Report 48, 3 (May/June 1995): 1-10.
7. For an environmental science example, see M. H. Glantz, "The Use of Analogies in Forecasting Ecological and Societal Responses to Global Warming," Environment 33, 5 (June 1991): 10-33. For a public policy example, see Emery Roe, Narrative Policy Analysis: Theory and Practice (Durham, NC: Duke University Press, 1994), pp. 108-125 (ch. 6). Planning has perhaps embraced this idea most fully; see, for example, James Throgmorton, "Planning as a Rhetorical Activity: Survey Research as a Trope in Arguments about Electric Power Planning in Chicago," Journal of the American Planning Association 59 (1993): 334-346.
Select Bibliography

Andrews, Clinton J. "Evaluating Risk Management Strategies in Resource Planning." IEEE Transactions on Power Systems 10, 1 (1995): 420-426.
———. "Giving Expert Advice." IEEE Technology & Society Magazine 17, 2 (Summer 1998): 5-6.
———. "Improving the Analytics of Open Planning Processes." Ph.D. diss., Department of Urban Studies and Planning, Massachusetts Institute of Technology, 1990.
———. "Policies to Encourage Clean Technology." In Robert H. Socolow, Clinton J. Andrews, Frans Berkhout, and Valerie M. Thomas, eds., Industrial Ecology and Global Change. Cambridge: Cambridge University Press, 1994. Pp. 405-422.
———. "Sorting Out a Consensus: Analysis in Support of Multi-party Decisions." Environment and Planning B: Planning and Design 19, 2 (Spring 1992): 189-204.
———, ed. Regulating Regional Power Systems. Westport, CT: Quorum Books, 1995. Pp. 3-26.
———, ed. Technical Expertise and Public Decisions: Proceedings of the 1996 International Symposium on Technology and Society. Piscataway, NJ: Institute of Electrical and Electronics Engineers, June 1996.
Andrews, Clinton J., and Stephen Connors. "Existing Capacity—The Key to Reducing Emissions." Energy Systems and Policy 15 (1992): 211-235.
Andrews, Richard L. N. "Report on Reports: Toward the 21st Century: Planning for the Protection of California's Environment." Environment 37, 4 (May 1995): 25-28.
Ausubel, Jesse H. "The Organizational Ecology of Science Advice in America." European Review 1, 3 (1993): 249-261.
Babbie, Earl. The Practice of Social Research. 8th ed. Belmont, CA: Wadsworth Publishing Co., 1998. Pp. 333-355.
Bacow, Lawrence, and Michael Wheeler. Environmental Dispute Resolution. New York: Plenum Press, 1984. Pp. 158-184.
Bardach, Eugene. A Practical Guide to Policy Analysis: The Eightfold Path to More Effective Problem Solving. New York: Chatham House, 2000.
Bespolka, Carl. "A Framework for Multiple Attribute Evaluation in Electric Utility Planning." Master's thesis in Operations Research, Massachusetts Institute of Technology, 1989.
Bimber, Bruce. "Congressional Support Agency Products and Services for Science and Technology Issues: A Survey of Congressional Staff Attitudes about the Work of CBO, CRS, GAO and OTA." Report prepared for the Carnegie Commission on Science, Technology, and Government. New York: Carnegie Corporation of New York, 1990.
———. The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment. Albany, NY: SUNY Press, 1996.
Branden, Nathaniel. The Six Pillars of Self-Esteem. New York: Bantam, 1994.
Breyer, Stephen G. Breaking the Vicious Circle: Toward Effective Risk Regulation. Cambridge, MA: Harvard University Press, 1993.
Brooks, Harvey. "The Resolution of Technically Intensive Public Policy Disputes." Science, Technology, and Human Values 9, 1 (Winter 1984): 39-50.
Burke, Edmund. "The English Constitutional System." In Hannah Pitkin, ed. Representation. New York: Atherton Press, 1969. P. 175.
California Comparative Risk Project (CCRP). Toward the 21st Century: Planning for the Protection of California's Environment. Berkeley, CA: California Public Health Foundation, 1994.
Clark, William C., and Giandomenico Majone. "The Critical Appraisal of Scientific Inquiries with Policy Implications." Science, Technology, and Human Values 10, 3 (Summer 1985): 6-19.
Cobb, Roger W., and Charles D. Elder. Participation in American Politics: The Dynamics of Agenda Building. Baltimore: Johns Hopkins University Press, 1983.
Connors, Stephen R. "Integrated Resource Planning as a Tool for Technical Mediation: A Case Study in Electric Power Planning." Master's thesis in Technology and Policy, Massachusetts Institute of Technology, 1989.
Connors, Stephen R., and Clinton J. Andrews. "System-wide Evaluation of Efficiency Improvements: Reducing Local, Regional and Global Environmental Impacts." In J. Tester, et al., eds., Energy and the Environment in the 21st Century. Cambridge, MA: MIT Press, 1991.
Davies, J. Clarence. "Environmental Regulation and Technical Change: Overview and Observations." In Myron F. Uman, ed., Keeping Pace with Science and Engineering. Washington, DC: National Academy Press, 1993. Pp. 251-262.
Douglas, Mary, and Aaron Wildavsky. Risk and Culture. Berkeley: University of California Press, 1982.
Durning, Dan, and Will Osuna. "Policy Analysts' Roles and Value Orientations: An Empirical Investigation Using Q Methodology." Journal of Policy Analysis and Management 13, 4 (1994): 629-657.
Edwards, W., I. Kiss, G. Majone, and M. Toda. "What Constitutes a Good Decision?" Acta Psychologica 56 (1984): 5-27.
Elliott, Michael. "Pulling the Pieces Together: Amalgamation in Environmental Impact Assessment." EIA Review 2, 1 (1981): 11-37.
Epstein, Marc J. Measuring Corporate Environmental Performance. Chicago: Richard D. Irwin, 1996. Pp. 106-144.
Fainstein, Susan S. "New Directions in Planning Theory." Urban Affairs Review 35, 4 (March 2000): 451-478.
Feiveson, Harold A., Frank W. Sinden, and Robert H. Socolow, eds. Boundaries of Analysis: An Inquiry into the Tocks Island Dam Controversy. Cambridge, MA: Ballinger, 1976.
Finkel, Adam M., and Dominic Golding. "Alternative Paradigms: Comparative Risk Is not the Only Model." EPA Journal (January/February/March 1993): 50-52.
Fischer, Frank. Evaluating Public Policy. Chicago: Nelson-Hall Publishers, 1995.
Fisher, R., and W. Ury. Getting to Yes: Negotiating Agreement without Giving In. New York: Viking/Penguin, 1981. Pp. 73, 101-111.
Flyvbjerg, Bent, trans. Steven Sampson. Rationality and Power: Democracy in Practice. Chicago: University of Chicago Press, 1998.
Forester, John. "Critical Theory and Planning Practice." APA Journal 46, 3 (July 1980): 275-286.
———. Critical Theory and Public Life. Cambridge, MA: MIT Press, 1985.
———. The Deliberative Practitioner. Cambridge, MA: MIT Press, 1999.
———. "Planning in the Face of Power." APA Journal 48, 1 (Winter 1982): 67-80.
Foster, Kenneth R., and Peter W. Huber. Judging Science: Scientific Knowledge and the Federal Courts. Cambridge, MA: MIT Press, 1997.
Gaventa, John. Power and Powerlessness: Quiescence and Rebellion in an Appalachian Valley. Urbana: University of Illinois Press, 1982.
Gibbons, John H., and Holly L. Gwin. "Technology and Governance: The Development of the Office of Technology Assessment." In Michael E. Kraft and Norman J. Vig, eds. Technology and Politics. Durham, NC: Duke University Press, 1988. Pp. 98-122.
Glantz, M. H. "The Use of Analogies in Forecasting Ecological and Societal Responses to Global Warming." Environment 33, 5 (June 1991): 10-33.
Graham, Loren R. Science and Philosophy in the Soviet Union. New York: Alfred A. Knopf, 1972.
Green, Harold P. "The Limitations of Technology Assessment." American Bar Association Journal 69 (1983).
Grinnell, Frederick. The Scientific Attitude. 2nd ed. New York: Guilford Press, 1992.
Gross, Paul R., Norman Levitt, and Martin W. Lewis, eds. The Flight from Science and Reason. New York: New York Academy of Sciences, 1996.
Gusfield, Joseph. The Culture of Public Problems. Chicago: University of Chicago Press, 1981. Pp. 1-21.
Guston, David H. Between Politics and Science: Assuring the Integrity and Productivity of Research. New York: Cambridge University Press, 2000.
———. "Critical Appraisal in Science and Technology Policy Analysis: The Example of Science, the Endless Frontier." Policy Sciences 30 (1997): 233-255.
———. "The Essential Tension in Science and Democracy." Social Epistemology 7, 1 (1993): 3-23.
———. "Evaluating the First U.S. Consensus Conference: The Impact of the Citizens' Panel on 'Telecommunications and the Future of Democracy.'" Science, Technology, and Human Values 24, 4 (1999): 451-482.
Guston, David H., and Bruce Bimber, coeds. Technological Forecasting and Social Change 54, 2-3 (1997), special issue on technology assessment.
Habermas, Jurgen. Theorie des kommunikativen Handelns. Frankfurt: Suhrkamp, 1981. P. 384.
———, trans. Thomas McCarthy. Communication and the Evolution of Society. Boston: Beacon Press, 1979.
———, trans. Thomas McCarthy. Legitimation Crisis. Boston: Beacon Press, 1975.
———, trans. Jeremy J. Shapiro. Knowledge and Human Interests. Boston: Beacon Press, 1971.
———, trans. John Viertel. Theory and Practice. Boston: Beacon Press, 1973.
Hagman, D., and D. Misczynski, eds. Windfalls for Wipeouts. Chicago: American Society of Planning Officials, 1978.
Hemmens, George C., and Bruce Stiftel. "Sources for the Renewal of Planning Theory." APA Journal 46, 3 (July 1980): 341-345.
Henderson, Yolanda, R. Kopcke, G. Houlihan, and N. Inman. "Planning for New England's Electricity Requirements." New England Economic Review (Jan./Feb. 1988): 3-30.
Hill, Christopher T. "Science, Technology, and the U.S. Congress: What Should Be Their Relationship?" IEEE Technology & Society Magazine 16, 1 (Spring 1997): 9.
Hill, Stuart. Democratic Values and Technological Choices. Stanford: Stanford University Press, 1992. Pp. 158-162.
Hobbs, Benjamin F., and Graham T. F. Horn. "Building Public Confidence in Energy Planning: A Multimethod MCDM Approach to Demand-side Planning at BC Gas." Energy Policy 25, 3 (1997): 357-375.
Hobbs, Benjamin F., and Peter Meier. Energy Decisions & The Environment: A Guide to the Use of Multicriteria Methods. International Series in Operations Research & Management Science. Norwell, MA: Kluwer Academic Publishers, 2002.
Holling, C. Adaptive Environmental Assessment and Management. New York: John Wiley & Sons, 1978. Pp. 1-5, 20.
Horgan, John. The End of Science. Reading, MA: Addison-Wesley, 1996.
Horkheimer, M. Zur Kritik der instrumentellen Vernunft. Frankfurt: Fischer, 1967.
Hornstein, Donald T. "Paradigms, Processes and Politics: Risk and Regulatory Design." In Adam M. Finkel and Dominic Golding, eds. Worst Things First? The Debate over Risk-Based National Environmental Priorities. Washington, DC: Resources for the Future, 1994. Pp. 147-165.
Hughes, Thomas P. Rescuing Prometheus. New York: Pantheon Books, 1998. Pp. 166-195.
Hwang, C., S. Paidy, K. Yoon, and A. Masud. "Mathematical Programming with Multiple Objectives: A Tutorial." Computation and Operations Research 1 (1980): 6.
Innes, Judith E. Knowledge and Public Policy: The Search for Meaningful Indicators. 2nd expanded ed. New Brunswick, NJ: Transaction Publishers, 1990.
Janssen, Ronald. Multiobjective Decision Support for Environmental Management. Dordrecht: Kluwer Academic Publishers, 1992.
Jantsch, Erich. Design for Evolution. New York: Braziller, 1975.
Jasanoff, Sheila. "American Exceptionalism and the Political Acknowledgement of Risk." Daedalus 119 (1991): 61-81.
———. The Fifth Branch: Science Advisors as Policymakers. Cambridge, MA: Harvard University Press, 1990.
———. Science at the Bar: Law, Science, and Technology in America. Cambridge, MA: Harvard University Press, 1995.
Johnson, Deborah. Computer Ethics. Englewood Cliffs, NJ: Prentice-Hall, 1985. Pp. 6-21.
Journal of Planning Education and Research. "Symposium on the Limits to Communicative Planning Theory." 19, 4 (Summer 2000): 331-378.
Keeney, Ralph. "Decision Analysis: An Overview." Operations Research 30, 5 (1982): 803-837.
Keeney, Ralph, and Howard Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: John Wiley & Sons, 1976. Pp. 5-30, 515-548.
Kent, Charles W., and Frederick W. Allen. "An Overview of Risk-Based Priority Setting at EPA." In Adam M. Finkel and Dominic Golding, eds. Worst Things First? The Debate over Risk-Based National Environmental Priorities. Washington, DC: Resources for the Future, 1994. Pp. 47-68.
Keyfitz, Nathan. "Contradictions between Disciplines and Their Influence on Public Policy." Federation of American Scientists Public Interest Report 48, 3 (May/June 1995): 1-10.
Kingdon, John W. Agendas, Alternatives, and Public Policies. Boston: Little, Brown, 1984.
Kleindorfer, Paul R., Howard C. Kunreuther, and Paul J. H. Schoemaker. Decision Sciences: An Integrative Perspective. Cambridge: Cambridge University Press, 1993.
Kline, Stephen. Conceptual Foundations of Multi-Disciplinary Thinking. Stanford, CA: Stanford University Press, 1995.
Krimsky, Sheldon, and Dominic Golding, eds. Social Theories of Risk. Westport, CT: Praeger, 1992.
Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962; 2nd ed., enl., 1970. Pp. 1-23.
Kunkle, Gregory C. "New Challenge or the Past Revisited? The Office of Technology Assessment in Historical Context." Technology in Society 17, 2 (1995): 175-196.
Lee, Kai N. Compass and Gyroscope: Integrating Science and Politics for the Environment. Washington, DC: Island Press, 1993.
Lim, Gil-Chin. "Toward a Synthesis of Contemporary Planning Theories." Journal of Planning Education and Research 5, 2 (1986): 75-85.
Lindblom, Charles E. Inquiry and Change. New Haven: Yale University Press, 1990.
Lindblom, Charles E., and Edward J. Woodhouse. The Policy Making Process. 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1993.
Lutz, D. "The Personality of Physicists Measured." American Scientist 82 (July-August 1994): 324-325.
Majone, Giandomenico. Evidence, Argument, and Persuasion in the Policy Process. New Haven: Yale University Press, 1989.
McAllister, D. Evaluation in Environmental Planning. Cambridge, MA: MIT Press, 1982. P. 140.
McCarthy, Thomas. The Critical Theory of Jurgen Habermas. Cambridge, MA: MIT Press, 1978.
Meltsner, Arnold J. Policy Analysts in the Bureaucracy. Berkeley: University of California Press, 1976.
Merton, Robert K. "The Normative Structure of Science." Reprinted in Merton's The Sociology of Science. Chicago: University of Chicago Press, 1973.
Minnesota Pollution Control Agency (MPCA), Environmental Planning Unit. Risk-Based Environmental Priorities Project Final Report. Minneapolis, MN: MPCA, September 1997. 11 pp.
Mintzberg, H., D. Raisinghani, and A. Theoret. "The Structure of Unstructured Decision Processes." Administrative Science Quarterly 21 (1976): 246-275.
Mishan, Ed. Cost-Benefit Analysis. London: Allen & Unwin, 1982. Pp. 162-163.
Nash, Jennifer. "The University as Mediator: A New Model for Service." Master's thesis, Department of Urban Studies and Planning, Massachusetts Institute of Technology, 1990.
Nelkin, Dorothy. Technological Decisions and Democracy. Beverly Hills, CA: Sage Publishers, 1977. P. 83.
Nichols, Rodney W. "Vital Signs OK: On the Future Directions of the Office of Technology Assessment, U.S. Congress." Report prepared for the Carnegie Commission on Science, Technology, and Government. New York: Carnegie Corporation of New York, 1990.
Nietzsche, Friedrich. Twilight of the Idols. Harmondsworth: Penguin, 1968.
Northwest Power Planning Council (NPPC). Fourth Northwest Conservation and Power Plan. Portland, OR: NPPC, 1998.
O'Hare, M., L. Bacow, and D. Sanderson. Facility Siting and Public Opposition. New York: Van Nostrand Reinhold, 1983. Pp. 67-87.
Otway, Harry. "Experts, Risk Communication, and Democracy." Risk Analysis 1, 2 (1987): 125-129.
Ozawa, Connie. Recasting Science: Consensual Procedures in Public Policy Making. Boulder, CO: Westview Press, 1991. Pp. 28-32.
Ozawa, Connie, and Lawrence Susskind. "Mediating Science-Intensive Policy Disputes." Journal of Policy Analysis and Management 5, 1 (1985): 23.
Paterson, Christopher J., and Richard N. L. Andrews. "Procedural and Substantive Fairness in Risk Decisions: Comparative Risk Assessment Procedures." Policy Studies Journal 23, 1 (1995): 86.
Patton, Carl V., and David S. Sawicki. Basic Methods of Policy Analysis and Planning. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1993. P. 15.
Perhac, Ralph M., Jr. "Comparative Risk Assessment: Where Does the Public Fit In?" Science, Technology, and Human Values 23, 2 (Spring 1998): 221-241, esp. 237.
Pike, R. W. Optimization for Engineering Systems. New York: Van Nostrand Reinhold, 1986. P. 1.
Price, Don K. "The Spectrum from Truth to Power." The Scientific Estate. Cambridge, MA: Belknap Press, 1965. Pp. 121-122.
Primack, Joel, and Frank von Hippel. Advice and Dissent: Scientists in the Political Arena. New York: Basic Books, 1974.
Raiffa, Howard. The Art and Science of Negotiation. Cambridge, MA: Harvard University Press, 1982. Pp. 285-287.
Resources for the Future. Conference Synopsis: Setting National Environmental Priorities. Washington, DC: Resources for the Future, February 1993.
Richardson, George P., and David F. Anderson. "Teamwork in Group Model Building." System Dynamics Review 11, 2 (Summer 1995).
Roe, Emery. Narrative Policy Analysis: Theory and Practice. Durham, NC: Duke University Press, 1994.
Rosenthal, R. "Concepts, Theory, and Techniques: Principles of Multi-Objective Optimization." Decision Sciences 16 (1985): 134.
Sarewitz, Daniel R. Frontiers of Illusion: Science, Technology and the Politics of Progress. Philadelphia: Temple University Press, 1996.
Schofield, John. Cost-Benefit Analysis in Urban and Regional Planning. London: Allen & Unwin, 1987.
Schon, Donald A. The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books, 1983.
Sclove, Richard E. Democracy and Technology. New York: Guilford Press, 1995.
Scott, James C. Seeing Like A State. New Haven: Yale University Press, 1998. Pp. 11-22.
Simon, Herbert A. Administrative Behavior: A Study of Decision-making Processes in Administrative Organization. New York: Harper & Row, 1976.
Slovic, Paul. "Perceived Risk, Trust and Democracy." Risk Analysis 13, 6 (1993): 675-682.
Slovic, Paul, Baruch Fischhoff, and S. Lichtenstein. "Rating the Risks: The Structure of Expert and Lay Perceptions." Environment 21 (1979): 141-166.
Southern California Edison Company, Systems Planning and Research. "Planning for Uncertainty: A Case Study." Technological Forecasting and Social Change 33 (1988): 119-148.
Stone, Deborah. Policy Paradox: The Art of Political Decision Making. 2nd ed. New York: W. W. Norton, 1997. P. 380.
Susskind, Lawrence, and John Cruikshank. Breaking the Impasse. New York: Basic Books, 1987. Pp. 78, 95-150.
Susskind, Lawrence, Sarah McKearnan, and Jennifer Thomas-Larmer, eds. The Consensus Building Handbook: A Comprehensive Guide to Reaching Agreement. Thousand Oaks, CA: Sage Publications, 1999.
System Dynamics Review. Special issue on group model building (Summer 1997).
Taylor, Serge. Making Bureaucracies Think. Stanford, CA: Stanford University Press, 1984. Pp. 3-37.
Technology Review. "How John Gibbons Runs through Political Minefields: Life at the OTA." October 1988: 47-51.
Tennessee Valley Authority (TVA). Energy Vision 2020. Chattanooga, TN: TVA, 1996.
Throgmorton, James. "Planning as a Rhetorical Activity: Survey Research as a Trope in Arguments about Electric Power Planning in Chicago." Journal of the American Planning Association 59 (1993): 334-346.
Tribe, Laurence H. "Technology Assessment and the Fourth Discontinuity: The Limits of Instrumental Rationality." Southern California Law Review 46 (1973): 617-660.
Ulrich, Werner. "Systems Thinking, Systems Practice, and Practical Philosophy: A Program of Research." Systems Practice 1, 2: 137-163.
U.S. Congress. Office of Technology Assessment (OTA). OTA Role & Function. Pamphlet. Washington, DC: OTA, 1995. Now available from the OTA archive web site.
U.S. Environmental Protection Agency (USEPA). Office of Policy Analysis, Office of Policy, Planning, and Evaluation. Unfinished Business: A Comparative Assessment of Environmental Problems. Overview report and technical appendices. Washington, DC: USEPA, February 1987.
———. Science Advisory Board. Reducing Risk. Washington, DC: USEPA, 1990.
Vennix, J. A. M. "Mental Models and Computer Models: The Design and Evaluation of a Computer-Based Learning Environment for Policy Making." Ph.D. diss., Catholic University of Nijmegen, The Netherlands, 1991.
von Hippel, Eric. The Sources of Innovation. New York: Oxford University Press, 1988.
von Hippel, Frank. Citizen Scientist. New York: Simon & Schuster, 1991.
von Neumann, John, and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press, 1944.
Walker, Robert S. "The Quest for Knowledge versus the Quest for Votes." IEEE Technology & Society Magazine 16, 1 (Spring 1997): 4, 6-7.
Wallace, Walter. The Logic of Science in Sociology. New York: Aldine de Gruyter, 1971.
Washington State Department of Ecology. Toward 2010: An Action Agenda. Seattle, WA, July 1990.
Weber, Max. The Protestant Ethic and the Spirit of Capitalism. London: Allen and Unwin, 1930. Initially published in German in 1905.
———. The Theory of Social and Economic Organization. New York: Free Press, 1922; 1957.
Weimer, David L., and Aidan R. Vining. Policy Analysis: Concepts and Practice. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1992. P. 1.
Weiss, Carol H. "Helping Government Think: Functions and Consequences of Policy Analysis Organizations." In Carol H. Weiss, ed. Organizations for Policy Analysis: Helping Government Think. Newbury Park, CA: Sage Publications, 1992. P. 15.
Weiss, Carol H., with Michael J. Bucuvalas. Social Science Research and Decision-Making. New York: Columbia University Press, 1980. Pp. 248-276.
Wengert, N. "Citizen Participation: Practice in Search of a Theory." In A. Utton, ed. Natural Resources for a Democratic Society. Boulder, CO: Westview Press, 1976. Pp. 1-40.
White, Robert M. "Introduction." In Myron F. Uman, ed. Keeping Pace with Science and Engineering: Case Studies in Environmental Regulation. Washington, DC: National Academy Press, 1993. Pp. 1-7.
Whiteman, David. Communication in Congress: Members, Staff, and the Search for Information. Lawrence: University Press of Kansas, 1995.
Wildavsky, Aaron. Speaking Truth to Power: The Art and Craft of Policy Analysis. Rev. ed. New Brunswick, NJ: Transaction Books, 1987.
Williams, Bruce A., and Albert R. Matheny. Democracy, Dialogue, and Environmental Disputes: The Contested Languages of Social Regulation. New Haven: Yale University Press, 1995.
Wood, Fred B. "Lessons in Technology Assessment: Methodology and Management at OTA." Technological Forecasting and Social Change 54 (1997).
Woodhouse, Edward, and Dean Nieusma. "When Expert Advice Works, and When It Does Not." IEEE Technology and Society Magazine 16, 1 (Spring 1997): 23-29.
Wynne, Brian. "Risk and Social Learning: Reification to Engagement." In Sheldon Krimsky and Dominic Golding, eds. Social Theories of Risk. Westport, CT: Praeger, 1992. Pp. 275-297.
———. "Sheepfarming after Chernobyl: A Case Study." Environment 31, 2 (1991): 10-15, 33-39.
Yin, Robert. Case Study Research: Design and Methods. 2nd ed. Beverly Hills, CA: Sage Publications, 1994.
Zuckerman, H. Scientific Elite: Nobel Laureates in the United States. New York: Macmillan, 1977. Referenced in Grinnell, The Scientific Attitude, pp. 55-56, 71.
Index

Page numbers in italics refer to material in tables and figures.

Actively managed normative content, 181-82
Adequacy, 14, 60, 87, 113, 167; evaluation of, 52-53, 105, 137
Advisory groups, 24, 168, 170
Advocacy science, 7, 25 n.2, 27 n.14, 185
Agenda-setting, 109-10, 111
Aggregation in project evaluation, 146, 147, 149
Alonso, William, 18
Analysis: decision type and structure, 22-24; factors affecting, 3, 5, 11, 13, 25, 65-69; marrying with process, 13, 76, 145, 178-79; skills, 25, 115; stages, 22-23, 23
Analysis teams, 120, 122-23, 137, 166, 168, 170, 173
Analyst, 5; communicative problems, 166-67; role of, 20-21, 31, 142, 173-75; technical problems, 165-66
Analytical scope, 147-57; broad, 183
Analytical scope of analyses by: California, 96, 166; Minnesota, 104-5, 166; New England Project, 166, 174; Office of Technology Assessment, 165; U.S. Environmental Protection Agency, 78, 165; Washington State, 88, 91-92, 165
Archimedes, 6
Assumptions, framing, 5-6
Authority, 5, 167-68
Bacon, Francis, 24, 29, 66
Balanced analysis, 22, 25, 172-73
Bargaining, 40
Bayes' theorem, 127, 135 n.9
Bentham, Jeremy, 29
Best alternative to a negotiated agreement (BATNA), 130, 136 n.25
Boundary organizations, 67
Bureaucracy, 112
California: comparative risk analysis, 73, 76, 95-102, 103, 167; consensus building, 169; process and analysis, 178; project scope, 166; support from decision makers, 178
California EPA. See California: comparative risk analysis
Carnegie Commission on Science, Technology, and Government, 57
Carson, Rachel, 48, 185; Silent Spring, 110
Catalysts in policy making, 110
Citizens jury, 76, 103, 104, 106
Civil legitimacy, 60, 106, 111, 114, 123, 168
Clark, William C., 14, 167
Client, 5, 6-7
Client-oriented advising, 12
Climate change study, 49-50; advisory panel, 50-51; evaluating, 52-56; staff, 51-52
Commoner, Barry, 48
Communality, 42 n.20
Communicating, 7, 9, 11-13, 18-19, 105, 106; burden of, 73, 119; contexts for analysts, 12; successful, 31, 37, 69
Communicative action, 38, 133
Communicative analysis, 32-40
Communicative distortions, 13, 39-40, 166-67; individual, 39; intentional, 39, 40; social, 39, 40; unintentional, 39
Communicative theory, 36-37
Communitarian (grassroots) language in public policy, 83, 113, 168
Comparative risk analysis, 13, 184; California's Toward the 21st Century, 95-102, 167; Minnesota's Risk-Based Environmental Priorities, 105, 105-6; in planning, 73-76, 113; U.S. Environmental Protection Agency's project, 77-84; Washington's Environment 2010, 87-92, 167
Complexity, 129, 130, 134, 171-72
Comprehensibility, 29, 40
Condorcet, Marquis de, 29
Conflict analysis, 38, 178
Consensus building, 7, 38, 132, 134, 169
Consequentialism, 35
Context of analysis, 15, 22-25; decision type and structure, 22-24; interests and power, 24-25; supporting the analyst, 25
Cost-benefit analysis, 34, 77, 111, 146-49, 152
Cost-effectiveness analysis, 148, 151
Courtroom, 7
Creativity, 129-33, 158
Critical science, 48
Critical theory, 35, 36
Cross-disciplinary review, 25, 182; process, 166-67
Customer participation in analysis, 139-40
Daddario, Emilio, 48
Danger Hours, 122
Decision analysis, 22-24
Decision maker, 5; support from, 177-78
Decision support systems, 141
Decision type and structure, 22-24
Deconstruction of expertise, 22, 27 n.18, 100-101
Deduction, 35
Deductive reasoning, 181
Deontologism, 35
Descartes, Rene, 29, 30
Disciplinary boundaries, 19
Disciplinary overclaims, 61 n.4, 182
Disinterestedness, 42 n.20, 68
Dispute resolution, 8, 9, 130, 131
Durning, Dan, 21
Ecological risks, 3, 12, 73-76
Ecological risks as identified by: California, 96, 97-98; cross-disciplinary review, 182; Minnesota, 104-5; U.S. Environmental Protection Agency, 78; Washington State, 89
Economic cost-benefit analysis, 148, 149, 151
Education, 12, 82, 88, 179
Edwards, W., 111
Effectiveness, 14, 87, 113, 167; evaluation of, 53-55, 76 n.4, 90, 138
Electricity planning: actively managed normative content, 181; broad analytical scope, 183; cross-disciplinary review, 182; in New England, 119-23, 125-34, 137-42; process and analysis, 179; project evaluation, 146
Empiricism, 181
Employee engagement in analysis, 140
Energy Vision 2020, 17
Engineering-economic analysis, 148, 149
Environment 2010. See Washington State: comparative risk analysis
Evaluation framework, 13-14; in: California, 101-2; Minnesota, 105-6; Washington State, 90-92
Expert, 3-6; best practices, 6, 10, 20, 36, 60, 69, 166; qualifications of, 6, 20-21, 184; relation to decision makers, 4-5, 132, 167; role of, 5, 8, 29, 58, 174, 176
Expert judgment, 66, 69, 111, 168
Expert witnesses, 20, 104
Expertise, 26 n.13; in adversarial setting, 8-10; deconstruction of, 22, 27 n.18
Fact-value distinction, 4, 11, 19, 35, 111
Flyvbjerg, Bent, 28 n.28, 66
Future, sampling the, 154, 157
Gardner, Booth, 88
Gibbons, Jack, 5
Gingrich, Newt, 47, 67-68
Green Mountain Institute for Environmental Democracy (GMIED), 73, 78
Greenpeace, 83
Gregoire, Christine, 87
Habermas, Jurgen, 36, 40
Harvard Law School, 120, 131
Hayek, Friedrich August von, 29
Health risks, identified by: California, 95, 96-97; cross-disciplinary review, 182; Minnesota, 104-5; U.S. Environmental Protection Agency, 78; Washington State, 91
Heller, Walter, 6
Hill, Christopher, 52, 53, 55
Hornstein, Donald T., 82
Houghton, Amo, 68
Hubris, 22
Hume, David, 29
Humility, 83
Induction, 35
Inductive reasoning, 180-81
Information sharing, 18, 40, 88, 122, 132, 179-80, 184
Institutional factors affecting analysis, 3, 5, 11, 25, 65-69
Instrumental action, 36, 37
Instrumental rationality, 61 n.4
Interests, 24-25
Jantsch, Erich, 37
Jasanoff, Sheila, 33, 59-60
Joint fact-finding: approach to, 32-40, 65; communication flows, 10; definition, 3; elite, 12; examples, 3, 6; lessons learned, 165-76; participatory, 12; settings, 7-11; successful, 31, 177-86. See also specific case studies
Joint model building, 158, 172. See also Modeling
Kant, Immanuel, 36
Kennedy, John F., 4, 6
Keyfitz, Nathan, 182
Kuhn, Thomas, 32
Lash, Jonathan, 82
Lay expertise, 73
Leadership, 5, 89
Legislatures, 7
Legitimacy, 14, 31-32, 40, 87, 109, 113; evaluation of, 76 n.4, 92, 102, 106, 138-39; sources of, 55-56, 60, 106, 111, 114, 123, 167-68
Lim, Gil-Chin, 24
Lindblom, Charles E., 7, 29, 30
Local expertise, 7, 21, 88, 114 n.6
Lying, 4
Lysenko, Trofim, 33
Machiavelli, Niccolo, 24, 65
Majone, Giandomenico, 14, 167
Managerial language in public policy, 82, 113
Marx, Karl, 29
Massachusetts Institute of Technology (MIT), 3, 120, 131
Measurement in project evaluation, 14, 146, 149, 167
Mediation, 132
Meehan, Tracy, 83
Merton, Robert K., 33, 59, 60
Methodological factors affecting analysis, 13, 25, 145-58
Minnesota: comparative risk analysis, 73, 76, 103-6; project scope, 166; support from decision makers, 178
Minnesota Pollution Control Agency. See Minnesota: comparative risk analysis
Mintzberg, H., 22
Modeling, 120, 128, 156-58, 168, 172, 174-75; desktop tool, 184; meta, 181; reality, 18
Monte Carlo simulation, 156
Multiattribute decision analysis, 133, 149, 150, 151, 160 n.13, 170
Multicriteria accounting, 141
Multicriteria decision analysis, 147, 150-51, 152
Multimethod analysis, 141
Multiobjective decision analysis, 149, 150, 151
Nader, Ralph, 48; Unsafe at Any Speed, 110
Neutrality, 5, 8, 65, 132
New England Power Pool (NEPOOL), 119-23, 125-34
New England Project, 13, 145, 168; analysis methods, 121, 127, 147, 152-57; chronology, 120-21; evaluation, 137-42; inductive reasoning, 181; participants, 120, 167, 173-74; positional conflict, 169-70; process and analysis, 178; sponsors, 120, 128, 178; structure, 120, 121-23, 133, 166, 173; support from decision makers, 177-78; uncertainty, 171
Nietzsche, Friedrich Wilhelm, 65, 67
Nobel laureates, 33
Normal science, 25, 32, 33
Normative content, 6, 151, 152, 178; actively managed, 181-82
Numinous legitimacy, 167, 168
Objectivity, 5, 55
Open planning processes, 130, 131-32, 134, 137, 168
Optimization, 22, 36, 111, 132, 150, 151
Organized skepticism, 42 n.20, 59
Osuna, Will, 21
Ozawa, Connie, 8
Paradigm, 32
Partisanship, 5, 7, 102, 185
Peer review: applications to public policy, 59; norms of, 59, 66; practice of, 25, 33-34, 66-67; rationales for, 59
Personal factors affecting analysis, 25
Plato, 29
Pluralistic language in public policy, 83, 113
Policy analyst, 20-21; role of, 6; training of, 115; types of, 21. See also Analyst
Policy streams model of policy making, 110
Politicization, 66-67
Polybius, 29
Populism, 25 n.2, 28 n.20
Positional conflict, 130, 169-70
Postmodernism, 21, 26 n.11
Power, 20, 24-25, 65-66, 67
Practical reason, 35, 36
Pragmatism, 35, 112
Price, Don K., 20
Primack, Joel, 48
Prisoners' dilemma, 39, 40 n.43
Private cost-benefit analysis, 148, 149
Probabilistic analysis, 140
Problem solving, 6, 7
Problem succession, 31
Procedural factors affecting analysis, 3, 13, 25, 109-15
Procedural rationality, 22, 102, 110-15, 168
Process, marrying with analysis, 13, 76, 178-79
Process models of policy making, 109-10
Project evaluation methods: analytical approaches, 147-57; basic theory of, 146-47; characteristics of, 149
Proprietary data, 34, 179-80
Public participation, 146-47, 149, 152; broad, 150; and communicating, 18; in cost-benefit analysis, 148; legitimacy in, 168; meaningful, 142; in Minnesota's project, 103; requirement, 17; requirements for, 112-14
Public policy analysis, 12, 133
Rasputin, Grigori, 6
Rationality, 4, 6, 7, 34-39, 66. See also Instrumental rationality; Procedural rationality; Substantive rationality
Rawls, John, 35, 66
Reasonable decision rules, 34-35
Reducing Risk (U.S. Environmental Protection Agency), 82
Reductionism, 22, 30
Reilly, William, 80
Research, 12
Residual risk, 105, 106 n.10
Review policy. See Peer review
Rightness, 37
Risk assessment, 22. See also Comparative risk analysis
Risk-Based Environmental Priorities. See Minnesota: comparative risk analysis
RISKMIN, 153-54
Rivlin, Alice, 5, 82
Road show, 168
Rousseau, Jean-Jacques, 29
Ruckelshaus, William, 77
Sampling the future, 154, 157
Scenario analysis, 153
Scenario-based multiattribute tradeoff analysis, 152-57, 158, 166, 170
Science wars, 20
Scientifically guided society, 29
Seabrook nuclear power station, 120, 126
Self-guided society, 29-30
Sensitivity analysis, 131, 159 n.9
Sharing information, 18, 40, 88, 122, 132, 179-80, 184
Silent Spring (Carson), 110
Simon, Herbert A., 110, 168
Simulation, Modeling and Regression, and Tradeoff Evaluation (SMARTE), 153-54
Sincerity, 40
Slovic, Paul, 37
SMARTE (Simulation, Modeling and Regression, and Tradeoff Evaluation), 153-54
Smith, Adam, 36
Social construction of knowledge, 32
Social constructivism, 25 n.2, 27 n.19
Social cost-benefit analysis, 148, 149
Social learning, 29-30
Social studies of science, 32-40
Social welfare risks, identified by: California, 95, 96, 99-100; cross-disciplinary review, 182; Minnesota, 104-5; U.S. Environmental Protection Agency, 78
Solow, Robert, 3-5, 6, 184
Sorting, 8, 34, 83, 149, 150, 160 nn.11, 12, 18, 182
Spectrum from truth to power, 20, 66
Stakeholder fatigue, 112
Stakeholder review, 25, 88
Stakeholders, 24
Standard practices, 31-32
Status-based legitimacy, 60, 106, 111, 114, 123, 167-68
Story-telling, 4, 181
Strategic action, 37-38
Substantive rationality, 22, 110-13, 168
Sununu, John, 120
Symbolic models of policy making, 110
System behavior, 134, 153, 174
Systems analysis, 31, 119, 128, 154
Teamwork, 103, 104
Technology assessment, 47-67; approach to, 48-49
Temporary truths, 34
Tennessee Valley Authority (TVA), 17-18, 140
Theoretical reason, 35-36
Thomas, Lee, 77
Timidity, 22
Tobin, James, 4
Toward the 21st Century. See California: comparative risk analysis
Tracking normative content of analysis, 152
Tradeoff, 23, 172; exploring, 122, 130, 132; forcing, 111; management strategies, 181
Tradeoff analysis, 150-51; characteristics, 149; in electric power planning, 181-82; intention of, 153, 161 n.18
Translating analytical results, 15
Translating analytical stories, 183-84
Truth, 34, 40
Ulrich, Werner, 179
Uncertainty, 130, 133, 134, 151, 154, 156, 171-72
Unfinished Business (U.S. Environmental Protection Agency), 77-84, 87
Universalism, 42 n.20
Unsafe at Any Speed (Nader), 110
U.S. Congressional Budget Office (CBO), 5
U.S. Congressional Office of Technology Assessment (OTA), 5, 11; actively managed normative content, 181; balanced analysis, 173; broad analytical scope, 183; communicative problems, 166; complexity of issues, 171; cross-disciplinary review, 182; demise of, 57-58, 67-68; factors identified by, 65-69, 167; inductive reasoning, 180; information sharing, 179; origins, 48-49; perception of uncertainty, 171; positional conflict, 169; process and analysis, 178; project scope, 165; serving multiple decision makers, 47-61; story-telling, 184; support from decision makers, 177
U.S. Environmental Protection Agency, 12, 60-61; communicative problems, 166-67; comparative risk projects, 73, 75-76, 75, 165; perception of uncertainty, 171; Reducing Risk, 82; review process, 59; sources of legitimacy, 167-68; support from decision makers, 178; Unfinished Business, 77-84, 87
U.S. National Research Council/National Academies of Science, 59, 60, 66-67
U.S. Supreme Court, 65
Utilitarianism, 35-38
Utility planning, 3, 13. See also Electricity planning
Valuation in project evaluation, 146-47, 149
Value, 13, 14, 19, 87, 113, 167; evaluation of, 53, 76 n.4, 138
von Hippel, Frank, 48
Walker, Robert S., 53, 54, 55, 68
Washington State: broad analytical scope, 183; comparative risk analysis, 73, 76, 87-92, 167; consensus building, 169; process and analysis, 178; project scope, 165; support from decision makers, 178
Washington State Department of Ecology. See Washington State: comparative risk analysis
Weber, Max, 14, 31, 35
Whiteman, David, 53, 57
Wildavsky, Aaron, 20, 31
Wilson, Pete, 96, 97
About the Author

CLINTON J. ANDREWS is Assistant Professor at the Edward J. Bloustein School of Planning and Public Policy at Rutgers University. His previous works include Regulating Regional Power Systems (Quorum, 1995) and Industrial Ecology and Global Change (Cambridge, 1994). He is currently president of the IEEE Society on Social Implications of Technology.