Pipeline Risk Management Manual Ideas, Techniques, and Resources Third Edition
W. Kent Muhlbauer
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
ELSEVIER
Gulf Professional Publishing is an imprint of Elsevier
200 Wheeler Road, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK

Copyright © 2004, Elsevier Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.
Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting "Customer Support" and then "Obtaining Permissions."
Recognizing the importance of preserving what has been written, Elsevier prints its books on acid-free paper whenever possible.

Library of Congress Cataloging-in-Publication Data
Muhlbauer, W. Kent.
Pipeline risk management manual : a tested and proven system to prevent loss and assess risk / by W. Kent Muhlbauer. - 3rd ed.
p. cm. Includes bibliographical references and index.
ISBN 0-7506-7579-9
1. Pipelines - Safety measures - Handbooks, manuals, etc. 2. Pipelines - Reliability - Handbooks, manuals, etc. I. Title.
TJ930.M84 2004
621.8'672 - dc22    2003058315

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
ISBN: 0-7506-7579-9
For information on all Gulf Professional Publishing publications visit our Web site at www.gulfpp.com

03 04 05 06 07 08 09    10 9 8 7 6 5 4 3 2 1

Printed in the United States of America
Contents

Acknowledgments  vii
Preface  ix
Introduction  xi
Risk Assessment at a Glance  xv

Chapter 1   Risk: Theory and Application  1
Chapter 2   Risk Assessment Process  21
Chapter 3   Third-party Damage Index  43
Chapter 4   Corrosion Index  61
Chapter 5   Design Index  91
Chapter 6   Incorrect Operations Index  117
Chapter 7   Leak Impact Factor  133
Chapter 8   Data Management and Analyses  177
Chapter 9   Additional Risk Modules  197
Chapter 10  Service Interruption Risk  209
Chapter 11  Distribution Systems  223
Chapter 12  Offshore Pipeline Systems  243
Chapter 13  Stations and Surface Facilities  257
Chapter 14  Absolute Risk Estimates  293
Chapter 15  Risk Management  331
Appendix A  Typical Pipeline Products  357
Appendix B  Leak Rate Determination  361
Appendix C  Pipe Strength Determination  363
Appendix D  Surge Pressure Calculations  367
Appendix E  Sample Pipeline Risk Assessment Algorithms  369
Appendix F  Receptor Risk Evaluation  375
Appendix G  Examples of Common Pipeline Inspection and Survey Techniques  379

Glossary  381
References  385
Index  389
Acknowledgments

As in the last edition, the author wishes to express his gratitude to the many practitioners of formal pipeline risk management who have improved the processes and shared their ideas. The author also wishes to thank reviewers of this edition who contributed their time and expertise to improving portions of this book, most notably Dr. Karl Muhlbauer and Mr. Bruce Beighle.
Preface

The first edition of this book was written at a time when formal risk assessments of pipelines were fairly rare. To be sure, there were some repair/replace models out there, some maintenance prioritization schemes, and the occasional regulatory approval study, but, generally, those who embarked on a formal process for assessing pipeline risks were doing so for very specific needs and were not following a prescribed methodology. The situation is decidedly different now. Risk management is increasingly being mandated by regulations. A risk assessment seems to be the centerpiece of every approval process and every pipeline litigation. Regulators are directly auditing risk assessment programs. Risk management plans are increasingly coming under direct public scrutiny. While risk has always been an interesting topic to many, it is also often clouded by preconceptions of requirements of huge databases, complex statistical analyses, and obscure probabilistic techniques. In reality, good risk assessments can be done even in a data-scarce environment. This was the major premise of the earlier editions. The first edition even had a certain sense of being a risk assessment cookbook: "Here are the ingredients and how to combine them." Feedback from readers indicates that this was useful to them. Nonetheless, there also seems to be an increasing desire for more sophistication in risk modeling. This is no doubt the result of more practitioners than ever before pushing the boundaries, as well as the more widespread availability of data and the more powerful computing environments that make it easy and cost effective to consider many more details in a risk model. Initiatives are currently under way to generate more
complete and useful databases to further our knowledge and to support detailed risk modeling efforts. Given this as a backdrop, one objective of this third edition is to again provide a simple approach to help a reader put together some kind of assessment tool with a minimum of aggravation. However, the primary objective of this edition is to provide a reference book for concepts, ideas, and maybe a few templates covering a wider range of pipeline risk issues and modeling options. This is done with the belief that an idea and reference book will best serve the present needs of pipeline risk managers and anyone interested in the field. While I generally shy away from technical books that get too philosophical and are weak in specific how-to's, it is just simply not possible to adequately discuss risk without getting into some social and psychological issues. It is also doing a disservice to the reader to imply that there is only one correct risk management approach. Just as an engineer will need to engage in a give-and-take process when designing the optimum building or automobile, so too will the designer of a risk assessment/management process. Those embarking on a pipeline risk management process should realize that, once some basic understanding is obtained, they have many options in specific approach. This should be viewed as an exciting feature, in my opinion. Imagine how mundane the practice of engineering would be if there were little variation in problem solving. So, my advice to the beginner is simple: arm yourself with knowledge, approach this as you would any significant engineering project, and then enjoy the journey!
Introduction

As with previous editions of this book, the chief objective of this edition is to make pipelines safer. This is hopefully accomplished by enhancing readers' understanding of pipeline risk issues and equipping them with ideas to measure, track, and continuously improve pipeline safety. We in the pipeline industry are obviously very familiar with all aspects of pipelining. This familiarity can diminish our sensitivity to the complexity and inherent risk of this undertaking. The transportation of large quantities of sometimes very hazardous products over great distances through a pressurized pipeline system, often with zero-leak tolerance, is not a trivial thing. It is useful to occasionally step back and re-assess what a pipeline really is, through fresh eyes. We are placing a very complex, carefully engineered structure into an enormously variable, ever-changing, and usually hostile environment. One might reply, "complex!? It's just a pipe!" But the underlying technical issues can be enormous. Metallurgy, fracture mechanics, welding processes, stress-strain reactions, soil interface mechanics, mechanical properties of the coating as well as its critical electrochemical properties, soil chemistry, every conceivable geotechnical event creating a myriad of forces and loadings, sophisticated computerized SCADA systems, and we're not even to rotating equipment or the complex electrochemical reactions involved in corrosion prevention yet! A pipeline is indeed a complex system that must coexist with all of nature's and man's frequent lack of hospitality. The variation in this system is also enormous. Material and environmental changes over time are of chief concern. The pipeline must literally respond to the full range of possible ambient conditions of today as well as events of months and years past that are still impacting water tables, soil chemistry, land movements, etc.
Out of all this variation, we are seeking risk 'signals.' Our measuring of risk must therefore identify and properly consider all of the variables in such a way that we can indeed pick out risk signals from all of the background 'noise' created by the variability. Underlying most meanings of risk is the key issue of 'probability.' As is discussed in this text, probability expresses a degree of belief. This is the most compelling definition of probability because it encompasses statistical evidence as well as interpretations and judgment. Our beliefs should be firmly rooted in solid, old-fashioned engineering judgment and reasoning. This does not mean ignoring statistics; rather, it means using data appropriately: for diagnosis, to test hypotheses, to uncover new information. Ideally, the degree of belief would also be determined in some consistent fashion so that any two estimators would arrive at the same conclusion given the same evidence. This is the purpose of this book: to provide frameworks in which a given set of evidence consistently leads to a specific degree of belief regarding the safety of a pipeline. Some of the key beliefs underpinning pipeline risk management, in this author's view, include:

- Risk management techniques are fundamentally decision support tools.
- We must go through some complexity in order to achieve "intelligent simplification."
- In most cases, we are more interested in identifying locations where a potential failure mechanism is more aggressive, rather than predicting the length of time the mechanism must be active before failure occurs.
- Many variables impact pipeline risk. Among all possible variables, choices are required to strike a balance between a comprehensive model (one that covers all of the important stuff) and an unwieldy model (one with too many relatively unimportant details).
- Resource allocation (or reallocation) towards reduction of failure probability is normally the most effective way to practice risk management.

(The complete list can be seen in Chapter 2.)

The most critical belief underlying this book is that all available information should be used in a risk assessment. There are very few pieces of collected pipeline information that are not useful to the risk model. The risk evaluator should expect any piece of information to be useful until he absolutely cannot see any way that it can be relevant to risk or decides its inclusion is not cost effective. Any and all experts' opinions and thought processes can and should be codified, thereby demystifying their personal assessment processes. The experts' analysis steps and logic processes can be duplicated to a large extent in the risk model. A very detailed model should ultimately be smarter than any single individual or group of individuals operating or maintaining the pipeline, including that retired guy who knew everything. It is often useful to think of the model building process as 'teaching the model' rather than 'designing the model.' We are training the model to 'think'
like the best experts and giving it the collective knowledge of the entire organization and all the years of record-keeping.
Changes from Previous Editions

This edition offers some new example assessment schemes for evaluating various aspects of pipeline risk. After several years of use, some changes are also suggested for the model proposed in previous editions of this book. Changes reflect the input of pipeline operators, pipeline experts, and changes in technology. They are thought to improve our ability to measure pipeline risks in the model. Changes to risk algorithms have always been anticipated, and every risk model should be regularly reviewed in light of its ability to incorporate new knowledge and the latest information. Today's computer systems are much more robust than in past years, so short-cuts, very general assumptions, and simplistic approximations to avoid costly data integrations are less justifiable. It was more appropriate to advocate a very simple approach when practitioners were picking this up only as a 'good thing' to do, rather than as a mandated and highly scrutinized activity. There is certainly still a place for the simple risk assessment. As with the most robust approach, even the simple techniques support decision making by crystallizing thinking, removing much subjectivity, helping to ensure consistency, and generating a host of other benefits. So, the basic risk assessment model of the second edition is preserved in this edition, although it is tempered with many alternative and supporting evaluation ideas. The most significant changes for this edition are seen in the Corrosion Index and Leak Impact Factor (LIF). In the former, variables have been extensively re-arranged to better reflect those variables' relationships and interactions. In the case of LIF, the math by which the consequence variables are combined has been made more intuitive. In both cases, the variables to consider are mostly the same as in previous editions. As with previous editions, the best practice is to assess major risk variables by evaluating and combining many lesser variables, generally available from the operator's records or public domain databases. This allows assessments to benefit from direct use of measurements or at least qualitative evaluations of several small variables, rather than a single, larger variable, thereby reducing subjectivity. For those who have risk assessment systems in place based on previous editions, the recommendation is simple: retain your current model and all its variables, but build a modern foundation beneath those variables (if you haven't already done so). In other words, bolster the current assessments with more complete consideration of all available information. Work to replace the high-level assessments of 'good,' 'fair,' and 'poor' with evaluations that combine several data-rich subvariables such as pipe-to-soil potential readings, house counts, ILI anomaly indications, soil resistivities, visual inspection results, and all the many other measurements taken. In many cases, this allows your 'as-collected' data and measurements to be used directly in the risk model, with no extra interpretation steps required. This is straightforward and will be a worthwhile effort, yielding gains in efficiency and accuracy. As risks are re-assessed with new techniques and new information, the results will often be very similar to previous assessments. After all, the previous higher-level assessments were no doubt based on these same subvariables, only informally. If the new processes do yield different results than the previous assessments, then some valuable knowledge can be gained. This new knowledge is obtained by finding the disconnect (the basis of the differences) and learning why one of the approaches was not 'thinking' correctly.
In the end, the risk assessment has been improved.
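The advice to replace 'good/fair/poor' judgments with data-rich subvariables can be illustrated with a toy scoring routine. This sketch is not from the book: the point scale, segment names, and threshold structure are invented for illustration, although the -850 mV (vs. Cu/CuSO4) pipe-to-soil potential criterion used as the top breakpoint is a widely cited cathodic protection benchmark. The point is simply that an as-collected measurement feeds the model directly, with no subjective interpretation step.

```python
# Hypothetical example: scoring a cathodic protection 'effectiveness'
# subvariable directly from as-collected pipe-to-soil potential readings,
# instead of a subjective 'good/fair/poor' call. The -850 mV (CSE)
# benchmark is a common CP criterion; the point values are invented.

def cp_effectiveness_score(pipe_to_soil_mv):
    """Map a pipe-to-soil potential reading (mV vs. Cu/CuSO4, negative
    values) to a 0-15 point subvariable; more points = safer."""
    if pipe_to_soil_mv <= -850:    # meets the common protection criterion
        return 15
    elif pipe_to_soil_mv <= -750:  # marginal protection
        return 8
    else:                          # criterion not met
        return 2

# Readings for three hypothetical pipeline segments:
readings = {"seg-A": -920, "seg-B": -780, "seg-C": -600}
scores = {seg: cp_effectiveness_score(mv) for seg, mv in readings.items()}
print(scores)  # {'seg-A': 15, 'seg-B': 8, 'seg-C': 2}
```

In a real model the breakpoints would come from the operator's own criteria, and many such subvariable scores (house counts, ILI indications, soil resistivities, etc.) would roll up into the higher-level index values.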
Disclaimer

The user of this book is urged to exercise judgment in the use of the data presented here. Neither the author nor the publisher provides any guarantee, expressed or implied, with regard to the general or specific application of the data, the range of errors that may be associated with any of the data, or the appropriateness of using any of the data. The author accepts no responsibility for damages, if any, suffered by any reader or user of this book as a result of decisions made or actions taken on information contained herein.
Risk Assessment at a Glance

The following is a summary of the risk evaluation framework described in Chapters 3 through 7. It is one of several approaches to basic pipeline risk assessment in which the main consequences of concern are related to public health and safety, including environmental considerations. Regardless of the risk assessment methodology used, this summary can be useful as a checklist to ensure that all risk issues are addressed.
[Flowchart: the Third-party, Corrosion, Design, and Incorrect Operations indexes combine into the Index Sum, which is adjusted by the Leak Impact Factor to yield the Relative Risk Score.]
Figure 0.1 Risk assessment model flowchart.
Relative Risk Rating = (Index Sum) / (Leak Impact Factor)
Index Sum = [(Third Party) + (Corrosion) + (Design) + (Incorrect Operations)]

Third-party Index
A. Minimum Depth of Cover ........... 0-20 pts   (20%)
B. Activity Level ................... 0-20 pts   (20%)
C. Aboveground Facilities ........... 0-10 pts   (10%)
D. Line Locating .................... 0-15 pts   (15%)
E. Public Education ................. 0-15 pts   (15%)
F. Right-of-way Condition ........... 0-5 pts    (5%)
G. Patrol ........................... 0-15 pts   (15%)
Total ............................... 0-100 pts  (100%)
Corrosion Index
A. Atmospheric Corrosion ............ 0-10 pts   (10%)
   A1. Atmospheric Exposure ......... 0-5 pts
   A2. Atmospheric Type ............. 0-2 pts
   A3. Atmospheric Coating .......... 0-3 pts
B. Internal Corrosion ............... 0-20 pts   (20%)
   B1. Product Corrosivity .......... 0-10 pts
   B2. Internal Protection .......... 0-10 pts
C. Subsurface Corrosion ............. 0-70 pts   (70%)
   C1. Subsurface Environment ....... 0-20 pts
       Soil Corrosivity ............. 0-15 pts
       Mechanical Corrosion ......... 0-5 pts
   C2. Cathodic Protection .......... 0-25 pts
       Effectiveness ................ 0-15 pts
       Interference Potential ....... 0-10 pts
   C3. Coating ...................... 0-25 pts
       Fitness ...................... 0-10 pts
       Condition .................... 0-15 pts
Total ............................... 0-100 pts  (100%)
Design Index
A. Safety Factor .................... 0-35 pts   (35%)
B. Fatigue .......................... 0-15 pts   (15%)
C. Surge Potential .................. 0-10 pts   (10%)
D. Integrity Verifications .......... 0-25 pts   (25%)
E. Land Movements ................... 0-15 pts   (15%)
Total ............................... 0-100 pts  (100%)
Incorrect Operations Index
A. Design ........................... 0-30 pts   (30%)
   A1. Hazard Identification ........ 0-4 pts
   A2. MAOP Potential ............... 0-12 pts
   A3. Safety Systems ............... 0-10 pts
   A4. Material Selection ........... 0-2 pts
   A5. Checks ....................... 0-2 pts
B. Construction ..................... 0-20 pts   (20%)
   B1. Inspection ................... 0-10 pts
   B2. Materials .................... 0-2 pts
   B3. Joining ...................... 0-2 pts
   B4. Backfill ..................... 0-2 pts
   B5. Handling ..................... 0-2 pts
   B6. Coating ...................... 0-2 pts
C. Operation ........................ 0-35 pts   (35%)
   C1. Procedures ................... 0-7 pts
   C2. SCADA/Communications ......... 0-3 pts
   C3. Drug Testing ................. 0-2 pts
   C4. Safety Programs .............. 0-2 pts
   C5. Surveys/Maps/Records ......... 0-5 pts
   C6. Training ..................... 0-10 pts
   C7. Mechanical Error Preventers .. 0-6 pts
D. Maintenance ...................... 0-15 pts   (15%)
   D1. Documentation ................ 0-2 pts
   D2. Schedule ..................... 0-3 pts
   D3. Procedures ................... 0-10 pts

Total Index Sum ..................... 0-400 pts
Leak Impact Factor
Leak Impact Factor = Product Hazard (PH) x Leak Volume (LV) x Dispersion (D) x Receptors (R)

A. Product Hazard (Acute + Chronic Hazards) ... 0-22 points
   A1. Acute Hazards
       a. Nf ........................ 0-4 pts
       b. Nr ........................ 0-4 pts
       c. Nh ........................ 0-4 pts
       Total (Nf + Nr + Nh) ......... 0-12 pts
   A2. Chronic Hazard, RQ ........... 0-10 pts
B. Leak Volume (LV)
C. Dispersion (D)
D. Receptors (R)
   D1. Population Density (Pop)
   D2. Environmental Considerations (Env)
   D3. High-Value Areas (HVA)
   Total (Pop + Env + HVA)
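The arithmetic of this summary can be sketched in a few lines of code. This is a rough illustration only: the numeric inputs are hypothetical placeholders, not values from the text, and the sketch assumes (following the summary formulas) that each index is scored 0-100 points, that the Index Sum is their total, and that the Relative Risk Rating is the Index Sum divided by the Leak Impact Factor, so that higher index points and lower LIF both indicate a safer segment.

```python
# Sketch of the "Risk Assessment at a Glance" arithmetic.
# All numeric inputs below are hypothetical example values.

def index_sum(third_party, corrosion, design, incorrect_ops):
    """Each index is scored 0-100 points; the sum spans 0-400 points."""
    for score in (third_party, corrosion, design, incorrect_ops):
        if not 0 <= score <= 100:
            raise ValueError("each index must be in 0-100 points")
    return third_party + corrosion + design + incorrect_ops

def leak_impact_factor(product_hazard, leak_volume, dispersion, receptors):
    """LIF = PH x LV x D x R, per the summary above."""
    return product_hazard * leak_volume * dispersion * receptors

def relative_risk(idx_sum, lif):
    """Relative Risk Rating = Index Sum / Leak Impact Factor."""
    return idx_sum / lif

# One hypothetical pipeline segment:
total = index_sum(third_party=62, corrosion=55, design=70, incorrect_ops=80)
lif = leak_impact_factor(product_hazard=1.2, leak_volume=1.0,
                         dispersion=1.5, receptors=2.0)
print(total, round(relative_risk(total, lif), 1))  # 267 74.2
```

In practice each index value would itself be built up from the weighted subvariables listed above, and the scheme is relative: ratings are meaningful for comparing segments against each other, not as absolute failure probabilities.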
Risk: Theory and Application

Contents
I. The science and philosophy of risk
   Embracing paranoia  1/1
   The scientific method  1/2
   Modeling  1/3
II. Basic concepts  1/3
   Hazard  1/3
   Risk  1/4
   Failure  1/4
   Probability  1/4
   Frequency, statistics, and probability
   Failure rates  1/5
   Consequences  1/6
   Risk assessment  1/7
   Risk management  1/7
   Experts  1/8
III. Uncertainty  1/8
IV. Risk process - the general steps
I. The science and philosophy of risk

Embracing paranoia

One of Murphy's¹ famous laws states that "left to themselves, things will always go from bad to worse." This humorous prediction is, in a way, echoed in the second law of thermodynamics. That law deals with the concept of entropy. Stated simply, entropy
¹ Murphy's laws are famous parodies on scientific laws, humorously pointing out all the things that can and often do go wrong in science and life.
is a measure of the disorder of a system. The thermodynamics law states that "entropy must always increase in the universe and in any hypothetical isolated system within it" [34]. Practical application of this law says that to offset the effects of entropy, energy must be injected into any system. Without adding energy, the system becomes increasingly disordered. Although the law was intended to be a statement of a scientific property, it was seized upon by "philosophers" who defined system to mean a car, a house, economics, a civilization, or anything that became disordered. By this extrapolation, the law explains why a desk or a garage becomes increasingly cluttered until a cleanup (injection of energy) is initiated. Gases
diffuse and mix in irreversible processes, unmaintained buildings eventually crumble, and engines (highly ordered systems) break down without the constant infusion of maintenance energy. Here is another way of looking at the concept: "Mother Nature hates things she didn't create." Forces of nature seek to disorder man's creations until the creation is reduced to the most basic components. Rust is an example: metal seeks to disorder itself by reverting to its original mineral components. If we indulge ourselves with this line of reasoning, we may soon conclude that pipeline failures will always occur unless an appropriate type of energy is applied. Transport of products in a closed conduit, often under high pressure, is a highly ordered, highly structured undertaking. If nature indeed seeks increasing disorder, forces are continuously at work to disrupt this structured process. According to this way of thinking, a failed pipeline with all its product released into the atmosphere or into the ground, or equipment and components decaying and reverting to their original premanufactured states, represents the less ordered, more natural state of things. These quasi-scientific theories actually provide a useful way of looking at portions of our world. If we adopt a somewhat paranoid view of forces continuously acting to disrupt our creations, we become more vigilant. We take actions to offset those forces. We inject energy into a system to counteract the effects of entropy. In pipelines, this energy takes the forms of maintenance, inspection, and patrolling; that is, protecting the pipeline from the forces seeking to tear it apart. After years of experience in the pipeline industry, experts have established activities that are thought to directly offset specific threats to the pipeline. Such activities include patrolling, valve maintenance, corrosion control, and all of the other actions discussed in this text.
Many of these activities have been mandated by governmental regulations, but usually only after their value has been established by industry practice. Where the activity has not proven to be effective in addressing a threat, it has eventually been changed or eliminated. This evaluation process is ongoing. When new technology or techniques emerge, they are incorporated into operations protocols. The pipeline activity list is therefore being continuously refined. A basic premise of this book is that a risk assessment methodology should follow these same lines of reasoning. All activities that influence the pipeline, favorably or unfavorably, should be considered, even if comprehensive, historical data on the effectiveness of a particular activity are not yet available. Industry experience and operator intuition can and should be included in the risk assessment.
The scientific method

This text advocates the use of simplifications to better understand and manage the complex interactions of the many variables that make up pipeline risk. This approach may appear to some to be inconsistent with their notions about scientific process. Therefore, it may be useful to briefly review some pertinent concepts related to science, engineering, and even philosophy. The results of a good risk assessment are in fact the advancement of a theory. The theory is a description of the expected behavior, in risk terms, of a pipeline system over some future period of time. Ideally, the theory is formulated from a risk assessment technique that conforms with appropriate scientific
methodologies and has made appropriate use of information and logic to create a model that can reliably produce such theories. It is hoped that the theory is a fair representation of actual risks. To be judged a superior theory by the scientific community, it will use all available information in the most rigorous fashion and be consistent with all available evidence. To be judged a superior theory by most engineers, it will additionally have a level of rigor and sophistication commensurate with its predictive capability; that is, the cost of the assessment and its use will not exceed the benefits derived from its use. If the pipeline actually behaves as predicted, then everyone's confidence in the theory will grow, although results consistent with the predictions will never "prove" the theory. Much has been written about the generation and use of theories and the scientific method. One useful explanation of the scientific method is that it is the process by which scientists endeavor to construct a reliable and consistent representation of the world. In many common definitions, the methodology involves hypothesis generation and testing of that hypothesis:

1. Observe a phenomenon.
2. Hypothesize an explanation for the phenomenon.
3. Predict some measurable consequence that your hypothesis would have if it turned out to be true.
4. Test the predictions experimentally.
Much has also been written about the fallacy of believing that scientists use only a single method of discovery and that some special type of knowledge is thereby generated by this special method. For example, the classic methodology shown above would not help much with investigation of the nature of the cosmos. No single path to discovery exists in science, and no one clear-cut description can be given that accounts for all the ways in which scientific truth is pursued [56, 88]. Common definitions of the scientific method note aspects such as objectivity and acceptability of results from scientific study. Objectivity indicates the attempt to observe things as they are, without altering observations to make them consistent with some preconceived world view. From a risk perspective, we want our models to be objective and unbiased (see the discussion of bias later in this chapter). However, our data sources often cannot be taken at face value. Some interpretation and, hence, alteration is usually warranted, thereby introducing some subjectivity. Acceptability is judged in terms of the degree to which observations and experiments can be reproduced. Of course, the ideal risk model will be accurate, but accuracy may only be verified after many years. Reproducibility is another characteristic that is sought and immediately verifiable. If multiple assessors examine the same situation, they should come to similar conclusions if our model is acceptable. The scientific method requires both inductive reasoning and deductive reasoning. Induction or inference is the process of drawing a conclusion about an object or event that has yet to be observed or occur on the basis of previous observations of similar objects or events. In both everyday reasoning and scientific reasoning regarding matters of fact, induction plays a central role.
In an inductive inference, for example, we draw conclusions about an entire group of things, or a population, on the basis of data about a sample of that group or population; or we predict the occurrence of a future event on the basis of observations of similar past events; or we attribute a property to a nonobserved thing on the grounds that all observed things of
the same kind have that property; or we draw conclusions about causes of an illness based on observations of symptoms. Inductive inference permeates almost all fields, including education, psychology, physics, chemistry, biology, and sociology [56]. The role of induction is central to many of our processes of reasoning. At least one application of inductive reasoning in pipeline risk assessment is obvious: using past failures to predict future performance. A more narrow example of inductive reasoning for pipeline risk assessment would be: "Pipeline ABC is shallow and fails often, therefore all pipelines that are shallow fail more often." Deduction, on the other hand, reasons forward from established rules: "All shallow pipelines fail more frequently; pipeline ABC is shallow; therefore pipeline ABC fails more frequently." As an interesting aside to inductive reasoning, philosophers have struggled with the question of what justification we have to take for granted the common assumptions used with induction: that the future will follow the same patterns as the past; that a whole population will behave roughly like a randomly chosen sample; that the laws of nature governing causes and effects are uniform; or that we can presume that a sufficiently large number of observed objects gives us grounds to attribute something to another object we have not yet observed. In short, what is the justification for induction itself? Although it is tempting to try to justify induction by pointing out that inductive reasoning is commonly used in both everyday life and science, and its conclusions are, by and large, proven to be correct, this justification is itself an induction and therefore it raises the same problem: Nothing guarantees that simply because induction has worked in the past it will continue to work in the future.
The problem of induction raises important questions for the philosopher and logician whose concern it is to provide a basis for assessment of the correctness and the value of methods of reasoning [56, 88]. Beyond the reasoning foundations of the scientific method, there is another important characteristic of a scientific theory or hypothesis that differentiates it from, for example, an act of faith: A theory must be "falsifiable." This means that there must be some experiment or possible discovery that could prove the theory untrue. For example, Einstein's theory of relativity made predictions about the results of experiments. These experiments could have produced results that contradicted Einstein, so the theory was (and still is) falsifiable [56]. On the other hand, the existence of God is an example of a proposition that cannot be falsified by any known experiment. Risk assessment results, or "theories," will predict very rare events and hence not be falsifiable for many years. This implies an element of faith in accepting such results. Because most risk assessment practitioners are primarily interested in the immediate predictive power of their assessments, many of these issues can largely be left to the philosophers. However, it is useful to understand the implications and underpinnings of our beliefs.
Modeling

As previously noted, the scientific method is a process by which we create representations or models of our world. Science and engineering (as applied science) are and always have been concerned with creating models of how things work.
As it is used here, the term model refers to a set of rules that are used to describe a phenomenon. Models can range from very simple screening tools (i.e., "if A and not B, then risk = low") to enormously complex sets of algorithms involving hundreds of variables that employ concepts from expert systems, fuzzy logic, and other artificial intelligence constructs. Model construction enables us to better understand our physical world and hence to create better engineered systems. Engineers actively apply such models in order to build more robust systems. Model building and model application/evaluation are therefore the foundation of engineering. Similarly, risk assessment is the application of models to increase the understanding of risk, as discussed later in this chapter. In addition to the classical models of logic, logic techniques are emerging that seek to better deal with uncertainty and incomplete knowledge. Methods of measuring "partial truths," when a thing is neither completely true nor completely false, have been created based on fuzzy logic, originating in the 1960s at the University of California at Berkeley as a technique to model the uncertainty of natural language. Fuzzy logic, or fuzzy set theory, resembles human reasoning in the face of uncertainty and approximate information. Questions such as "To what degree is it safe?" can be addressed through these techniques. They have found engineering application in many control systems, ranging from "smart" clothes dryers to automatic trains.
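As a concrete illustration of the "partial truth" idea, a fuzzy set can be sketched with a triangular membership function. The "safe depth of cover" set and all numbers below are hypothetical, not drawn from the text:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy set "safe depth of cover" (meters), fully "safe" near 1.2 m.
for depth in (0.3, 0.9, 1.2):
    print(depth, round(triangular(depth, 0.5, 1.2, 2.0), 2))
```

A value of 1.0 means the depth is fully a member of the "safe" set, 0.0 means not a member at all, and intermediate values express the partial truth that classical two-valued logic cannot.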
II. Basic concepts

Hazard

Underlying the definition of risk is the concept of hazard. The word hazard comes from al zahr, the Arabic word for "dice," which referred to an ancient game of chance [10]. We typically define a hazard as a characteristic or group of characteristics that provides the potential for a loss. Flammability and toxicity are examples of such characteristics. It is important to make the distinction between a hazard and a risk because we can change the risk without changing a hazard. When a person crosses a busy street, the hazard should be clear to that person. Loosely defined, it is the prospect that the person must place himself in the path of moving vehicles that can cause him great bodily harm were he to be struck by one or more of them. The hazard is therefore injury or fatality as a result of being struck by a moving vehicle. The risk, however, is dependent on how that person conducts himself in the crossing of the street. He most likely realizes that the risk is reduced if he crosses in a designated traffic-controlled area and takes extra precautions against vehicle operators who may not see him. He has not changed the hazard (he can still be struck by a vehicle), but his risk of injury or death is reduced by prudent actions. Were he to encase himself in an armored vehicle for the trip across the street, his risk would be reduced even further: he has reduced the consequences of the hazard. Several methodologies are available to identify hazards and threats in a formal and structured way. A hazard and operability (HAZOP) study is a technique in which a team of system experts is guided through a formal process in which imaginative scenarios are developed using specific guide words and analyzed by the team. Event-tree and fault-tree analyses are other tools. Such techniques underlie the identified threats to pipeline integrity that are presented in this book. Identified threats can be generally grouped into two categories: time-dependent failure mechanisms and random failure mechanisms, as discussed later. The phrases threat assessment and hazard identification are sometimes used interchangeably in this book when they refer to identifying mechanisms that can lead to a pipeline failure with accompanying consequences.
Risk

Risk is most commonly defined as the probability of an event that causes a loss and the potential magnitude of that loss. By this definition, risk is increased when either the probability of the event increases or when the magnitude of the potential loss (the consequences of the event) increases. Transportation of products by pipeline is a risk because there is some probability of the pipeline failing, releasing its contents, and causing damage (in addition to the potential loss of the product itself). The most commonly accepted definition of risk is often expressed as a mathematical relationship:

Risk = (event likelihood) x (event consequence)
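Expressed as an expected loss, the relationship can be computed directly. The likelihood and consequence figures below are purely illustrative assumptions:

```python
# Risk = (event likelihood) x (event consequence), per the common definition.
# Hypothetical segment: estimated failure likelihood of 0.002 events per
# mile-year and an expected consequence of $500,000 per event.
likelihood = 0.002      # failures per mile-year (assumed)
consequence = 500_000   # dollars of loss per failure (assumed)
risk = likelihood * consequence
print(risk)             # expected loss, dollars per mile-year
```

Either doubling the likelihood or doubling the consequence doubles the risk, which is why both halves of the definition must be assessed.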
As such, a risk is often expressed in measurable quantities such as the expected frequency of fatalities, injuries, or economic loss. Monetary costs are often used as part of an overall expression of risk; however, the difficult task of assigning a dollar value to human life or environmental damage is necessary in using this as a metric. Related risk terms include acceptable risk, tolerable risk, risk tolerance, and negligible risk, in which risk assessment and decision making meet. These are discussed in Chapters 14 and 15. A complete understanding of the risk requires that three questions be answered:

1. What can go wrong?
2. How likely is it?
3. What are the consequences?

By answering these questions, the risk is defined.
Failure

Answering the question of "what can go wrong?" begins with defining a pipeline failure. The unintentional release of pipeline contents is one definition. Loss of integrity is another way to characterize pipeline failure. However, a pipeline can fail in other ways that do not involve a loss of contents. A more general definition is failure to perform its intended function. In assessing the risk of service interruption, for example, a pipeline can fail by not meeting its delivery requirements (its intended purpose). This can occur through blockage, contamination, equipment failure, and so on, as discussed in Chapter 10. Further complicating the quest for a universal definition of failure is the fact that municipal pipeline systems like water and wastewater and even natural gas distribution systems tolerate some amount of leakage (unlike most transmission pipelines). Therefore, they might be considered to have failed only when the leakage becomes excessive by some measure. Except in the case of service interruption discussed in Chapter 10, the general definition of failure in this book will be excessive leakage. The term leakage implies that the release of pipeline contents is unintentional. This lets our definition distinguish a failure from a venting, de-pressuring, blowdown, flaring, or other deliberate product release. Under this working definition, a failure will be clearer in some cases than others. For most hydrocarbon transmission pipelines, any leakage (beyond minor, molecular-level emissions) is excessive, so any leak means that the pipeline has failed. For municipal systems, determination of failure will not be as precise for several reasons, such as the fact that some leakage is only excessive (that is, a pipe failure) after it has continued for a period of time. Failure occurs when the structure is subjected to stresses beyond its capabilities, resulting in its structural integrity being compromised. Internal pressure, soil overburden, extreme temperatures, external forces, and fatigue are examples of stresses that must be resisted by pipelines. Failure or loss of strength leading to failure can also occur through loss of material by corrosion or from mechanical damage such as scratches and gouges. The answers to what can go wrong must be comprehensive in order for a risk assessment to be complete. Every possible failure mode and initiating cause must be identified. Every threat to the pipeline, even the more remotely possible ones, must be identified. Chapters 3 through 6 detail possible pipeline failure mechanisms grouped into the four categories of Third Party, Corrosion, Design, and Incorrect Operations. These roughly correspond to the dominant failure modes that have been historically observed in pipelines.
Probability

By the commonly accepted definition of risk, it is apparent that probability is a critical aspect of all risk assessments. Some estimate of the probability of failure will be required in order to assess risks. This addresses the second question of the risk definition: "How likely is it?" Some think of probability as inextricably intertwined with statistics. That is, "real" probability estimates arise only from statistical analyses, relying solely on measured data or observed occurrences. However, this is only one of five definitions of probability offered in Ref. [88]. It is a compelling definition since it is rooted in aspects of the scientific process and the familiar inductive reasoning. However, it is almost always woefully incomplete as a stand-alone basis for probability estimates of complex systems. In reality, there are no systems beyond very simple, fixed-outcome-type systems that can be fully understood solely on the basis of past observations, which are the core of statistics. Almost any system of a complexity beyond a simple roll of a die, spin of a roulette wheel, or draw from a deck of cards will not be static enough or allow enough trials for statistical analysis to completely characterize its behavior. Statistics requires data samples: past observations from which inferences can be drawn. More interesting systems tend to have fewer available observations that are strictly representative of their current states. Data interpretation becomes more and more necessary to obtain meaningful estimates. As systems become more complex, more variable in nature, and where trial observations are less available, the historical frequency approach
will often provide answers that are highly inappropriate estimates of probability. Even in cases where past frequencies lead to more reliable estimates of future events for populations, those estimates are often only poor estimates of individual events. It is relatively easy to estimate the average adulthood height of a class of third graders, but more problematic when we try to predict the height of a specific student solely on the basis of averages. Similarly, just because the national average of pipeline failures might be 1 per 1,000 mile-years, the 1,000-mile-long ABC pipeline could be failure free for 50 years or more. The point is that observed past occurrences are rarely sufficient information on which to base probability estimates. Many other types of information can and should play an important role in determining a probability. Weather forecasting is a good example of how various sources of information come together to form the best models. The use of historical statistics (climatological data: what the weather has been like historically on this date) turns out to be a fairly decent forecasting tool (producing probability estimates), even in the absence of any meteorological interpretations. However, a forecast based solely on what has happened in previous years on certain dates would ignore knowledge of frontal movements, pressure zones, current conditions, and other information commonly available. The forecasts become much more accurate as meteorological information and expert judgment are used to adjust the base case climatological forecasts [88]. Underlying most of the complete definitions of probability is the concept of degree of belief: a probability expresses a degree of belief. This is the most compelling interpretation of probability because it encompasses the statistical evidence as well as the interpretations and judgment.

Ideally, the degree of belief could be determined in some consistent fashion so that any two estimators would arrive at the same conclusion given the same evidence. It is a key purpose of this book to provide a framework by which a given set of evidence consistently leads to a specific degree of belief regarding the safety of a pipeline. (Note that the terms likelihood, probability, and chance are often used interchangeably in this text.)
Frequency, statistics, and probability

As used in this book, frequency usually refers to a count of past observations; statistics refers to the analyses of the past observations; and the definition of probability is "degree of belief," which normally utilizes statistics but is rarely based entirely on them. A statistic is not a probability. Statistics are only numbers or methods of analyzing numbers. They are based on observations: past events. Statistics do not imply anything about future events until inductive reasoning is employed. Therefore, a probabilistic analysis is not only a statistical analysis. As previously noted, probability is a degree of belief. It is influenced by statistics (past observations), but only in rare cases do the statistics completely determine our belief. Such a rare case would be where we have exactly the same situation as that from which the past observations were made and we are making estimates for a population exactly like the one from which the past data arose: a very simple system. Historical failure frequencies, and the associated statistical values, are normally used in a risk assessment. Historical data, however, are not generally available in sufficient quantity or quality for most event sequences. Furthermore, when data are available, it is normally rare-event data: one failure in many years of service on a specific pipeline, for instance. Extrapolating future failure probabilities from small amounts of information can lead to significant errors. However, historical data are very valuable when combined with all other information available to the evaluator. Another possible problem with using historical data is the assumption that the conditions remain constant. This is rarely true, even for a particular pipeline. For example, when historical data show a high occurrence of corrosion-related leaks, the operator presumably takes appropriate action to reduce those leaks. His actions have changed the situation, and previous experience is now weaker evidence. History will foretell the future only when no offsetting actions are taken. Although important pieces of evidence, historical data alone are rarely sufficient to properly estimate failure probabilities.
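One common way to blend a prior degree of belief with sparse historical data, though not a method prescribed by this text, is a conjugate Bayesian (gamma-Poisson) update of the failure rate. All numbers below are assumed for illustration:

```python
# Gamma-Poisson (conjugate Bayesian) update of a pipeline failure rate.
# All numbers are illustrative assumptions, not data from the text.

# Prior degree of belief: roughly 1 failure per 1,000 mile-years,
# encoded as a Gamma(alpha, beta) prior with beta in mile-years.
alpha, beta = 1.0, 1000.0

# Sparse historical record for this pipeline: 0 failures in 250 mile-years.
failures, exposure = 0, 250.0

# Conjugate update: posterior is Gamma(alpha + failures, beta + exposure).
alpha_post = alpha + failures
beta_post = beta + exposure

rate_prior = alpha / beta            # prior mean rate: 0.001 per mile-year
rate_post = alpha_post / beta_post   # posterior mean rate: 0.0008 per mile-year
print(rate_prior, rate_post)
```

The sparse, failure-free record nudges the belief downward without overwhelming it, which matches the point above: the data influence the degree of belief but rarely determine it alone.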
Failure rates

A failure rate is simply a count of failures over time. It is usually first a frequency observation of how often the pipeline has failed over some previous period of time. A failure rate can also be a prediction of the number of failures to be expected in a given future time period. The failure rate is normally divided into rates of failure for each failure mechanism. The ways in which a pipeline can fail can be loosely categorized according to the behavior of the failure rate over time. When the failure rate tends to vary only with a changing environment, the underlying mechanism is usually random and should exhibit a constant failure rate as long as the environment stays constant. When the failure rate tends to increase with time and is logically linked with an aging effect, the underlying mechanism is time dependent. Some failure mechanisms and their respective categories are shown in Table 1.1. There is certainly an aspect of randomness in the mechanisms labeled time dependent and the possibility of time dependency for some of the mechanisms labeled random. The labels point to the probability estimation protocol that seems to be most appropriate for the mechanism. The historical rate of failures on a particular pipeline system may tell an evaluator something about that system. Figure 1.1 is a graph that illustrates the well-known "bathtub" shape of failure rate changes over time. This general shape represents the failure rate for many manufactured components and systems over their lifetimes. Figure 1.2 is a theorized bathtub curve for pipelines.
Table 1.1  Failure rates vs. failure mechanisms

Failure mechanism        Nature of mechanism                              Failure rate tendency
Corrosion                Time dependent                                   Increase
Cracking                 Time dependent                                   Increase
Third-party damage       Random                                           Constant
Laminations/blistering   Random                                           Constant
Earth movements          Random (except for slow-acting instabilities)    Constant
Material degradation     Time dependent                                   Increase
Material defects         Random                                           Constant
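The failure-rate tendencies in Table 1.1 can be mimicked with a Weibull hazard function, a standard reliability-engineering form (an illustration, not a model from the text): a shape parameter of 1 gives a constant rate, matching the random mechanisms, while a shape greater than 1 gives an increasing rate, matching the time-dependent mechanisms.

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

# shape = 1.0 -> constant rate (random mechanisms, e.g., third-party damage)
# shape = 3.0 -> increasing rate (time-dependent mechanisms, e.g., corrosion)
# Scale of 50 years is an assumed characteristic life for illustration.
for t in (5.0, 10.0, 20.0):
    print(t, weibull_hazard(t, 1.0, 50.0), weibull_hazard(t, 3.0, 50.0))
```

A shape parameter below 1 would likewise produce the decreasing burn-in portion of the bathtub curve discussed next.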
Figure 1.1 Common failure rate curve (bathtub curve): failure rate on the vertical axis, time on the horizontal axis.

Some pieces of equipment or installations have a high initial rate of failure. This first portion of the curve is called the burn-in phase or infant mortality phase. Here, defects that developed during initial manufacture of a component cause failures. As these defects are eliminated, the curve levels off into the second zone. This is the so-called constant failure zone and reflects the phase where random accidents maintain a fairly constant failure rate. Components that survive the burn-in phase tend to fail at a constant rate. Failure mechanisms that are more random in nature (third-party damages or most land movements, for example) tend to drive the failure rate in this part of the curve. Far into the life of the component, the failure rate may begin to increase. This is the zone where things begin to wear out as they reach the end of their useful service life. Where a time-dependent failure mechanism (corrosion or fatigue) is involved, its effects will be observed in this wear-out phase of the curve. An examination of the failure data of a particular system may suggest such a curve and theoretically tell the evaluator what stage the system is in and what can be expected. Failure rates are further discussed in Chapter 14.

Consequences
Inherent in any risk evaluation is a judgment of the potential consequences. This is the last of the three risk-defining questions: If something goes wrong, what are the consequences? Consequence implies a loss of some kind. Many of the aspects of potential losses are readily quantified. In the case of a major hydrocarbon pipeline accident (product escaping, perhaps causing an explosion and fire), we could quantify losses such as damaged buildings, vehicles, and other property; costs of service interruption; cost of the product lost; cost of the cleanup; and so on. Consequences are sometimes grouped into direct and indirect categories, where direct costs include:

Property damages
Damages to human health
Environmental damages
Loss of product
Repair costs
Cleanup and remediation costs

Indirect costs can include litigation, contract violations, customer dissatisfaction, political reactions, loss of market share, and government fines and penalties.
Figure 1.2 Theorized failure rate curve for pipelines: failures vs. time, with random mechanisms (third-party damage, earth movements, material defects) dominating early in life and time-dependent mechanisms (corrosion, fatigue) dominating later.
As a common denominator, the monetary value of losses is often used to quantify consequences. Such "monetizing" of consequences, assigning dollar values to damages, is straightforward for some damages. For others, such as loss of life and environmental impacts, it is more difficult to apply. Much has been written on the topic of the value of human life, and this is further discussed in absolute risk quantification (see Chapter 14). Placing a value on the consequences of an accident is a key component in society's determination of how much it is willing to spend to prevent that accident. This involves concepts of acceptable risk and is discussed in Chapter 15. The hazards that cause consequences and are created by the loss of integrity of an operating pipeline will include some or all of the following:

Toxicity/asphyxiation threats from released products: contact toxicity or exclusion of air from confined spaces
Contamination/pollution from released products: damage to flora, fauna, drinking waters, etc.
Mechanical effects from the force of escaping product: erosion, washouts, projectiles, etc.
Fire/ignition scenarios involving released products: pool fires, fireballs, jet fires, explosions
These hazards are fully discussed in following chapters, beginning with Chapter 7.
Risk assessment

Risk assessment is a measuring process, and a risk model is a measuring tool. Included in most quality and management concepts is the need for measurement. It has been said that "If you don't have a number, you don't have a fact; you have an opinion." While the notion of a "quantified opinion" adds shades of gray to an absolute statement like this, most would agree that quantifying something is at least the beginning of establishing its factual nature. It is always possible to quantify things we truly understand. When we find it difficult to express something in numbers, it is usually because we don't have a complete understanding of the concept. Risk assessment must measure both the probability and consequences of all of the potential events that comprise the hazard. Using the risk assessment, we can make decisions related to managing those risks. Note that risk is not a static quantity. Along the length of a pipeline, conditions are usually changing. As they change, the risk is also changing in terms of what can go wrong, the likelihood of something going wrong, and/or the potential consequences. Because conditions also change with time, risk is not constant even at a fixed location. When we perform a risk evaluation, we are actually taking a snapshot of the risk picture at a moment in time. There is no universally accepted method for measuring risk. The relative advantages and disadvantages of several approaches are discussed later in this chapter. It is important to recognize what a risk assessment can and cannot do, regardless of the methodology employed. The ability to predict pipeline failures, when and where they will occur, would obviously be a great advantage in reducing risk. Unfortunately, this cannot be done at present. Pipeline accidents are relatively rare and often involve the simultaneous failure of several safety provisions. This makes accurate failure predictions almost impossible. So, modern risk assessment methodologies provide a surrogate for such predictions. Assessment efforts by pipeline operating companies are normally not attempts to predict how many failures will occur or where the next failure will occur. Rather, efforts are designed to systematically and objectively capture everything that can be known about the pipeline and its environment, to put this information into a risk context, and then to use it to make better decisions. Risk assessments normally involve examining the factors or variables that combine to create the whole risk picture. A complete list of underlying risk factors (that is, those items that add to or subtract from the amount of risk) can be identified for a pipeline system. Including all of these items in an assessment, however, could create a somewhat unwieldy system and one of questionable utility. Therefore, a list of critical risk indicators is usually selected based on their ability to provide useful risk signals without adding unnecessary complexities. Most common approaches advocate the use of a model to organize or enhance our understanding of the factors and their myriad possible interactions. A risk assessment therefore involves tradeoffs between the number of factors considered and the ease of use or cost of the assessment model. The important variables are widely recognized, but the number to be considered in the model (and the depth of that consideration) is a matter of choice for the model developers. The concept of the signal-to-noise ratio is pertinent here. In risk assessment, we are interested in measuring risk levels: the risk is the signal we are trying to detect. We are measuring in a very "noisy" environment, in which random fluctuations and high uncertainty tend to obscure the signal.
The signal-to-noise ratio concept tells us that the signal has to be of a certain strength before we can reliably pick it out of the background noise. Perhaps only very large differences in risk will be detectable with our risk models. Smaller differences might be indistinguishable from the background noise or uncertainty in our measurements. We must recognize the limitations of our measuring tool so that we are not wasting time chasing apparent signals that are, in fact, false positives or false negatives. Statistical quality control processes acknowledge this and employ statistical control charts to determine which measurements are worth investigating further. Some variables will intuitively contribute more to the signal, that is, the risk level. Changes in variables such as population density, type of product, and pipe stress level will very obviously change the possible consequences or failure probability. Others, such as flow rate and depth of cover, will also impact the risk, but perhaps not as dramatically. Still others, such as soil moisture, soil pH, and type of public education advertising, will certainly have some effect, but the magnitude of that effect is arguable. These latter are not arguable in the sense that they cannot contribute to a failure, because they certainly can in some imaginable scenarios, but in the sense that they may be more noise than signal, as far as a model can distinguish. That is, their contributions to risk may be below the sensitivity thresholds of the risk assessment.
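The signal-versus-noise distinction can be sketched numerically. The repeated "risk score" readings below are hypothetical, and the pooled standard deviation is only a crude noise estimate; the point is simply comparing a score difference against the spread of the measurements:

```python
import statistics

# Hypothetical repeated risk-score evaluations of two pipeline segments.
# The scatter within each list represents measurement noise/uncertainty.
seg_a = [54.1, 55.3, 53.8, 54.9, 55.0]
seg_b = [55.2, 54.0, 55.9, 54.4, 55.6]

diff = statistics.mean(seg_b) - statistics.mean(seg_a)   # apparent signal
noise = statistics.stdev(seg_a + seg_b)                  # crude noise estimate

# Treat a difference smaller than the noise as indistinguishable.
print(diff, noise, diff > noise)
```

Here the apparent difference between segments is smaller than the scatter of the measurements, so a model of this sensitivity could not reliably call one segment riskier than the other.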
Risk management

Risk management is a reaction to perceived risks. It is practiced every day by every individual. In operating a motor vehicle, compensating for poor visibility by slowing down demonstrates a simple application of risk management. The driver knows that a change in the weather variable of visibility impacts the risk because her reaction times will be reduced. Reducing vehicle speed compensates for the reduced reaction time. While this example appears obvious, reaching this conclusion without some mental model of risk would be difficult. Risk management, for the purposes of this book, is the set of actions adopted to control risk. It entails a process of first assessing a level of risk associated with a facility and then preparing and executing an action plan to address current and future risks. The assimilation of complex data and the subsequent integration of sometimes competing risk reduction and profit goals are at the heart of any debate about how best to manage pipeline risks. Decision making is the core of risk management. Many challenging questions are implied in risk management: Where and when should resources be applied? How much urgency should be attached to any specific risk mitigation? Should only the worst segments be addressed first? Should resources be diverted from less risky segments in order to better mitigate risks in higher risk areas? How much will risk change if we do nothing differently? An appropriate risk mitigation strategy might involve risk reductions for very specific areas or, alternatively, improving the risk situation in general for long stretches of pipeline. Note also that a risk reduction project may impact many variables for a few segments or, alternatively, might impact a few variables but for many segments. Although the process of pipeline risk management does not have to be complex, it can incorporate some very sophisticated engineering and statistical concepts. A good risk assessment process leads the user directly into risk management by highlighting specific actions that can reduce risks.
Risk mitigation plans are often developed using "what-if" scenarios in the risk assessment. The intention is not to make risk disappear. If we make any risk disappear, we will likely have sacrificed some other aspect of our lifestyles that we probably don't want to give up. As an analogy, we can eliminate highway fatalities, but are we really ready to give up our cars? Risks can be minimized, however, at least to the extent that no unacceptable risks remain.
Experts

The term experts as it is used here refers to people most knowledgeable in the subject matter. An expert is not restricted to a scientist or other technical person. The greatest expertise for a specific pipeline system probably lies with the workforce that has operated and maintained that system for many years. The experience and intuition of the entire workforce should be tapped as much as is practical when performing a risk assessment. Experts bring to the assessment a body of knowledge that goes beyond statistical data. Experts will discount some data that do not adequately represent the scenario being judged. Similarly, they will extrapolate from dissimilar situations that may have better data available. The experience factor and the intuition of experts should not be discounted merely because they cannot be easily quantified. Normally little disagreement will exist among knowledgeable persons when risk contributors and risk reducers are evaluated. If differences arise that cannot be resolved, the risk evaluator can have each opinion quantified and then produce a compiled value to use in the assessment. When knowledge is incomplete and opinion, experience, intuition, and other unquantifiable resources are used, the assessment of risk becomes at least partially subjective. As it turns out, knowledge is always incomplete and some aspect of judgment will always be needed for a complete assessment. Hence, subjectivity is found in any and all risk assessment methodologies. Humans tend to have bias, and experts are not immune from this. Knowledge of possible bias is the first step toward minimizing it. One source [88] identifies many types of bias and heuristic assumptions that are related to learning based on experiment or observation. These are shown in Table 1.2.
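Compiling differing quantified opinions, as suggested above, might be as simple as a weighted average. The experts, weights, and rate estimates below are hypothetical:

```python
# Hypothetical compiled value from differing expert opinions: a simple
# credibility-weighted average. Weights (e.g., years of relevant
# experience) and rate estimates are assumed for illustration.
opinions = {"field technician": 0.004,       # failures per mile-year
            "corrosion engineer": 0.002,
            "operations manager": 0.003}
weights = {"field technician": 20,
           "corrosion engineer": 15,
           "operations manager": 10}

compiled = sum(opinions[k] * weights[k] for k in opinions) / sum(weights.values())
print(compiled)
```

More elaborate schemes exist (Delphi rounds, calibration-based weighting), but any of them serves the same purpose: turning irreconcilable individual judgments into a single usable value.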
III. Uncertainty

As noted previously, risk assessment is a measuring process. Like all measuring systems, measurement error and uncertainty arise as a result of the limitations of the measuring tool, the process of taking the measurement, and the person performing the measurement. Pipeline risk assessment is also the compilation of many other measurements (depth of cover, wall thickness, pipe-to-soil voltages, pressure, etc.) and hence absorbs all of those measurement uncertainties. It makes use of engineering and scientific models (stress formulas, vapor dispersion and thermal effects modeling, etc.) that also have accompanying errors and uncertainties. In the use of past failure rate information, additional uncertainty results from small sample sizes and comparability, as discussed previously. Further adding to the uncertainty is the fact that the thing being measured is constantly changing. It is perhaps useful to view a pipeline system, including its operating environment, as a complex entity with behavior similar to that seen in dynamic or chaotic systems. Here the term chaotic is being used in its scientific meaning (chaos theory) rather than implying a disorganized or random nature in the conventional sense of the word. In science, dynamic or chaotic systems refer to the many systems in our world that do not behave in strictly predictable or linear fashions. They are not completely deterministic nor completely random, and things never happen in exactly the same way. A pipeline, with its infinite combinations of historical, environmental, structural, operational, and maintenance parameters, can be expected to behave as a so-called dynamic system, perhaps establishing patterns over time, but never repetition. As such, we recognize that, as one possible outcome of the process of pipelining, the risk of pipeline failure is sensitive to immeasurable or unknowable initial conditions.
In essence, we are trying to find differences in risk out of all the many sources of variation inherent in a system that places a man-made structure in a complex and ever-changing environment. Recall the earlier discussion on signal-to-noise considerations in risk assessment. In more practical terms, we can identify all of the threats to the pipeline. We understand the mechanisms underlying the threats. We know the options in mitigating the threats. But in knowing these things, we also must know the uncertainty involved: we cannot know and control enough of the details to entirely eliminate risk. At any point in time, thousands of forces are acting on a pipeline, the magnitudes of which are "unknown and unknowable." An operator will never have all of the relevant information he needs to absolutely guarantee safe operations. There will always be an element of the unknown. Managers must control the "right" risks with limited resources because there will always be limits on the amount of time, manpower, or money that can be applied to a risk situation. Managers must weigh their decisions carefully in light of what is known and unknown. It is usually best to assume that

uncertainty = increased risk

Table 1.2  Types of bias and heuristics

Availability heuristic: Judging likelihood by instances most easily or vividly recalled
Availability bias: Overemphasizing available or salient instances
Hindsight bias: Exaggerating in retrospect what was known in advance
Anchoring and adjustment heuristic: Adjusting an initial probability to a final value
Insufficient adjustment: Insufficiently modifying the initial value
Conjunctive distortion: Misjudging the probability of combined events relative to their individual values
Representativeness heuristic: Judging likelihood by similarity to some reference class
Representativeness bias: Overemphasizing similarities and neglecting other information; confusing "probability of A given B" with "probability of B given A"
Insensitivity to predictability: Exaggerating the predictive validity of some method or indicator
Base-rate neglect: Overlooking frequency information
Insensitivity to sample size: Overemphasizing significance of limited data
Overconfidence bias: Greater confidence than warranted, with probabilities that are too extreme or distributions too narrow about the mean
Underconfidence bias: Less confidence than warranted in evidence with high weight but low strength
Personal bias: Intentional distortion of assessed probabilities to advance an assessor's self-interest
Organizational bias: Intentional distortion of assessed probabilities to advance a sponsor's interest in achieving an outcome

Source: Vick, Steven G., Degrees of Belief: Subjective Probability and Engineering Judgment, ASCE Press, Reston, VA, 2002.
This impacts risk assessment in several ways. First, when information is unknown, it is conservatively assumed that unfavorable conditions exist. This not only encourages the frequent acquisition of information, but it also enhances the risk assessment's credibility, especially to outside observers. It also makes sense from an error analysis standpoint. Two possible errors can occur when assessing a condition: saying it is "good" when it is actually "bad," and saying it is "bad" when it is actually "good." If a condition is assumed to be good when it is actually bad, this error will probably not be discovered until some unfortunate event occurs. The operator will most likely be directing resources toward suspected deficiencies, not recognizing that an actual deficiency has been hidden by an optimistic evaluation. At the point of discovery by incident, the ability of the risk assessment to point out any other deficiency is highly suspect. An outside observer can say, "Look, this model is assuming that everything is rosy; how can we believe anything it says?!" On the other hand, assuming a condition is bad when it is actually good merely has the effect of highlighting the condition until better information makes the "red flag" disappear. Consequences are far less with this
latter type of error. The only cost is the effort to get the correct information. So, this "guilty until proven innocent" approach is actually an incentive to reduce uncertainty. Uncertainty also plays a role in inspection information. Many conditions continuously change over time. As inspection information gets older, its relevance to current conditions becomes more uncertain. All inspection data should therefore be assumed to deteriorate in usefulness and, hence, in risk-reducing ability. This is further discussed in Chapter 2. The great promise of risk analysis is its use in decision support. However, this promise is not without its own element of risk: the misuse of risk analysis, perhaps through failure to consider uncertainty. This is discussed as a part of risk management in Chapter 15. As noted in Ref. [74]:

The primary problem with risk assessment is that the information on which decisions must be based is usually inadequate. Because the decisions cannot wait, the gaps in information must be bridged by inference and belief, and these cannot be evaluated in the same way as facts. Improving the quality and comprehensiveness of knowledge is by far the most effective way to improve risk assessment, but some limitations are inherent and unresolvable, and inferences will always be required.
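The "guilty until proven innocent" default philosophy described above can be sketched in a few lines of code. The variable names, values, and defaults below are invented for illustration and are not taken from any particular model:

```python
# Sketch of conservative defaulting: when a risk variable is unknown,
# assume the least favorable (most risk-adding) value until data prove otherwise.
# All variable names and default values here are hypothetical.
CONSERVATIVE_DEFAULTS = {
    "depth_of_cover_in": 0,       # assume no cover until a survey shows otherwise
    "coating_condition": "poor",  # assume degraded coating
    "cp_reading_ok": False,       # assume cathodic protection is inadequate
}

def resolve(variable, observed):
    """Return the observed value if known, else the conservative default."""
    return observed if observed is not None else CONSERVATIVE_DEFAULTS[variable]

# A segment with no cathodic-protection survey on record is scored as deficient,
# keeping the "red flag" raised until better information makes it disappear.
print(resolve("cp_reading_ok", None))   # no data: treated as a deficiency
print(resolve("cp_reading_ok", True))   # survey data overrides the default
```

The asymmetry discussed in the text is built in: an optimistic error can hide until an incident, while a pessimistic default costs only the effort of gathering the correct information.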
IV. Risk process-the general steps Having defined some basic terms and discussed general risk issues, we can now focus on the actual steps involved in risk management. The following are the recommended basic steps. These steps are all fully detailed in this text.
Step 1: Risk modeling The acquisition of a risk assessment process, usually in the form of a model, is a logical first step. A pipeline risk assessment model is a set of algorithms or rules that use available information and data relationships to measure levels of risk along a pipeline. An assessment model can be selected
from some commercially available existing models, customized from existing models, or created “from scratch” depending on your requirements. Multiple models can be run against the same set of data for comparisons and model evaluations.
Step 2: Data collection and preparation Data collection entails the gathering of everything that can be known about the pipeline, including all inspection data, original construction information, environmental conditions, operating and maintenance history, past failures, and so on. Data preparation is an exercise that results in data sets that are ready to be read into and used directly by the risk assessment model. A collection of tools enables users to smooth or enhance data points into zones of influence, categories, or bands to convert certain data sets into risk information. Data collection is discussed later in this chapter and data preparation issues are detailed in Chapter 8.
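The zone-of-influence idea mentioned above can be sketched as follows. The station numbering, reading values, and zone width are hypothetical:

```python
def apply_zone_of_influence(readings, zone_half_width):
    """Extend point readings (station, value) into bands (start, end, value)
    of pipe assumed to be characterized by each reading."""
    return [(station - zone_half_width, station + zone_half_width, value)
            for station, value in readings]

# A pipe-to-soil potential measured at station 1200 is taken as evidence
# about the surrounding 200 ft of pipe (a hypothetical zone width).
zones = apply_zone_of_influence([(1200, -0.72), (4500, -0.91)], 100)
print(zones)  # [(1100, 1300, -0.72), (4400, 4600, -0.91)]
```

In practice, the appropriate zone width is itself a modeling decision, tied to how far a given measurement can reasonably be assumed to characterize adjacent pipe.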
Step 3: Segmentation Because risks are rarely constant along a pipeline, it is advantageous to segment the line into sections with constant risk characteristics (dynamic segmentation) or otherwise divide the pipeline into manageable pieces. Segmentation strategies and techniques are discussed in Chapters 2 and 8, respectively.
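One common way to implement dynamic segmentation is to break the line wherever any risk-relevant attribute changes value, so that every attribute is constant within each resulting segment. A minimal sketch, with invented station data:

```python
def dynamic_segments(breakpoints_by_attribute):
    """Given, for each attribute, the stations where its value changes,
    return (start, end) segments over which every attribute is constant."""
    stations = sorted(set(s for pts in breakpoints_by_attribute.values() for s in pts))
    return list(zip(stations[:-1], stations[1:]))

# Hypothetical breakpoints: wall thickness changes at 3000 ft,
# population density changes at 1500 and 6000 ft, line runs 0-10000 ft.
breaks = {
    "wall_thickness": [0, 3000, 10000],
    "population_density": [0, 1500, 6000, 10000],
}
segs = dynamic_segments(breaks)
print(segs)  # [(0, 1500), (1500, 3000), (3000, 6000), (6000, 10000)]
```

Each resulting segment can then be scored independently, since no risk characteristic changes within it.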
Step 4: Assessing risks

Now the previously selected risk assessment model can be applied to each segment to get a unique risk "score" for that segment. These relative risk numbers can later be converted into absolute risk numbers. Working with results of risk assessments is discussed in Chapters 8, 14, and 15.

Step 5: Managing risks

Having performed a risk assessment for the segmented pipeline, we now face the critical step of managing the risks. In this area, the emphasis is on decision support: providing the tools needed to best optimize resource allocation. This process generally involves steps such as the following:

- Analyzing data (graphically and with tables and simple statistics)
- Calculating cumulative risks and trends
- Creating an overall risk management strategy
- Identifying mitigation projects
- Performing what-ifs

These are fully discussed in subsequent chapters, especially Chapter 15. The first two steps in the overall process, (1) risk model and (2) data collection, are sometimes done in reverse order. An experienced risk modeler might begin with an examination of the types and quantity of data available and from that select a modeling approach. In light of this, the discussion of data collection issues precedes the model-selection discussion.

V. Data collection

Data and information are essential to good risk assessment. Appendix G shows some typical information-gathering efforts that are routinely performed by pipeline operators. After several years of operation, some large databases will have developed. Will these pieces of data predict pipeline failures? Only in extreme cases. Will they, in aggregate, tell us where risk hot spots are? Certainly. We obviously feel that all of this information is important: we collect it, base standards on it, base regulations on it, etc. It just needs to be placed into a risk context so that a picture of the risk emerges and better resource allocation decisions can be made based on that picture. The risk model transforms the data into risk knowledge. Given the importance of data to risk assessment, it is important to have a clear understanding of the data collection process. There exists a discipline to measuring. Before the data-gathering effort is started, four questions should be addressed:

1. What will the data represent?
2. How will the values be obtained?
3. What sources of variation exist?
4. Why are the data being collected?

What will the data represent?

The data are the sum of our knowledge about the pipeline section: everything we know, think, and feel about it, including when it was built, how it was built, how it is operated, how often it has failed or come close, what condition it is in now, what threats exist, what its surroundings are, and so on, all in great detail. Using the risk model, this compilation of information will be transformed into a representation of risk associated with that section. Inherent in the risk numbers will be a complete evaluation of the section's environment and operation.

How will the values be obtained?

Some rules for data acquisition will often be necessary. Issues requiring early standardization might include the following:
- Who will be performing the evaluations? The data can be obtained by a single evaluator or team of evaluators who will visit the pipeline operations offices personally to gather the information required to make the assessment. Alternatively, each portion of a pipeline system can be evaluated by those directly involved in its operations and maintenance. This becomes a self-evaluation in some respects. Each approach has advantages: in the former, it is easier to ensure consistency; in the latter, acceptance by the workforce might be greater.
- What manuals or procedures will be used? Steps should be taken to ensure consistency in the evaluations.
- How often will evaluations be repeated? Reevaluations should be scheduled periodically or the operators should be required to update the records periodically.
- Will "hard proof" or documentation be a requirement in all cases? Or can the evaluator accept "opinion" data in some circumstances? An evaluator will usually interview pipeline operators to help assign risk scores. Possibly the most common question asked by the evaluator will be "How do you know?" This should be asked in response to almost every assertion by the interviewee(s). Answers will determine the uncertainty around the item, and item scoring should reflect this uncertainty. This issue is discussed in many of the suggested scoring protocols in subsequent chapters.
- What defaults are to be used when no information is available? See the discussion on uncertainty in this chapter and Chapter 2.
What sources of variation exist?

Typical sources of variation in a pipeline risk assessment include

- Differences in the pipeline section environments
- Differences in the pipeline section operation
- Differences in the amount of information available on the pipeline section
- Evaluator-to-evaluator variation in information gathering and interpretation
- Day-to-day variation in the way a single evaluator assigns scores

Every measurement has a level of uncertainty associated with it. To be precise, a measurement should express this uncertainty: 10 ft ± 1 in., 15.7°F ± 0.2°. This uncertainty value represents some of the sources of variation previously listed: operator effects, instrument effects, day-to-day effects, etc. These effects are sometimes called measurement "noise," as noted previously in the signal-to-noise discussion. The variations that we are trying to measure, the relative pipeline risks, are hopefully much greater than the noise. If the noise level is too high relative to the variation of interest, or if the measurement is too insensitive to the variation of interest, the data become less meaningful. Reference [92] provides detailed statistical methods for determining the "usefulness" of the measurements. If more than one evaluator is to be used, it is wise to quantify the variation that may exist between the evaluators. This is easily done by comparing scoring by different evaluators of the same pipeline section. The repeatability of the evaluator can be judged by having her perform multiple scorings of the same section (this should be done without the evaluator's knowledge that she is repeating a previously performed evaluation). If these sources of variation are high, steps should be taken to reduce the variation. These steps may include

- Improved documentation and procedures
- Evaluator training
- Refinement of the assessment technique to remove more subjectivity
- Changes in the information-gathering activity
- Use of only one evaluator

Why are the data being collected?

Clearly defining the purpose for collecting the data is important, but often overlooked. The purpose should tie back to the mission statement or objective of the risk management program. The underlying reason may vary depending on the user, but it is hoped that the common link will be the desire to create a better understanding of the pipeline and its risks in order to make improvements in the risk picture. Secondary reasons or reasons embedded in the general purpose may include

- Identify relative risk hot spots
- Ensure regulatory compliance
- Set insurance rates
- Define acceptable risk levels
- Prioritize maintenance spending
- Build a resource allocation model
- Assign dollar values to pipeline systems
- Track pipelining activities

Having built a database for risk assessment purposes, some companies find much use for the information other than risk management. Since the information requirements for comprehensive risk assessment are so encompassing, these databases often become a central depository and the best reference source for all pipeline inquiries.
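The evaluator-to-evaluator variation discussed above can be checked with a simple calculation before full-scale data collection begins. The scores and scale here are hypothetical:

```python
from statistics import mean

def evaluator_agreement(scores_a, scores_b):
    """Mean absolute difference between two evaluators' scores of the same
    pipeline sections; large values suggest training or procedure fixes."""
    return mean(abs(a - b) for a, b in zip(scores_a, scores_b))

# Two evaluators scoring the same five sections (hypothetical 0-100 scale).
print(evaluator_agreement([62, 45, 78, 51, 90], [60, 49, 71, 55, 88]))  # 3.8
```

The same comparison, applied to one evaluator's repeated scorings of a section, gives a rough repeatability measure; both should be small relative to the risk differences the model is meant to detect.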
VI. Conceptualizing a risk assessment approach

Checklist for design

As the first and arguably the most important step in risk management, an assessment of risk must be performed. Many decisions will be required in determining a risk assessment approach. While all decisions do not have to be made during initial model design, it is useful to have a rather complete list of issues available early in the process. This might help to avoid backtracking in later stages, which can result in significant nonproductive time and cost. For example, is the risk assessment model to be used only as a high-level screening tool, or might it ultimately be used as a stepping stone to a risk expressed in absolute terms? The earlier this determination is made, the more direct will be the path between the model's design and its intended use. The following is a partial list of considerations in the design of a risk assessment system. Most of these are discussed in subsequent paragraphs of this chapter.

1. Purpose: A short, overall mission statement including the objectives and intent of the risk assessment project.
2. Audience: Who will see and use the results of the risk assessment?
   - General public or special interest groups
   - Local, state, or federal regulators
   - Company: all employees
   - Company: management only
   - Company: specific departments only
3. Uses: How will the results be used?
   - Risk identification: the acquisition of knowledge, such as levels of integrity threats, failure consequences, and overall system risk, to allow for comparison of pipeline risk levels and evaluation of risk drivers
   - Resource allocation: where and when to spend discretionary and/or mandated capital and/or maintenance funds
   - Design or modify an operating discipline: create an O&M plan consistent with risk management concepts
   - Regulatory compliance for risk assessment: if risk assessment itself is mandated
   - Regulatory compliance for all required activities: flags are raised to indicate potential noncompliances
   - Regulatory compliance waivers: where risk-based justifications provide the basis to request waivers of specific integrity assessment or maintenance activities
   - Project approvals: cost/benefit calculations, project prioritizations and justifications
   - Preventive maintenance schedules: creating multiyear integrity assessment plans or overall maintenance priorities and schedules
   - Due diligence: investigation and evaluation of assets that might be acquired, leased, abandoned, or sold, from a risk perspective
   - Liability reduction: reduce the number, frequency, and severity of failures, as well as the severity of failure consequences, to lower current operating and indirect liability-related costs
   - Risk communications: present risk information to a number of different audiences with different interests and levels of technical ability
4. Users: This might overlap the audience group:
   - Internal only
   - Technical staff only: engineering, compliance, integrity, and information technology (IT) departments
   - Managers: budget authorization, technical support, operations
   - Planning department: facility expansion, acquisitions, and operations
   - District-level supervisors: maintenance and operations
   - Regulators: if regulators are shown the risk model or its results
   - Other oversight (city council, investment partners, insurance carrier, etc.): if access is given in order to do what-ifs, etc.
   - Public presentations: public hearings for proposed projects
5. Resources: Who and what is available to support the program?
   - Data: type, format, and quality of existing data
   - Software: current environment's suitability as residence for the risk model
   - Hardware: current communications and data management systems
   - Staff: availability of qualified people to design the model and populate it with required data
   - Money: availability of funds to outsource data collection, database and model design, etc.
   - Industry: access to best industry practices, standards, and knowledge
6. Design: Choices in model features, format, and capabilities:
   - Scope
   - Failure causes considered: corrosion, sabotage, land movements, third party, human error, etc.
   - Consequences considered: public safety only, environment, cost of service interruption, employee safety, etc.
   - Facilities covered: pipe only, valves, fittings, pumps, tanks, loading facilities, compressor stations, etc.
   - Scoring: define scoring protocols, establish point ranges (resolution)
   - Direction of scale: higher points can indicate either more safety or more risk
   - Point assignments: addition of points only, multiplications, conditionals (if X then Y), category weightings, independent variables, flat or multilevel structures
   - Resolution issues: range of diameters, pressures, and products
   - Defaults: philosophy of assigning values when little or no information is available
   - Zone-of-influence distances: for what distance does a piece of data provide evidence on adjacent lengths of pipe
   - Relative versus absolute: choice of presentation format and possibly model approach
   - Reporting: types and frequency of output and presentations needed
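To illustrate the "point assignments" design choice, a minimal additive (weighted-index) scheme is sketched below. The category names, weights, and direction of scale are invented for illustration, not taken from any specific model:

```python
# Hypothetical weighted-index scoring: each failure-cause category gets a
# 0-100 score (higher = safer here), combined by fixed weights into one number.
WEIGHTS = {"third_party": 0.25, "corrosion": 0.25, "design": 0.25, "incorrect_ops": 0.25}

def index_score(category_scores):
    """Combine per-category scores into a single relative score for a segment."""
    return sum(WEIGHTS[c] * s for c, s in category_scores.items())

seg = {"third_party": 60, "corrosion": 45, "design": 80, "incorrect_ops": 70}
print(index_score(seg))  # 63.75
```

Multiplicative terms, conditionals, and multilevel structures listed above are alternatives to this simple addition; the design choice affects how strongly a single poor category can dominate the overall score.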
General beliefs

In addition to basic assumptions regarding the risk assessment model, some philosophical beliefs underpin this entire book. It is useful to state these clearly at this point, so the reader may be alerted to any possible differences from her own beliefs. These are stated as beliefs rather than facts since they are arguable and others might disagree to some extent:

- Risk management techniques are fundamentally decision support tools. Pipeline operators in particular will find most valuable a process that takes available information and assimilates it into some clear, simple results. Actions can then be based directly on those simple results.
- We must go through some complexity in order to achieve "intelligent simplification." Many processes, originating from sometimes complex scientific principles, are "behind the scenes" in a good risk assessment system. These must be well documented and available, but need not interfere with the casual users of the methodology (everyone does not need to understand the engine in order to benefit from use of the vehicle). Engineers will normally seek a rational basis underpinning a system before they will accept it. Therefore, the basis must be well documented.
- In most cases, we are more interested in identifying locations where a potential failure mechanism is more aggressive rather than predicting the length of time the mechanism must be active before failure occurs.
- A proper amount of modeling resolution is needed. The model should be able to quantify the benefit of any and all actions, from something as simple as "add 2 new ROW markers" all the way up to "reroute the entire pipeline."
- Many variables impact pipeline risk. Among all possible variables, choices are required that yield a balance between a comprehensive model (one that covers all of the important stuff) and an unwieldy model (one with too many relatively unimportant details).
- Users should be allowed to determine their own optimum level of complexity. Some will choose to capture much detailed information because they already have it available; others will want to get started with a very simple framework. However, by using the same overall risk assessment framework, results can still be compared: from very detailed approaches to overview approaches.
- Resource allocation (or reallocation) is normally the most effective way to practice risk management. Costs must therefore play a role in risk management. Because resources are finite, the optimum allocation of those scarce resources is sought.
- The methodology should "get smarter" as we ourselves learn. As more information becomes available or as new techniques come into favor, the methodology should be flexible enough to incorporate the new knowledge, whether that new knowledge is in the form of hard statistics, new beliefs, or better ways to combine risk variables.
- Methodology should be robust enough to apply to small as well as large facilities, allowing an operator to divide a large facility into subsets for comparisons within a system as well as between systems.
- Methodology should have the ability to distinguish between products handled by including critical fluid properties, which are derived from easy-to-obtain product information.
- Methodology should be easy to set up on paper or in an electronic spreadsheet and also easy to migrate to more robust database software environments for more rigorous applications.
- Methodology documentation should provide the user with simple steps, but also provide the background (sometimes complex) underlying the simple steps.
- Administrative elements of a risk management program are necessary to ensure continuity and consistency of the effort.

Note that if the reader concurs with these beliefs, the bulleted items above can form the foundation for a model design or an inquiry to service providers who offer pipeline risk assessment/risk management products and services.
Scope and limitations

Having made some preliminary decisions regarding the risk management program's scope and content, some documentation should be established. This should become a part of the overall control document set as discussed in Chapter 15. Because a pipeline risk assessment cannot be all things at once, a statement of the program's scope and limitations is usually appropriate. The scope should address exactly what portions of the pipeline system are included and what risks are being evaluated. The following statements are examples of scope and limitation statements that are common to many relative risk assessments:

This risk assessment covers all pipe and appurtenances that are a part of the ABC Pipeline Company from Station Alpha to Station Beta as shown on system maps. This assessment is complete and comprehensive in terms of its ability to capture all pertinent information and provide meaningful analyses of current risks. Since the objective of the risk assessment is to provide a useful tool to support decision making, and since it is intended to continuously evolve as new information is received, some aspects of academician-type risk assessment methodologies are intentionally omitted. These are not thought to produce limitations in the
assessment for its intended use but rather are deviations from other possible risk assessment approaches. These deviations include the following:

Relative risks only: Absolute risk estimations are not included because of their highly uncertain nature and potential for misunderstanding. Due to the lack of historical pipeline failure data for various failure mechanisms, and incomplete incident data for a multitude of integrity threats and release impacts, a statistically valid database is not thought to be available to adequately quantify the probability of a failure (e.g., failures/km-year), the monetized consequences of a failure (e.g., dollars/failure), or the combined total risk of a failure (e.g., dollars/km-year) on a pipeline-specific basis.

Certain consequences: The focus of this assessment is on risks to public safety and the environment. Other consequences such as cost of business interruption and risks to company employees are not specifically quantified. However, most other consequences are thought to be proportional to the public safety and environmental threats, so the results will generally apply to most consequences.

Abnormal conditions: This risk assessment shows the relative risks along the pipeline during its operation. The focus is on abnormal conditions, specifically the unintentional releases of product. Risks from normal operations include those from employee vehicle and watercraft operation; other equipment operation; use of tools and cleaning and maintenance fluids; and other aspects that are considered to add normal and/or negligible additional risks to the public. Potential construction risks associated with new pipeline installations are also not considered.

Insensitivity to length: The pipeline risk scores represent the relative level of risk that each point along the pipeline presents to its surroundings; that is, the scores are insensitive to length.
If two pipeline segments, 100 and 2600 ft long, respectively, have the same risk score, then each point along the 100-ft segment presents the same risk as does each point along the 2600-ft length. Of course, the 2600-ft length presents more overall risk than does the 100-ft length, because it has many more risk-producing points. Note: With regard to length sensitivity, a cumulative risk calculation adds the length aspect so that a 100-ft length of pipeline with one risk score can be compared against a 2600-ft length with a different risk score.

Use of judgment: As with any risk assessment methodology, some subjectivity in the form of expert opinion and engineering judgment is required when "hard" data provide incomplete knowledge. This is a limitation of this assessment only in that it might be considered a limitation of all risk assessments. See also discussions in this section dealing with uncertainty.
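The cumulative risk note above can be illustrated with a small computation; the per-point score of 70 is an arbitrary illustrative value:

```python
def cumulative_risk(risk_score, length_ft):
    """Length-weighted risk: the per-point relative score times the
    length of pipe presenting that risk."""
    return risk_score * length_ft

# Two segments with the same per-point score: the 2600-ft length carries
# 26 times the cumulative risk of the 100-ft length, because it has
# many more risk-producing points.
print(cumulative_risk(70, 100))    # 7000
print(cumulative_risk(70, 2600))   # 182000
```

This is the simplest form of the length adjustment; more elaborate cumulative measures can integrate varying scores along the line.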
Related to these statements is a list of assumptions that might underlie a risk assessment. An example of documented assumptions that overlap the above list to some extent is provided elsewhere.
Formal vs. informal risk management

Although formal pipeline risk management is growing in popularity among pipeline operators and is increasingly mandated by government regulations, it is important to note that risk management has always been practiced by these pipeline operators. Every time a decision is made to spend resources in a certain way, a risk management decision has been made. This informal approach to risk management has served us well, as evidenced by the very good safety record of pipelines versus other modes of transportation. An informal approach to risk management can have the further advantages of being simple, easy to comprehend and to communicate, and the product of expert engineering consensus built on solid experience.
However, an informal approach to risk management does not hold up well to close scrutiny, since the process is often poorly documented and not structured to ensure objectivity and consistency of decision making. Expanding public concerns over human safety and environmental protection have contributed significantly to raising the visibility of risk management. Although the pipeline safety record is good, the violent intensity and dramatic consequences of some accidents, an aging pipeline infrastructure, and the continued urbanization of formerly rural areas have increased perceived, if not actual, risks. Historical (informal) risk management therefore has these pluses and minuses:

Advantages
- Simple/intuitive
- Consensus is often sought
- Utilizes experience and engineering judgment
- Successful, based on pipeline safety record

Reasons to change
- Consequences of mistakes are more serious
- Inefficiencies/subjectivities
- Lack of consistency and continuity in a changing workforce
- Need for better evaluation of complicated risk factors and their interactions

Developing a risk assessment model

In moving toward formal risk management, a structure and process for assessing risks is required. In this book, this structure and process is called the risk assessment model. A risk assessment model can take many forms, but the best ones will have several common characteristics as discussed later in this chapter. They will also all generally originate from some basic techniques that underlie the final model: the building blocks. It is useful to become familiar with these building blocks of risk assessment because they form the foundation of most models and may be called on to tune a model from time to time. Scenarios, event trees, and fault trees are the core building blocks of any risk assessment. Even if the model author does not specifically reference such tools, models cannot be constructed without at least a mental process that parallels the use of these tools. They are not, however, risk assessments themselves. Rather, they are techniques and methodologies we use to crystallize and document our understanding of sequences that lead to failures. They form a basis for a risk model by forcing the logical identification of all risk variables. They should not be considered risk models themselves, in this author's opinion, because they do not pass the tests of a fully functional model, which are proposed later in this chapter.

Risk assessment building blocks

Eleven hazard evaluation procedures in common use by the chemical industry have been identified [9]. These are examples of the aforementioned building blocks that lay the foundation for a risk assessment model. Each of these tools has strengths and weaknesses, including costs of the evaluation and appropriateness to a situation:

- Checklists
- Safety review
- Relative ranking
- Preliminary hazard analysis
- "What-if" analysis
- HAZOP study
- FMEA analysis
- Fault-tree analysis
- Event-tree analysis
- Cause-and-consequence analysis
- Human-error analysis

Some of the more formal risk tools in common use by the pipeline industry include some of the above and others as discussed below.
HAZOP. A hazard and operability study is a team technique that examines all possible failure events and operability issues through the use of keywords prompting the team for input in a very structured format. Scenarios and potential consequences are identified, but likelihood is usually not quantified in a HAZOP. Strict discipline ensures that all possibilities are covered by the team. When done properly, the technique is very thorough but time consuming and costly in terms of person-hours expended. HAZOP and failure modes and effects analysis (FMEA) studies are especially useful tools when the risk assessments include complex facilities such as tank farms and pump/compressor stations.

Fault-tree/event-tree analysis. Tracing the sequence of events backward from a failure yields a fault tree. In an event tree, the process begins from an event and progresses forward through all possible subsequent events to determine possible failures. Probabilities can be assigned to each branch and then combined to arrive at complete event probabilities. An example of this application is discussed below and in Chapter 14.

Scenarios. "Most probable" or "most severe" pipeline failure scenarios are envisioned. Resulting damages are estimated, and mitigating responses and preventions are designed. This is often a modified fault-tree or event-tree analysis.
Scenario-based tools such as event trees and fault trees are particularly common because they underlie every other approach. They are always used, even if informally or as a thought process, to better understand the event sequences that produce failures and consequences. They are also extremely useful in examining specific situations: incident investigation, determining optimum valve siting, safety system installation, pipeline routing, and other common pipeline analyses. These are often highly focused applications. These techniques are further discussed in Chapter 14. Figure 1.3 is an example of a partial event-tree analysis. The event tree shows the probability of a certain failure-initiation event, possible next events with their likelihoods, interactions of some possible mitigating events or features, and, finally, possible end consequences. This illustration demonstrates
[Figure 1.3 Event-tree analysis. The partial tree begins with third-party equipment contacting the line (once every 2 years) and branches through damage versus no damage, ignition (1/100) versus no ignition (29/30), and, for a large rupture that ignites, detonation (1/600), high thermal damage (500/600), or torch fire only (99/600); a corrosion branch (leaks reported versus unreported by survey) is also shown.]
how quickly the interrelationships make an event tree very large and complex, especially when all possible initiating events are considered. The probabilities associated with events will also normally be hard to determine. For example, Figure 1.3 suggests that for every 600 ignitions of product from a large rupture, one will result in a detonation, 500 will result in high thermal damages, and 99 will result in localized fire damage only. This only occurs after a 1/100 chance of ignition, which occurs after a 1/20 chance of a large rupture, and after a once-every-two-years line strike. In reality, these numbers will be difficult to estimate. Because the probabilities must then be combined (multiplied) along any path in this diagram, inaccuracies will build quickly.
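The combination (multiplication) of probabilities along an event-tree path can be sketched as follows, using the illustrative frequencies from Figure 1.3. These are example numbers, not measured failure data, and the function name is ours.

```python
# Sketch: combining branch probabilities along one event-tree path.
# All numbers are illustrative values from Figure 1.3, not field data.

def path_frequency(initiating_freq, *branch_probs):
    """Multiply an initiating-event frequency (events per year) by the
    conditional probability of each subsequent branch on the path."""
    freq = initiating_freq
    for p in branch_probs:
        freq *= p
    return freq

# Line strike once every 2 years, then a 1/20 chance of large rupture,
# a 1/100 chance of ignition, and a 1/600 chance that ignition detonates.
detonation = path_frequency(1 / 2, 1 / 20, 1 / 100, 1 / 600)
print(f"Detonation frequency: {detonation:.2e} per year")
```

Note how small the end-branch frequency becomes; any error in one branch probability propagates multiplicatively into the result, which is the point made above about inaccuracies building quickly.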
Screening analyses. This is a quantitative or qualitative technique in which only the most critical variables are assessed. Certain combinations of variable assessments are judged to represent more risk than others. In this fashion, the process acts as a high-level screening tool to identify relatively risky portions of a system. It requires elements of subjectivity and judgment and should be carefully documented. While a screening analysis is a logical process to be used subsequent to almost any risk assessment, it is noted here as a possible stand-alone risk tool. As such, it takes on many characteristics of the more complete models to be described, especially the scoring-type or indexing method.
VII. Risk assessment issues

In comparing risk assessment approaches, some issues arise that can lead to confusion. The following subsections discuss some of those issues.
Absolute vs. relative risks

Risks can be expressed in absolute terms, for example, “number of fatalities per mile-year for permanent residents within one-half mile of pipeline. . . .” Also common is the use of relative risk measures, whereby hazards are prioritized such that the examiner can distinguish which portions of the
facilities pose more risk than others. The former is a frequency-based measure that estimates the probability of a specific type of failure consequence. The latter is a comparative measure of current risks, in terms of both failure likelihood and consequence. A criticism of the relative scale is its inability to compare risks from dissimilar systems (pipelines versus highway transportation, for example) and its inability to directly provide failure predictions. However, the absolute scale often fails in relying heavily on historical point estimates, particularly for rare events that are extremely difficult to quantify, and in the unwieldy numbers that often generate a negative reaction from the public. The absolute scale also often implies a precision that is simply not available to any risk assessment method. So, the “absolute scale” offers the benefit of comparability with other types of risks, while the “relative scale” offers the advantage of ease of use and customizability to the specific risk being studied. In practical applications and for purposes of communication, this is not really an important issue. The two scales are not mutually exclusive. Either scale can be readily converted to the other if circumstances so warrant. A relative risk scale is converted to an absolute scale by correlating relative risk scores with appropriate historical failure rates or other risk estimates expressed in absolute terms. In other words, the relative scale is calibrated with some absolute numbers. The absolute scale is converted to more manageable and understandable (nontechnical) relative scales by simple mathematical relationships. A possible misunderstanding underlying this issue is the common misconception that a precise-looking number, expressed in scientific notation, is more accurate than a simple number. In reality, either method should use the same available data pool and be forced to make the same number of assumptions when data are not available.
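Calibrating a relative scale with absolute numbers can be as simple as anchoring two relative scores to known historical failure rates and interpolating between them on a logarithmic scale. The sketch below is one way to do this; the anchor scores and rates are hypothetical.

```python
# Sketch: converting relative risk scores to absolute failure rates by
# calibrating against historical data. Anchor values are hypothetical.
import math

# (relative score, historical failure rate in failures per mile-year)
# for two groups of segments whose performance is known. Rates span
# orders of magnitude, so interpolate on the log of the rate.
anchors = [(30.0, 1e-3), (70.0, 1e-5)]

def score_to_rate(score):
    """Log-linear interpolation between the two calibration anchors."""
    (s1, r1), (s2, r2) = anchors
    frac = (score - s1) / (s2 - s1)
    return 10 ** (math.log10(r1) + frac * (math.log10(r2) - math.log10(r1)))

print(score_to_rate(50.0))  # midpoint score, roughly 1e-4 per mile-year
```

The same mapping run in reverse compresses unwieldy absolute numbers into a simple relative scale for nontechnical audiences.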
The use of subjective judgment is necessary in any risk assessment, regardless of how results are presented. Any good risk evaluation will require the generation of scenarios to represent all possible event sequences that lead to possible damage states (consequences). Each event in each sequence is assigned a probability, either in absolute terms or, in the case of a relative risk application, relative to other probabilities. In either case, the probability assigned should be based on all available information. For a relative model, these event trees are examined, and critical variables with their relative weightings (based on probabilities) are extracted. In a risk assessment expressing results in absolute numbers, the probabilities must be preserved in order to produce the absolute terms. Combining the advantages of relative and absolute approaches is discussed in Chapter 14.
Quantitative vs. qualitative models

It is sometimes difficult to make distinctions between qualitative and quantitative analyses. Most techniques use numbers, which would imply a quantitative analysis, but sometimes the numbers are only representations of qualitative beliefs. For example, a qualitative analysis might use scores of 1, 2, and 3 to replace the labels of “low,” “medium,” and “high.” To some, these are insufficient grounds to call the analysis quantitative.
The terms quantitative and qualitative are often used to distinguish the amount of historical failure-related data analyzed in the model and the amount of mathematical calculation employed in arriving at a risk answer. A model that exclusively uses historical frequency data is sometimes referred to as quantitative, whereas a model employing relative scales, even if later assigned numbers, is referred to as qualitative or semi-quantitative. The danger in such labels is that they imply a level of accuracy that may not exist. In reality, the labels often tell more about the level of modeling effort, cost, and data sources than about the accuracy of the results.
Subjectivity vs. objectivity

In theory, a purely objective model will strictly adhere to scientific practice and will have no opinion data. A purely subjective model implies complete reliance on expert opinion. In practice, no pipeline risk model fully adheres to either. Objectivity cannot be purely maintained while dealing with the real-world situation of missing data and variables that are highly confounded. On the other hand, subjective models certainly use objective data to form or support judgments.
Use of unquantifiable evidence

For any of the many difficult-to-quantify aspects of risk, some would argue that nonstatistical analyses are potentially damaging. Although this danger of misunderstanding the role of a factor always exists, there is similarly the more immediate danger of an incomplete analysis by omission of a factor. For example, public education is seen by most pipeline professionals as a very important aspect of reducing the number of third-party damages and improving leak reporting and emergency response. However, quantifying this level of importance and correlating it with the many varied approaches to public education is quite difficult. A concerted effort to study these data is needed to determine how they affect risk. In the absence of such a study, most would agree that a company with a strong public education program will achieve some level of risk reduction over a company without one. A risk model should reflect this belief, even if it cannot be precisely quantified. Otherwise, the benefits of efforts such as public education would not be supported by risk assessment results. In summary, all methodologies have access to the same databases (at least when publicly available), and all must address what to do when data are insufficient to generate meaningful statistical input for a model. Data are not available for most of the relevant risk variables of pipelines. Including risk variables that have insufficient data requires an element of “qualitative” evaluation. The only alternative is to ignore the variable, resulting in a model that omits variables that intuitively seem important to the risk picture. Therefore, all models that attempt to represent all risk aspects must incorporate qualitative evaluations.
VIII. Choosing a risk assessment technique

Several questions to the pipeline operator may direct the choice of risk assessment technique:
What data do you have?
What is your confidence in the predictive value of the data?
What resources are available in terms of money, person-hours, and time?
What benefits do you expect to accrue in terms of cost savings, reduced regulatory burdens, improved public support, and operational efficiency?

These questions should be kept in mind when selecting the specific risk assessment methodology, as discussed further in Chapter 2. Regardless of the specific approach, some properties of the ideal risk assessment tool include the following:
Appropriate costs. The value or benefits derived from the risk assessment process should clearly outweigh the costs of setting up, implementing, and maintaining the program.

Ability to learn. Because risk is not constant over the length of a pipeline or over a period of time, the model must be able to “learn” as information changes. This means that new data should be easy to incorporate into the model.

Signal-to-noise ratio. Because the model is in effect a measurement tool, it must have a suitable signal-to-noise ratio, as discussed previously. This means that the “noise,” the amount of uncertainty in the measurement (resulting from numerous causes), must be low enough that the “signal,” the risk value of interest, can be read. This is similar to the accuracy of the model, but involves additional considerations surrounding the high level of uncertainty associated with risk management.
Comparisons can be made against fixed or floating “standards” or benchmarks. Finally, a view to the next step, risk management, should be taken. A good risk assessment technique will allow a smooth transition into the management of the observed risks. This means that provisions for resource allocation modeling and the evolution of the overall risk model must be made. The ideal risk assessment will readily highlight specific deficiencies and point to appropriate mitigation possibilities. We noted previously that some risk assessment techniques are more appropriately considered to be “building blocks” while others are complete models. This distinction has to do with the risk assessment’s ability not only to measure risks, but also to directly support risk management. As it is used here, a complete model is one that will measure the risks at all points along a pipeline, readily show the accompanying variables driving the risks, and thereby directly indicate specific system vulnerabilities and consequences. A one-time risk analysis (a study to determine the risk level) may not need a complete model. For instance, an event-tree analysis can be used to estimate overall risk levels or risks from a specific failure mode. However, the risk assessment should not be considered a complete model unless it is packaged in such a way that it efficiently provides input for risk management.
Four tests

Four informal tests are proposed here by which the difference between the building block and complete model can be seen. The proposition is that any complete risk assessment model should be able to pass the following four tests:
Model performance tests

(See also Chapter 8 for discussion of model sensitivity analyses.) In examining a proposed risk assessment effort, it may be wise to evaluate the risk assessment model to ensure the following:

All failure modes are considered
All risk elements are considered and the most critical ones included
Failure modes are considered independently as well as in aggregate
All available information is being appropriately utilized
Provisions exist for regular updates of information, including new types of data
Consequence factors are separable from probability factors
Weightings, or other methods to recognize the relative importance of factors, are established
The rationale behind weightings is well documented and consistent
A sensitivity analysis has been performed
The model reacts appropriately to failures of any type
Risk elements are combined appropriately (“and” versus “or” combinations)
Steps are taken to ensure consistency of evaluation
Risk assessment results form a reasonable statistical distribution (outliers?)
There is adequate discrimination in the measured results (signal-to-noise ratio)
1. The “I didn’t know that!” test
2. The “Why is that?” test
3. The “point to a map” test
4. The “What about ___?” test
Again, these tests are very informal but illustrate some key characteristics that should be present in any methodology that purports to be a full risk assessment model. In keeping with the informality, the descriptions below are written in the familiar, instructional voice used as if speaking directly to the operator of a pipeline.
The “I didn’t know that!” test (new knowledge)

The risk model should be able to do more than you can do in your head or even with an informal gathering of your experts. Most humans can simultaneously consider a handful of factors in making a decision. The real-world situation might be influenced by dozens of variables simultaneously. Your model should be able to simultaneously consider dozens or even hundreds of pieces of information. The model should tell you things you did not already know. Some scenario-based techniques tend only to document what is already obvious. If there aren’t some surprises in the assessment results, you should be suspicious of the model’s completeness. It is difficult to believe that simultaneous consideration of many variables will not generate some combinations in certain locations that were not otherwise intuitively obvious.
Naturally, when given a surprise, you should then be skeptical and ask to be convinced. That helps to validate your model and leads to the next points.
The “Why is that?” test (drill down)

So let’s say that the new knowledge proposed by your model is that your pipeline XYZ in Barker County is high risk. You say, “What?! Why is that high risk?” You should be initially skeptical, by the way, as noted before. Well, the model should be able to tell you its reasons; perhaps coincident occurrences of population density, a vulnerable aquifer, and state park lands, coupled with 5 years since a close interval survey, no ILI, high stress levels, and a questionable coating condition make for a riskier than normal situation. Your response should be to say, “Well, okay, looking at all that, it makes sense. . . .” In other words, you should be able to interrogate the model and receive acceptable answers to your challenges. If an operator’s intuition is not consistent with model outputs, then one or the other is in error. Resolution of the discrepancy will often improve the capabilities of both operator and model.
The “point to a map” test (location specific and complete)

This test is often overlooked. Basically, it means that you should be able to pull out a map of your system, put your finger on any point along the pipeline, and determine the risk at that point, either relative or absolute. Furthermore, you should be able to determine specifically the corrosion risk, the third-party risk, the types of receptors, the spill volume, etc., and quickly determine the prime drivers of the apparently higher risk. This may seem an obvious thing for a risk assessment to do, but many recommended techniques cannot do it. Some have predetermined their risk areas, so they know little about other areas (and one must wonder about this predetermination). Others do not retain information specific to a given location. Others do not compile risks into summary judgments. The risk information should be a characteristic of the pipeline at all points, just like the pipe specification.
The “What about ___?” test (a measure of completeness)
Someone should be able to query the model on any aspect of risk, such as “What about subsidence risk? What about stress corrosion cracking?” Make sure all probability issues are addressed. All known failure modes should be considered, even if they are very rare or have never been observed for your particular system. You never know when you will be comparing your system against one that has that failure mode or will be asked to perform a due diligence on a possible pipeline acquisition.
IX. Quality and risk management

In many management and industry circles, quality is a popular concept, extending far beyond the most common uses of the term. As a management concept, it implies a way of thinking and a way of doing business. It is widely believed that attention to quality concepts is a requirement to remain in business in today’s competitive world markets. Risk management can be thought of as a method to improve quality. In its best application, it goes beyond basic safety issues to address the cost control, planning, and customer satisfaction aspects of quality. For those who link quality with competitiveness and survival in the business world, there is an immediate connection to risk management. The prospect of a company failure due to poor cost control or poor decisions is a risk that can also be managed. Quality is difficult to define precisely. While several different definitions are possible, they typically refer to concepts such as (1) fitness for use, (2) consistency with specifications, and (3) freedom from defects, all with regard to the product or service that the company is producing. Central to many quality concepts is the notion of reducing variation. This is the discipline that may ultimately be the main “secret” of the most successful companies. Variation is normally evidence of waste. Performing tasks optimally usually means little variation is seen. All definitions incorporate (directly or by inference) some reference to customers. Broadly defined, a customer is anyone to whom a company provides a product, service, or information. Under this definition, almost any exchange or relationship involves a customer. The customer drives the relationship because he specifies what product, service, or information he wants and what he is willing to pay for it. In the pipeline business, typical customers include those who rely on product movements for raw materials, such as refineries; those who are end users of products delivered, such as residential gas users; and those who are affected by pipelining activities, such as adjacent landowners.
As a whole, customers ask for adequate quantities of products to be delivered:

With no service interruptions (reliability)
With no safety incidents
At lowest cost

This is quite a broad-brush approach. To be more accurate, the qualifiers of “no” and “lowest” in the preceding list must be defined. Obviously, trade-offs are involved: improved safety and reliability may increase costs. Different customers will place differing values on these requirements, as was previously discussed in terms of acceptable risk levels. For our purposes, we can view regulatory agencies as representing the public, since regulations exist to serve the public interest. The public includes several customer groups with sometimes conflicting needs. Those vitally concerned with public safety versus those vitally concerned with costs, for instance, are occasionally at odds with one another. When a regulatory agency mandates a pipeline safety or maintenance program, this can be viewed as a customer requirement originating from that sector of the public that is most concerned with the safety of pipelines. When increased regulation leads to higher costs, the segment of the public more concerned with costs will take notice.
As a fundamental part of the quality process, we must make a distinction between types of work performed in the name of the customer:

Value-added work. These are work activities that directly add value, as defined by the customer, to the product or service. By moving a product from point A to point B, value has been added to that product because it is more valuable (to the customer) at point B than it was at point A.

Necessary work. These are work activities that are not value added but are necessary in order to complete the value-added work. Protecting the pipeline from corrosion does not directly move the product, but it is necessary in order to ensure that product movements continue uninterrupted.

Waste. This is the popular name for a category that includes all activities performed that are unnecessary. Repeating a task because it was done improperly the first time is called rework and is included in this category. Tasks that are done routinely, but do not directly or indirectly support customer needs, are considered waste.

Profitability is linked to reducing the waste category while optimizing the value-added and necessary work categories. A risk management program is an integral part of this, as will be seen. The simplified process for quality management goes something like this: The proper work (value added and necessary) is identified by studying customer needs and creating ideal processes to satisfy those needs in the most efficient manner. Once the proper work is identified, the processes that make up that work should be clearly defined and measured. Deviations from the ideal processes are waste. When the company can produce exactly what the customer wants without any variation in that production, that company has gained control over waste in its processes. From there, the processes can be even further improved to reduce costs and increase output, all the while measuring to ensure that variation does not return.
This is exactly what risk management should do: identify needs, analyze cost versus benefit of various choices, establish an operating discipline, measure all processes, and continuously improve all aspects of the operation. Because pipeline capacity is set by system hydraulics, line size, regulated operating limits, and other fixed constraints, gains in pipeline efficiencies are made primarily by reducing the incremental costs associated with moving the products. Costs are reduced by spending in ways that reap the largest benefits, namely, increasing the reliability of the pipeline. Spending to prevent losses and service interruptions is an integral part of optimizing pipeline costs. The pipeline risk items considered in this book are all either existing conditions or work processes. The conditions are characteristics of the pipeline environment and are not normally changeable. The work processes, however, are changeable and should be directly linked to the conditions. The purpose of every work process, every activity, even every individual motion is to meet customer requirements. A risk management program should assess each activity in terms of its benefit from a risk perspective. Because every activity and process costs something, it must generate some benefit; otherwise it is waste. Measuring the benefit, including the benefit of loss prevention, allows spending to be prioritized.
Rather than having a broad pipeline operating program to allow for all contingencies, risk management allows more energy to be directed to the areas that need it most. Pipelining activities can be fine-tuned to the specific needs of the various pipeline sections. Time and money should be spent in the areas where the return (the benefit) is the greatest. Again, measurement systems are required to track progress, for without measurements, progress is only an opinion. The risk evaluation program described here provides a tool to improve the overall quality of a pipeline operation. It does not necessarily suggest any new techniques; instead it introduces a discipline to evaluate all pipeline activities and to score them in terms of their benefit to customer needs. When an extra dollar is to be spent, the risk evaluation program points to where that dollar will do the most good. Dollars presently being spent on one activity may produce more value to the customer if spent another way. The risk evaluation program points this out and measures results.
X. Reliability

Reliability is often defined as the probability that equipment, machinery, or systems will perform their required functions satisfactorily under specific conditions within a certain time period. It can also mean the duration or probability of failure-free performance under the stated conditions. As is apparent from this definition, reliability concepts are identical to risk concepts in many regards. In fact, sometimes the only differences are the scenarios of interest. Where risk often focuses on scenarios involving fatality, injury, property damage, etc., reliability focuses on scenarios that lead to equipment unavailability, repair costs, etc. [45] Risk analysis is often more of a diagnostic tool, helping us to better understand and make decisions about an overall existing system. Reliability techniques are more naturally applied to new structures or the performance of specific components. Many of the same techniques are used, including FMEA, root cause analyses, and event-tree/fault-tree analyses. This is logical, since many of the same issues underlie risk and reliability: failure rates, failure modes, mitigating or offsetting actions, etc. Common reliability measurement and control efforts involve issues of (1) equipment performance, as measured by availability, uptime, MTTF (mean time to failure), MTBF (mean time between failures), and Weibull analyses; (2) reliability as a component of operating or ownership costs, sometimes measured by life-cycle cost; and (3) reliability analysis techniques applied to maintenance optimization, including reliability centered maintenance (RCM), predictive preventive maintenance (PPM), and root cause analysis. Many of these are, at least partially, risk analysis techniques, the results of which can feed directly into a risk assessment model. This text does not delve deeply into specialized reliability engineering concepts.
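As a sketch of the equipment-performance measures mentioned above, the following computes MTBF, MTTR, and steady-state availability from a hypothetical maintenance log; the hours are invented for illustration.

```python
# Sketch: basic reliability measures for a piece of pipeline equipment,
# computed from a hypothetical maintenance log (hours are illustrative).

run_hours = [2000.0, 1500.0, 2500.0]    # uptime preceding each failure
repair_hours = [10.0, 30.0, 20.0]       # downtime for each repair

mtbf = sum(run_hours) / len(run_hours)          # mean time between failures
mttr = sum(repair_hours) / len(repair_hours)    # mean time to repair
availability = mtbf / (mtbf + mttr)             # steady-state availability

print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.0f} h, "
      f"availability = {availability:.3f}")
```

Measures like these, tracked per component, can feed the probability-of-failure side of a risk assessment model directly.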
Chapter 10, Service Interruption Risk, discusses issues of pipeline availability and delivery failures.
Risk Assessment Process
I. Using this manual
To get answers quickly
Formal risk management can become a useful tool for pipeline operators, managers, and others interested in pipeline safety and/or efficiency. Benefits are not only obtained from an enhanced ability to improve safety and reduce risk; experience has shown that the risk assessment process draws together so much useful information into a central location that it becomes a constant reference point and information repository for decision making all across the organization. The purpose of the pipeline risk assessment method described in Chapters 3 through 7 of this book is to evaluate a pipeline’s risk exposure to the public and to identify ways to effectively manage that risk. Chapters 8 through 14 discuss special risk assessment considerations, including special pipeline facilities and the use of absolute risk results. Chapter 15 describes the transition from risk assessment to risk management.
While the topic of pipeline risk management does fill the pages of this book, the process does not have to be highly complex or expensive. Portions of this book can be used as a “cookbook” to quickly implement a risk management system or simply to provide ideas to pipeline evaluators. A fairly detailed pipeline risk assessment system can be set up and functioning in a relatively short time by just one evaluator. A reader could adopt the risk assessment framework described in Chapters 3 through 7 to begin assessing risk immediately. An overview of the base model with suggested weightings of all risk variables is shown in Risk Assessment at a Glance, with each variable fully described in later chapters. A risk evaluator with little or no pipeline operating experience could most certainly adopt this approach, at least initially. Similarly, an evaluator who wants to assess pipelines covering a wide range of services, environments, and operators may wish
to use this general approach, since that was the original purpose of the basic framework. By using simple computer tools such as a spreadsheet or desktop database to hold risk data, and then establishing some administrative processes around the maintenance and use of the information, the quick-start applicator now has a system to support risk management. Experienced risk managers may balk at such a simplification of an often complex and time-consuming process. However, the point is that the process and underlying ideas are straightforward, and rapid establishment of a very useful decision support system is certainly possible. It may not be of sufficient rigor for a very detailed assessment, but the user will nonetheless have a more formal structure from which to better ensure consistency of decisions and completeness of information.
For pipeline operators

Whereas the approach described above is a way to get started quickly, this tool becomes even more powerful if the user customizes it, perhaps adding new dimensions to the process to better suit his or her particular needs. As with any engineered system (the risk assessment system described herein employs many engineering principles), a degree of due diligence is also warranted. The experienced pipeline operator should challenge the example point schedules: Do they match your operating experience? Read the reasoning behind the schedules: Do you agree with that reasoning? Invite (or require) input from employees at all levels. Most pipeline operators have a wealth of practical expertise that can be used to fine-tune this tool to their unique operating environment. Although customizing can create some new issues, problems can be avoided for the most part by carefully planning and controlling the process of model setup and maintenance. The point here again is to build a useful tool, one that is regularly used to aid in everyday business and operating decision making, one that is accepted and used throughout the organization. Refer also to Chapter 1 for ideas on evaluating the measuring capability of the tool.
II. Beginning risk management

Chapter 1 suggests the following as basic steps in risk management:
Step 1: Acquire a risk assessment model
A pipeline risk assessment model is a set of algorithms or "rules" that use available information and data relationships to measure levels of risk along a pipeline. A risk assessment model can be selected from commercially available models, customized from existing models, or created "from scratch," depending on requirements.
Step 2: Collect and prepare data
Data preparation comprises the processes that result in data sets that are ready to be read into and used by the risk assessment model.
Step 3: Devise and implement a segmentation strategy
Because risks are rarely constant along a pipeline, it is advantageous to first segment the line into sections with constant risk characteristics (dynamic segmentation) or otherwise divide the pipeline into manageable pieces.
Step 4: Assess the risks
After a risk model has been selected and the data have been prepared, risks along the pipeline route can be assessed. This is the process of applying the algorithm (the rules) to the collected data. Each pipeline segment will get a unique risk score that reflects its current condition, environment, and operating/maintenance activities. These relative risk numbers can later be converted into absolute risk numbers. Risk assessment will need to be repeated periodically to capture changing conditions.
Step 5: Manage the risks
This step consists of determining what actions are appropriate given the risk assessment results. This is discussed in Chapter 15.

Model design and data collection are often the most costly parts of the process. These steps can be time consuming not only in the hands-on aspects, but also in obtaining the necessary consensus from all key players. The initial consensus often makes the difference between a widely accepted and a partially resisted system. Time and resources spent in these steps can be viewed as initial investments in a successful risk management tool. Program management and maintenance costs are normally small relative to initial setup costs.
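The scoring in Step 4 can be sketched with a toy model. Every variable name, weight, and data value below is hypothetical and invented for illustration; a real model would carry far more variables and carefully derived weightings.

```python
# Minimal sketch of Steps 2-4: prepared data for already-segmented pipe
# (Steps 2 and 3), then the model "rules" applied to each segment (Step 4).
# All names, weights, and values are hypothetical.

segments = [
    {"id": "A", "leak_history": 2, "population_density": 5},
    {"id": "B", "leak_history": 0, "population_density": 9},
    {"id": "C", "leak_history": 1, "population_density": 1},
]

# Relative weights for each risk variable (toy convention:
# higher score = higher relative risk).
WEIGHTS = {"leak_history": 0.6, "population_density": 0.4}

def risk_score(seg):
    """Weighted sum of risk variables for one segment."""
    return sum(WEIGHTS[k] * seg[k] for k in WEIGHTS)

scores = {seg["id"]: risk_score(seg) for seg in segments}
ranked = sorted(scores, key=scores.get, reverse=True)  # worst first
```

The ranked list then feeds Step 5: the highest-scoring segments are the first candidates for inspection or mitigation spending.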
III. Risk assessment models

What is a model?
Armed with an understanding of the scenarios that compose the hazard (see the Chapter 1 discussion of risk model building blocks), a risk assessment model can be constructed. The model is the set of rules by which we will predict the future performance of the pipeline from a risk perspective. The model will be the constructor's representation of risk. The goal of any risk assessment model is to quantify the risks, in either a relative or absolute sense. The risk assessment phase is the critical first step in practicing risk management. It is also the most difficult phase. Although we understand engineering concepts about corrosion and fluid flow, predicting failures beyond the laboratory in a complex "real" environment can prove impossible. No one can definitively state where or when an accidental pipeline failure will occur. However, the more likely failure mechanisms, locations, and frequencies can be estimated in order to focus risk efforts. Some make a distinction between a model and a simulation, where a model is a simplification of the real process and a simulation is a direct replica. A model seeks to increase our understanding at the expense of realism, whereas a simulation attempts to duplicate reality, perhaps at the expense of understandability and usability. Neither is necessarily superior;
either might be more appropriate for specific applications. Desired accuracy, achievable accuracy, intended use, and availability of resources are considerations in choosing an approach. Most pipeline risk efforts generally fall into the "model" category, seeking to gain risk understanding in the most efficient manner. Although not always apparent, the most simple to the most complex models all make use of probability theory and statistics. In a very simple application, these manifest themselves in experience factors and engineering judgments that are themselves based on past observations and inductive reasoning; that is, they are the underlying basis of sound judgments. In the more mathematically rigorous models, historical failure data may drive the model almost exclusively. Especially in the fields of toxicology and medical research, risk assessments incorporate dose-response and exposure assessments into the overall risk evaluation. Dose-response assessment deals with the relationship between quantities of exposure and probabilities of adverse health effects in exposed populations. Exposure assessment deals with the possible pathways, the intensity of exposure, and the amount of time a receptor could be vulnerable. In the case of hazardous materials pipelines, the exposure agents of concern are both chemical (contamination scenarios) and thermal (fire-related hazards) in nature. These issues are discussed in Chapters 7 and 14.
Three general approaches
Three general types of models, from simplest to most complex, are matrix, probabilistic, and indexing models. Each has strengths and weaknesses, as discussed below.
Matrix models
One of the simplest risk assessment structures is a decision-analysis matrix. It ranks pipeline risks according to the likelihood and the potential consequences of an event using a simple scale, such as high, medium, or low, or a numerical scale, from 1 to 5, for example. Each threat is assigned to a cell of the matrix based on its perceived likelihood and perceived consequence. Events with both a high likelihood and a high consequence appear higher on the resulting prioritized list. This approach may simply use expert opinion, or a more complicated application might use quantitative information to rank risks. Figure 2.1 shows a matrix model. While this approach cannot consider all pertinent factors and their relationships, it does help to crystallize thinking by at least breaking the problem into two parts (probability and consequence) for separate examination.
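A minimal sketch of such a matrix ranking follows. The threat names and their 1-to-5 ratings are hypothetical, and the likelihood-times-consequence cell score used to order the list is one common convention, not the only possible one.

```python
# Sketch of a 5x5 risk matrix: each threat is assigned a likelihood and
# a consequence rating from 1 (low) to 5 (high). Threats and ratings
# below are hypothetical examples.

threats = {
    "third-party damage": (4, 5),   # (likelihood, consequence)
    "external corrosion": (3, 4),
    "operator error":     (2, 3),
}

def cell_score(threat):
    """One simple convention: score a cell as likelihood x consequence."""
    likelihood, consequence = threats[threat]
    return likelihood * consequence

# Threats with both high likelihood and high consequence rise to the top.
prioritized = sorted(threats, key=cell_score, reverse=True)
```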
Figure 2.1 Simple risk matrix. (Likelihood increases from left to right and consequence from low to high; the highest risk lies in the high-likelihood, high-consequence corner, the lowest risk in the opposite corner.)

Probabilistic models
The most rigorous and complex risk assessment model is a modeling approach commonly referred to as probabilistic risk assessment (PRA) and sometimes also called quantitative risk assessment (QRA) or numerical risk assessment (NRA). Note that these terms carry implications that are not necessarily appropriate, as discussed elsewhere. This technique is used in the nuclear, chemical, and aerospace industries and, to some extent, in the petrochemical industry. PRA is a rigorous mathematical and statistical technique that relies heavily on historical failure data and event-tree/fault-tree analyses. Initiating events such as equipment failure and safety system malfunction are flowcharted forward to all possible concluding events, with probabilities being assigned to each branch along the way. Failures are backward-flowcharted to all possible initiating events, again with probabilities assigned to all branches. All possible paths can then be quantified based on the branch probabilities along the way. Final accident probabilities are achieved by chaining the estimated probabilities of individual events. This technique is very data intensive. It yields absolute risk assessments of all possible failure events. These more elaborate models are generally more costly than other risk assessments. They are technologically more demanding to develop, require trained operators, and need extensive data. A detailed PRA is usually the most expensive of the risk assessment techniques. The output of a PRA is usually in a form whereby it can be directly compared to other risks such as motor vehicle fatalities or tornado damages. However, for rare-event occurrences, historical data present an arguably blurred view. The PRA methodology was first popularized through opposition to various controversial facilities, such as large chemical plants and nuclear reactors [88]. In addressing the concerns, the intent was to obtain objective assessments of risk that were grounded in indisputable scientific facts and rigorous engineering analyses. The technique therefore makes extensive use of failure statistics of components as foundations for estimates of future failure probabilities. However, statistics paints an incomplete picture at best, and many probabilities must still be based on expert judgment. In attempts to minimize subjectivity, applications of this technique became increasingly comprehensive and complex, requiring thousands of probability estimates and like numbers of pages to document.
Nevertheless, variation in probability estimates remains, and the complexity and cost of this method does not seem to yield commensurate increases in accuracy or applicability [88]. In addition to sometimes widely differing results from "duplicate" PRAs performed on the same system by different evaluators, another criticism includes the perception that underlying assumptions and input data can easily be adjusted to achieve some predetermined result. Of course, this latter criticism can be applied to any process involving much uncertainty and the need for assumptions. PRA-type techniques are required in order to obtain estimates of absolute risk values, expressed in fatalities, injuries, property damages, etc., per specific time period. This is the subject of Chapter 14. Some guidance on evaluating the quality of a PRA-type technique is also offered in Chapter 14.
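The chaining of branch probabilities described above can be illustrated with a two-branch event tree. The initiating-event frequency and branch probabilities below are invented purely for illustration.

```python
# Sketch of chaining probabilities along an event tree, as in PRA.
# A path's frequency is the initiating-event frequency multiplied by
# the probability of each branch taken. All numbers are hypothetical.

initiating_event_freq = 1e-3   # assumed, e.g., leaks per mile-year

# Branch probabilities at one node of the tree; they must sum to 1.
event_tree = {
    "ignition":    0.10,
    "no ignition": 0.90,
}
assert abs(sum(event_tree.values()) - 1.0) < 1e-9

# Chain the probabilities to get the frequency of each outcome path.
path_freqs = {
    outcome: initiating_event_freq * p
    for outcome, p in event_tree.items()
}
```

A full PRA repeats this multiplication down every branch of much larger trees, which is why the technique demands so many probability estimates.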
Indexing models
Perhaps the most popular pipeline risk assessment technique in current use is the index model or some similar scoring technique. In this approach, numerical values (scores) are assigned to important conditions and activities on the pipeline system that contribute to the risk picture. This includes both risk-reducing and risk-increasing items, or variables. Weightings are assigned to each risk variable. The relative weight reflects the importance of the item in the risk assessment and is based on statistics where available and on engineering judgment where data are not available. Each pipeline section is scored based on all of its attributes. The various pipe segments may then be ranked according to their relative risk scores in order to prioritize repairs, inspections, and other risk-mitigating efforts. Among pipeline operators today, this technique is widely used and ranges from a simple one- or two-factor model (where only factors such as leak history and population density are considered) to models with hundreds of factors considering virtually every item that impacts risk. Although each risk assessment method discussed has its own strengths and weaknesses, the indexing approach is especially appealing for several reasons. It:

- Provides immediate answers
- Is a low-cost analysis (an intuitive approach using available information)
- Is comprehensive (allows for incomplete knowledge and is easily modified as new information becomes available)
- Acts as a decision support tool for resource allocation modeling
- Identifies and places values on risk mitigation opportunities
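A sketch of the scoring arithmetic for one index follows, under an assumed higher-points-equals-safer convention. The variable names, point scales, and weights are hypothetical and are not this book's actual point schedules.

```python
# Sketch of an indexing model score for one pipeline section.
# Weights express relative importance (statistics where available,
# engineering judgment elsewhere) and sum to 1.0. All names, scales,
# and values here are hypothetical.

def index_score(attributes, schedule):
    """Sum weighted points for every variable in the point schedule."""
    return sum(weight * attributes[var] for var, weight in schedule.items())

# Hypothetical point schedule: variable -> relative weight
THIRD_PARTY_SCHEDULE = {
    "depth_of_cover":   0.20,  # points 0-10 based on burial depth
    "activity_level":   0.35,  # points 0-10; low activity scores high
    "patrol_frequency": 0.25,  # points 0-10 based on patrol interval
    "public_education": 0.20,  # points 0-10 based on program quality
}

# Scored attributes for one section (0 = worst, 10 = best)
section = {"depth_of_cover": 8, "activity_level": 3,
           "patrol_frequency": 6, "public_education": 5}

score = index_score(section, THIRD_PARTY_SCHEDULE)
```

Repeating this for every section produces the ranked list used to prioritize inspections and repairs.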
An indexing-type model for pipeline risk assessment is a recommended feature of a pipeline risk management program and is fully described in this book. It is a hybrid of several of the methods listed previously. The great advantage of this technique is that a much broader spectrum of information can be included; for example, near misses as well as actual failures are considered. A drawback is the possible subjectivity of the scoring. Extra efforts must be employed to ensure consistency in the scoring and the use of weightings that fairly represent real-world risks. It is reasonable to assume that not all variable weightings will prove to be correct in any risk model. Actual research and failure data will doubtless demonstrate that some were initially set too high and some too low. This is the result of modelers misjudging the relative importance of some of the variables. However, even if the quantification of the risk factors is imperfect, the results nonetheless will usually give a reliable picture
of places where risks are relatively lower (fewer "bad" factors present) and where they are relatively higher (more "bad" factors present). An indexing approach to risk assessment is the emphasis of much of this book.
Further discussion on scoring-type risk assessments
Scoring-type techniques are in common use in many applications. They range from judging sports and beauty contests to medical diagnosis and credit card fraud detection, as discussed later. Any time we need to consider many factors simultaneously and our knowledge is incomplete, a scoring system becomes practical. Done properly, it combines the best of all other approaches because critical variables are identified from scenario-based approaches and weightings are established from probabilistic concepts when possible. The genesis of scoring-type approaches is readily illustrated by the following example. As operators of motor vehicles, we generally know the hazards associated with driving as well as the consequences of vehicle accidents. At one time or another, most drivers have been exposed to driving accident statistics as well as pictures or graphic commentary of the consequences of accidents. Were we to perform a scientific quantitative risk analysis, we might begin by investigating the accident statistics of the particular make and model of the vehicle we operate. We would also want to know something about the crash survivability of the vehicle. Vehicle condition would also have to be included in our analysis. We might then analyze various roadways for accident history, including accident severity. We would naturally have to compensate for newer roads that have had less opportunity to accumulate an accident frequency base. To be complete, we would have to analyze driver condition as it contributes to accident frequency or severity, as well as weather and road conditions. Some of these variables would be quite difficult to quantify scientifically. After a great deal of research and using a number of critical assumptions, we may be able to build a system model to give us an accident probability number for each combination of variables.
For instance, we may conclude that, for vehicle type A, driven by driver B, in condition C, on roadway D, during weather and road conditions E, the accident frequency for an accident of severity F is once for every 200,000 miles driven. This system could take the form of a scenario approach or a scoring system. Does this now mean that until 200,000 miles are driven, no accidents should be expected? Does 600,000 miles driven guarantee three accidents? Of course not. What we do believe from our study of statistics is that, given a large enough data set, the accident frequency for this set of variables should tend to move toward once every 200,000 miles on average, if our underlying frequencies are representative of future frequencies. This may mean an accident every 10,000 miles for the first 100,000 miles followed by no accidents for the next 1,900,000 miles; the average is still once every 200,000 miles. What we are perhaps most interested in, however, is the relative amount of risk to which we are exposing ourselves during a single drive. Our study has told us little about the risk of this drive until we compare this drive with other drives. Suppose we change weather and road conditions from state E to state G and find that the accident frequency is now once every 190,000
miles. This finding tells us that condition G has increased the risk by a small amount. Suppose we change roadway D to roadway H and find that our accident frequency is now once every 300,000 miles driven. This tells us that by using road H we have reduced the risk quite substantially compared with using road D. Chances are, however, we could have made these general statements without the complicated exercise of calculating statistics for each variable and combining them for an overall accident frequency. So why use numbers at all? Suppose we now make both variable changes simultaneously. The risk reduction obtained by road H is somewhat offset by the increased risk associated with road and weather condition G, but what is the result when we combine a small risk increase with a substantial risk reduction? Because all of the variables are subject to change, we need some method to see the overall picture. This requires numbers, but the numbers can be relative, showing only that variable H has a greater effect on the risk picture than does variable G. Absolute numbers, such as the accident frequency numbers used earlier, are not only difficult to obtain, they also give a false sense of precision to the analysis. If we can only be sure of the fact that change X reduces the risk and reduces it more than change Y does, it may be of little further value to say that a once-in-200,000 frequency has been reduced to a once-in-210,000 frequency by change X and only a once-in-205,000 frequency by change Y. We are ultimately most interested in the relative risk picture of change X versus change Y. This reasoning forms the basis of the scoring risk assessment. The experts come to a consensus as to how a change in a variable impacts the risk picture, relative to other variables in the risk picture. If frequency data are available, they are certainly used, but they are used outside the risk analysis system.
The data are used to help the experts reach a consensus on the importance of the variable and its effects (or weighting) on the risk picture. The consensus is then used in the risk analysis. As previously noted, scoring systems are common in many applications. In fact, whenever information is incomplete and many aspects or variables must be simultaneously considered, a scoring system tends to emerge. Examples include sporting events that have some difficult-to-measure aspects like artistic expression, complexity, form, or aggressiveness. These include gymnastics, figure skating, boxing, and karate and other martial arts. Beauty contests are another application. More examples are found in the financial world. Many economic models use scoring systems to assess current conditions and forecast future conditions and market movements. Credit card fraud assessment is another example, where some purchases trigger a model that combines variables such as purchase location, the card owner's purchase history, items
purchased, time of day, and other factors to rate the probability of a fraudulent card use. Scoring systems are also used for psychological profiles, job applicant screening, career counseling, medical diagnostics, and a host of other applications.
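The offsetting changes in the driving example can be worked out numerically, assuming the two effects act independently of one another. The frequencies are the hypothetical figures from the text.

```python
# Numerical sketch of the driving example: combining a small risk
# increase (condition G) with a larger risk reduction (road H).
# Frequencies are accidents per mile driven, hypothetical values.

base      = 1 / 200_000   # baseline accident frequency
weather_g = 1 / 190_000   # condition G alone: slight risk increase
road_h    = 1 / 300_000   # road H alone: substantial risk reduction

# Express each change as a relative factor against the baseline...
factor_g = weather_g / base   # about 1.05 (roughly 5% more risk)
factor_h = road_h / base      # about 0.67 (roughly 33% less risk)

# ...then, assuming independence, combine them multiplicatively.
combined   = base * factor_g * factor_h
net_change = combined / base  # below 1.0: the net effect is a reduction
```

Note that only the relative factors matter for the conclusion; the absolute frequencies could be off considerably and the ranking of the two changes would still hold.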
Choosing a risk assessment approach
Any or all of the above-described techniques might have a place in risk assessment/management. Understanding the strengths and weaknesses of the different risk assessment methodologies gives the decision maker the basis for choosing one. A case can be made for using each in certain situations. For example, a simple matrix approach helps to organize thinking and is a first step toward formal risk assessment. If the need is to evaluate specific events at any point in time, a narrowly focused probabilistic risk analysis might be the tool of choice. If the need is to weigh immediate risk trade-offs or perform inexpensive overall assessments, indexing models might be the best choice. These options are summarized in Table 2.1.
Uncertainty
It is important that a risk assessment identify the role of uncertainty in its use of assumptions and also identify how the state of "no information" is assessed. The philosophy behind uncertainty and risk is discussed in Chapter 1. The recommendation from Chapter 1 is that a risk model generally assumes that things are "bad" until data show otherwise. So, an underlying theme in the assessment is that "uncertainty increases risk." This is a conservative approach requiring that, in the absence of meaningful data or the opportunity to assimilate all available data, risk should be overestimated rather than underestimated. So, lower ratings are assigned, reflecting the assumption of reasonably poor conditions, in order to accommodate the uncertainty. This results in a more conservative overall risk assessment. As a general philosophy, this approach to uncertainty has the added long-term benefit of encouraging data collection via inspections and testing. Uncertainty also plays a role in scoring aspects of operations and maintenance. Information should be considered to have a life span because users must realize that conditions are always changing and recent information is more useful than older information. Eventually, certain information has little value at all in the risk analysis. This applies to inspections, surveys, and so on. The scenarios shown in Table 2.2 illustrate the relative value of several knowledge states for purposes of evaluating risk where uncertainty is involved. Some assumptions and "reasonableness" are employed in setting risk scores in the absence of
Table 2.1 Choosing a risk assessment technique

When the need is to... | A technique to use might be
Study specific events; perform post-incident investigations; compare risks of specific failures; calculate specific event probabilities | Event trees, fault trees, FMEA, PRA, HAZOP
Obtain an inexpensive overall risk model; create a resource allocation model; model the interaction of many potential failure mechanisms; study or create an operating discipline | Indexing model
Better quantify a belief; create a simple decision support tool; combine several beliefs into a single solution; document choices in resource allocation | Matrix
Table 2.2 Uncertainty and risk assessment

Action | Inspection results | Risk relevance
Timely and comprehensive inspection performed | No risk issues identified | Least risk
Timely and comprehensive inspection performed | Some risk issues or indications of flaw potential identified; root cause analysis and proper follow-up to mitigate risk | More risk
No timely and comprehensive inspection performed | High uncertainty regarding risk issues | More risk
Timely and comprehensive inspection performed | Some risk issues or indications of flaw potential identified; uncertain reactions, uncertain mitigation of risk | Most risk
data; in general, however, worst-case conditions are conservatively used for default values. Uncertainty also arises in using the risk assessment model since there are inaccuracies inherent in any measuring tool. A signal-to-noise ratio analogy is a useful way to look at the tool and highlights precautions in its use. This is discussed in Chapter 1.
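The conservative default rule can be sketched as follows. The point scale and the five-year data life span are assumed values chosen purely for illustration.

```python
# Sketch of the "uncertainty increases risk" default rule: when a
# variable is missing, or its supporting inspection data are stale,
# assume a conservative (near worst case) value. The scale and the
# data life span below are hypothetical.

WORST_CASE_POINTS = 0     # lowest (worst) score on a 0-10 scale
MAX_DATA_AGE_YEARS = 5    # assumed life span of inspection data

def scored_value(points, data_age_years):
    """Return the usable score, defaulting toward worst case."""
    if points is None:                      # no information at all
        return WORST_CASE_POINTS
    if data_age_years > MAX_DATA_AGE_YEARS:
        return WORST_CASE_POINTS            # too old to credit
    return points

assert scored_value(None, 0) == 0   # no data: worst case assumed
assert scored_value(8, 7) == 0      # stale data: credit withdrawn
assert scored_value(8, 2) == 8      # timely data: full credit
```

A real model might decay the credit gradually with age rather than cutting it off at a single threshold; the point is that missing or aged information never improves a score.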
Sectioning or segmenting the pipeline
It is generally recognized that, unlike most other facilities that undergo a risk assessment, a pipeline usually does not have a constant hazard potential over its entire length. As conditions along the line's route change, so too does the risk picture. Because the risk picture is not constant, it is efficient to examine a long pipeline in shorter sections. The risk evaluator must decide on a strategy for creating these sections in order to obtain an accurate risk picture. Each section will have its own risk assessment results. Breaking the line into many short sections increases the accuracy of the assessment for each section, but may result in higher costs of data collection, handling, and maintenance (although higher costs are rarely an issue with modern computing capabilities). Longer sections (fewer in number), on the other hand, may reduce data costs but also reduce accuracy, because average or worst-case characteristics must govern if conditions change within the section.
Fixed-length approach
A fixed-length method of sectioning, based on rules such as "every mile" or "between pump stations" or "between block valves," is often proposed. While such an approach may be initially appealing (perhaps for reasons of consistency with existing accounting or personnel systems), it will usually reduce accuracy and increase costs. Inappropriately chosen and unnecessary break points limit the model's usefulness: risk hot spots are hidden if conditions are averaged within a section, or risks are exaggerated if worst-case conditions are applied to the entire length. Fixed-length sectioning will also interfere with the risk model's otherwise efficient ability to identify risk mitigation projects. Many pipeline projects are done in very specific locations, as is appropriate. The risk of such specific locations is often lost under a fixed-length sectioning scheme.
Dynamic segmentation approach
The most appropriate method for sectioning the pipeline is to insert a break point wherever significant risk changes occur. A significant condition change must be determined by the evaluator with consideration given to data costs and desired accuracy. The idea is for each pipeline section to be unique, from a risk perspective, relative to its neighbors. Within a pipeline section, we recognize no differences in risk from beginning to end: each foot of pipe is the same as any other foot, as far as we know from our data. But we know that neighboring sections differ in at least one risk variable. It might be a change in pipe specification (wall thickness, diameter, etc.), soil conditions (pH, moisture, etc.), population, or any of dozens of other risk variables, but at least one aspect is different from section to section. Section length is not important as long as characteristics remain constant. There is no reason to subdivide a 10-mile section of pipe if no real risk changes occur within those 10 miles. This type of sectioning is sometimes called dynamic segmentation. It can be done very efficiently using modern computers. It can also be done manually, of course, and the manual process might be suitable for setting up a high-level screening assessment.
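Dynamic segmentation reduces to a simple walk along the line, inserting a break wherever any tracked risk variable changes. The attribute records below (mileposts, wall thickness, soil type) are hypothetical.

```python
# Sketch of dynamic segmentation: walk per-location attribute records,
# ordered by milepost, and start a new section wherever any tracked
# risk variable changes. All data are hypothetical.

records = [
    {"mp": 0.0, "wall": 0.25, "soil": "clay"},
    {"mp": 1.2, "wall": 0.25, "soil": "clay"},
    {"mp": 2.7, "wall": 0.25, "soil": "sand"},  # soil changes -> break
    {"mp": 4.0, "wall": 0.31, "soil": "sand"},  # wall changes -> break
]
RISK_VARS = ("wall", "soil")

def segment(records):
    """Group consecutive records whose risk variables are identical."""
    sections, current = [], [records[0]]
    for prev, rec in zip(records, records[1:]):
        if any(rec[v] != prev[v] for v in RISK_VARS):
            sections.append(current)   # close the section at the change
            current = []
        current.append(rec)
    sections.append(current)
    return sections

sections = segment(records)   # three sections for the data above
```

Within each resulting section, every foot of pipe carries identical risk attributes, which is exactly the property dynamic segmentation is meant to guarantee.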
Manually establishing sections
With today's common computing environments, there is really no reason to follow the relatively inefficient option of manually establishing pipeline sections. However, envisioning the manual process of segmentation might be helpful for obtaining a better understanding of the concept. The evaluator should first scan Chapters 3 through 7 of this text to get a feel for the types of conditions that make up the risk picture. He should note those conditions that are most variable in the pipeline system being studied and rank those items with regard to magnitude of change and frequency of change. This ranking will be rather subjective and perhaps incomplete, but it will serve as a good starting point for sectioning the line(s). An example of a short list of prioritized conditions is as follows:

1. Population density
2. Soil conditions
3. Coating condition
4. Age of pipeline
In this example, the evaluator(s) foresees the most significant changes along the pipeline route to be population density, followed by varying soil conditions, then coating condition, and pipeline age. This list was designed for an aging 60-mile pipeline in Louisiana that passes close to several rural communities, alternating between marshland (clay) and sandy soil conditions. Furthermore, the coating is in various states of deterioration (maybe roughly corresponding to the changing soil
conditions) and the line has had sections replaced with new pipe during the last few years. Next, the evaluator should insert break points for the sections based on the top items on the prioritized list of condition changes. This produces a trial sectioning of the pipeline. If the number of sections resulting from this process is deemed too large, the evaluator merely reduces the list (eliminating conditions from the bottom of the prioritized list) until an appropriate number of sections is obtained. This trial-and-error process is repeated until a cost-effective sectioning has been completed.
Example 2.1: Sectioning the Pipeline
Following this philosophy, suppose that the evaluator of this hypothetical Louisiana pipeline decides to section the line according to the following rules he has developed:

- Insert a section break each time the population density along a 1-mile section changes by more than 10%. These population section breaks will not occur more often than each mile, and as long as the population density remains constant, a section break is unwarranted.
- Insert a section break each time the soil corrosivity changes by 30%. In this example, data are available showing the average soil corrosivity for each 500-ft section of line. Therefore, section breaks may occur a maximum of 10 times (5280 ft per mile divided by 500-ft sections) for each mile of pipeline.
- Insert a section break each time the coating condition changes significantly. This will be measured by the corrosion engineer's assessment. Because this assessment is subjective and based on sketchy data, such section breaks may occur as often as every mile.
- Insert a section break each time a difference in age of the pipeline is seen. This is measured by comparing the installation dates. Over the total length of the line, six new
sections have been installed to replace unacceptable older sections.

Following these rules, the evaluator finds that his top listed condition causes 15 sections to be created. By applying the second condition rule, he has created an additional 8 sections, bringing the total to 23 sections. The third rule yields an additional 14 sections, and the fourth causes an additional 6 sections. This brings the total to 43 sections in the 60-mile pipeline. The evaluator can now decide whether this is an appropriate number of sections. As previously noted, factors such as the desired accuracy of the evaluation and the cost of data gathering and analysis should be considered. If he decides that 43 sections is too many for the company's needs, he can reduce the number of sections by first eliminating the additional sectioning caused by application of his fourth rule. Elimination of these 6 sections caused by age differences in the pipe is appropriate because it had already been established that this was a lower-priority item. That is, it is thought that the age differences in the pipe are not as significant a factor as the other conditions on the list. If the section count (now down to 37) is still too high, the evaluator can eliminate or reduce sectioning caused by his third rule. Perhaps combining the corrosion engineer's "good" and "fair" coating ratings would reduce the number of sections from 14 to 8.

In the preceding example, the evaluator has roughed out a plan to break down the pipeline into an appropriate number of sections. Again, this is an inefficient way to section a pipeline and leads to further inefficiencies in risk assessment; the example is provided only for illustration purposes. Figure 2.2 illustrates a piece of pipeline sectioned based on population density and soil conditions. For many items in this evaluation (especially in the incorrect operations index), new section lines will not be created.
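The section-count arithmetic of Example 2.1 can be tallied directly, using the counts given in the example:

```python
# Arithmetic sketch of Example 2.1: tally trial section counts, then
# trim rules from the bottom of the priority list. The counts are the
# ones stated in the example.

new_breaks = {
    "population density": 15,  # rule 1
    "soil corrosivity":    8,  # rule 2
    "coating condition":  14,  # rule 3
    "pipeline age":        6,  # rule 4
}

total = sum(new_breaks.values())  # sections on the 60-mile line

# Drop the lowest-priority rule (pipeline age) first.
after_dropping_rule_4 = total - new_breaks["pipeline age"]

# Combining "good" and "fair" coating ratings cuts rule 3 from 14 to 8.
after_merging_coating = after_dropping_rule_4 - (14 - 8)
```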
Items such as training or procedures are generally applied uniformly across the entire pipeline system, or at least within a single operations area. This should not be universally assumed, however, during the data-gathering step.

Figure 2.2 Sectioning of the pipeline. (The figure shows sections 4, 5, and 6 along a pipeline, with section breaks where the line passes near a town.)
Risk assessment models vary in detail and complexity. Appendix E shows some samples of risk algorithms. Readers will find a review of some database design concepts to be useful (see Chapter 8).
Persistence of segments
Another decision to make is how often segment boundaries will be changed. Under a dynamic segmentation strategy, segments are subject to change with each change of data. This results in the best risk assessments, but may create problems when tracking changes in risk over time. Difficulties can be readily overcome by calculating cumulative risks (see Chapter 15) or by tracking specific points rather than tracking segments.
Results roll-ups

The pipeline risk scores represent the relative level of risk that each point along the pipeline presents to its surroundings. A score is insensitive to length. If two pipeline segments, say, 100 and 2600 ft, respectively, have the same risk score, then each point along the 100-ft segment presents the same risk as does each point along the 2600-ft length. Of course, the 2600-ft length presents more overall risk than does the 100-ft length because it has many more risk-producing points. A cumulative risk calculation adds the length aspect so that a 100-ft length of pipeline with one risk score can be compared against a 2600-ft length with a different risk score. As noted earlier, dividing the pipeline into segments based on any criteria other than all risk variables will lead to inefficiencies in risk assessment. However, it is common practice to report risk results in terms of fixed lengths such as "per mile" or "between valve stations," even if a dynamic segmentation protocol has been applied. This "rolling up" of risk assessment results is often thought to be necessary for summarization and perhaps for linking to other administrative systems such as accounting. To minimize the masking effect that such roll-ups might create, it is recommended that several measures be simultaneously examined to ensure a more complete use of information. For instance, when an average risk value is reported, a worst-case risk value, reflecting the worst length of pipe in the section, can be simultaneously reported. Length-weighted averages can also be used to better capture information, but those too must be used with caution. A very short but very risky stretch of pipe is still of concern, even if the rest of the pipeline shows low risks. In Chapter 15, a system of calculating cumulative risk is offered. This system takes into account the varying section lengths and offers a way to examine and compare the effects of various risk mitigation efforts.
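A roll-up that reports several measures at once can be sketched as follows. The segment lengths and scores are invented, and higher scores here mean higher risk; reporting the worst case alongside the length-weighted average keeps a short, risky stretch from being masked.

```python
def roll_up(segments):
    """Summarize dynamic segments into one reported value set.

    segments: list of (length_ft, risk_score) tuples; in this sketch a
    higher score means higher risk. Several measures are returned together
    to limit the masking effect of any single summary number.
    """
    total_len = sum(length for length, _ in segments)
    lw_avg = sum(length * score for length, score in segments) / total_len
    worst = max(score for _, score in segments)
    cumulative = sum(length * score for length, score in segments)  # length x score sum
    return {"length_weighted_avg": lw_avg,
            "worst_case": worst,
            "cumulative": cumulative}

# A short, risky stretch hidden among low-risk pipe (all values invented):
mile = [(100, 90), (2600, 20), (2580, 25)]
summary = roll_up(mile)
```

Here the length-weighted average is under 24, which alone would hide the 100-ft stretch scoring 90; the worst-case measure preserves that signal.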
Other aspects of data roll-ups are discussed in Chapters 8 and 15.
IV. Designing a risk assessment model

A good risk model will be firmly rooted in engineering concepts and be consistent with experience and intuition. This explains the many similarities among the efforts of many different modelers examining many different systems at many different times. Beyond compatibility with engineering and experience, a model can take many forms, especially in its level of detail and complexity.
Data first or framework first?

There are two possible scenarios for beginning a relative risk assessment. In one, a risk model (or at least a framework for a model) has already been developed, and the evaluator takes this model and begins collecting data to populate her model's variables. In the second, the modeler compiles a list of all available information and then puts this information into a framework from which risk patterns emerge and risk-based decisions can be made. The difference between these two approaches can be summarized in a question: Does the model drive data collection or does data availability drive model development? Ideally, each will be the driver at various stages of the process. One of the primary intents of risk assessment is to capture and use all available information and identify information gaps. Having data drive the process ensures complete usage of all data, while having a predetermined model allows data gaps to be easily identified. A blend of both is therefore recommended, especially considering the possible pitfalls of taking either approach exclusively. Although a predefined set of risk algorithms defining how every piece of data is to be used is attractive, it has the potential to cause problems, such as:

- Rigidity of approach. Difficulty is experienced in accepting new data, data in an unexpected format, or information that is loosely structured.
- Relative scoring. Weightings are set in relation to the types of information to be used and would need to be adjusted if unexpected data become available.
On the other hand, a pure custom development approach (building a model exclusively from available data) suffers from lack of consistency and from inefficiency. An experienced evaluator or a checklist is required to ensure that significant aspects of the evaluation are not omitted as a result of lack of information. Therefore, the recommendation is to begin with lists of standard higher level variables that together cover all of the critical aspects of risk. Chapters 3 through 7 provide such lists for common pipeline components, and Chapters 9 through 13 list additional variables that might be appropriate for special situations. Then, use all available information to evaluate each variable. For example, the higher level variable of activity (as one measure of third-party damage potential) might be created from data such as number of one-call reports, population density, previous third-party damages, and so on. So, higher level variable selection is standardized and consistent, yet the model is flexible enough to incorporate any and all information that is available or becomes available in the future. The experienced evaluator, or any evaluator armed with a comprehensive list of higher level variables, will quickly find many useful pieces of information that provide evidence on many variables. She may also see risk variables for which no information is available. Similar to piecing together a puzzle, a picture will emerge that readily displays all knowledge and knowledge gaps.
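Building the higher level "activity" variable from whatever evidence is available might look like the sketch below. The input data types are taken from the text, but every threshold and weight is an invented placeholder that a real model would calibrate to the system being assessed.

```python
def activity_score(one_call_reports_per_yr, pop_density_per_sqmi, prior_hits):
    """Blend available evidence into one third-party 'activity' score (0-10).

    The normalization levels (50 calls/yr, 1000 people/sq mi, 3 prior hits)
    and the 0.4/0.3/0.3 weights are invented for illustration only.
    """
    # Normalize each input to 0-1 against an assumed "high activity" level.
    calls = min(one_call_reports_per_yr / 50.0, 1.0)
    density = min(pop_density_per_sqmi / 1000.0, 1.0)
    hits = min(prior_hits / 3.0, 1.0)
    # Weighted blend; a missing input could simply be dropped and the
    # remaining weights renormalized.
    return 10 * (0.4 * calls + 0.3 * density + 0.3 * hits)

score = activity_score(25, 400, 1)
```

The point of the structure is that the standardized variable (activity) stays fixed while the evidence feeding it can vary from segment to segment and grow over time.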
Designing a risk assessment model 2/29
Risk factors

Central to the design of a risk model are the risk factors or variables (these terms are used interchangeably in this text) that will be included in the assessment. A complete list of risk factors, those items that add to or subtract from the amount of risk, can be readily identified for any pipeline system. There is widespread agreement on failure mechanisms and on the underlying factors influencing those mechanisms. Setting up a risk assessment model involves trade-offs between the number of factors to be considered and the ease of use of the model; including all possible factors in a decision support system can create a somewhat unwieldy system. So, the important variables are widely recognized, but the number to be considered in the model (and the depth of that consideration) is a matter of choice for the model developers. In this book, lists of possible risk indicators are offered based on their ability to provide useful risk signals. Each item's specific ability to contribute without adding unnecessary complexity will be a function of a user's specific system, needs, and ability to obtain the required data. The variables and the rationale for their possible inclusion are described in the following chapters.

Types of information

It is usually the case that some data impact several different aspects of risk. For example, pipe wall thickness is a factor in almost all potential failure modes: It determines time to failure for a given corrosion rate, partly determines ability to survive external forces, and so on. Population density is a consequence variable as well as a third-party damage indicator (as a possible measure of potential activity). Inspection results yield evidence regarding current pipe integrity as well as possibly active failure mechanisms. A single detected defect can yield much information.
It could change our beliefs about coating condition, CP effectiveness, pipe strength, and overall operating safety margin, and maybe even provide new information about soil corrosivity, interference currents, third-party activity, and so on. All of this arises from a single piece of data (evidence). Many companies now avoid the use of casings. But casings were put in place for a reason. The presence of a casing is a mitigation measure for external force damage potential, but is often seen to increase corrosion potential. The risk model should capture both of the risk implications of the presence of a casing. Numerous other examples can be shown. A great deal of information is usually available in a pipeline operation. Information that can routinely be used to update the risk assessment includes:
- All survey results such as pipe-to-soil voltage readings, leak surveys, patrols, depth of cover, population density, etc.
- Documentation of all repairs
- Documentation of all excavations
- Operational data including pressures and flow rates
- Results of integrity assessments
- Maintenance reports
- Updated consequence information
- Updated receptor information: new housing, high-occupancy buildings, changes in population density or environmental sensitivities, etc.
- Results of root cause analyses and incident investigations
- Availability and capabilities of new technologies
Attributes and preventions

Because the ultimate goal of the risk assessment is to provide a means of risk management, it is sometimes useful to make a distinction between two types of risk variables. As noted earlier, there is a difference between a hazard and a risk. We can usually do little to change the hazard, but we can take actions to affect the risk. Following this reasoning, the evaluator can categorize each index risk variable as either an attribute or a prevention. The attributes correspond loosely to the characteristics of the hazard, while the preventions reflect the risk mitigation measures. Attributes reflect the pipeline's environment: characteristics that are difficult or impossible to change and over which the operator usually has little or no control. Preventions are actions taken in response to that environment. Both impact the risk, but the distinction may be useful, especially in risk management analyses. Examples of aspects that are not routinely changed, and are therefore considered attributes, include:

- Soil characteristics
- Type of atmosphere
- Product characteristics
- The presence and nature of nearby buried utilities

The other category, preventions, includes actions that the pipeline designer or operator can reasonably take to offset risks. Examples of preventions include:

- Pipeline patrol frequency
- Operator training programs
- Right-of-way (ROW) maintenance programs

The above examples of each category are fairly clear-cut. The evaluator should expect to encounter some gray areas of distinction between an attribute and a prevention. For instance, consider the proximity of population centers to the pipeline. In many risk assessments, this impacts the potential for third-party damage to the pipeline. It is obviously not an unchangeable characteristic, because rerouting of the line is usually an option. But in an economic sense,
this characteristic may be unchangeable due to unrecoverable expenses that may be incurred to change the pipeline’s location. Another example would be the pipeline depth of cover. To change this characteristic would mean a reburial or the addition of more cover. Neither of these is an uncommon action, but the practicality of such options must be weighed by the evaluator as he classifies a risk component as an attribute or a prevention. Figure 2.3 illustrates how some of the risk assessment variables are thought to appear on a scale with preventions at one extreme and attributes at the other. The distinction between attributes and preventions is especially useful in risk management policy making. Company standards can be developed to require certain risk-reducing actions to be taken in response to certain harsh environments. For example, more patrols might be required in highly populated areas or more corrosion-prevention verifications might be required under certain soil conditions. Such a procedure would provide for assigning a level of preventions based on the level of attributes. The standards can be predefined and programmed into a database program to adjust automatically the standards to
the environment of the section: harsh conditions require more preventions to meet the standard.

Figure 2.3 Example items on an attributes-preventions scale (variables such as depth of cover arranged along an axis running from conditions/attributes at one end to actions/preventions at the other).
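The policy idea of tying prevention standards to attribute severity can be expressed as a simple lookup. All thresholds and patrol frequencies below are invented placeholders; a real standard would be set by company policy and, as the text suggests, programmed into a database application.

```python
def required_patrols_per_month(population_density, soil_corrosivity):
    """Look up a prevention standard from attribute severity.

    population_density: people per square mile near the section (attribute).
    soil_corrosivity: "low" or "high" (attribute).
    Returns a required patrol/verification frequency (prevention level).
    All numeric thresholds here are hypothetical examples.
    """
    patrols = 1                    # base patrol frequency per month
    if population_density > 500:   # harsher third-party exposure
        patrols = 4
    elif population_density > 100:
        patrols = 2
    if soil_corrosivity == "high": # harsher corrosion environment adds
        patrols += 1               # a verification visit
    return patrols
```

Because the rule is deterministic, the same attribute data that feed the risk scores can automatically generate the prevention standard for every section.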
Model scope and resolution

Assessment scope and resolution issues further complicate model design. Both involve choices of the ranges of certain risk variables. The assessment of relative risk characteristics is especially sensitive to the range of possible characteristics in the pipeline systems to be assessed. If only natural gas transmission pipelines are to be assessed, then the model does not necessarily have to capture liquid pipeline variables such as surge potential. The model designer can either keep this variable and score it as "no threat" or she can redistribute the weighting points to other variables that do impact the risk. As another example, earth movements often pose a very localized threat on relatively few stretches of pipeline. When the vast majority of a pipeline system to be evaluated is not exposed to any land movement threats, risk points assigned to earth movements will not help to make risk distinctions among most pipeline segments. It may seem beneficial to reassign those points to other variables that warrant full consideration. However, without direct consideration of this variable, comparisons with the small portions of the system that are exposed, or with future acquisitions of systems that have the threat, will be difficult.

Model resolution, the signal-to-noise ratio as discussed in Chapter 1, is also sensitive to the characteristics of the systems to be assessed. A model that is built for parameters ranging from, say, a 40-inch, 2000-psig propane pipeline to a 1-inch, 20-psig fuel oil pipeline will not be able to make many risk distinctions between a 6-inch natural gas pipeline and an 8-inch natural gas pipeline. Similarly, a model that is sensitive to differences between a pipeline at 1100 psig and one at 1200 psig might have to treat all lines above a certain pressure/diameter threshold as the same. This is an issue of modeling resolution.
Common risk variables that should have a range established as part of the model design include:

- Diameter range
- Pressure range
- Products to be included
The range should include the smallest to largest values in systems to be studied as well as future systems to be acquired or other systems that might be used as comparisons. Given the difficulties in predicting future uses of the model, a more generic model-widely applicable to many different pipeline systems-might be appropriate.
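Range saturation can be made concrete with a small scoring function. The 20 to 2000 psig bounds are invented; the point is that anything at or beyond the model's design range collapses to the same score, which is exactly the lost resolution described above.

```python
def pressure_score(psig, model_min=20.0, model_max=2000.0):
    """Map operating pressure onto a 0-10 scale bounded by the model's range.

    Values at or beyond the design range saturate: two lines above
    model_max receive identical scores and can no longer be distinguished.
    The bounds are hypothetical examples.
    """
    clamped = max(model_min, min(psig, model_max))
    return 10.0 * (clamped - model_min) / (model_max - model_min)
```

Within the range, 1100 psig and 1200 psig lines still score differently; a model built with a much narrower range would saturate sooner and lose that distinction.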
Special risk factors

Two possible risk factors deserve special consideration since they have a general impact on many other risk considerations.

Age as a risk variable

Some risk models use age as a risk variable. It is a tempting choice since many man-made systems experience deterioration that is proportional to their years in service. However, age itself is not a failure mechanism; at most it is a contributing factor. Using it as a stand-alone risk variable can distract from the actual failure mechanisms and can also unfairly penalize portions of the system being evaluated. Recall the discussion of time-dependent failure rates in Chapter 1, including the concept of the bathtub failure rate curve. Penalizing a pipeline for its age presupposes knowledge of that pipeline's failure rate curve. Age alone is not a reliable indicator of pipeline risk, as is evidenced by some pipelines found in excellent operating condition even after many decades of service. A perception that age always causes an inevitable, irreversible process of decay is not an appropriate characterization of pipeline failure mechanisms. Mechanisms that can threaten pipe integrity exist but may or may not be active at any point on the line. Integrity threats are well understood and can normally be counteracted with a degree of confidence. Possible threats to pipe integrity are not necessarily strongly correlated with the passage of time, although the "area of opportunity" for something to go wrong obviously does increase with more time. The ways in which the age of a pipeline can influence the potential for failures are through specific failure mechanisms such as corrosion and fatigue, or in consideration of changes in manufacturing and construction methods since the pipeline was built. These age effects are well understood and can normally be countered by appropriate mitigation measures.
Experts believe that there is no effect of age on the microcrystalline structure of steel such that the strength and ductility properties of steel pipe are degraded over time. The primary metal-related phenomena are the potential for corrosion and the development of cracks from fatigue stresses. In the case of certain other materials, mechanisms of strength degradation might be present and should be included in the assessment. Examples include creep and UV degradation possibilities in certain plastics and concrete deterioration when exposed to certain chemical environments. In some situations, a slow-acting earth movement could also be modeled with an age component. Such special situations are discussed in Chapters 4 and 5. Manufacturing and construction methods have changed over time, presumably improving and reflecting learning experiences from past failures. Hence, more recently manufactured and constructed systems may be less susceptible to failure mechanisms of the past. This can be included in the risk model and is discussed in Chapter 5. The recommendation here is that age not be used as an independent risk variable, unless the risk model is only a very high-level screening application. Preferably, the underlying mechanisms and mitigations should be evaluated to determine if there are any age-related effects.
Inspection age

Inspection age should play a role in assessments that use the results of inspections or surveys. Since conditions should not be assumed to be static, inspection data becomes increasingly less valuable as it ages. One way to account for inspection age is to make a graduated scale indicating the decreasing usefulness of inspection data over time. This measure of information degradation can be applied to the scores as a percentage. After a predetermined time period, scores based on previous inspections degrade to some predetermined value. An example is shown in Table 2.3. In this example, the evaluator has determined that a previous inspection yields no useful information after 5 years and that the usefulness degrades 20% per year. By this scale, point values based on inspection results will therefore change by 20% per year. A more scientific way to gauge the time degradation of integrity inspection data is shown in Chapter 5.

Table 2.3 Example of inspection degradations

Inspection age (years)   Adjustment (degradation) factor (%)   Notes
0                        100                                   Fresh data; no degradation
1                        80                                    Inspection data is 1 year old and less representative of actual conditions
2                        60
3                        40                                    Inspection data is now 3 years old and current conditions might be significantly different
4                        20
5                        0                                     Inspection results assumed to no longer yield useful information

Interview data

Collecting information via an interview will often require the use of qualitative descriptive terms. Such verbal labeling has some advantages, including ease of explanation and familiarity. (In fact, most people prefer verbal responses when replying to rating tasks.) It is therefore useful for capturing expert judgments. However, these advantages are at least partially offset by inferior measurement quality, especially regarding consistency. Some emerging techniques for artificial intelligence systems seek to make better use of human reasoning to solve problems involving incomplete knowledge and the use of descriptive terms. In mirroring human decision making, fuzzy logic interprets and makes use of natural language in ways similar to our risk models. Much research can be found regarding transforming verbal expressions into quantitative or numerical probability values. Most studies conclude that there is relatively consistent usage of terms. This is useful when polling experts, weighing evidence, and devising quantitative measures from subjective judgments. For example, Table 2.4 shows the results of a study in which certain expressions, obtained from interviews of individuals, were correlated against numerical values. Using relationships like those shown in Table 2.4 can help bridge the gap between interview or survey results and numerical quantification of beliefs.

Table 2.4 Assigning numbers to qualitative assessments

Expression           Median probability equivalent (%)
Almost certain       90
Very high chance     90
Very likely          85
High chance          80
Very probable        80
Very possible        80
Likely               70
Probable             70
Even chance          50
Medium chance        50
Possible             40
Low chance           15
Unlikely             15
Improbable           10
Very low chance      10
Very unlikely        5
Very improbable      2
Almost impossible    1

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," project report, University of Melbourne, Australia, September 2002.
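The inspection-age degradation scheme of Table 2.3 reduces to a one-line formula. The 20%-per-year rate and the assumption that data are worthless after 5 years come from the example above; the factor is applied as a multiplier on inspection-based scores.

```python
def inspection_adjustment(age_years, rate_per_year=0.20):
    """Fraction of an inspection-based score still credited after age_years,
    per the example scale: 20% of the value lost per year, and no useful
    information remaining at 5 years."""
    return max(0.0, 1.0 - rate_per_year * age_years)

def adjusted_score(raw_score, age_years):
    # Points earned from inspection results decay with the age of the data.
    return raw_score * inspection_adjustment(age_years)
```

For example, a score earned from a 1-year-old inspection is credited at 80% of its fresh value, matching the second row of the table.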
Additional studies have yielded similar correlations with terms relating to quality and frequency. In Tables 2.5 and 2.6, some test results are summarized using the median numerical value for all qualitative interpretations along with the standard deviation. The former shows the midpoint of responses (an equal number of answers above and below this value) and the latter indicates how much variability there is in the answers. Terms that have more variability suggest wider interpretations of their meanings. The terms in the tables relate quality and frequency to a 10-point numerical scale.

Table 2.5 Expressions of quality

Term             Median   Standard deviation
Outstanding      9.9      0.4
Excellent        9.7      0.6
Very good        8.5      0.7
Good             7.2      0.8
Satisfactory     5.9      1.2
Adequate         5.6      1.2
Fair             5.2      1.1
Medium           5        0.6
Average          4.9      0.5
Not too bad      4.6      1.3
So-so            4.5      0.7
Inadequate       1.9      1.2
Unsatisfactory   1.8      1.3
Poor             1.5      1.1
Bad              1        1

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," project report, University of Melbourne, Australia, September 2002.

Table 2.6 Expressions of frequency

Term               Median   Standard deviation
Always             10       0.2
Very often         8.3      0.9
Mostly             8        1.3
Frequently         7.4      1.2
Often              6.6      1.2
Fairly often       6.1      1.1
Moderately often   5.7      1.2
Sometimes          3.6      1
Occasionally       3.2      1.1
Seldom             1.7      0.7
Rarely             1.3      0.6
Never              0        0.1

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," project report, University of Melbourne, Australia, September 2002.

Variable grouping

The grouping or categorizing of failure modes, consequences, and underlying factors is a model design decision that must be made. Use of variables and subvariables helps understandability when variables are grouped in a logical fashion, but it also creates intermediate calculations. Some view this as an attractive aspect of a model, while others might see it as an unnecessary complication. Without categories of variables, the model takes on the look of a flat file, in a database design analogy. When using categories that look more like those of a relational database design, the interdependencies are more obvious.
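Verbal-qualifier scales like those of Table 2.6 can be embedded directly in a model so that interview answers become numbers. The dictionary below uses the frequency medians from that table; returning None for an unknown term keeps a data gap visible rather than guessing a value.

```python
# Median scores (10-point scale) for frequency expressions, per the
# Rohrmann (2002) data in Table 2.6.
FREQUENCY_SCALE = {
    "always": 10, "very often": 8.3, "mostly": 8, "frequently": 7.4,
    "often": 6.6, "fairly often": 6.1, "moderately often": 5.7,
    "sometimes": 3.6, "occasionally": 3.2, "seldom": 1.7,
    "rarely": 1.3, "never": 0,
}

def quantify(expression, scale=FREQUENCY_SCALE):
    """Turn a verbal frequency qualifier from an interview into a score.

    Unknown terms return None so the gap is visible rather than guessed.
    """
    return scale.get(expression.strip().lower())
```

A fuller implementation might also carry the standard deviation for each term, so that terms with wider interpretations contribute more uncertainty to the assessment.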
Weightings

The weightings of the risk variables, that is, their maximum possible point values or adjustment factors, reflect the relative importance of each item. Importance is based on the variable's role in adding to or reducing risk. The following examples illustrate the way weightings can be viewed. Suppose that the threat of AC-induced corrosion is thought to represent 2% of the total threat of corrosion; it is a relatively rare phenomenon. Suppose further that all corrosion conditions and activities are thought to be worst case: the pipeline is in a harsh environment with no mitigation (no coatings, no cathodic protection, etc.), and atmospheric, internal, and buried metal corrosion are all thought to be imminent. If we now addressed all AC corrosion concerns only, then we would be adding 2% safety, reducing the threat of corrosion of any kind by 2% (and reducing the threat of AC-induced corrosion by 100%). As another example, if public education is assumed to carry a weight of 15% of the third-party threat, then doing public education as well as it can be done should reduce the relative failure rate from third-party damage scenarios by 15%. Weightings should be continuously revisited and modified whenever evidence shows that adjustments are appropriate. The weightings are especially important when absolute risk calculations are being performed. For example, if an extra foot of cover is assumed, via the weightings assigned, to reduce failure probability by 10%, but an accumulation of statistical data suggests the effect is closer to 20%, the predictive power of the model is obviously improved by changing the weightings accordingly.
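The public-education example can be stated as a formula: a mitigation weighted at fraction w of a threat, performed at effectiveness e (0 to 1), scales the relative failure rate by (1 - w * e). This is a sketch of the weighting logic described above, not a calibrated model.

```python
def mitigated_failure_rate(base_rate, weight, effectiveness):
    """Relative failure rate after applying one mitigation.

    weight: the mitigation's share of the threat (e.g., 0.15 if public
    education is 15% of the third-party threat).
    effectiveness: 0 (not done) to 1 (done as well as it can be done).
    """
    return base_rate * (1.0 - weight * effectiveness)

# Public education at 15% of the third-party threat, done perfectly,
# leaves 85% of the relative third-party failure rate:
rate = mitigated_failure_rate(1.0, 0.15, 1.0)
```

Restating weightings this way also shows why they matter for absolute calculations: changing the weight from 10% to 20% directly changes the predicted failure-rate reduction attributed to that variable.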
In actuality, it is very difficult to extract the true influence of a single variable from the confounding influence of the multitude of other variables acting on the scenario simultaneously. In the depth of cover example, the reality is probably that the extra foot of cover impacts risk by 10% in some situations, 50% in others, and not at all in still others. (See also Chapter 8 for a discussion of sensitivity analysis.) The issue of assigning weightings to overall failure mechanisms also arises in model development. In a relative risk model with failure mechanisms of substantially equivalent orders of magnitude, a simplification can be used. The four indexes shown in Chapters 3 through 6 correspond to common failure modes and have equal 0-100 point scales; all failure modes are weighted equally. Because accident history (with regard to cause of failures) is not consistent from one company to another, it does not seem logical to rank one index over another on an accident history basis. Furthermore, if index weightings are based on a specific operator's experience, that accident experience will probably change with the operator's changing risk management focus. When an operator experiences many corrosion failures, he will presumably take actions to specifically reduce corrosion potential. Over time, a different mechanism may consequently become the chief failure cause. So, the weightings would need to change periodically, making the tracking of risk difficult. Weightings should, however, be used
to reflect beliefs about the frequency of certain failure types when linking relative models to absolute calculations or when there are large variations in expected failure frequencies among the possible failure types.
Risk scoring

Direction of point scale

In a scoring-type relative risk assessment, one of two point schemes is possible: increasing scores or decreasing scores can represent increased risk. Either can be used effectively and each has advantages. As a risk score, it makes sense that higher numbers mean more risk. However, by analogy to grading systems and to most sports and games (except golf), others prefer that higher numbers be better: more safety and less risk. Perhaps the most compelling argument for the "increasing points = increasing safety" protocol is that it instills a mind-set of increasing safety. "Increasing safety" has a meaning subtly different from, and certainly more positive than, "lowering risks." The implication is that additional safety is layered onto an already safe system as points are acquired. This protocol also has the advantage of corresponding to common expressions such as "the risk situation has deteriorated" = "scores have decreased" and "the risk situation has improved" = "scores have increased." While this book uses an "increasing points = increasing safety" scale in all examples of failure probability, note that this choice can cause a slight complication if the relative risk assessments are linked to absolute risk values. The complication arises because the indexes actually represent relative probability of survival; in order to calculate a relative probability of failure and link it to failure frequencies, an additional step is required. This is discussed in Chapter 14.
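That additional step can be illustrated with a toy conversion. The linear inversion below is an invented placeholder for the calibration the text defers to Chapter 14; it only shows that a survival-oriented score must be flipped before it can be linked to failure frequencies.

```python
def relative_failure(score, max_score=100.0):
    """Convert an 'increasing points = increasing safety' index score into
    a relative probability of failure by inverting the scale.

    The linear inversion is a hypothetical stand-in for the calibration
    step discussed in Chapter 14.
    """
    return (max_score - score) / max_score
```

A perfect safety score maps to zero relative failure likelihood, and scores fall toward 1.0 as points are lost.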
In either scheme, the user cannot judge how important a variable is to overall risk until she sees the weighting of that variable. Confusion can also arise in some models when the same variable is used in different parts of the model and has a location-specific scoring scheme. For instance, in the offshore environment, water depth is a risk reducer when it makes anchoring damage less likely. It is a risk increaser when it increases the chance of buckling. So the same variable, water depth, is a "good" thing in one part of the model and a "bad" thing somewhere else.
Combining variables

An additional modeling design feature involves the choice of how variables will be combined. Because some variables will indicate increasing risk and others decreasing risk, a sign convention (positive versus negative) must be established. Increasing levels of preventions should lead to decreased risks, while many attributes will add risk (see the earlier discussion of preventions and attributes). For example, the prevention of performing additional inspections should improve risk scores, while risk scores deteriorate as more soil corrosivity indications (moisture, pH, contaminants, etc.) are found. Another aspect of combining variables involves the choice of multiplication versus addition. Each has advantages. Multiplication allows variables to independently have a great impact on a score. Addition better illustrates the layering of adverse conditions or mitigations. In formal probability calculations, multiplication usually represents the AND operation: if corrosion prevention = "poor" AND soil corrosivity = "high," then risk = "high." Addition usually represents the OR operation: if depth of cover = "good" OR activity level = "low," then risk = "low."

Option 1

Risk variable = (sum of risk increasers) - (sum of risk reducers)
Where to assign weightings

In previous editions of this model, it is suggested that point values be set equal to weightings. That is, when a variable has a point value of 3, it represents 3% of the overall risk. The disadvantage of this system is that the user does not readily see what possible values the variable could take. Is it a 5-point variable, in which case a value of 3 means it is scoring midrange? Or is it a 15-point variable, for which a score of 3 means it is relatively low? An alternative point assignment scheme scores all variables on a fixed scale such as 0-10 points. This has the advantage of letting the observer know immediately how "good" or "bad" the variable is. For example, a 2 always means 20% from the bottom and a 7 always means 70% of the maximum points that could be assigned. The disadvantage is that, in this system, weightings must be used in a subsequent calculation. This adds another step and still does not make the point scale fully informative: the observer does not know what a 70% variable score really means until he sees the weightings assigned. A score of 7 for a variable weighted at 20% is quite different from a score of 7 for a variable weighted at 5%. In the first scheme, the user must see the point scale to know that a score of, say, 4 points represents the maximum level of mitigation. In the alternate scheme, the user knows that 10 always represents the maximum level of mitigation, but still cannot judge the variable's importance to overall risk without seeing its weighting.
where the point scales for each are in the same direction. For example,

Corrosion threat = (environment) - [(coating) + (cathodic protection)]
Option 2

Risk variable = (sum of risk increasers) + (sum of risk reducers)
Point scales for risk increasers are often opposite from the scale of risk reducers. For example, in an "increasing points means increasing risk" scheme,

Corrosion threat = (environment) + [(coating) + (cathodic protection)]
where actual point values might be (corrosion threat) = (24) + (-5 + -2) = 17
Option 3 In this approach, we begin with an assessment of the threat level and then consider mitigation measures as adjustment factors. So, we begin with a risk and then adjust the risk downward (if increasing points = increasing risk) as mitigation is added: Risk variable = (threat) x (sum of % threat reduction through mitigations)
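The three combination options above can be sketched in a few lines of code. The point values mirror the corrosion example in the text; the Option 3 adjustment factors, interpreted here as fractions of the threat that remain after each mitigation, are an assumption, since the text leaves that detail to the model designer.

```python
def option1(increasers, reducers):
    """Option 1: risk = (sum of risk increasers) - (sum of risk reducers).
    Both lists use positive point values."""
    return sum(increasers) - sum(reducers)

def option2(increasers, reducers):
    """Option 2: risk = (sum of increasers) + (sum of reducers),
    where reducers carry negative point values."""
    return sum(increasers) + sum(reducers)

def option3(threat, adjustments):
    """Option 3: threat scaled by mitigation adjustment factors.
    Treating each factor as a remaining-threat fraction is one
    reading of the text's formula, not the only possible one."""
    return threat * sum(adjustments)

# The text's Option 2 corrosion example: environment = 24 points,
# coating = -5 points, cathodic protection = -2 points.
corrosion_threat = option2([24], [-5, -2])   # 24 + (-5 + -2) = 17
```

The same inputs expressed in Option 1 form, `option1([24], [5, 2])`, give the same score of 17, which illustrates that the two schemes differ only in sign convention, not in substance.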
2/34 Risk Assessment Process
Example Corrosion threat = (environment) x [(coating) + (cathodic protection)]
Option 3 avoids the need to create codes for interactions of variables. For example, a scoring rule such as "cathodic protection is not needed = 10 pts" would not be needed in this scheme. It would be needed in other scoring schemes to account for a case where risk is low not through mitigation but through absence of threat. The scoring should also attempt to define the interplay of certain variables. For example, if one variable can be done so well as to make certain others irrelevant, then the scoring protocol should allow for this. For instance, if patrol (perhaps with a nominal weight of 20% of the third-party damage potential) can be done so well that we do not care about any other activity or condition, then other pertinent variables (such as public education, activity level, and depth of cover) could be scored as NA (the best possible numerical score) and the entire index is then based solely on patrol. In theory, this could be the case for a continuous security presence in some situations. A scoring regime that uses multiplication rather than addition is better suited to capturing this nuance. The variables shown in Chapters 3 through 6 use a variation of Option 2. All variables start at a value of 0, the highest risk. Then safety points are awarded for knowledge of less threatening conditions and/or the presence of mitigations. Any of the options can be effective as long as a point assignment manual is available to ensure proper and consistent scoring.
Variable calculations Some risk assessment models in use today combine risk variables using only simple summations. Other mathematical relationships might be used to create variables before they are added to the model. The designer has the choice of where in the process certain variables are created.
For instance, D/t (pipe diameter divided by wall thickness) is often thought to be related to crack potential or strength or some other risk issue. A variable called D/t can be created during data collection and its value added to other risk variables. This eliminates the need to divide D by t in the actual model. Alternatively, data for diameter and wall thickness could be made directly available to the risk model's algorithm, which would calculate the variable D/t as part of the risk scoring. Given the increased robustness of computer environments, the ability to efficiently model more complex relationships is leading to risk assessment models that take advantage of this ability. Conditional statements ("If X then Y"), including comparative relationships ["if (pop density) > 2 then (design factor) = 0.6, ELSE (design factor) = 0.72"], are becoming more prevalent. The use of these more complex algorithms to describe aspects of risk tends to mirror human reasoning and decision-making patterns. They are not unlike very sophisticated efforts to create expert systems and other artificial intelligence applications based on many simple rules that represent our understanding. Examples of more complex algorithms are shown in the following chapters and in Appendix E.
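The derived-variable and conditional-statement ideas above can be sketched as simple model-side calculations. The design factor rule follows the text's example; the population-density unit and the D/t inputs are illustrative assumptions.

```python
def design_factor(pop_density):
    """The text's conditional example:
    IF (pop density) > 2 THEN (design factor) = 0.6 ELSE 0.72."""
    return 0.6 if pop_density > 2 else 0.72

def d_over_t(diameter, wall_thickness):
    """Derived variable D/t, computed inside the risk algorithm
    rather than precomputed during data collection."""
    return diameter / wall_thickness

# Illustrative inputs: a 24-in. line with 0.500-in. wall.
ratio = d_over_t(24.0, 0.500)   # 48.0
factor = design_factor(3.1)     # 0.6, since density exceeds the cutoff
```

Computing D/t inside the model, as the second approach in the text suggests, keeps the raw diameter and wall thickness data available for other derived variables as well.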
Direct evidence adjustments Risk evaluation is done primarily through the use of variables that provide indirect evidence of failure potential. This includes knowledge of pipe characteristics, measurements of environmental conditions, and results of surveys. From these, we infer the potential presence of active failure mechanisms or failure potential. However, active failure mechanisms are directly detected by in-line inspection (ILI), pressure testing, and/or visual inspections, including those that might be prompted by a leak. Pressure testing is included here as a direct means because it will either verify that failure mechanisms, even if present, have not compromised structural integrity or it will prompt a visual inspection. If direct evidence appears to be in conflict with risk assessment results (based on indirect evidence), then one of three scenarios is true: 1. The risk assessment model is wrong; an important variable
has been omitted or undervalued, or some interaction of variables has not been properly modeled. 2. The data used in the risk assessment are wrong; actual conditions are not as thought. 3. There actually is no conflict; the direct evidence is being interpreted incorrectly, or it represents an unlikely, but statistically possible, event that the risk assessment had discounted due to its very low probability. It is prudent to perform an investigation to determine which scenario is the case. The first two each have significant implications regarding the utility of the risk management process. The last is a possible learning opportunity. Any conclusions based on previously gathered indirect evidence should be adjusted or overridden, when appropriate, by direct evidence. This reflects common practice, especially for time-dependent mechanisms such as corrosion: best efforts produce an assessment of corrosion potential, but that assessment is periodically validated by direct observation. The recommendation is that, whenever direct evidence of failure mechanisms is obtained, assessments should assume that these mechanisms are active. This assumption should remain in place until an investigation, preferably a root cause analysis (discussed later in this chapter), demonstrates that the causes underlying the failure mechanisms are known and have been addressed. For example, an observation of external corrosion damage should not be assumed to reflect old, already-mitigated corrosion. Rather, it should be assumed to represent active external corrosion unless the investigation concludes otherwise. Direct or confirmatory evidence includes leaks, breaks, anomalies detected by ILI, damages detected by visual inspection, and any other information that provides a direct indication of pipe integrity, if only at a very specific point. The use of ILI results in a risk assessment is discussed in Chapter 5.
The evidence should be captured in at least two areas of the assessment: pipe strength and failure potential. If reductions are not severe enough to warrant repairs, then the wall loss or strength reduction should be considered in the pipe strength evaluation (see Chapter 5). If repairs are questionable (use of nonstandard materials or practices), then the repair itself
should be evaluated. This includes a repair's potential to cause unwanted stress concentrations. If complete and acceptable repairs that restored full component strength have been made, then risk assessment "penalties" can be removed. Regardless of repair, evidence still suggests the potential for repeat failures in the same area until the root cause identification and elimination process has been completed. Whether or not a root cause analysis has been completed, direct evidence can be compiled in various ways for use in a relative risk assessment. A count of incidences or a density of incidences (leaks per mile, for example) will be an appropriate use of information in some cases, while a zone-of-influence or anomaly-specific approach might be better suited in others. When such incidences are rather common, occurring regularly or clustering in locations, the density or count approaches can be useful. For example, the density of ILI anomalies of a certain type and size in a transmission pipeline or the density of nuisance leaks in a distribution main are useful risk indications (see Chapters 5 and 11). When direct evidence is rare in time and/or space, a more compelling approach is to assign a zone of influence around each incident. For example, a transmission pipe leak incident is rare and often directly affects only a few square inches of pipe. However, it yields evidence about the susceptibility of neighboring sections of pipeline. Therefore, a zone of influence, X number of feet on either side of the leak event, can be assigned around the leak. The length of pipeline within this zone of influence is then conservatively treated as having leaked and containing conditions that might suggest increased leak susceptibility in the future. The recommended process for incorporating direct evidence into a relative risk assessment is as follows:
A.
Use all available leak history and ILI results, even when root cause investigations are not available, to help evaluate and score appropriate risk variables. Conservatively assume that damage mechanisms are still active. For example, the detection of pipe wall thinning due to external corrosion implies:
• The existence of a corrosive environment
• Failure of both coating and cathodic protection systems, or a special mechanism at work such as AC-induced corrosion or microbially induced corrosion
• A pipe wall thickness that is not as thought; pipe strength must be recalculated
Scores should be assigned accordingly. The detection of damaged coating, gouges, or dents suggests previous third-party damages or substandard installation practices. This implies that:
• Third-party damage activity is significant, or at least was at one time in the past
• Errors occurred during construction
• Pipe strength must be recalculated
Again, scores can be assigned accordingly.
B. Use new direct evidence to directly validate or adjust risk scores. Compare actual coating condition, pipe wall thickness, pipe support condition, soil corrosivity, etc., with the corresponding risk variables' scores. Compare the relative likelihood of each failure mode with the direct evidence. How does the model's implied corrosion rate compare with wall loss observations? How does third-party damage likelihood compare with dents and gouges on the top or side of pipe? Is the design index measure of land movement potential consistent with observed support condition or evidence of deformation?
C. If disagreement is apparent, that is, the direct evidence says something is actually "good" or "bad" while the risk model says the opposite, then perform an investigation. Based on the investigation results, do one or more of the following:
• Modify risk algorithms based on new knowledge.
• Modify previous condition assessments to reflect new knowledge. For example, "coating condition is actually bad, not fair as previously thought" or "cathodic protection levels are actually inadequate, despite 3-year-old close interval survey results."
• Monitor the situation carefully. For example, "existing third-party damage preventions are very protective of the pipe and this recent detection of a top side dent is a rare exception, or old and not representative of the current situation. Rescoring is not appropriate unless additional evidence is obtained suggesting that third-party damage potential is actually higher than assumed." Note that this example is a nonconservative use of information and is not generally recommended.
Role of leak history in risk assessment Pipeline failure data often come at a high cost: an accident happens. We can benefit from this unfortunate acquisition of data by refining our model to incorporate the new information. In actual practice, it is a common belief, sometimes backed by statistical analysis, that pipeline sections that have experienced previous leaks are more likely to have additional leaks. Intuitive reasoning suggests that conditions that promote one leak will most likely promote additional leaks in the same area. Leak history should be a part of any risk assessment. It is often the primary basis of risk estimations expressed in absolute terms (see Chapter 14). A leak is strong evidence of failure-promoting conditions nearby, such as soil corrosivity, inadequate corrosion prevention, problematic pipe joints, failure of the one-call system, active earth movements, or any of many others. It is evidence of future leak potential. This evidence should be incorporated into a relative risk assessment because, hopefully, the evaluator's "degree of belief" has been impacted by leaks. Each risk variable should always incorporate the best available knowledge of conditions and possibilities for promoting failure. Where past leaks have had no root cause analysis and/or corrective action applied, risk scores for the type of failure can be adjusted to reflect the presence of higher failure probability factors. A zone of influence around the leak site can be established (see Chapter 8) to penalize nearby portions of the system. In some pipelines, such as distribution systems (see Chapter 11) where some leak rate is routinely seen, the determination as to whether a section of pipeline is experiencing a higher frequency of leaks must be made on a relative basis. This can be
done by making comparisons with similar sections owned by the company or with industry-wide leak rates, as well as by benchmarking against specific other companies, or by a combination of these. Note that an event history is only useful in predicting future events to the extent that conditions remain unchanged. When corrective actions are applied, the event probability changes. Any adjustment for leak frequency should therefore be reanalyzed periodically.
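The zone-of-influence idea described above can be sketched as a simple stationing test: any pipeline station within X feet of a recorded leak inherits a penalty. The 500-ft half-width, the leak stations, and the function name are illustrative assumptions, not values from the text.

```python
def in_leak_zone(station_ft, leak_stations_ft, half_width_ft=500.0):
    """Return True if a pipeline station (in feet of stationing) falls
    within the assumed zone of influence of any recorded leak."""
    return any(abs(station_ft - leak) <= half_width_ft
               for leak in leak_stations_ft)

# Hypothetical leak history at stations 12,000 ft and 45,300 ft.
leaks = [12_000.0, 45_300.0]
nearby = in_leak_zone(12_400.0, leaks)   # within 500 ft of the first leak
```

In practice the penalty applied inside the zone would be removed, or reduced, once a root cause analysis shows the underlying conditions have been addressed, consistent with the guidance elsewhere in this section.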
Visual inspections A visual inspection of an internal or external pipe surface may be triggered by an ILI anomaly investigation, a leak, a pressure test, or routine maintenance. If a visual inspection detects pipe damage, then the respective failure mode score for that segment of pipe should reflect the new evidence. Points can be reassigned only after a root cause analysis has been done and demonstrates that the damage mechanism has been permanently removed. For risk assessment purposes, a visual inspection is often assumed to reflect conditions for some length of pipe beyond the portions actually viewed. A conservative zone some distance either side of the damage location can be assumed. This should reflect the degree of belief and be conservative. For instance, if poor coating condition is observed at one site, then poor coating condition should be assumed for as far as those conditions (coating type and age, soil conditions, etc.) might extend. As noted earlier, penalties from visual inspections are removed through root cause analysis and removal of the root cause. Historical records of leaks and visual inspections should be included in the risk assessment even if they do not completely document the inspection, leak cause, or repair, as is often the case. Because root cause analyses for events long ago are problematic, and their value in a current condition assessment is arguable, the weighting of these events is often reduced, perhaps in proportion to the event's age.
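One simple way to reduce the weight of old inspection or leak records "in proportion to the event's age," as suggested above, is a linear decay. The 20-year horizon and the linear form are assumptions for illustration; a model designer might equally choose an exponential decay or a different cutoff.

```python
def event_weight(age_years, horizon_years=20.0):
    """Linearly de-weight a historical event by its age; an event
    older than the assumed horizon contributes nothing to the score."""
    return max(0.0, 1.0 - age_years / horizon_years)

# A fresh leak counts fully; a 10-year-old record counts half;
# a 25-year-old record is disregarded under these assumptions.
weights = [event_weight(0), event_weight(10), event_weight(25)]
```

The decayed weight would then multiply whatever penalty the event contributes to the relevant failure mode score.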
Root cause analyses Pipeline damage is very strong evidence of failure mechanisms at work. This should be captured in the risk assessment. However, once the cause of the damage has been removed, if it can be, then the risk assessment should reflect the now safer condition. Determining and removing the cause of a failure mechanism is not always easy. Before the evidence provided by actual damage is discounted, the evaluator should ensure that the true underlying cause has been identified and addressed. There are no rules for determining when a thorough and complete investigation has been performed. To help the evaluator make such a judgment, the following concepts regarding root cause analyses are offered [32]. A root cause analysis is a specialized type of incident investigation process that is designed to find the lowest level contributing causes to the incident. More conventional investigations often fail to arrive at this lowest level. For example, assume that a leak investigation reveals that a failed coating contributed to a leak. The coating is subsequently repaired and the previously assigned leak penalty is removed from the risk assessment results. But then, a few years later, another leak appears at the same location. It turns out that the
main root cause was actually soil movements that will damage any coating, eventually leading to a repeat leak (discounting the role of other corrosion preventions; see Chapter 3). In this case, the leak penalty in the risk assessment should have been removed only after addressing the soil issue, not simply after the coating repair. This example illustrates that the investigators stopped the analysis too early by not determining the causes of the damaged coating. The root is often a system of causes that should be defined in the analysis step. The very basic understanding of cause and effect is that every effect has causes (plural). There is rarely only one root cause. The focus of any investigation or risk assessment is ultimately on effective solutions that prevent recurrence. These effective solutions are found by being very diligent in the analysis step (the causes). A typical indication of an incomplete analysis is missing evidence. Each cause-and-effect relationship should be validated with evidence. If we do not have evidence, then the cause-and-effect relationship cannot be validated. Evidence must be added to all causes in the analysis step. In the previous example, the investigators were missing the additional causes and their evidence to causally explain why the coating was damaged. If the investigators had evidence of coating damage, then the next question should have been "Why was the coating damaged?" A thorough analysis addresses the system of causes. If investigators cannot explain why the coating was damaged, then they have not completed the investigation. Simply repairing the coating is not going to be an effective solution. Technically, there is no end to a cause-and-effect chain; there is no end to the "Why?" questions. Common terminology includes root cause, direct cause, indirect cause, main cause, primary cause, contributing cause, proximate cause, physical cause, and so on.
It is also true that between any cause-and-effect relationship there are more causes that can be added; we can always ask more "Why?" questions between any cause and effect. This allows an analysis to dig into whatever level of detail is necessary. The critical point here is that the risk evaluator should not discount strong direct evidence of damage potential unless there is also compelling evidence that the damage-causing mechanisms have been permanently removed.
V. Lessons learned in establishing a risk assessment program As the primary ingredient in a risk management system, a risk assessment process or model must first be established. This is no small undertaking and, as with any undertaking, is best accomplished with the benefit of experience. The following paragraphs offer some insights gained through development of many pipeline risk management programs for many varied circumstances. Of course, each situation is unique, and any rules of thumb are necessarily general and subject to many exceptions. To some degree, they also reflect a personal preference, but nonetheless are offered here as food for thought for those embarking on such programs. These insights include some key points repeated from the first two chapters of this book.
The general lessons learned are as follows:
• Work from general to specific.
• Think "organic."
• Avoid complexity.
• Use computers wisely.
• Build the program as you would build a new pipeline.
• Study your results.

Avoid complexity
Every single component of the risk model should yield more benefits than the cost it adds in terms of complexity and data-gathering efforts. Challenge every component of the risk model for its ability to genuinely improve the risk knowledge at a reasonable cost. For example:
• Don't include an exotic variable unless that variable is a useful risk factor.
• Don't use more significant digits than are justified.
• Don't use exponential notation numbers if a relative scale can be appropriately used.
• Don't duplicate existing databases; instead, access information from existing databases whenever possible. Duplicate data repositories will eventually lead to data inconsistencies.
• Don't use special factors that are only designed to change numerical scales. These tend to add more confusion than their benefit in creating easy-to-use numbers.
• Avoid multiple levels of calculations whenever possible.
• Don't overestimate the accuracy of your results, especially in presentations and formal documentation. Remember the high degree of uncertainty associated with this type of effort.
We now take a look at the specifics of these lessons learned.
Work from general to specific Get the big picture first. This means "Get an overview assessment done for the whole system rather than getting every detail for only a portion of the system." This has two advantages: 1. No matter how strongly the project begins, things may change before project completion. If an interruption does occur, at least a general assessment has been done and some useful information has been generated. 2. There are strong psychological benefits to having results (even if very preliminary; caution is needed here) early in the process. This provides incentives to refine and improve preliminary results. So, having the entire system evaluated to a preliminary level gives timely feedback and should encourage further work. It is easy to quickly assess an entire pipeline system by limiting the number of risk variables in the assessment. Use only a critical few, such as population density, type of product, operating pressure, perhaps incident experience, and a few others. The model can then later be "beefed up" by adding the variables that were not used in the first pass. Use readily available information whenever possible.
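A first-pass screening model of the kind described, using only a critical few variables, might look like the following sketch. The variable names come from the text's examples, but the 0-to-10 scoring scale and the weights are invented purely for illustration.

```python
# Assumed weights for a quick first-pass assessment; each variable is
# scored 0-10, with 10 representing the highest risk contribution.
FIRST_PASS_WEIGHTS = {
    "population_density": 0.40,
    "product_hazard": 0.25,
    "operating_pressure": 0.20,
    "incident_history": 0.15,
}

def first_pass_score(scores):
    """Weighted sum of the critical few variables for one pipeline
    section; later passes can add variables to 'beef up' the model."""
    return sum(FIRST_PASS_WEIGHTS[name] * value
               for name, value in scores.items())

section = {"population_density": 8, "product_hazard": 6,
           "operating_pressure": 4, "incident_history": 2}
score = first_pass_score(section)
```

Because every section of the system gets a score quickly, the results can be ranked immediately, and the model can be refined later without discarding this first pass.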
Think "organic" Imagine that the risk assessment process and even the model itself are living, breathing entities. They will grow and change over time. There is the fruit: the valuable answers that are used to directly improve decision making. The ideal process will continuously produce ready-to-eat fruit that is easy to "pick" and use without any more processing. There are also the roots: the behind-the-scenes techniques and knowledge that create the fruit. To ensure the fruit is good, the roots must be properly cared for. Feed and strengthen the roots by occasionally using HAZOPS, statistical analysis, FMEA, event trees, fault trees, and other specific risk tools. Such tools provide the underpinnings for the risk model. Allow for growth because new inspection data, new inspection techniques, new statistical data sets to help determine weightings, missed risk indicators, new operating disciplines, and so on will arise. Plan for the most flexible environment possible. Make changes easy to incorporate. Anticipate that regardless of where the program begins and what the initial focus was, eventually all company personnel might be visiting and "picking the fruit" provided by this process.
Use computers wisely Too much reliance on computers is probably more dangerous than too little. In the former, knowledge and insight can be obscured and even convoluted. In the latter, the chief danger is that inefficiencies will result: an undesirable, but not critical, event. Regardless of potential misuse, however, computers can greatly increase the strength of the risk assessment process, and no modern program is complete without extensive use of them. The modern software environment is such that information is easily moved between applications. In the early stages of a project, the computer should serve chiefly as a data repository. Then, in subsequent stages, it should house the algorithm: how the raw information such as wall thickness, population density, soil type, etc., is turned into risk information. In later stages of the project, data analysis and display routines should be available. Finally, computer routines to ensure ease and consistency of data entry, model tweaking, and generation of required output should be available. Software use in risk modeling should always follow program development, not lead it.
• Early stage. Use pencil and paper or simple graphics software to sketch preliminary designs of the risk assessment system. Also use project management tools if desired to plan the risk management project.
• Intermediate stages. Use software environments that can store, sort, and filter moderate amounts of data and generate new values from arithmetic and logical (if . . . then . . . else . . .) combinations of input data. Choices include modern spreadsheets and desktop databases.
• Later stages. Provide for larger quantity data entry, manipulation, query, display, etc., in a long-term, secure, and user-friendly environment. If spatial linking of information is desired, consider migrating to geographical information systems (GIS) platforms. If multiuser access is desired, consider robust database environments.
Computer usage in pipeline risk assessment and management is further discussed in Chapter 8.
Build the program as you would build a new pipeline A useful way to view the establishment of a risk management program, and in particular the risk assessment process, is to consider a direct analogy with new pipeline construction. In either case, a certain discipline is required. As with new construction, failures in risk modeling occur through inappropriate expectations and poor planning, while success happens through thoughtful planning and management. Below, the project phases of a pipeline construction are compared to a risk assessment effort.
I. Conceptualization and scope creation phase: Pipeline: Determine the objective, the needed capacity, the delivery parameters, and schedule. Risk assessment: Several questions to the pipeline operator may better focus the effort and direct the choice of a formal risk assessment technique: What data do you have? What is your confidence in the predictive value of the data? What are the resource demands (and availability) in terms of costs, man-hours, and time to set up and maintain a risk model? What benefits do you expect to accrue, in terms of cost savings, reduced regulatory burdens, improved public support, and operational efficiency? Subsequent defining questions might include: What portions of your system are to be evaluated? Pipeline only? Tanks? Stations? Valve sites? Mainlines? Branch lines? Distribution systems? Gathering systems? Onshore/offshore? To what level of detail? Estimate the uses for the model, then add a margin of safety because there will be unanticipated uses. Develop a schedule and set milestones to measure progress.
II. Route selection/ROW acquisition: Pipeline: Determine the optimum routing; begin the process of acquiring needed ROW. Risk assessment: Determine the optimum location for the model and expertise. Centrally done from corporate headquarters? Field offices maintain and use information? Unlike the pipeline construction analogy, this aspect is readily changed at any point in the process and does not have to be finally decided at this early stage of the project.
III. Design: Pipeline: Perform detailed design hydraulic calculations; specify equipment, control systems, and materials. Risk assessment: The heart of the risk assessment will be the model or algorithm: that component which takes raw information such as wall thickness, population density, soil type, etc., and turns it into risk information.
Successful risk modeling involves a balancing between various issues, including:
• Identifying an exhaustive list of contributing factors versus choosing the critical few to incorporate in a model (complex versus simple)
• Hard data versus engineering judgment (how to incorporate widely held beliefs that do not have supporting statistical data)
• Uncertainty versus statistics (how much reliance to place on the predictive power of limited data)
• Flexibility versus a situation-specific model (ability to use the same model for a variety of products, geographical locations, facility types, etc.)
It is important that all risk variables be considered, even if only to conclude that certain variables will not be included in the final model. In fact, many variables will not be included when such variables do not add significant value but reduce the usability of the model. These "use or don't use" decisions should be made carefully and with full understanding of the role of the variables in the risk picture. Note that many simplifying assumptions are often made, especially in complex phenomena like dispersion modeling, fire and explosion potentials, etc., in order to make the risk model easy to use and still relatively robust. Both probability variables and consequence variables are examined in most formal risk models. This is consistent with the most widely accepted definition of risk: Event risk = (event probability) x (event consequence) (See also "VI. Commissioning" for more aspects of a successful risk model design.)
IV. Material procurement: Pipeline: Identify long-delivery-time items, prepare specifications, determine delivery and quality control processes. Risk assessment: Identify data needs that will take the longest to obtain and begin those efforts immediately. Identify data formats and level of detail. Take steps to minimize subjectivity in data collection. Prepare data collection forms or formats and train data collectors to ensure consistency.
V. Construction: Pipeline: Determine number of construction spreads, material staging, critical path schedule, inspection protocols.
Risk assessment: Form the data collection team(s), clearly define roles and responsibilities, create a critical path schedule to ensure timely data acquisition, schedule milestones, and take steps to ensure quality assurance/quality control.
VI. Commissioning: Pipeline: Testing of all components, start-up programs completed. Risk assessment: Use statistical analysis techniques to partially validate model results from a numerical basis. Perform a sensitivity analysis and some trial "what-ifs" to ensure that model results are believable and consistent. Perform validation exercises with experienced and knowledgeable operating and maintenance personnel. It is hoped that the risk assessment characteristics were specified earlier in the design and concept phases of the project, but here is a final place to check to ensure the following:
• All failure modes are considered.
• All risk elements are considered and the most critical ones are included.
• Failure modes are considered independently as well as in aggregate.
• All available information is being appropriately utilized.
• Provisions exist for regular updates of information, including new types of data.
• Consequence factors are separable from probability factors.
• Weightings, or other methods to recognize the relative importance of factors, are established.
• The rationale behind weightings is well documented and consistent.
• A sensitivity analysis has been performed.
• The model reacts appropriately to failures of any type.
• Risk elements are combined appropriately ("and" versus "or" combinations).
• Steps are taken to ensure consistency of evaluation.
• Risk assessment results form a reasonable statistical distribution (outliers?).
• There is adequate discrimination in the measured results (signal-to-noise ratio).
• Comparisons can be made against fixed or floating standards or benchmarks.
VII. Project completion: Pipeline: Finalize manuals, complete training, ensure maintenance protocols are in place, and turn the system over to operations. Risk assessment: Carefully document the risk assessment process and all subprocesses, especially the detailed workings of the algorithm or central model. Set up administrative processes to support an ongoing program. Ensure that control documents cover the details of all aspects of a good administrative program, including:
• Defining roles and responsibilities
• Performance monitoring and feedback
• Process procedures
• Management of change
• Communication protocols
Study the results
This might seem obvious, but it is surprising how many owners really do not appreciate what they have available after completing a thorough risk assessment. Remember that your final risk numbers should be completely meaningful in a practical, real-world sense. They should represent everything you know about that piece of pipe (or other system component): all of the collective years of experience of your organization, all the statistical data you can gather, all your gut feelings, all your sophisticated engineering calculations. If you can't really believe your numbers, something is wrong with the model. When, through careful evaluation and much experience, you can really believe the numbers, you will find many ways to use them that you perhaps did not foresee. They can be used to
Design an operating discipline Assist in route selection
Optimize spending Strengthen project evaluation Determine project prioritization Determine resource allocation Ensure regulatory compliance
VI. Examples of scoring algorithms
Sample relative risk model
The relative risk assessment model outlined in Chapters 3 through 7 is designed to be a simple and straightforward pipeline risk assessment model that focuses on potential consequences to public safety and environmental preservation. It provides a framework to ensure that all critical aspects of risk are captured. Figure 2.4 shows a flowchart of this model. This framework is flexible enough to accommodate any level of detail and data availability. For most variables, a sample point-scoring scheme is presented. In many cases, alternative scoring schemes are also shown. Additional risk assessment examples can be found in the case studies of Chapter 14 and in Appendix E.
The pipeline risk picture is examined in two general parts. The first part is a detailed itemization and relative weighting of all reasonably foreseeable events that may lead to the failure of a pipeline: "What can go wrong?" and "How likely is it to go wrong?" This highlights operational and design options that can change the probability of failure (Chapters 3 through 6). The second part is an analysis of the potential consequences should a failure occur (Chapter 7). The two general parts correspond to the two factors used in the most commonly accepted definition of risk:

Risk = (event likelihood) x (event consequence)
The failure potential component is further broken into four indexes (see Figure 2.4). The indexes roughly correspond to categories of reported pipeline accident failures. That is, each index reflects a general area to which, historically, pipeline accidents have been attributed. By considering each variable in each index, the evaluator arrives at a numerical value for that index. The four index values are then summed to a total value (called the index sum) representing the overall failure probability (or survival probability) for the segment evaluated. The individual variable values, not just the total index score, are preserved, however, for detailed analysis later.
The primary focus of the probability part of the assessment is the potential for a particular failure mechanism to be active. This is subtly different from the likelihood of failure. Especially in the case of a time-dependent mechanism such as corrosion, fatigue, or slow earth movements, the time to failure is related to factors beyond the presence of a failure mechanism. These include the resistance of the pipe material, the aggressiveness of the failure mechanism, and the time of exposure. These, in turn, can be further examined. For instance, the material resistance is a function of material strength; dimensions, most notably pipe wall thickness; and the stress level. The additional aspects leading to a time-to-fail estimate are usually more appropriately considered in specific investigations.
In the second part of the evaluation, an assessment is made of the potential consequences of a pipeline failure. Product characteristics, pipeline operating conditions, and the pipeline surroundings are considered in arriving at a consequence factor. The consequence score is called the leak impact factor and includes acute as well as chronic hazards associated with product releases. The leak impact factor is combined with the index sum (by dividing) to arrive at a final risk score for each section of pipeline. The end result is a numerical risk value for each pipeline section. All of the information incorporated into this number is preserved for a detailed analysis, if required. The higher-level variables of the entire process can be seen in the flowchart in Figure 2.4.
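The structure just described (four failure-potential indexes summed to an index sum, then divided by the leak impact factor) can be sketched in a few lines. This is an illustrative sketch only: the index names follow Figure 2.4, but the segment point values and leak impact factor below are hypothetical.

```python
# Sketch of the relative risk calculation described above, assuming the
# four-index structure of Figure 2.4. Higher index sum = safer; higher
# leak impact factor = worse consequences, so dividing yields a score
# where a higher number means lower relative risk.

def relative_risk_score(index_scores, leak_impact_factor):
    """Combine four 0-100 failure-potential indexes with a consequence factor."""
    index_sum = sum(index_scores.values())  # 0-400 overall
    return index_sum / leak_impact_factor

segment = {
    "third_party": 55,            # hypothetical index values for one segment
    "corrosion": 60,
    "design": 70,
    "incorrect_operations": 65,
}
score = relative_risk_score(segment, leak_impact_factor=10.0)
print(score)  # 250 / 10 = 25.0
```

Because the individual variable values are preserved, the same dictionary can later be drilled into for detailed analysis rather than working only from the final number.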
Basic assumptions
Some general assumptions are built into the relative risk assessment model discussed in Chapters 3 through 7. The user, and especially the customizer, of this system should be aware of these and make changes where appropriate.
Independence. Hazards are assumed to be additive but independent. Each item that influences the risk picture is considered separately from all other items; it independently influences the risk. The overall risk assessment combines all of the independent factors to get a final number. The final number reflects the "area of opportunity" for a failure mechanism to be active because the number of independent factors is believed to be directly proportional to the risk. For example, if event B can only occur if event A has first occurred, then event B is given a lower weighting to reflect the fact that there is a lower probability of both events happening. However, the example risk model does not normally stipulate that event B cannot happen without event A.
Worst case. When multiple conditions exist within the same pipeline segment, it is recommended that the worst-case condition for a section govern the point assignment. The rationale for this is discussed in Chapter 1. For instance, if a 5-mile section of pipeline has 3 ft of cover for all but 200 ft of its length (which has only 1 ft of cover), the section is still rated as if the entire 5-mile length has only 1 ft of cover. The evaluator can work around this through his choice of section breaks (see the Sectioning of the Pipeline section earlier in this chapter). Using modern segmentation strategies, there is no reason to have differing risk conditions within the same pipeline segment.
Relative. Unless a correlation to absolute risk values has been established, point values are meaningful only in a relative sense. A point score for one pipeline section only shows how that section compares with other scored sections. Higher point values represent increased safety (decreased probability of failure) in all index values (Chapters 3 through 6). Absolute risk values can be correlated to the relative risk values in some cases, as is discussed in Chapter 14.
Judgment based. The example point schedules reflect experts' opinions based on their interpretations of pipeline industry experience as well as personal pipelining experience. The relative importance of each item (reflected in the weighting of the item) is similarly the experts' judgment. If sound statistical data are available, they are incorporated into these judgments. However, in many cases, useful frequency-of-occurrence data are not available. Consequently, there is an element of subjectivity in this approach.
Public. Threats to the general public are of most interest here.
Risks specific to pipeline operators and pipeline company personnel can be included as an expansion to this system, but only with great care, since a careless addition may interfere with the objectives of the evaluation. In most cases, it is believed that other possible consequences will be proportional to public safety risks, so the focus on public safety will usually fairly represent most risks.
Figure 2.4 Flowchart of relative risk index system.
Mitigations
It is assumed that mitigations never completely erase the threat. This is consistent with the idea that the condition of "no threat" will have less risk than the condition of "mitigated threat," regardless of the robustness of the mitigation measures. It also shows that even with much prevention in place, the hazard has not been removed.
Other examples
See Appendix E for examples of other risk scoring algorithms for pipelines in general. Additional examples are included in several other chapters, notably in Chapters 9 through 13, where discussions involve the assessments of special situations.
Third-party Damage Index
Third-party Damage Index
A. Minimum Depth of Cover       0-20 pts    20%
B. Activity Level               0-20 pts    20%
C. Aboveground Facilities       0-10 pts    10%
D. Line Locating                0-15 pts    15%
E. Public Education Programs    0-15 pts    15%
F. Right-of-way Condition       0-5 pts      5%
G. Patrol Frequency             0-15 pts    15%
Total                           0-100 pts  100%
This table lists some possible variables and weightings that could be used to assess the potential for third-party damages to a typical transmission pipeline (see Figures 3.1 and 3.2).
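The weighted-variable structure in the table above can be sketched as a simple scoring routine. This is a minimal illustration, not the book's software; the dictionary keys and the example segment scores are our own assumptions. Each variable is capped at its maximum points and the capped scores are summed into the 0-100 index.

```python
# A sketch of combining the seven third-party damage variables, using the
# point ranges from the sample weighting table above. The example scores
# for one segment are hypothetical.

MAX_POINTS = {                      # from the sample weighting table
    "minimum_depth_of_cover": 20,
    "activity_level": 20,
    "aboveground_facilities": 10,
    "line_locating": 15,
    "public_education": 15,
    "row_condition": 5,
    "patrol_frequency": 15,
}

def third_party_index(scores):
    """Sum capped variable scores; 0-100 pts, higher = safer."""
    return sum(min(scores.get(k, 0), cap) for k, cap in MAX_POINTS.items())

example = {"minimum_depth_of_cover": 14, "activity_level": 8,
           "aboveground_facilities": 10, "line_locating": 12,
           "public_education": 9, "row_condition": 3,
           "patrol_frequency": 7}
print(third_party_index(example))  # 63
```

Capping each variable keeps a single very well-mitigated item from masking weakness elsewhere, consistent with the weighting scheme.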
Background
Pipeline operators usually take steps to reduce the possibility of damage to their facilities by others. The extent to which mitigating steps are necessary depends on how readily the system can be damaged and how often the chance for damage occurs.
Third-party damage, as the term is used here, refers to any accidental damage done to the pipe as a result of activities of personnel not associated with the pipeline. This failure mode is also sometimes called outside force or external force, but those descriptions would presumably include damaging earth movements. We use third-party damage as the descriptor here to focus the analyses specifically on damage caused by people not associated with the pipeline. Potential earth movement damage is addressed in the design index discussion of Chapter 5. Intentional damages are covered in the sabotage module (Chapter 9). Accidental damages done by pipeline personnel and contractors are covered in the incorrect operations index chapter (Chapter 6).
U.S. Department of Transportation (DOT) pipeline accident statistics indicate that third-party intrusions are often the leading cause of pipeline failure. Some 20 to 40 percent of all pipeline failures in most time periods are attributed to third-party damages. In spite of these statistics, the potential for third-party damage is often one of the least considered aspects of pipeline hazard assessment.
The good safety record of pipelines has been attributed in part to their initial installation in sparsely populated areas and
Figure 3.1 Basic risk assessment model.
Figure 3.2 Assessing third-party damage potential: sample of data used to score the third-party damage index.
Minimum depth of cover: soil cover; type of soil (rock, clay, sand, etc.); pavement type (asphalt, concrete, none, etc.); warning tape or mesh; water depth.
Activity level: population density; stability of the area (construction, renovation, etc.); one-calls; other buried utilities; anchoring, dredging.
Aboveground facilities: vulnerability (distance, barriers, etc.); threats (traffic volume, traffic type, aircraft, etc.).
One-call system: mandated; response by owner; well known and used.
Public education: methods (door-to-door, mail, advertisements, etc.); frequency.
Right-of-way condition: signs (size, spacing, lettering, phone numbers, etc.); markers (air vs. ground, size, visibility, spacing, etc.); overgrowth; undergrowth.
Patrol: ground patrol frequency; ground patrol effectiveness; air patrol frequency; air patrol effectiveness.
their burial 2.5 to 3 feet deep. However, encroachments of population and land development activities are routinely threatening many pipelines today. In the period from 1983 through 1987, eight deaths, 25 injuries, and more than $14 million in property damage occurred in the hazardous liquid pipeline industry due solely to excavation damage by others. These types of pipeline failures represent 259 accidents out of a total of 969 accidents from all causes. This means that 26.7% of all hazardous liquid pipeline accidents were caused by excavation damage [87]. In the gas pipeline industry, a similar story emerges: 430 incidents from excavation damage were reported in the 1984-1987 period. These accidents resulted in 26 deaths, 148 injuries, and more than $18 million in property damage. Excavation damage is thought to be responsible for 10.5% of incidents reported for distribution systems, 22.7% of incidents reported for transmission/gathering pipelines, and 14.6% of all incidents in gas pipelines [87]. European gas pipeline experience, based on almost 1.2 million mile-years of operations in nine Western European countries, shows that third-party interference represents approximately 50% of all pipeline failures [44].
Exposure
To quantify the risk exposure from excavation damage, an estimate of the total number of excavations that present a chance for damage can be made. Reference [64] discusses the Gas Research Institute's (GRI's) 1995 study that makes an effort to determine risk exposure for the gas industry. The study surveyed 65 local distribution companies and 35 transmission companies regarding line hits. The accuracy of the analysis was limited by the response; less than half (41%) of the companies responded, and several major gas-producing states were poorly represented (only one respondent from Texas and one from Oklahoma). The GRI estimate was determined by extrapolation and may be subject to a large degree of error because the data sample was not representative. Based on survey responses, however, GRI calculated an approximate magnitude of exposure. For those companies that responded, a total of 25,123 hits to gas lines were recorded in 1993; from that, the GRI estimated total U.S. pipeline hits in 1993 to be 104,128. For a rate of exposure, this number can be compared to pipeline miles: For 1993, using a reported 1,778,600 miles of gas transmission, main, and service lines, the calculated exposure rate was 58 hits per 1000 line miles. Transmission lines had a substantially lower experience, a rate of 5.5 hits per 1000 miles, with distribution lines suffering 71 hits per 1000 miles [64]. All rates are based on limited data.
Because the risk of excavation damage is associated with digging activity rather than system size, "hits per digs" is a useful measure of risk exposure. For the same year that GRI conducted its survey, one-call systems collectively received more than an estimated 20 million calls from excavators. (These calls generated 300 million work-site notifications for participating members to mark many different types of underground systems.) Using GRI's estimate of hits, the risk exposure rate for 1993 was 5 hits per 1000 notifications to dig [64].
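The exposure rates quoted above follow from simple arithmetic on the cited 1993 figures. A sketch (integer truncation is assumed here so the results match the quoted whole-number rates):

```python
# Reproducing the GRI exposure-rate arithmetic: estimated hits divided by
# line miles (per 1000 miles) and by one-call notifications to dig
# (per 1000 notifications). Figures are the 1993 estimates from the text.

def rate_per_1000(events, basis):
    return 1000.0 * events / basis

est_hits_1993 = 104_128
line_miles = 1_778_600        # gas transmission, main, and service lines
dig_notifications = 20_000_000  # approximate one-call volume, 1993

print(int(rate_per_1000(est_hits_1993, line_miles)))        # 58 hits per 1000 miles
print(int(rate_per_1000(est_hits_1993, dig_notifications)))  # 5 hits per 1000 notifications
```

The second rate ("hits per digs") is the better measure of exposure because digging activity, not system mileage, creates the opportunity for damage.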
Risk variables
Many mitigation measures are in place in most Western countries to reduce the threat of third-party damages to pipelines. Nonetheless, recent experience in most countries shows that this remains a major threat, despite often mandatory systems such as one-call systems. Reasons for continued third-party damage, especially in urban areas, include:
Smaller contractors ignorant of the permit or notification process
No incentive for excavators to avoid damaging the lines when the repair cost (to the damaging party) is smaller than the avoidance cost
Inaccurate maps/records
Imprecise locations by the operator.
Many of these situations are evaluated as variables in the suggested risk assessment model. The pipeline designer and, perhaps to an even greater extent, the operator can affect the probability of damage from third-party activities. As an element of the total risk picture, the probability of accidental third-party damage to a facility depends on:
The ease with which the facility can be reached by a third party
The frequency and type of third-party activities nearby.
Possible offenders include excavating equipment, projectiles, vehicular traffic, trains, farming equipment, seismic charges, fenceposts, telephone posts, wildlife (cattle, elephants, birds, etc.), anchors, and dredges.
Factors that affect the susceptibility of the facility include:
Depth of cover
Nature of cover (earth, rock, concrete, paving, etc.)
Man-made barriers (fences, barricades, levees, ditches, etc.)
Natural barriers (trees, rivers, ditches, rocks, etc.)
Presence of pipeline markers
Condition of right of way (ROW)
Frequency and thoroughness of patrolling
Response time to reported threats.
The activity level is often judged by items such as:
Population density
Construction activities nearby
Proximity and volume of rail or vehicular traffic
Number of other buried utilities in the area.
Serious damage to a pipeline is not limited to actual punctures of the line. A mere scratch on a coated steel pipeline damages the corrosion-resistant coating. Such damage can lead to accelerated corrosion and ultimately a corrosion failure, perhaps years in the future. If the scratch is deep enough to have removed enough metal, a stress concentration area (see Chapter 5) could be formed, which again, perhaps years later, may lead to a failure from fatigue, either alone or in combination with some form of corrosion-accelerated cracking. This is one reason why public education plays such an important role in damage prevention. To the casual observer, a minor dent or scratch in a steel pipeline may appear insignificant, certainly not worthy of mention. A pipeline operator knows the potential impact of any disturbance to the line. Communicating this to the general public increases pipeline safety.
Several variables are thought to play a critical role in the threat of third-party damages. Measuring these variables can therefore provide an assessment of the overall threat. Note that in the approach described here, this index measures the potential for third-party damage, not the potential for pipeline failure from third-party damages. This is a subtle but important distinction. If the evaluator wishes to measure the latter in a single assessment, additional variables such as pipe strength, operating stress level, and characteristics of the potential third-party intrusions (such as equipment type and strength) would need to be added to the assessment. What are believed to be the key variables to consider in assessing the potential for third-party damage are discussed in the following sections. Weightings reflect the relative percentage contribution of the variable to the overall threat of third-party damage.
Assessing third-party damage potential A. Minimum depth of cover (weighting: 20%) The minimum depth of cover is the amount of earth, or equivalent cover, over the pipeline that serves to protect the pipe from third-party activities. A schedule or simple formula can be developed to assign point values based on depth of cover. In this formula, increasing points indicate a safer condition; this convention is used throughout this book. A sample formula for depth of cover is as follows:
(Amount of cover in inches) ÷ 3 = point value, up to a maximum of 20 points

For instance,
42 in. of cover = 42 ÷ 3 = 14 points
24 in. of cover = 24 ÷ 3 = 8 points
Points should be assessed based on the shallowest location within the section being evaluated. The evaluator should feel confident that the depth of cover data are current and accurate; otherwise, the point assessments should reflect the uncertainty. Experience and logic indicate that less than one foot of cover may actually do more harm than good. It is enough cover to conceal the line but not enough to protect the line from even shallow earth-moving equipment (such as agricultural equipment). Three feet of cover is a common amount required by many regulatory agencies for new construction.
Credit should also be given for comparable means of protecting the line from mechanical damage. A schedule can be developed for these other means, perhaps by equating the mechanical protection to an amount of additional earth cover that is thought to provide equivalent protection. For example:
2 in. of concrete coating = 8 in. of additional earth cover
4 in. of concrete coating = 12 in. of additional earth cover
Pipe casing = 24 in. of additional cover
Concrete slab (reinforced) = 24 in. of additional cover.
Using the example formula above, a pipe section that has 14 in. of cover and is encased in a casing pipe would have an equivalent earth cover of 14 + 24 = 38 in., yielding a point value of 38 ÷ 3 = 12.7. Burial of a warning tape (a highly visible strip of material with warnings clearly printed on it) may help to avert damage to a pipeline (Figure 3.3). Such flagging or tape is commercially available and is usually installed just beneath the ground surface directly over the pipeline. Hopefully, an excavator will discover the warning tape, cease the excavation, and avoid damage to the line. Although this early warning system provides no physical protection, its benefit from a failure-prevention standpoint can be included in this model. A derivative of this system is a warning mesh: instead of a single strip of low-strength tape, a tough, high-visibility plastic mesh, perhaps 30 to 36 in. wide, is used. This provides some physical protection because most excavation equipment will have at least some minor difficulty penetrating it. It also provides additional protection via the increased width, reducing the likelihood of the excavation equipment striking the pipe before the warning mesh. Either system can be valued in terms of an equivalent amount of earth cover. For example:
Warning tape = 6 in. of additional cover
Warning mesh = 18 in. of additional cover.
As with all items in this risk assessment system, the evaluator should use his company's best experience or other available information to create his point values and weightings. Common situations that may need to be addressed include rocks in one region, sand in another (is the protection value equivalent?) and pipelines under different roadway types (concrete versus asphalt versus compacted stone, etc.). The evaluator need only remember the goal of consistency and the intent of assessing the amount of real protection from mechanical damage.
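The sample formula and equivalence schedule above can be combined into one scoring routine. A sketch, assuming the example equivalence values given in the text (the dictionary keys are our own labels):

```python
# Depth-of-cover scoring sketch: cover in inches divided by 3, capped at
# 20 points, with optional protections credited as equivalent inches of
# earth cover. Equivalence values are the examples from the text.

EQUIVALENT_COVER = {              # extra protection -> inches of equivalent cover
    "concrete_coating_2in": 8,
    "concrete_coating_4in": 12,
    "casing": 24,
    "reinforced_concrete_slab": 24,
    "warning_tape": 6,
    "warning_mesh": 18,
}

def cover_points(cover_inches, protections=()):
    total = cover_inches + sum(EQUIVALENT_COVER[p] for p in protections)
    return min(total / 3.0, 20.0)

print(cover_points(42))                         # 14.0
print(cover_points(24))                         # 8.0
print(round(cover_points(14, ["casing"]), 1))   # (14 + 24) / 3 = 12.7
```

Per the worst-case assumption, the score for a section would be computed at its shallowest (lowest-scoring) location.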
If the wall thickness is greater than what is required for anticipated pressures and external loadings, the extra thickness is available to provide additional protection against failure from external damage or corrosion. Mechanical protection that may be available from extra pipe wall material is accounted for in the design index (Chapter 5). In the case of pipelines submerged at water crossings, the intent is the same: Evaluate the ease with which a third party can physically access and damage the pipe. Credit should be given for water depth, concrete coatings, depth below seafloor, extra damage protection coatings, etc. A point schedule for submerged lines in navigable waterways might look something like the following:
Figure 3.3 Minimum depth of cover (showing ground surface, warning tape, and pipeline).
Depth below water surface:
0-5 ft: 0 pts
5 ft to maximum anchor depth: 3 pts
> maximum anchor depth: 7 pts

Depth below bottom of waterway (add these points to the points from depth below water surface):
0-2 ft: 0 pts
2-3 ft: 3 pts
3-5 ft: 5 pts
5 ft to maximum dredge depth: 7 pts
> maximum dredge depth: 10 pts

Concrete coating (add these points to the points assigned for water depth and burial depth):
None: 0 pts
Minimum 1 in.: 5 pts

The total for all three categories may not exceed 20 pts if a weighting of 20% is used.
The above schedule assumes that water depth offers some protection against third-party damage. This may not be a valid assumption in every case; such an assumption should be confirmed by the evaluator. Point schedules might also reflect the anticipated sources of damage. If only small boats can anchor in the area, perhaps this results in less vulnerability, and the point scores can reflect this. Reported depths must reflect the current situation because sea or riverbed scour can rapidly change the depth of cover. The use of water crossing surveys to determine the condition of the line, especially the extent of its exposure to external force damage, indirectly impacts the risk picture (Figure 3.4). Such a survey may be the only way to establish the pipeline depth and the extent of its exposure to boat traffic, currents, floating debris, etc. Because conditions can change dramatically when flowing water is involved, the time since the last survey is also a factor to be considered. Such surveys are considered in the incorrect operations index chapter (Chapter 6). Points can be adjusted to reflect the evaluator's confidence that cover information is current, with the recommendation to penalize (show increased risk) wherever uncertainty is higher. (See also Chapter 12 on offshore pipeline systems.)
Figure 3.4 River crossing survey (showing river bank and previous survey profile).
3/48 Third-party Damage Index
Example 3.1: Scoring the depth of cover
In this example, a pipeline section has burial depths of 10 and 30 in. In the shallowest portions, a concrete slab has been placed over and along the length of the line. The 4-in. slab is 3 ft wide and reinforced with steel mesh. Using the above schedule, the evaluator calculates points for the shallow sections with additional protection and for the sections buried with 30 in. of cover. For the shallow case: 10 in. of cover + 24 in. of additional (equivalent) cover due to the slab = (10 + 24)/3 pts = 11.3 pts. Second case: 30 in. of cover = 30/3 = 10 pts. Because the shallow portion with its extra protection yields the higher point value, the 30-in. case is the worst case; the evaluator uses its 10-pt score as the governing point value for this section. A better solution to this example would be to separate the 10-inch and 30-inch portions into separate pipeline sections for independent assessment.
In this section, a submerged line lies unburied on a river bottom, 30 ft below the surface at the river midpoint, rising to the water surface at shore. At the shoreline, the line is buried with 36 in. of cover. The line has 4 in. of concrete coating around it throughout the entire section. Points are assessed as follows: The shore approaches are very shallow; although boat anchoring is rare, it is possible. No protection is offered by water depth, so 0 pts are given here. The 4 in. of concrete coating yields 5 pts. Because the pipe is not buried beneath the river bottom, 0 pts are awarded for cover.
Total score = 0 + 5 + 0 = 5 pts
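Example 3.1 can be sketched in code. The first part applies the worst-case rule (the lowest depth-of-cover score governs the section); the second part applies the submerged-crossing schedule. The function names and the 20-pt cap (for a 20% weighting) follow the sample schedules in this chapter and are illustrative, not a fixed standard.

```python
# Sketch reproducing Example 3.1 under the sample schedules above.

def cover_score(cover_in, equivalent_in=0):
    """Sample formula: (cover + equivalent cover) / 3, max 20 pts."""
    return min((cover_in + equivalent_in) / 3.0, 20.0)

# Part 1: 10 in. of cover plus a reinforced slab (24 in. equivalent)
# versus 30 in. of plain cover; the lower score is the worst case.
scores = [cover_score(10, 24), cover_score(30)]   # [11.33..., 10.0]
print(min(scores))                                # 10.0 governs

def submerged_points(water_depth_pts, burial_pts, coating_pts, cap=20):
    """Sum the three submerged-crossing categories, not to exceed the cap."""
    return min(water_depth_pts + burial_pts + coating_pts, cap)

# Part 2: shallow approaches (0 pts water depth), unburied (0 pts),
# 4 in. of concrete coating (5 pts).
print(submerged_points(0, 0, 5))                  # 5
```

As the example notes, splitting the 10-in. and 30-in. portions into separate sections would avoid penalizing the deeper portion.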
B. Activity level (weighting: 20%)
Fundamental to any risk assessment is the area of opportunity. For an analysis of third-party damage potential, the area of opportunity is strongly affected by the level of activity near the pipeline. It is intuitively apparent that more digging activity near the line increases the opportunity for a line strike. Excavation occurs frequently in the United States. The excavation notification system in the state of Illinois recorded more than 100,000 calls during the month of April 1997. New Jersey's one-call system records 2.2 million excavation markings per year, an average of more than 6000 per day [64]. As noted previously, it is estimated that gas pipelines are accidentally struck at the rate of 5 hits per every 1000 one-call notifications.
DOT accident statistics for gas pipelines indicate that, in the 1984-1987 period, 35% of excavation damage accidents occurred in Class 1 and 2 locations, as defined by DOT gas pipeline regulations [87]. These are the less populated areas. This tends to support the hypothesis that a higher population density means more accident potential. Other considerations include nearby rail systems and high volumes of nearby traffic, especially where heavy vehicles such as trucks or trains are prevalent or speeds are high. Aboveground facilities and even buried pipe are at risk because an automobile or train wreck has tremendous destructive energy potential.
In some areas, wildlife damage is common. Heavy animals such as elephants, bison, and cattle can damage instrumentation and pipe coatings, if not the pipe itself. Birds and other smaller animals and even insects can also cause damage by their normal activities. Again, coatings and instrumentation of aboveground facilities are usually most threatened. Where such activity presents a threat of external force damage to the pipeline, it can be assessed as a contributor to activity level here.
The activity level item is normally a risk variable that may change over time, but is relatively unchangeable by the pipeline operator. Relocation is usually the only means for the pipeline operator to change this variable, and relocation is not normally a routine risk mitigation option.
The evaluator can create several classifications of activity levels for risk scoring purposes. She does this by describing sufficient conditions such that an area falls into one of her classifications. The following example provides a sample of some of the conditions that may be appropriate. Further explanation follows the example classifications.
High activity level (0 points): This area is characterized by one or more of the following:
Class 3 population density (as defined by DOT CFR 49 Part 192)
High population density as measured by some other scale
Frequent construction activities
High volume of one-call or reconnaissance reports (>2 per week)
Rail or roadway traffic that poses a threat
Many other buried utilities nearby
Frequent damage from wildlife
Normal anchoring area when offshore
Frequent dredging near the offshore line.
Medium activity level (8 points): This area is characterized by one or more of the following:
Class 2 population density (as defined by DOT)
Medium population density nearby, as measured by some other scale
No routine construction activities that could pose a threat
Few one-call or reconnaissance reports

temperature > 60°F
wall thickness > 0.5 in.
thermal relief devices
thermal relief valves-inspection/maintenance
torque specs/torque inspections
traffic exposures-air/marine
traffic exposures-ground outside station
traffic exposures-overall susceptibility
traffic exposures-preventions
traffic exposures-ground within station
traffic patterns/routing/flow
training-completeness of subject matter
training-job needs analysis
training-testing, certification, and retesting
use of colors/signs/locks/"idiot-proofing"
use of temporary workers
UST-material of construction
UST pressure
UST volume
UST-number of independent walls
vacuum truck(s)
vessel level safety systems
vibration
vibration: antivibration actions
wall thickness
walls < 6 ft high
walls > 6 ft high
water bodies nearby
water body type (river, stream, creek, lake, etc.)
water intakes nearby
weather events-floods
weather events-freeze
weather events-hail/ice/snow loading
weather events-lightning
weather events-potential
weather events-windstorm
wetlands nearby
workplace ergonomics
workplace human stress environment
Absolute Risk Estimates
I. Introduction
General failure data
Additional failure data
Relative to absolute risk
V. Index sums versus failure prediction
…
IX. Receptor vulnerabilities
Population
Generalized damage states
I. Introduction
As noted in Chapter 1, risks can be expressed in absolute terms, for example, "number of fatalities per mile-year for permanent residents within one-half mile of pipeline . . ." Also common is the use of relative risk measures, whereby hazards are prioritized such that the examiner can distinguish which aspects of the facilities pose more risk than others. The former is a frequency-based measure that estimates the probability of a specific type of failure consequence. The latter is a comparative measure of current risks, in terms of both failure likelihood and consequence. A criticism of the relative scale is its inability to compare risks from dissimilar systems (pipelines versus highway transportation, for example) and its inability to provide direct failure predictions. The absolute scale often fails in relying heavily on historical data, particularly for rare events that are extremely difficult to quantify, and on the unwieldy numbers that often generate a negative reaction from the public. The absolute scale
also often implies a precision that is usually not available to any risk assessment method. So, the "absolute scale" offers the benefit of comparability with other types of risks, whereas the "relative scale" offers the advantage of ease of use and customization to the specific risk being studied. Note that the two scales are not mutually exclusive. A relative risk ranking is converted into an absolute scale by equating previous accident histories with their respective relative risk values. This conversion is discussed in section IV on page 298. Absolute risk estimates are converted into relative numbers by simple mathematical relationships. Each scale has advantages, and a risk analysis that marries the two approaches may be the best approach. A relative assessment of the probability of failure can efficiently capture the many details that impact this probability. That estimate can then be used in post-failure event sequences that determine absolute risk values. (Also see Chapter 1 for discussion of issues such as objectivity and qualitative versus quantitative risk models.)
Although risk management can be practiced efficiently on the basis of relative risks alone, it occasionally becomes desirable to deal in absolute risks. This chapter provides some guidance and examples for risk assessments requiring absolute results (risk estimates expressed in fatalities, injuries, property damages, or some other measure of damage, in a certain time period) rather than relative results. This requires concepts commonly seen in probabilistic risk assessments (PRAs), also called numerical risk assessments (NRAs) or quantitative risk assessments (QRAs). These techniques have their strengths and weaknesses, as discussed on pages 23-25, and they are heavily dependent on historical failure frequencies. Several sources of failure data are cited and their data presented in this chapter. In most instances, details of the assumptions employed and the calculation procedures used to generate these data are not provided. Therefore, it is imperative that data tables not be used for specific applications unless the user has determined that such data appropriately reflect that application. The user must decide what information may be appropriate to use in any particular risk assessment. Case studies are also presented to further illustrate possible approaches to the generation of absolute risk values. This chapter therefore becomes a compilation of ideas and data that might be helpful in producing risk estimates in absolute terms. The careful reader may conclude several things about the generation of absolute risk values for pipelines:

- Results are very sensitive to data interpretation.
- Results are very sensitive to assumptions.
- Much variation is seen in the level of detail of analyses.
- A consistency of approach is important for a given level of detail of analysis.
II. Absolute risks

As noted in Chapter 1, any good risk evaluation will require the generation of scenarios to represent all possible event sequences that lead to all possible damage states (consequences). To estimate the probability of any particular damage state, each event in the sequence is assigned a probability. The probabilities can be assigned either in absolute terms or, in the case of a relative risk assessment, in relative terms, showing which events happen relatively more often than others. In either case, the probability assigned should be based on all available information. In a relative assessment, these event trees are examined and critical variables with their relative weightings (based on probabilities) are extracted as part of the model design. In a risk assessment expressing results in absolute numbers, the probabilities are assigned as part of the evaluation process. Absolute risk estimates require the predetermination of a damage state or consequence level of interest. Most common is the use of human fatalities as the consequence measure. Most risk criteria are also based on fatalities (see page 305) and are often shown on FN curves (see Figures 14.1 and 15.1), where the relationship between event frequency and severity (measured by number of fatalities) is shown. Other options for consequence measures include
- Human injuries
- Environmental damages
- Property damages
- Thermal radiation levels
- Overpressure levels from explosions
- Total consequences expressed in dollars.

If the damage state of interest is more than a "stress" level, such as a thermal radiation level or blast overpressure level, then a hazard area or hazard zone will also need to be defined. The hazard area is an estimate of the physical distances from the pipeline release that are potentially exposed to the threat. Hazard areas are often based on the "stress" levels just noted and will vary in size depending on the scenario (product type, hole size, pressure, etc.) and the assumptions (wind, temperature, topography, soil infiltration, etc.). Hazard areas are discussed later in this chapter and also in Chapter 7.

Receptors within the defined hazard area must be characterized. All exposure pathways to potential receptors, as discussed in Chapter 7, should be considered. Population densities, both permanent and transient (vehicle traffic, time-of-day, day-of-week, and seasonal considerations, etc.); environmental sensitivities; property types; land use; and groundwater are some of the receptors typically characterized. The receptor's vulnerability will often be a function of exposure time, which is a function of the receptor's mobility, that is, its ability to escape the area.

The event sequences are generated for all permutations of many parameters. For a hazardous substance pipeline, important parameters will generally involve

- Chance of failure
- Chance of each failure hole size
- Spill size (considering leak detection and reaction scenarios)
- Chance of immediate ignition
- Spill dispersion
- Chance of delayed ignition
- Hazard area size (for each scenario)
- Chance of receptor(s) being in hazard area
- Chance of various damage states to various receptors.

A frequency of occurrence must be assigned to the selected damage state: how often might this potential consequence occur? This frequency involves first an estimate of the probability of failure of the pipeline.
This is most often derived in part from historical data, as discussed below. Then, given that failure has occurred, the probability of subsequent, consequence-influencing events is assessed. This often provides a logical breakpoint where the risk analysis can be enhanced by combining a detail-oriented assessment of the relative probability of failure with an absolute-type consequence assessment that is sensitive to the potential chains of events.
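The chain just described (a failure frequency multiplied by conditional probabilities for hole size, ignition, receptor presence, and damage state) can be sketched as a single event-tree branch. Every number below is an illustrative assumption, not a recommended value:

```python
# Hedged sketch: frequency of one damage-state scenario as the product of a
# failure frequency and conditional probabilities along one event-tree branch.
# All numeric values are illustrative assumptions.

failure_freq = 8.9e-4            # failures per mile-year (illustrative)
p_rupture_hole = 0.10            # chance the failure is a rupture-size hole
p_immediate_ignition = 0.04      # chance of ignition at the release point
p_receptor_present = 0.25        # chance a receptor is inside the hazard area
p_fatality_given_exposed = 0.50  # chance the exposed receptor becomes a fatality

scenario_freq = (failure_freq * p_rupture_hole * p_immediate_ignition
                 * p_receptor_present * p_fatality_given_exposed)
# fatalities per mile-year for this one branch; summing over all branches
# (hole sizes, ignition timings, receptors, etc.) gives the total risk
```

Summing such products over every branch of the tree reproduces the "all permutations of many parameters" idea in the text.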
III. Failure rates

Pipeline failure rates are required starting points for determining absolute risk values. Past failures on the pipeline of interest are naturally pertinent. Beyond that, representative data from other pipelines are sought. Failure rates are commonly derived from historical failure rates of similar pipelines in similar environments. That derivation is by no means a straightforward exercise. In most cases, the evaluator must first find a general pipeline failure database and then make assumptions
[Figure 14.1 FN curve for risk characterization. Log-log plot of event frequency (1.00E-07 to 1.00E-02) versus number of fatalities (N), from 1 to 100.]
regarding the best "slice" of data to use. This involves attempts to extract from an existing database of pipeline failures a subset that approximates the characteristics of the pipeline being evaluated. Ideally, the evaluator desires a subset of pipelines with similar products, pressures, diameters, wall thicknesses, environments, ages, and operations and maintenance protocols. It is very rare to find enough historical data on pipelines with enough similarities to provide data that can lead to confident estimates of future performance for a particular pipeline type. Even if such data are found, estimating the performance of the individual from the performance of the group presents another difficulty. In many cases, the results of the historical data analysis will only provide starting points or comparison points for the "best" estimates of future failure frequency. The evaluator will usually make adjustments to the historical failure frequencies in order to more appropriately capture a specific situation. The assumptions and adjustments required often put this risk assessment methodology on par with a relative risk assessment in terms of accuracy and predictive capabilities. This underlies the belief that, given some work in correlating the two scales, absolute and relative risks can be related and used interchangeably. This is discussed below.
General failure data

As a common damage state of interest, fatality rates are a subset of pipeline failure rates. Very few failures result in a fatality. A rudimentary frequency-based assessment will simply identify the number of fatalities or injuries per incident and use this ratio to predict future human effects. For example, even in a database with much missing detail (as is typically the case in pipeline failure databases), one can extract an overall failure rate and the number of fatalities per length-time (i.e., mile-year or km-year). From this, a "fatalities per failure" ratio can be calculated. These values can then be scaled to the length and design life of the subject pipeline to obtain some very high-level risk estimates for that pipeline. A sample of high-level data that might be useful in frequency estimates for failure and fatality rates is given in Tables 14.1 through 14.4.

A recent study [67] of pipeline risk assessment methodologies in Australia recommends that the generic failure rates shown in Table 14.5 be used. These are based on U.S., European, and Australian gas pipeline failure rates and are presumably recommended for gas transmission pipelines (although the report addresses both gas and liquid pipelines). Using the rates from Table 14.5 and additional assumptions, this study produces the more detailed Table 14.6, a table of failure rates related to hole size and wall thickness. (Note: Table 14.6 is also a basis for the results shown later in this chapter for Case Study B.)

As discussed in earlier chapters, there is a difference between "frequency" and "probability," even though in some uses they are somewhat interchangeable. At very low frequencies of occurrence, the probability of failure will be numerically equal to the frequency of failure. However, the actual relationship between failure frequency and failure probability is often
Table 14.1 Compilation of pipeline failure data for frequency estimates

Location | Type | Period | Length | Failure rate | Fatality rate (no. per failure) | Ref.
Canada | Oil/gas | 1989-92 | 294,030 km | 0.16/1000 km-year | 0.025 | 95
USA | Oil/gas | 1987-91 | 1,725,156 km | 0.25/1000 km-year | 0.043 | 95
USA | Oil | 1982-91 | 344,649 km | 0.55/1000 km-year | 0.01 | 95
USA | Gas | 1987-91 | 1,382,105 km | 0.17/1000 km-year | 0.07 | 95
USA | Gas transmission | 1986-2002 | 300,000 miles | 0.267 failures/1000 mile-year | (not given) | (not given)
USA | Refined products | 1975-1999 | (not given) | 0.68/1000 mile-year | 0.0086 | 86
USA | Hazardous liquids | 1975-1999 | (not given) | 0.89/1000 mile-year | 0.0049 | 86
USA | Crude oil | 1975-1999 | (not given) | 1.1/1000 mile-year | 0.0024 | 86
USA | Hazardous liquid | 1986-2002 | (not given) | (not given) | (not given) | (not given)
Western Europe | Gas | (not given) | 1.2 million mile-years | 0.29/1000 mile-year | (not given) | 44
Table 14.2 U.S. national hazardous liquids spill data (1975-1999)

Event category | Units | Crude oil reportable rate | Refined products reportable rate | Crude oil + refined products reportable rate
Spill frequency | Spills/year/mile | 1.1 x 10^-3 | 6.8 x 10^-4 | 8.9 x 10^-4
Deaths | Deaths/incident | 2.4 x 10^-3 | 8.6 x 10^-3 | 4.9 x 10^-3
Injuries | Injuries/incident | 2.0 x 10^-2 | 6.1 x 10^- (exponent illegible) | 3.6 x 10^- (exponent illegible)

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for U.S. EPA and DOT, September 2000.
modeled by assuming a Poisson distribution of actual frequencies. The Poisson equation relating spill probability and frequency for a pipeline segment is

P(X)SPILL = [(f * t)^X / X!] * exp(-f * t)

where

P(X)SPILL = probability of exactly X spills
f = the average spill frequency for the segment of interest (spills/year)
t = the time period for which the probability is sought (years)
X = the number of spills for which the probability is sought, in the pipeline segment of interest.

The probability of one or more spills is evaluated as

P(one or more)SPILL = 1 - P(X)SPILL with X = 0, that is, 1 - exp(-f * t).
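A minimal sketch of these two relationships (the function names and the example inputs are illustrative, not from the text; the example rate is of the same order as the U.S. hazardous liquid rates tabulated above):

```python
import math

def prob_exactly_x_spills(f: float, t: float, x: int) -> float:
    """Poisson probability of exactly x spills on a segment with average
    spill frequency f (spills/year) over a period of t years."""
    return (f * t) ** x / math.factorial(x) * math.exp(-f * t)

def prob_one_or_more_spills(f: float, t: float) -> float:
    """Probability of at least one spill: 1 - P(0 spills) = 1 - exp(-f*t)."""
    return 1.0 - prob_exactly_x_spills(f, t, 0)

# Example: 0.89 spills per 1000 mile-years on a 10-mile segment over 20 years
p = prob_one_or_more_spills(0.89 / 1000 * 10, 20)   # about 0.16
```

At very small f * t the result is numerically close to f * t itself, which is the "probability equals frequency" shortcut the text mentions for rare events.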
Table 14.3 Average U.S. national hazardous liquid spill volumes and frequencies (1990-1997)

U.S. national average
Pipe spill frequency | 0.86 spills/year/1000 miles
Pipe spill volume | 0.70 bbl/year/mile
Pipe and station spill frequency | 1.3 spills/year/1000 miles
Pipe and station spill volume | 0.94 bbl/year/mile

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for U.S. EPA and DOT, September 2000.

Table 14.4 Comparison of common failure causes for U.S. hazardous liquid pipelines

Cause | Percent of total
Outside forces | 25
Corrosion | 25
Equipment failure (metal fatigue, seal, gasket, age) | 6
Weld failure (all welds except longitudinal seam welds) | 5
Incorrect operation | 7
Unknown | 14
Repair/install | 7
Other | 1
Seam split | 5
Total | 100

Table 14.5 Generic failure rates recommended in Australia

Cause of failure | Failure rate (per km-year)
External force | 3.00E-4
Corrosion | 1.00E-4
Material defect | 1.00E-4
Other | 5.00E-5
Total | 5.50E-4

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.
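Applied to a pipeline of known length, generic per-km-year rates like those in Table 14.5 give an expected annual failure count directly. A small sketch (the 60-km length is an arbitrary assumption for illustration):

```python
# Sketch: applying the Australian generic failure rates (Table 14.5) to a
# hypothetical pipeline. Rates are per km-year; the length is an assumption.

rates = {
    "external force": 3.00e-4,
    "corrosion": 1.00e-4,
    "material defect": 1.00e-4,
    "other": 5.00e-5,
}

total_rate = sum(rates.values())          # 5.50e-4 failures per km-year
length_km = 60.0                          # illustrative pipeline length
expected_failures_per_year = total_rate * length_km   # about 0.033
```

The per-cause breakdown is what allows mitigation credits (e.g., for third-party-damage prevention) to be applied to only the relevant slice of the total rate.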
Table 14.6 Failure rates related to hole size and wall thickness

[The body of this table (failure rates broken down by hole size (mm) and wall thickness (mm), with impact factors, corrosion factors, and the fraction of failures attributable to external force, corrosion, material defect, and other causes) is not legible in this copy. The generic failure rates on which the table is built are those of Table 14.5: external force 3.00E-4, corrosion 1.00E-4, material defect 1.00E-4, other 5.0E-5, overall 5.50E-4 per km-year.]

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.
a See wall thickness adjustments, Table 14.8.
b These are the study-recommended generic failure rates to use for QRA in Australia (see Table 14.5).
Additional failure data

A limited amount of data is also available to help make distinctions for pipeline characteristics such as wall thickness, diameter, depth of cover, and potential failure hole size. Several studies estimate the benefits of particular mitigation measures or design characteristics. These estimates are based on statistical analyses in some cases; they are often merely the historical failure rates of pipelines sharing a particular characteristic, such as a particular wall thickness, diameter, or depth of cover. This type of analysis must isolate the factor from other confounding factors and should also produce a rationale for the observation. For example, if data suggest that a larger diameter pipe ruptures less often on a per-length, per-year basis, is there a plausible explanation? In that particular case, higher strength due to geometrical factors, better quality control, and a higher level of attention by operators are plausible explanations, so the premise could be tentatively accepted. In other cases, the benefit from a mitigation is derived from engineering models or simply from logical analysis with assumptions. Some observations from various studies are discussed next.

The European Gas Pipeline Incident Group database (representing nine Western European countries and 1.2 million mile-years of operations as of this writing) gives the relative frequency of failure data shown in Table 14.7.
Table 14.7 European Gas Pipeline Incident Group database relative frequency of failure data

Cause | Failure rate (mile-year)^-1 | Percent of total failure rate
Third-party interference | 1.50E-04 | 50
Construction defect | 5.30E-05 | 18
Corrosion | 4.40E-05 | 15
Land movement | 1.80E-05 | 6
Other/unknown | 3.20E-05 | 11
Total | 2.90E-04 | 100

[The table's additional breakdown by percent of different hole sizes is not legible in this copy.]

Table 14.8 Suggested wall thickness adjustments

Wall thickness (mm) | External force coefficient
[Only fragments of this table are legible in this copy.]

Source (Tables 14.7 and 14.8): Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.

One study uses 12% as the ignition probability of NGL (natural gas liquids, referring to highly volatile liquids such as propane) based on U.S. data [43]. Another study concludes that the overall ignition probability for natural gas pipeline accidents is about 3.2% [95]. A more extensive model of natural gas risk assessment, called GRI (Gas Research Institute) PIMOS [33], estimates ignition probabilities for natural gas leaks and ruptures under various conditions. This model is discussed in the following paragraphs.

In the GRI model, the nominal natural gas leak ignition probabilities range from 3.1 to 7.2%, depending on accumulation potential and proximity to structures (confinement). The higher value occurs for accumulations in or near buildings. There is a 30% chance of accumulation following a leak, a 30% chance of that accumulation being in or near a building (given that accumulation has occurred), and an 80% chance of ignition when near or in a building (given an accumulation). Hence, that scenario leads to a 7.2% chance of ignition (30% x 30% x 80% = 7.2%). The other extreme scenario is (30% chance of accumulation) x (70% chance of not being near a building) x (15% chance of ignition when not near a building) = 3.1%.

For ruptures, the ignition probabilities nominally range from about 4 to 15%, with the higher probability occurring when ignition occurs immediately at the rupture location. Given a rupture, the probability of subsequent ignition at the rupture location is given a value of 15%. If ignition does not occur at the rupture (85% chance of no ignition at rupture), then the probability of subsequent ignition is 5%. So, the latter leads to a probability estimate of 85% x 5% = 4.3%. In both the leak and rupture scenarios, these estimates are referred to as base case probabilities. They can be subsequently adjusted by the factors shown in Tables 14.19 and 14.20. These probabilities are reportedly derived from U.S. gas transmission pipeline incident rates (U.S. Department of Transportation, ...).

Table 14.15 Estimates of ignition probabilities of natural gas for a range of hole sizes (European onshore pipelines)

[The hole-size category labels are not legible in this copy (one category is ruptures of diameter > 400 mm); the surviving ignition probabilities are 0.027, 0.019, 0.099, and 0.235.]
Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002. Derived from the European Gas pipeline Incident data Group (EGIG) for onshore pipelines from 1970 to 1992. Note that these findings are based on hole size and not on release rate, which will vary with pipeline pressure.

Table 14.16 Estimates of ignition probabilities for various products

Product | Ignition probability (%)
[Products listed: crude oil, diesel oil, fuel oil, gasoline, kerosene, jet fuel, and oil and gasoline; the individual values are not reliably legible in this copy. The value for all products combined is 3.6%.]

Table 14.17 Estimates of ignition probabilities for various products above and below grade

[Rows include gasoline, and gasoline and crude oil, with columns for above- and below-ground and for below-ground-only pipelines; the values are not reliably legible in this copy.]
Source (Tables 14.16 and 14.17): created from statements in Ref. [86], which cites various sources for these probabilities.

Table 14.18 Estimates of ignition probabilities for below-grade gasoline pipelines

Failure mode | Location | Rupture (%) | Hole (%) | Leak (%)
Overall | Rural | 3.1 | 3.1 | 0.62
Overall | Urban | 6.2 | 6.2 | 1.24
Immediate | Rural | 1.55 | 1.55 | 0.31
Immediate | Urban | 3.1 | 3.1 | 0.62
Delayed | Rural | 1.55 | 1.55 | 0.31
Delayed | Urban | 3.1 | 3.1 | 0.62

Source: Morgan, B., et al., "An Approach to the Risk Assessment of Gasoline Pipelines," presented at Pipeline Reliability Conference, Houston, TX, November 1996.
Notes: U.S. experience is approximately 1.5 times higher than CONCAWE (data shown above are from CONCAWE). Assumes urban rates are 2x base rates and that base rates reflect mostly rural experience. Leak ignition probability is 20% of that for ruptures or holes. Immediate and delayed ignitions occur with equal likelihood. A rupture is defined as 0.5 diameter or larger; a hole is > 10 mm but smaller than a rupture; a leak is 10 mm or smaller.
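The GRI base-case arithmetic is a simple product of conditional probabilities along each branch. A sketch with the values restated from the text (the variable names are mine):

```python
# Sketch of the GRI PIMOS base-case leak ignition arithmetic described in
# the text; probability values are restated from that discussion.

p_accumulation = 0.30    # chance a leak forms an accumulation
p_near_building = 0.30   # chance the accumulation is in or near a building
p_ignite_near = 0.80     # ignition chance in or near a building
p_ignite_away = 0.15     # ignition chance away from buildings

high = p_accumulation * p_near_building * p_ignite_near        # 0.072, i.e., 7.2%
low = p_accumulation * (1 - p_near_building) * p_ignite_away   # 0.0315, about 3.1%
```

The "base case" values are then scaled by the adjustment factors of Tables 14.19 and 14.20 for specific conditions.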
In cases of HVL pipeline modeling, default distances of 1000 to 1500 ft are commonly seen, depending on pipeline diameter, pressure, and product characteristics. HVL release cases are very sensitive to weather conditions and carry the potential for unconfined vapor cloud explosions, which can greatly extend impact zones to more than a mile. (See also the discussion of land-use issues in a following section for thoughts on setback distances that are logically related to hazard zones.)

A draft Michigan regulatory document suggests setback distances for buried high-pressure gas pipelines based on the HUD guideline thermal radiation criteria. The proposed setback distances are tabularized for pipeline diameters (from 4 to 26 in.) and pressures (from 400 to 1800 psig in 100-psig increments). The end points of the various tables are shown in Table 14.35. It is not known if these distances will be codified into regulations. In some cases, the larger distances might cause repercussions regarding alternative land uses for existing pipelines. Land-use regulations can have significant social, political, and economic ramifications, as discussed in Chapter 15.

The U.S. Coast Guard (USCG) provides guidance on the safe distance for people and wooden buildings from the edge of a burning spill in its Hazard Assessment Handbook, Commandant Instruction Manual M16465.13. Safe distances range widely depending on the size of the burning area, which is assumed to be on open water. For people, the distances vary from 150 to 10,100 ft, whereas for buildings the distances vary from 32 to 1900 ft for the same size spill. The spill radii for these distances range between 10 and 2000 ft [35]. A summary of setback distances was published in a consultant report and is shown in Table 14.36.
Table 14.35 Sample proposed setback distances

Facility (minimum setback in ft, for a 4-in. pipeline at 400 psig and a 26-in. pipeline at 1800 psig):
Multifamily developments (10,000 Btu/hr-ft2 criteria)
Elderly and handicapped units
Unprotected areas of congregation (450 Btu/hr-ft2 criteria)
Primary egress

[The individual setback values were scattered by extraction; the surviving values, in feet, are 67, 318, 772, 147, 3164, 40, 1489, and 40, but their assignment to rows and columns cannot be reliably recovered from this copy.]
Table 14.36 Summary of setback requirements in codes, standards, and other guides

Code, standard, guide | Setback requirement for tank from public (ft) | Variables
IFC 2000 (adopted in Alaska and proposed in municipality of Anchorage) | 5-175 | Tank size and type of adjacent use
UFC 2000 (pre-2001 in Alaska) | 5-175 | Tank size and type of adjacent use
UFC 1997 | 50-75 | Type of adjacent use
APA | Performance standard | Site specific and process driven
HUD | Buildings: 130-155; People: 650-775 | Product and tank size
USCG (open-water fire) | 150 to >10,000 | Diameter of spill

Source: Golder and Associates, "Report on Hazard Study for the Bulk POL Facilities in the POA Area," prepared for Municipality of Anchorage POL Task Force, August 9, 2002.
Notes: APA, American Planning Association; USCG, U.S. Coast Guard; HUD, Department of Housing and Urban Development. The National Fire Protection Association (NFPA) publishes NFPA Code 30, Flammable and Combustible Liquids Code, 2000 Edition. The International Code Council publishes the International Fire Code 2000 (IFC). The Western Fire Chiefs Association publishes the Uniform Fire Code 2000 Edition (UFC).
Any time default hazard zone distances replace situation-specific calculations, the defaults should be validated by actual calculations to ensure that they encompass most, if not all, possible release scenarios for the pipeline systems being evaluated.
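One way to make such a situation-specific check is with a simple point-source thermal radiation model; this is a textbook simplification offered only as an illustration. The radiated fraction and the example inputs below are assumed values, and real siting studies use validated fire models rather than this shortcut:

```python
import math

# Illustrative sketch only: point-source model for the distance at which
# radiant heat flux falls to a criterion value, q = eta*m*Hc/(4*pi*d^2).
# eta (radiated fraction) and the example inputs are assumptions.

def distance_to_flux(q_criterion_w_m2, burn_rate_kg_s,
                     heat_of_combustion_j_kg, eta=0.2):
    """Distance (m) at which the radiant flux equals q_criterion (W/m^2)."""
    radiated_power = eta * burn_rate_kg_s * heat_of_combustion_j_kg
    return math.sqrt(radiated_power / (4.0 * math.pi * q_criterion_w_m2))

# Example: 10 kg/s burning at 45 MJ/kg against a 1000 W/m^2 criterion
d = distance_to_flux(1000.0, 10.0, 45.0e6)   # roughly 85 m
```

Comparing such calculated distances against a tabulated default quickly shows whether the default actually bounds the scenarios of interest.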
XI. Case studies

The following case studies illustrate some techniques that are more numerically rigorous in producing absolute risk estimates. These are all extracted from public domain documents readily obtained from Internet sources and/or proceedings from regulatory approval processes. Company names and locations have been changed since the focus here is solely on illustrating the techniques. Other minor modifications to the extracted materials include the changing of table, figure, and reference numbering to correspond to the sequencing in this book.
Case Study A: natural gas
Quantitative risk calculations for XYZ pipeline
The following case study illustrates the estimation of risk using calculated hazard zones and frequency-based failure frequencies for a natural gas pipeline. Portions of this discussion were extracted or are based on Ref. [18], in which a proposed high-pressure gas pipeline, having both onshore and offshore components, was being evaluated. For this example, the proposed
pipeline name is XYZ and the owner/operator company will be called ACME. In this case, a relative risk assessment has been performed, but it is to be supplemented by an evaluation of risks presented in absolute terms. The introduction very briefly describes the purpose and scope of the analysis:

This document presents preliminary estimates of risks to the public that might be created by the proposed operation of the XYZ pipeline. The additional risk calculations build on the worst case estimates already provided in the regulatory application and will be used for emergency response planning. This analysis is preliminary and requires verification and review before use in connection with emergency planning.
A frequency of failures, fatalities, and injuries is estimated based on available data sets. As it is used here, "failure" refers to an incident that triggers the necessity of filing a report to the governing regulatory agency, so failure counts are counts of "reportable incidents." The failure frequency estimates are also later used with hazard area calculations.
Normalized frequency-based probabilistic risk estimates

Risk is examined in two parts: probability of a pipeline failure and consequences of a failure. In order to produce failure probabilities for a specific pipeline that is not yet operational, a failure frequency estimate based on other pipeline experience is required. Four sets of calculations, each based on a different underlying failure frequency, have been performed to produce four risk estimates for the proposed XYZ pipeline. The estimates rely on frequencies of reportable incidents, fatalities, and injuries as recorded in the referenced databases. The incident rate is used to calculate the probability of failure, and the fatality/injury rates are used to estimate consequences. The frequency estimates that underlie each of the four cases are generally described as follows:

Case 1. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" Acme-owned (ACME) gas transmission pipeline. For this case, ACME system leak experiences are used to predict future performance of the subject pipeline.

Case 2. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" Canadian gas transmission pipeline. In this case, the Canadian Transportation Safety Board historical leak frequency is used to predict future performance of the subject pipeline.

Case 3. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" U.S. gas transmission pipeline. In this case, the U.S. historical leak frequency is used to predict future performance of the subject pipeline.

Case 4. The subject pipeline is assumed to behave like some U.S. gas transmission pipelines, in particular, those with similar diameter, age, stress level, burial depth, and integrity verification protocols. In this case, the U.S. historical leak frequency is used as a starting point to predict future performance of the subject pipeline.

In all cases, failures are as defined by the respective regulations ("reportable accidents") using regulatory criteria for reportable incidents. The calculation results for the four cases applied to the proposed 37.3 miles (60.0 km) of XYZ pipeline are shown in Table 14.37.

The preceding part of this analysis illustrates a chief issue regarding the use of historical incident frequencies. In order for past frequencies to appropriately represent future frequencies, the past frequencies must be from a population of pipelines that is similar to the subject pipeline. As is seen in the table, basing the future fatality and injury rates on the experiences of the first two populations of pipelines results in an estimate of zero future such events, since none have occurred in the past. The last column presents annual probability numbers for individuals. Such numbers are often desired so that risks can be compared to other risks to which an individual might be exposed. In this application, the individual risk was assumed to be the risk from 2000 ft of pipeline, 1000 ft either side of a hypothetical leak location.
Case 4 discussion

Case 4 produces the best point estimate of risk for the XYZ pipeline. Note that all estimates suggest that the XYZ pipeline will experience no reportable failures during its design life. Probabilities of injuries and/or fatalities are extremely low in all cases. The U.S. DOT database of pipeline failures provides the best set of pertinent data from which to infer a failure frequency. It is used to support the calculations for Cases 3 and 4 above. Primarily basing failure calculations on U.S. statistics, rather than Canadian, is appropriate because:
Table 14.37 Calculations for Cases 1 through 4

Comparison criteria | Failures per year | Injuries per year | Fatalities per year | Years to failure | Years to injury | Years to fatality | Annual probability of an individual fatality(5)
Case 1: ACME(1) | 0.01055 | 0 | 0 | 100.4 | Never | Never | 0
Case 2: Canada(2) | 0.01200 | 0 | 0 | 83.3 | Never | Never | 0
Case 3: U.S.(3) | 0.01015 | 0.00167 | 0.00044 | 98.6 | 600.2 | 2278.8 | 4.8E-06
U.S. liquid(3) | 0.04344 | 0.00348 | 0.00050 | 23.0 | 287.4 | 1987.6 | 4.7E-06
Case 4: U.S. adjusted(4) | 0.00507 | 0.00084 | 0.00022 | 197.26 | 1,200.4 | 4557.6 | 2.4E-06

Notes:
(1) ACME, all Acme gas transmission systems, 1986-2000.
(2) TSB, Canadian gas transmission pipelines, 1994-1998; only one fatality (in 1985, third-party excavation) reported for NEB jurisdictional pipelines since 1959; a significant change in the definition of reportable incidents occurred in 1989.
(3) OPS, U.S. gas transmission pipelines, 1986-2002.
(4) Adjusted by assuming the failure rate of the subject pipeline is ~50% of the U.S. gas transmission average, by the rationale discussed.
(5) Assumes an individual is threatened by 2000 ft of pipe (directly over the pipeline, 1000 ft either side, 24/7 exposure); 2000 ft is chosen as a conservative length based on hazard zone calculations.
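The individual-risk column of Table 14.37 can be approximated by scaling the pipeline-wide fatality frequency to the 2000 ft of pipe assumed to threaten one individual. The scaling below is my reading of the table note, not a formula stated in the text:

```python
# Hedged sketch of the individual-risk approximation suggested by the notes
# to Table 14.37: scale the systemwide fatality frequency to the 2000 ft of
# pipe assumed to threaten one (24/7-exposed) individual.

fatalities_per_year = 0.00022      # Case 4 value from Table 14.37
pipeline_length_ft = 37.3 * 5280   # the 37.3-mile XYZ pipeline
exposed_length_ft = 2000.0         # 1000 ft either side of the individual

individual_risk = fatalities_per_year * exposed_length_ft / pipeline_length_ft
# on the order of 2E-06 per year, consistent with the table's 2.4E-06
```

The small difference from the tabulated 2.4E-06 presumably reflects rounding or additional adjustments in the original calculation.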
- More complete data are available (a larger historical failure database, and the data are better characterized).
- Strong influence by a major U.S. operator on design, operations, and maintenance.
- Similar regulatory codes, pipeline environments, and failure experiences.
- Apparently similar failure experience between the countries.
Since the combined experience of all US.pipelines cannot realistically represent this pipeline’s future performance (it may “encompass” this pipeline, hut not represent it), a suitable comparison subset of the data is desired. Variables that tend to influence failure rates and hence are candidates for criteria by which to divide the data, include time period, location, age, diameter, stress level, wall thickness, product type, depth ofcover, etc. Unfortunately, no database can be found that is complete enough to allow such characterization of a subset. Therefore, it is reasonable to supplement the statistical data with adjustment factors to account for the more significant differences between the subject pipeline and the population of pipelines from which the statistics arise. Rationale supporting the adjustment factors is as follows: 0
- Larger diameter: larger diameters account for <10% of failures in the complete database (a 90+% benefit from larger diameter is implied by the database, but only a 25% reduction in failures is assumed).
- Lower stress decreases failure rate by 10% (assumption based on the role of stress in many failure mechanisms).
- New coating decreases failure rate by 5% (assumption; note the well-documented problems with PE tape coatings in Canada).
- New IMP (integrity management program) procedures decrease failure rate by 10% (assumption based on judgment of the ability of an IMP to interrupt an incident event sequence).
- Deeper cover: 2 ft of additional depth is estimated to be worth a 30% reduction in third-party damages according to one European study, so a 10% reduction in overall failures is assumed.
- More challenging offshore environment leads to a 10% increase in failures (somewhat arbitrary assumption; conservative, since there are no known unresolved offshore design issues).
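The source does not state how these factors were combined; one plausible reading is a simple multiplicative model, which yields a net multiplier of roughly 0.57 — in the neighborhood of the ~50% reduction adopted. A sketch under that assumption:

```python
# Hedged sketch: combining the listed adjustment factors multiplicatively.
# A multiplicative model is assumed here for illustration only; the source
# does not specify the combination rule.

adjustments = {
    "larger diameter":      -0.25,  # 25% reduction assumed
    "lower stress":         -0.10,
    "new coating":          -0.05,
    "new IMP procedures":   -0.10,
    "deeper cover":         -0.10,
    "offshore environment": +0.10,  # 10% increase
}

multiplier = 1.0
for change in adjustments.values():
    multiplier *= 1.0 + change

print(f"net failure-rate multiplier: {multiplier:.2f}")  # 0.57
```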
Combining these factors leads to the use of a ~50% reduction from the average U.S. gas transmission failure rate. This is conservative, accepting a bias on the side of over-predicting the failure frequency. Additional conservatism comes from the omission of other factors that logically would suggest lower failure frequencies. Such factors include:

- The initial failure frequency is derived from pipelines that are predominantly pre-1970 construction; practices in current pipe and coating manufacture and in pipeline construction are more stringent.
- Better one-call systems (more often mandated, better publicized, in more common use).
- Better continuing public education.
- The line is designed, and mostly operated, to Class 3 requirements, where Class 3 pipelines have lower failure rates compared to the other classes from which baseline failure rates have been derived.
- Leaks versus ruptures: leaks are less damaging but are counted if reporting criteria are triggered.
- Company employee fatalities are included in the frequency data, even though general public fatalities/injuries are being estimated.
- The frequency data do not represent the event of "one or more fatalities," even though that is the event being estimated.
Model-based failure consequence estimates

An analysis of consequence, beyond the use of the historical fatality/injury rate described above, has also been undertaken. The severity of consequences (solely from a public safety perspective) associated with a pipeline's failure depends on the extent of the product release, thermal effects from potential ignition of the released product, and the nature of any damage receptors within the affected area. The area affected is primarily a function of the pipeline's diameter, pressure, and weather conditions at the time of the event. Secondary considerations include characteristics of the area, including topography, terrain, vegetation, and structures.
Failure discussion

The potential consequences from a pipeline release will depend on the failure mode (e.g., leak versus rupture), the discharge configuration (e.g., vertical versus inclined jet, obstructed versus unobstructed), and the time to ignition (e.g., immediate versus delayed). For natural gas pipelines, the possibility of a significant flash fire or vapor cloud explosion resulting from delayed remote ignition is extremely low due to the buoyant nature of the gas, which prevents the formation of a persistent flammable vapor cloud near common ignition sources. ACME applied "A Model for Sizing High Consequence Areas (HCAs) Associated with Natural Gas Pipelines" [83] to determine the potential worst case ACME Pipeline failure impacts on surrounding people and property. The Gas Research Institute (GRI) funded the development of this model for U.S. gas transmission lines in 2000, in association with the U.S. Office of Pipeline Safety (OPS), to help define and size HCAs as part of new integrity management regulations. This model uses a conservative and simple equation that calculates the size of the worst case affected release area based on the pipeline's diameter and operating pressure.
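The sizing equation commonly associated with the GRI/C-FER model is the potential impact radius r = 0.69 · d · √p, with d in inches, p in psig, and r in feet; it is assumed here that this is the "conservative and simple equation" referred to. A sketch, using example values drawn from this case study (the 16-in. diameter is inferred from the hole-size range, not stated explicitly):

```python
import math

# Potential impact radius formula from the GRI/C-FER HCA-sizing report [83]
# (as later adopted in U.S. gas integrity management rules):
#   r (ft) = 0.69 * d (in.) * sqrt(p (psig))

def potential_impact_radius_ft(diameter_in: float, pressure_psig: float) -> float:
    """Worst-case affected radius (ft) for a natural gas pipeline rupture."""
    return 0.69 * diameter_in * math.sqrt(pressure_psig)

# Example: a 16-in. line at the 2220-psig MAOP cited in this case study
r = potential_impact_radius_ft(16, 2220)
print(f"potential impact radius: {r:.0f} ft")  # 520 ft
```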
Failure scenarios

There are an infinite number of possible failure scenarios encompassing all possible combinations of failure parameters. For evaluation purposes, nine different scenarios are examined, involving permutations of three failure (hole) sizes and three possible pressures at the time of failure. These are used to represent the complete range of possibilities, so that all probabilities sum to 100%. Probabilities of each hole size and pressure are assigned, as are probabilities for ignition in each case. For each of the nine cases, four possible damage ranges (resulting from thermal effects) are calculated. Parameters used in the nine failure scenarios are shown in Table 14.38.
Table 14.38 Parameters for the nine failure scenarios under discussion

Hole size (in.)                    Probability of occurrence (%)
50% to full-bore rupture (8-16)    ...
0.5-8                              ...

Note: operation above 1800 psig would not be normal.
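The scenario bookkeeping can be sketched as a 3 × 3 grid whose probabilities sum to 100%. The probabilities below are placeholders for illustration (only the 40% weight for the 800-1100 psig band is stated in the text), and the small-leak category label is likewise an assumption:

```python
from itertools import product

# Illustrative 3 x 3 scenario grid: three hole sizes x three pressure bands.
# All probabilities except the 40% for 800-1100 psig are placeholders.

hole_sizes = {"8-16 in. (rupture)": 0.10, "0.5-8 in.": 0.30, "small leak": 0.60}
pressures  = {"800-1100 psig": 0.40, "1100-1800 psig": 0.35, ">1800 psig": 0.25}

scenarios = {
    (h, p): ph * pp
    for (h, ph), (p, pp) in product(hole_sizes.items(), pressures.items())
}

assert len(scenarios) == 9
total = sum(scenarios.values())
print(f"total probability across 9 scenarios: {total:.2f}")  # 1.00
```

Because each marginal distribution sums to 1, the nine joint probabilities necessarily sum to 1 as well, matching the text's requirement that the scenarios cover all possibilities.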
For ACME Pipeline release modeling, a worst case rupture is assumed to be a guillotine-type failure, in which the hole size is equal to the pipe diameter, at the pipeline's 15,305-kPa (2220-psig) Maximum Allowable Operating Pressure (MAOP). This worst case rupture is further assumed to include a double-ended gas release that is almost immediately ignited and becomes a trench fire. Note that the majority of the ACME Pipeline will normally operate well below its post-installation, pressure-tested MAOP in Canada. Anticipated normal operating pressures in Canada are in the range of 800 to 1100 psig, even though this range is given only a 40% probability and all other scenarios conservatively involve higher pressures. Therefore, the worst case release modeling assumptions are very conservative and cover all operational scenarios up to the 15,305-kPa (2220-psig) MAOP at any point along the pipeline. Other parameters used in the failure scenario cases are ignition probability and thermal radiation intensity (Table 14.39). Ignition probability estimates usually fall in the range of 5 to 12% based on pipeline industry experience; 65% is conservatively used in this analysis. The four potential damage ranges that are calculated for each of the nine failure scenarios are a function of thermal radiation intensity. The thresholds were chosen to represent specific potential damages that are of interest; they are described generally in Table 14.40.
Reference [83] recommends the use of 5000 Btu/hr-ft² as a heat intensity threshold for defining a "high consequence area." It is chosen because it corresponds to a level below which:

- Property, as represented by a typical wooden structure, would not be expected to burn;
- People located indoors at the time of failure would likely be afforded indefinite protection; and
- People located outdoors at the time of failure would be exposed to a finite but low chance of fatality.

Note that these thermal radiation intensity levels only imply damage states. Actual damages are dependent on the quantity and types of receptors that are potentially exposed to these levels. A preliminary assessment of structures has been performed, identifying the types of buildings and distances from the pipeline. This information is not yet included in these calculations but will be used in emergency planning.
Table 14.40 Four potential damage ranges for each of the nine failure scenarios under discussion

Thermal radiation level (Btu/hr-ft²)    Description
12,000                                  100% mortality in ~30 sec
5,000                                   1% mortality in ~30 sec
4,000                                   Eventual wood ignition
1,600                                   Onset of injury in ~30 sec
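As a cross-check (not a method stated in reference [83]), the damage distances reported in the Results are consistent with simple point-source, inverse-square scaling between these thresholds, under which distance varies as the square root of the intensity ratio:

```python
import math

# Inverse-square scaling of thermal radiation from an assumed point source:
# if intensity i_ref is received at r_ref, the distance at which intensity
# falls to i_threshold is r_ref * sqrt(i_ref / i_threshold).

def scale_distance(r_ref_ft: float, i_ref: float, i_threshold: float) -> float:
    """Distance (ft) at which intensity falls to i_threshold, given r_ref at i_ref."""
    return r_ref_ft * math.sqrt(i_ref / i_threshold)

# 333 ft at 12,000 Btu/hr-ft^2 implies ~912 ft at the 1,600 Btu/hr-ft^2
# injury threshold -- close to the 913-ft worst case reported in the Results.
r_1600 = scale_distance(333, 12_000, 1_600)
print(f"{r_1600:.0f} ft")  # 912 ft
```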
Case Study B: natural gas

Role of leak detection in consequence reduction

The nine failure scenarios analyzed represent the vast majority of all possible failure scenarios. Leak detection plays a relatively minor role in minimizing hazards to the public in most of these possible scenarios. Therefore, the analysis presented is not significantly impacted by any assumptions relative to leak detection capabilities. This is especially true since the damage states use an exposure time of ~30 seconds in the analysis.

Results

Results of calculations involving the nine failure scenarios and four damage (consequence) states, as measured by potential thermal radiation intensity, are shown in Table 14.41. The nine cases are shown graphically in Figure 14.3. The right-most end of each bar represents the total distance of any consequence type. The farthest extent of each damage type is shown by the right-most end point of the consequence type's color. These nine cases can also be grouped into three categories, as shown in Figure 14.4, which illustrates that 11% of all possible failure scenarios would not have any of the specified damages beyond 29 ft from the failure point. Of all possible failure scenarios, 55% (44% + 11%) would not have any specified damages beyond 457 ft. No failure scenario is envisioned that would produce the assessed damage states beyond 913 ft. In these groupings, the worst case (largest distance) is displayed. The specific damage types can be interpreted from the chart as follows: given a pipeline failure, 100% (~44% + ~44% + ~11%) of the possible damage scenarios have a fatality range of 333 ft or less (the longest bar). There is also a 56% chance that, given a pipeline failure, the fatality range would be 167 ft or less (the second longest bar).

Table 14.39 Additional parameters for the nine failure scenarios under discussion
Hole size (in.)                    Ignition probability, given failure has occurred (%)
50% to full-bore rupture (8-16)    40
0.5-8                              ...

Risk-acceptability criteria (annual probability of fatality, with thresholds on the order of 1.0 × 10⁻⁴):
- Insignificant; no action justifiable
- Action to reduce risk may be warranted but should be justified on a cost-benefit basis
- Unacceptable; action to reduce risk mandatory