Zhaoyang Dong Pei Zhang et al.
Emerging Techniques in Power System Analysis
With 67 Figures
Authors

Zhaoyang Dong
Department of Electrical Engineering
The Hong Kong Polytechnic University
Hong Kong, China
E-mail: [email protected]

Pei Zhang
Electric Power Research Institute
3412 Hillview Ave, Palo Alto,
CA 94304-1395, USA
E-mail: [email protected]

ISBN 978-7-04-027977-1
Higher Education Press, Beijing

ISBN 978-3-642-04281-2
e-ISBN 978-3-642-04282-9
Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: 2009933777

© Higher Education Press, Beijing and Springer-Verlag Berlin Heidelberg 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: Frido Steinen-Broo, EStudio Calamar, Spain

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Electrical power systems are among the most complex large-scale systems. Over the past decades, with deregulation and increasing demand in many countries, power systems have been operated under stressed conditions, subject to higher risks of instability and more uncertainties. System operators are responsible for secure system operation in order to supply electricity to consumers efficiently and reliably. Consequently, power system analysis tasks have become increasingly challenging and require more advanced techniques.

This book provides an overview of some of the key emerging techniques for power system analysis. It also sheds light on the next generation of technology innovations, given the rapid changes occurring in the power industry, especially the recent initiatives toward a smart grid.

Chapter 1 introduces the recent changes in the power industry and the challenging issues, including load modeling, distributed generation, situational awareness, and control and protection. Chapter 2 provides an overview of the key emerging technologies following the evolution of the power industry. Since it is impossible to cover all emerging technologies in this book, only selected key emerging techniques are described in detail in the subsequent chapters; other techniques are recommended for further reading.

Chapter 3 describes the first key emerging technique: data mining. Data mining has proven to be an effective technique for analyzing very complex problems, e.g., cascading failures and electricity market signal analysis. Data mining theories and application examples are presented in this chapter. Chapter 4 covers another important technique: grid computing. Grid computing techniques provide an effective approach to improving computational efficiency, and the methodology has been used in practice for real time power system stability assessment. Grid computing platforms and application examples are described in this chapter. Chapter 5 emphasizes the importance of probabilistic power system analysis, covering load flow, stability, reliability, and planning tasks. Probabilistic approaches can effectively quantify the increasing uncertainties in power systems and assist operators and planners in making objective decisions. Various probabilistic analysis techniques are introduced in this chapter.
Chapter 6 describes the application of an increasingly important device, the phasor measurement unit (PMU), in power system analysis. PMUs are able to provide real time synchronized system measurements, which can be used for various operational and planning analyses such as load modeling and dynamic security assessment. The PMU technology is the last key emerging technique covered in this book. Chapter 7 provides information leading to further reading on emerging techniques for power system analysis. With the new initiatives and a continuously evolving power industry, technology advances will continue and more emerging techniques will appear. Emerging technologies such as the smart grid, renewable energy, plug-in electric vehicles, emission trading, distributed generation, UHVAC/DC transmission, FACTS, and demand side response will have a significant impact on power systems. Hopefully, this book will increase the awareness of this trend and provide a useful reference for the selected key emerging techniques covered.
Zhaoyang Dong, Pei Zhang
Hong Kong and Palo Alto
August 2009
Contents

1 Introduction · · · 1
  1.1 Principles of Deregulation · · · 1
  1.2 Overview of Deregulation Worldwide · · · 2
    1.2.1 Regulated vs Deregulated · · · 3
    1.2.2 Typical Electricity Markets · · · 5
  1.3 Uncertainties in a Power System · · · 6
    1.3.1 Load Modeling Issues · · · 7
    1.3.2 Distributed Generation · · · 10
  1.4 Situational Awareness · · · 10
  1.5 Control Performance · · · 11
    1.5.1 Local Protection and Control · · · 12
    1.5.2 Centralized Protection and Control · · · 14
    1.5.3 Possible Coordination Problem in the Existing Protection and Control System · · · 15
    1.5.4 Two Scenarios to Illustrate the Coordination Issues Among Protection and Control Systems · · · 16
  1.6 Summary · · · 19
  References · · · 19

2 Fundamentals of Emerging Techniques · · · 23
  2.1 Power System Cascading Failure and Analysis Techniques · · · 23
  2.2 Data Mining and Its Application in Power System Analysis · · · 27
  2.3 Grid Computing · · · 29
  2.4 Probabilistic vs Deterministic Approaches · · · 31
  2.5 Phasor Measurement Units · · · 34
  2.6 Topological Methods · · · 35
  2.7 Power System Vulnerability Assessment · · · 36
  2.8 Summary · · · 39
  References · · · 39

3 Data Mining Techniques and Its Application in Power Industry · · · 45
  3.1 Introduction · · · 45
  3.2 Fundamentals of Data Mining · · · 46
  3.3 Correlation, Classification and Regression · · · 47
  3.4 Available Data Mining Tools · · · 49
  3.5 Data Mining based Market Data Analysis · · · 51
    3.5.1 Introduction to Electricity Price Forecasting · · · 51
    3.5.2 The Price Spikes in an Electricity Market · · · 52
    3.5.3 Framework for Price Spike Forecasting · · · 54
    3.5.4 Problem Formulation of Interval Price Forecasting · · · 63
    3.5.5 The Interval Forecasting Approach · · · 65
  3.6 Data Mining based Power System Security Assessment · · · 70
    3.6.1 Background · · · 72
    3.6.2 Network Pattern Mining and Instability Prediction · · · 74
  3.7 Case Studies · · · 79
    3.7.1 Case Study on Price Spike Forecasting · · · 80
    3.7.2 Case Study on Interval Price Forecasting · · · 83
    3.7.3 Case Study on Security Assessment · · · 89
  3.8 Summary · · · 92
  References · · · 92

4 Grid Computing · · · 95
  4.1 Introduction · · · 95
  4.2 Fundamentals of Grid Computing · · · 96
    4.2.1 Architecture · · · 97
    4.2.2 Features and Functionalities · · · 98
    4.2.3 Grid Computing vs Parallel and Distributed Computing · · · 100
  4.3 Commonly used Grid Computing Packages · · · 101
    4.3.1 Available Packages · · · 101
    4.3.2 Projects · · · 102
    4.3.3 Applications in Power Systems · · · 104
  4.4 Grid Computing based Security Assessment · · · 105
  4.5 Grid Computing based Reliability Assessment · · · 107
  4.6 Grid Computing based Power Market Analysis · · · 108
  4.7 Case Studies · · · 109
    4.7.1 Probabilistic Load Flow · · · 109
    4.7.2 Power System Contingency Analysis · · · 111
    4.7.3 Performance Comparison · · · 111
  4.8 Summary · · · 113
  References · · · 113

5 Probabilistic vs Deterministic Power System Stability and Reliability Assessment · · · 117
  5.1 Introduction · · · 117
  5.2 Identify the Needs for the Probabilistic Approach · · · 118
    5.2.1 Power System Stability Analysis · · · 118
    5.2.2 Power System Reliability Analysis · · · 119
    5.2.3 Power System Planning · · · 120
  5.3 Available Tools for Probabilistic Analysis · · · 121
    5.3.1 Power System Stability Analysis · · · 121
    5.3.2 Power System Reliability Analysis · · · 123
    5.3.3 Power System Planning · · · 123
  5.4 Probabilistic Stability Assessment · · · 125
    5.4.1 Probabilistic Transient Stability Assessment Methodology · · · 125
    5.4.2 Probabilistic Small Signal Stability Assessment Methodology · · · 127
  5.5 Probabilistic Reliability Assessment · · · 128
    5.5.1 Power System Reliability Assessment · · · 128
    5.5.2 Probabilistic Reliability Assessment Methodology · · · 131
  5.6 Probabilistic System Planning · · · 135
    5.6.1 Candidates Pool Construction · · · 136
    5.6.2 Feasible Options Selection · · · 136
    5.6.3 Reliability and Cost Evaluation · · · 136
    5.6.4 Final Adjustment · · · 136
  5.7 Case Studies · · · 137
    5.7.1 A Probabilistic Small Signal Stability Assessment Example · · · 137
    5.7.2 Probabilistic Load Flow · · · 140
  5.8 Summary · · · 142
  References · · · 143

6 Phasor Measurement Unit and Its Application in Modern Power Systems · · · 147
  6.1 Introduction · · · 147
  6.2 State Estimation · · · 151
    6.2.1 An Overview · · · 151
    6.2.2 Weighted Least Squares Method · · · 152
    6.2.3 Enhanced State Estimation · · · 154
  6.3 Stability Analysis · · · 157
    6.3.1 Voltage and Transient Stability · · · 158
    6.3.2 Small Signal Stability — Oscillations · · · 160
  6.4 Event Identification and Fault Location · · · 162
  6.5 Enhance Situation Awareness · · · 164
  6.6 Model Validation · · · 167
  6.7 Case Study · · · 169
    6.7.1 Overview · · · 170
    6.7.2 Formulation of Characteristic Ellipsoids · · · 170
    6.7.3 Geometry Properties of Characteristic Ellipsoids · · · 172
    6.7.4 Interpretation Rules for Characteristic Ellipsoids · · · 173
    6.7.5 Simulation Results · · · 175
  6.8 Conclusion · · · 179
  References · · · 179

7 Conclusions and Future Trends in Emerging Techniques · · · 185
  7.1 Identified Emerging Techniques · · · 185
  7.2 Trends in Emerging Techniques · · · 186
  7.3 Further Reading · · · 187
    7.3.1 Economic Impact of Emission Trading Schemes and Carbon Production Reduction Schemes · · · 187
    7.3.2 Power Generation based on Renewable Resources such as Wind · · · 189
    7.3.3 Smart Grid · · · 190
  7.4 Summary · · · 191
  References · · · 191

Appendix · · · 195
  A.1 Weibull Distribution · · · 195
    A.1.1 An Illustrative Example · · · 196
  A.2 Eigenvalues and Eigenvectors · · · 197
  A.3 Eigenvalues and Stability · · · 198
  References · · · 200

Index · · · 201
1 Introduction

Zhaoyang Dong and Pei Zhang
With the deregulation of the power industry having occurred in many countries across the world, the industry has been experiencing many changes, leading to increasing complexity, interconnectivity, and uncertainty. Demand for electricity has also increased significantly in many countries, which has resulted in increasingly stressed power systems. Insufficient investment in the infrastructure needed for a reliable electricity supply has been regarded as a key factor leading to several major blackouts in North America and Europe in 2003. More recently, the initiative toward the development of the smart grid has introduced many additional new challenges and uncertainties to the power industry. This chapter gives a general overview starting from deregulation and covering electricity markets, present uncertainties, load modeling, situational awareness, and control issues.
1.1 Principles of Deregulation

The electricity industry has been undergoing a significant transformation over the past decade, and deregulation of the industry is one of the most important milestones. The industry has been moving from a regulated monopoly structure to a deregulated market structure in many countries, including the US, the UK, the Scandinavian countries, Australia, New Zealand, and some South American countries. Deregulation of the power industry has also been under way in some Asian countries. The main motivations of deregulation are to:
• increase efficiency;
• reduce prices;
• improve services;
• foster customer choices;
• foster innovation through competition;
• ensure competitiveness in generation;
• promote transmission open access.
Together with deregulation, there are two major objectives for establishing electricity markets: (1) to ensure secure operation and (2) to facilitate economical operation (Shahidehpour et al., 2002).
1.2 Overview of Deregulation Worldwide

In South America, Chile started developing a competitive system for its generation services based on marginal prices as early as the early 1980s. Argentina deregulated its power industry in 1992, forming generation, transmission, and distribution companies within a competitive electricity market in which generators compete. Other South American countries followed the trend as well.

In the UK, the National Grid Company plc was established on March 31, 1990, as the owner and operator of the high voltage transmission system in England and Wales. Prior to March 1990, the vast majority of electricity supplied in England and Wales was generated by the Central Electricity Generating Board (CEGB), which also owned and operated the transmission system and the interconnectors with Scotland and France. The great majority of the output of the CEGB was purchased by the 12 area electricity boards, each of which distributed and sold it to customers. On March 31, 1990, the electricity industry was restructured and then privatized under the terms of the Electricity Act 1989. The National Grid Company plc assumed ownership and control of the transmission system and joint ownership of the interconnectors with Scotland and France, together with the two pumped storage stations in North Wales; these stations were subsequently sold off.

In the early 1990s, the Scandinavian countries (Norway, Sweden, Finland, and Denmark) created a Nordic wholesale electricity market, Nord Pool (www.nordpool.com). The corresponding Nordic Power Exchange was the world's first international commodity exchange for electrical power, serving customers in the four Scandinavian countries. As the Nordic Power Exchange, Nord Pool plays a key role as part of the infrastructure of the Nordic electricity market and thereby provides an efficient, publicly known price of electricity for both the spot and the derivatives markets.

In Australia, the National Electricity Market (NEM) commenced in December 1998, in order to increase transmission efficiency and reduce electricity prices. The NEM serves as a wholesale market for the supply of electricity to retailers and end use customers in five interconnected regions: Queensland (QLD), New South Wales (NSW), Snowy, Victoria (VIC), and
South Australia (SA). Tasmania (TAS) joined the Australian NEM on May 29, 2005, through Basslink, and the Snowy region was later abolished on July 1, 2008. In 2006 – 2007, the average daily demands in the five regions of QLD, NSW, VIC, SA, and TAS were 5 886 MW, 8 944 MW, 5 913 MW, 1 524 MW, and 1 162 MW, respectively. The NEM system is one of the world's longest interconnected power systems: it spans over 4 000 km and connects 8 million end use consumers, with AUD 7 billion of electricity traded annually (2004 data). The unserved energy (USE) of the NEM system is 0.002%.

In the United States, deregulation occurred in several regions. Major electricity markets include the PJM (Pennsylvania-New Jersey-Maryland) market and the California electricity market. The deregulation of the California electricity market followed a series of stages, starting in the late 1970s, to allow non-utility generators to enter the wholesale power market. In 1992, the Energy Policy Act (EPACT) formed the foundation for wholesale electricity deregulation. Similar deregulation processes have occurred in New Zealand and parts of Canada as well (Shahidehpour et al., 2002).
1.2.1 Regulated vs Deregulated

Traditionally, the power industry in a given service area is a vertically integrated single utility and a monopoly, normally owned by the government, a consumer cooperative, or private investors. As the single electricity service provider, the utility is also obligated to supply electricity to all customers in the service area. Given the supplier's monopoly status, the regulator sets the tariff (electricity price) so that the utility earns a fair rate of return on investments and recovers its operational expenses. Under the regulated environment, companies maximize profits subject to many regulatory constraints. From microeconomics, the sole service provider of a monopoly market has absolute market power. In addition, because costs are allowed by the regulator to be passed on to customers, the utility has little incentive to reduce costs or to weigh the risks associated with investments. Consequently, the customers cannot choose their electricity supply service provider and have no choice of tariffs (except in the case of service contracts).

Compared with a monopoly market, an ideal competitive market normally has many sellers/service providers and buyers/customers. As a result of competition, the market price is equal to the cost of producing the last unit sold, which is the economically efficient solution. The role of deregulation is to structure a competitive market with enough generators to eliminate market power. With deregulation, traditionally vertically integrated power utilities are split into generation, transmission, and distribution service providers to form
a competitive electricity market. Accordingly, the market operation decision model also changes as shown in Figs. 1.1 and 1.2.
Fig. 1.1. Market Operation Decision Model for the Regulated Power Industry – Central Utility Decision Model
Fig. 1.2. Market Operation Decision Model for the Deregulated Power Utility – Competitive Market Decision Model
In the deregulated market, economic decisions are made through a decentralized process: each participant aims to maximize its own profit. Unlike in the regulated environment, the recovery of the
investment in a new plant is not guaranteed in a deregulated environment. Consequently, risk management has become a critical part of the electricity business in a market environment. Another key change resulting from the electricity market is the introduction of more uncertainties and stakeholders into the power industry. This increases the complexity of power system analysis and leads to the need for new techniques.
1.2.2 Typical Electricity Markets

There are three major electricity market models in practice worldwide: the PoolCo model, the bilateral contracts model, and the hybrid model.

1) PoolCo Model

A PoolCo is defined as a centralized marketplace that clears the market for buyers and sellers. A typical PoolCo model is shown in Fig. 1.3.
Fig. 1.3. Spot Market Structure (National Grid Management Council, 1994)
In a PoolCo market, buyers and sellers submit bids to the pool for the amounts of power they are willing to trade. Sellers in an electricity market compete for the right to supply energy to the grid, not for specific customers. If a seller (normally a generation company, or GENCO) bids too high, it may not be able to sell. In some markets, buyers also bid
into the pool to buy electricity. If a buyer bids too low, it may not be able to buy. It should be noted that in some markets, such as the Australian NEM, only the sellers bid into the pool; the buyers pay a pool price determined by the market clearing process. An independent system operator (ISO) in a PoolCo market implements the economic dispatch and produces a single spot price for electricity. In an ideal competitive market, market dynamics will drive the spot price to a competitive level equal to the marginal cost of the most efficient bidders, provided the GENCOs bid into the market at their marginal costs in order to be dispatched by the ISO. In such a market, low cost generators will normally benefit by being dispatched. An ideal PoolCo market is thus a competitive market in which the GENCOs bid at their marginal costs; when market power exists, the dominating GENCOs may not necessarily bid at their marginal costs (a minimal merit-order clearing sketch is given at the end of this subsection).

2) Bilateral Contracts Model

Bilateral contracts are negotiable agreements on the delivery and receipt of electricity between two traders. These contracts set the terms and conditions of the agreements independent of the ISO. However, in this model the ISO verifies that sufficient transmission capacity exists to complete the transactions and to maintain transmission security. The bilateral contract model is very flexible, as the trading parties specify their desired contract terms. However, its disadvantages arise from the high costs of negotiating and writing contracts and from the credit risk of counterparties.

3) Hybrid Model

The hybrid model combines various features of the previous two models. In the hybrid model, the use of a PoolCo is not obligatory: any customer is allowed to negotiate a power supply agreement directly with suppliers or to choose to accept power at the spot market price. The PoolCo serves all participants who choose not to sign bilateral contracts. Allowing customers to negotiate power purchase arrangements with suppliers offers true customer choice and an impetus for the creation of a wide variety of services and pricing options to best meet individual customer needs (Shahidehpour et al., 2002).
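The spot-price mechanism described above can be made concrete with a short sketch. The following is a minimal merit-order clearing example, not an implementation of any actual market's dispatch engine; all generator names, capacities, and bid prices are hypothetical, and real markets add network constraints, losses, and more elaborate bid structures.

```python
# Minimal PoolCo-style merit-order clearing sketch (illustrative only).
# Each seller bids a price for its capacity; the ISO stacks bids from
# cheapest to most expensive, and the spot price is set by the marginal
# (last-dispatched) unit, as described in the text above.

def clear_pool(bids, demand_mw):
    """bids: list of (name, capacity_mw, price_per_mwh) tuples.
    Returns (dispatch dict in MW, spot price in $/MWh)."""
    dispatch, remaining, spot_price = {}, demand_mw, 0.0
    for name, capacity, price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        taken = min(capacity, remaining)
        dispatch[name] = taken
        remaining -= taken
        spot_price = price  # marginal unit sets the single pool price
    if remaining > 0:
        raise ValueError("insufficient generation bid into the pool")
    return dispatch, spot_price

# Hypothetical bids: low-cost units are dispatched first.
bids = [("coal", 500, 20.0), ("ccgt", 300, 35.0), ("peaker", 200, 90.0)]
print(clear_pool(bids, demand_mw=700))  # ({'coal': 500, 'ccgt': 200}, 35.0)
```

Note how a unit bidding above the marginal price (the peaker here) is not dispatched at all, which is precisely the incentive, in an ideal competitive pool, for GENCOs to bid at marginal cost.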
1.3 Uncertainties in a Power System

Uncertainties have existed in power systems since the beginning of the power industry. Uncertainties from demand and generator availability have been studied in reliability assessment for decades. However, with deregulation
and other new initiatives happening in the power industry, the level of uncertainty has been increasing dramatically. For example, in a deregulated environment, although generation planning is considered in the overall planning process, it is difficult for the transmission planner to access accurate information concerning generation expansion: transmission planning is no longer coordinated with generation planning by a single planner, and future generation capacities and system load flow patterns also become more uncertain. In this new environment, other possible sources of uncertainty include the following (Buygi et al., 2006; Zhao et al., 2009); a minimal sampling sketch is given after the list:
• system load;
• bidding behaviors of generators;
• availability of generators, transmission lines, and other system facilities;
• installation/closure/replacement of other transmission facilities;
• carbon prices and other environmental costs;
• market rules and government policies.
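As a minimal sketch of how such uncertainty sources can be represented for probabilistic studies, the snippet below samples them from simple distributions. The distribution choices (Gaussian load forecast error, Bernoulli equipment availability, uniform carbon price) and all numbers are illustrative assumptions, not recommendations from this book; Chapter 5 treats probabilistic analysis in depth.

```python
# Sampling uncertain inputs for a Monte Carlo power system study
# (illustrative assumptions only).
import random

def sample_scenario():
    load_mw = random.gauss(5000, 250)          # load forecast error
    line_up = random.random() > 0.001          # transmission line availability
    unit_up = random.random() > 0.05           # generator availability
    carbon_price = random.uniform(10.0, 40.0)  # policy/market uncertainty
    return load_mw, line_up, unit_up, carbon_price

# Each sampled scenario would be fed to a load flow or market simulation,
# and the outputs summarized as distributions rather than single values.
scenarios = [sample_scenario() for _ in range(10000)]
```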
1.3.1 Load Modeling Issues

Among the sources of uncertainty, power system load plays an important role. In addition to the uncertainties in forecast demand, load models also contribute to system uncertainty, especially for power system simulation and stability assessment tasks. Inappropriate load models may lead to wrong conclusions and possibly cause serious damage to the system, so a brief discussion of load modeling issues is given here.

Power system simulation is the most important tool guiding the operation and control of a power grid, and its accuracy relies heavily on model reliability. Among all the components in a power system, the load model is one of the least well known elements; however, its significant influence on system stability and control has long been recognized (Concordia and Ihara, 1982; Undrill and Laskowski, 1982; Kundur, 1993; IEEE, 1993a; IEEE, 1993b). Moreover, the load model directly influences power system security. On August 10, 1996, the WSCC (Western Systems Coordinating Council) system in the USA collapsed following power oscillations. The blackout caused huge economic losses and endangered the security of supply, yet the system model guiding WSCC operation had failed to predict it. The model validation process following this outage indicated that the load model in the WSCC database was not adequate to reproduce the event, strongly suggesting that more reliable load models are needed.

The load model also has great effects on the economic operation of a power system: the available transfer capability of a transmission corridor is highly affected by the accuracy of the load models used. Due to the limited understanding of load models, a power system is usually operated very conservatively, leading to poor utilization of both
the transmission and the generation assets. Nevertheless, it is also widely known that modeling the load is difficult due to the uncertainty and complexity of the load. The power load consists of various components, each with its own characteristics. Furthermore, the load is always changing, both in its amount and in its composition. Thus, how to describe the aggregated dynamic characteristics of the load remains an unsolved problem. Due to the blackouts that occurred around the world in the last few years, load modeling has received more attention and has become a new research focus.

The state of the art in load modeling research is mainly dedicated to the structure of the load model and to algorithms for finding its parameters. The structure of the load model has great impact on the results of power system analysis: it has been observed that different load models can lead to various, even completely contrary, conclusions on system stability (Kosterev et al., 1999; Pereira et al., 2002). Traditional production-grade power system analysis tools often use the constant impedance, constant current, and constant power load model, namely the ZIP load model. However, simulation results obtained by modeling the load with ZIP often deviate from field test results, which indicates the insufficiency of the ZIP load model. To capture the strong nonlinear characteristic of the load during voltage recovery, a load model with a nonlinear structure was proposed by Hill (1993), and a load structure in terms of nonlinear dynamic equations was later proposed by Karlsson and Hill (1994). Lin et al. (1993) identified two dynamic load model structures based on measurements, stating that a second order transfer function captures the load characteristics better than a first order transfer function. The recent trend has been to combine the dynamic load model with the static model (Lin et al., 1993; He et al., 2006; Ma et al., 2006). Wang et al. (1994) developed a load model combining an RC circuit in parallel with an induction motor equivalent circuit. Ma et al. (2006, 2007, 2008) and He et al. (2006) proposed a composite load model of the ZIP in combination with the motor. An interim composite load model that is 80% static and 20% induction motor was proposed by Pereira et al. (2002) for WSCC system simulation.

Besides the load model structure, the identification algorithms for finding the load model parameters are also widely researched. Both linear and nonlinear optimization algorithms are applied to solve the load modeling problem. However, the identification algorithm is based on the model structure, and it cannot give reliable results without a sound model structure. Although various model structures have been proposed for research purposes, the power industry still uses very simple static load models. The reason is that some basic problems in composite load modeling remain open, mainly concerning three key points. First, which model structure among the various proposed ones is most appropriate to represent the dynamic characteristics of the load, and is it the model with the simplest structure? Second, can this model structure be identified? Is the parameter
set given by the optimization process really the true one, since the optimization may easily get stuck in a local minimum? Third, how good is the generalization capability of the proposed load model? The load is always changing, yet a model can only be built on available measurements, so the generalization capability of a load model reflects its validity. Theoretically, the first point involves the minimal realization problem, the second point addresses the identification problem, and the third point relates closely to the statistical distribution of the load.

A sound load model structure is the basis for all other load modeling practice; without a good model structure, all efforts to find reliable load models are in vain. Occam's razor states that of all models describing a process accurately, the simplest one is the best (Nelles, 2001); correspondingly, simplification of the model structure is an important step in obtaining reliable load models (Ma et al., 2008). Currently, the ZIP model in combination with a motor is used to represent the dynamic characteristics of the load (a minimal statement of the ZIP model and a parameter fitting sketch are given at the end of this subsection). However, a load has various components. Taking motors as an example, there are big motors and small motors, industrial motors and domestic motors, three-phase motors and single-phase motors. Correspondingly, different load compositions are used to model different loads, or the same load at different operating conditions.

Once the load model structure is selected, proper load model parameter values are needed. Given the variations of the actual loads in a power system, a proper range of parameter values can provide a useful guide in selecting suitable load models for further simulation purposes. Parameter estimation is required in order to calculate the parameter values for a given load model from system response measurement data. This often involves optimization algorithms, linear/nonlinear least squares estimation (LSE) techniques, or a combination of both approaches.

A model with the appropriate structure and parameters usually performs well when fitting the available data. However, that does not necessarily make it a good model: a good load model must also have good generalization capability. Since the load is always changing, a model built on the available data must also be capable of describing unseen data. Methodologies used for generalization capability analysis include statistical analysis and various machine learning methods. Even if a model with good generalization capability has been obtained, cross validation is still needed, because the derived load model may still fail to represent the system dynamics in some operating conditions involving system transients.

It is worth noting that both research and engineering practice in load modeling still face many challenges. Many complex load modeling problems continue to cause difficulties for the power industry; consequently, static load models are still used by some companies in their operations and planning practices.
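For reference, a minimal statement of the static ZIP model discussed above; the notation ($P_0$, $Q_0$, $V_0$ for nominal power and voltage, with coefficients $a$, $b$, $c$) is the conventional one rather than notation defined in this chapter:

$$P = P_0\left[a_P\left(\frac{V}{V_0}\right)^2 + b_P\,\frac{V}{V_0} + c_P\right], \qquad Q = Q_0\left[a_Q\left(\frac{V}{V_0}\right)^2 + b_Q\,\frac{V}{V_0} + c_Q\right],$$

with $a_P + b_P + c_P = 1$ and $a_Q + b_Q + c_Q = 1$. The quadratic, linear, and constant terms represent the constant impedance (Z), constant current (I), and constant power (P) components, respectively; the composite model referred to above places an induction motor equivalent in parallel with these static terms.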
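And a minimal sketch of the measurement-based parameter estimation step, fitting the static ZIP coefficients by nonlinear least squares. The measurement arrays are hypothetical placeholders; a real study would use recorded voltage and power responses and typically fit a composite (ZIP plus motor) model.

```python
# Fitting ZIP coefficients to (hypothetical) voltage/power measurements.
import numpy as np
from scipy.optimize import least_squares

V0, P0 = 1.0, 1.0                        # nominal voltage and power (p.u.)
V = np.array([1.00, 0.97, 0.93, 0.90])   # measured voltages (placeholders)
P = np.array([1.00, 0.96, 0.91, 0.88])   # measured powers (placeholders)

def residuals(x):
    a, b = x
    c = 1.0 - a - b                      # enforce a + b + c = 1
    model = P0 * (a * (V / V0) ** 2 + b * (V / V0) + c)
    return model - P

fit = least_squares(residuals, x0=[0.4, 0.3])
a, b = fit.x
print(f"ZIP coefficients: a={a:.3f}, b={b:.3f}, c={1.0 - a - b:.3f}")
```

As the text cautions, a good fit to one record does not guarantee a good model; the estimated coefficients must still be cross validated against unseen disturbances.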
1.3.2 Distributed Generation

In addition to the uncertainty factors discussed previously, another important issue is the potential large-scale penetration of distributed generation (DG) into the power system. Traditionally, the global power industry has been dominated by large, centralized generation units, which are able to exploit significant economies of scale. In recent decades, the centralized generation model has drawn growing concern over its costs, security vulnerabilities, and environmental impacts, while DG is expected to play an increasingly important role in the future provision of a sustainable electricity supply. Large-scale implementation of DG will cause significant changes in the power industry and deeply influence the transmission planning process. For example, DG can reduce local power demand and thus potentially defer investments in the transmission and distribution sectors. On the other hand, when the penetration of DG in the market reaches a certain level, its suppliers will have to get involved in the spot market and trade electricity through the transmission and distribution networks, which may then need to be further expanded. The reliability of some types of DG is also of concern for the transmission and distribution network service providers (TNSPs and DNSPs). Therefore, it is important to investigate the impacts of DG on power system analysis, especially in the planning process. The uncertainties DG brings to the system also need to be considered in power system analysis.
1.4 Situational Awareness

The huge economic impact and the interruption of daily life caused by the 2003 blackouts in North America and the subsequent blackouts in the UK and Italy clearly showed the need for techniques to analyze and prevent such devastating events. According to the Electricity Consumers Resource Council (2004), the blackout of August 2003 in the United States and Canada left 50 million people without power supply, with an economic cost estimated at up to $10 billion. The many studies of this major blackout concluded that a lack of situational awareness was one of the key factors that resulted in the widespread power system outage. This lack of situational awareness stemmed from a number of factors, such as deficiencies in operator training, lack of coordination, ineffective communications, and inadequate tools for system reliability assessment. The same conclusion applies to other major system blackouts as well. As a result, operators and coordinators were unable to visualize the security and reliability status of the overall power system following certain disturbance events. Such a poor understanding of the system modes of operation
and of the health of network equipment also resulted in the Scandinavian blackout of 2003.

As the complexity and connectivity of power systems continue to grow, situational awareness becomes more and more important for system operators and coordinators. New methodologies are needed so that better awareness of system operating conditions can be achieved. The capability of control centres will be enhanced by better situational awareness, which can be partially promoted by the development of operator and control centre tools that allow more efficient proactive control actions, compared with conventional preventive tools. Real time tools that can perform robust real time system security assessment, even in the presence of system wide structural variations, are very useful in giving operators a better mental model of the system's health, so that prompt control actions can be taken to prevent possible system wide outages.

In its blackout report, the NERC Real-Time Tools Best Practices Task Force (RTTBPTF) defined situational awareness as "knowing what is going on around you and understanding what needs to be done and when to maintain, or return to, a reliable operating state." NERC's Real-Time Tools Survey report presented situational awareness practices and procedures, which should be used to define requirements or guidelines in practice. According to Endsley (1988), there are three levels of situational awareness (or situation awareness): (1) perception of elements, (2) comprehension of the meaning of these elements, and (3) projection of future system states based on the understanding from levels 1 and 2. For level 1, operators can use tools that provide real time visual and audio alarm signals as indicators of the operating states of the power system. According to NERC (NERC, 2005; NERC, 2008), there are three ways of implementing such alarm tools: within the SCADA/EMS system, as external functions, or as a combination of the two. The NERC Best Practices Task Force report (2008) summarized the following situational awareness practice areas: reserve monitoring for both reactive reserve capability and operating reserve capability; alarm response procedures; conservative operations to move the system from unknown and potentially risky conditions into a secure state; operating guides defining procedures for preventive actions; load shedding capability for emergency control; system reassessment practices; and blackstart capability practices.
1.5 Control Performance

This section provides a review of the present framework of power system protection and control (EPRI, 2004; EPRI, 2007; SEL-421 Manual; ALSTOM, 2002; Mooney and Fischer, 2006; Hou et al., 1997; IEEE PSRC WG, 2005;
Tziouvaras, 2006; Plumptre et al., 2006). Both protection and control can be viewed as corrective and/or preventive activities to enhance system security. Protection can be viewed as activities that disconnect and de-energize some components, while control can be viewed as activities that do not physically disconnect a significant portion of system components. In this book, we do not intend to make a clear distinction between protection and control; we use the term "protection and control" collectively to indicate activities that enhance system security. In addition, although there are a number of ways to classify protection and control systems from different viewpoints, this book classifies protection and control as local and centralized, to emphasize the need for better coordination in the future.
1.5.1 Local Protection and Control

The distance relay is the most commonly used relay for local protection of transmission lines. Distance relays measure voltage and current and compare the apparent impedance with the relay settings. When the tripping criteria are met, distance relays trip the breakers and clear the fault. Typical forms of distance relays include the impedance relay, the mho relay, the modified mho relay, and combinations thereof. Usually, distance relays may have Zone 1, Zone 2, and Zone 3 elements to cover longer distances of a transmission line with delayed response times, as shown below:
• Zone 1 relay time plus the circuit breaker response time may be as fast as 2 – 3 cycles;
• Zone 2 relay response time is typically 0.3 – 0.5 seconds;
• Zone 3 relay response time is about 2 seconds.
Fig. 1.4 shows the Zone 1, Zone 2, and Zone 3 distance relay characteristics; a minimal zone-check sketch follows the figure.
Fig. 1.4. R-X diagram of Zone 1, Zone 2, and Zone 3 Distance Relay Characteristics
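A minimal sketch of the distance-relay decision just described: compute the apparent impedance from the measured phasors and test whether it lies inside a mho (circular) zone characteristic on the R-X plane. The line impedance and the 80%/120% zone reaches are illustrative textbook values, not settings from this book.

```python
# Apparent-impedance test against mho zone characteristics (illustrative).

def apparent_impedance(v_phasor, i_phasor):
    """Impedance seen by the relay, Z = V / I (complex phasors)."""
    return v_phasor / i_phasor

def in_mho_zone(z, reach):
    """A mho circle passes through the origin with the reach impedance as
    its diameter: a point is inside iff |Z - reach/2| <= |reach|/2."""
    return abs(z - reach / 2) <= abs(reach) / 2

line_z = complex(2.0, 20.0)                  # total line impedance (ohms)
zones = {1: 0.8 * line_z, 2: 1.2 * line_z}   # typical 80% / 120% reaches

z_seen = apparent_impedance(complex(5000, 0), complex(100, -250))
for zone, reach in zones.items():
    print(f"Zone {zone} pickup: {in_mho_zone(z_seen, reach)}")
# Here the fault lies beyond the Zone 1 reach but inside Zone 2, so the
# relay would trip only after the Zone 2 time delay (0.3 - 0.5 s).
```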
Prime mover control and automatic generation control (AGC) are applied to maintain the power system frequency within a required range by controlling the active power output of generators. The prime movers of
a synchronous generator can be either hydraulic turbines or steam turbines. The control of the prime movers is based on the frequency deviation and the load characteristics. AGC is used to restore the frequency and the tie-line flows to their original scheduled values. The input signal of AGC is called the area control error (ACE), which is the sum of the tie-line flow deviation and the frequency deviation multiplied by a frequency-bias factor.

The purpose of the power system stabilizer (PSS) is to improve small signal stability by providing additional damping. PSSs are installed in the excitation system to provide auxiliary signals to the excitation system voltage regulating loop. The input signals of PSSs are usually signals that reflect the oscillation characteristics, such as the shaft speed, terminal frequency, and power.

The generator excitation system is utilized to improve power system stability and power transfer capability, which are the most important issues in bulk power systems under heavy load flow. The primary task of the excitation system of a synchronous generator is to maintain the terminal voltage of the generator at a constant level and to guarantee reliable machine operation at all operating points. The governing functions achieved are (1) voltage control, (2) reactive power control, and (3) power factor control. Power factor control uses the excitation current limitation, stator current limitation, and rotor displacement angle limitation linked to the governor.

The on-load tap changer (OLTC) is applied to keep the voltage on the low voltage (LV) side of a power transformer within a preset dead band, such that the power supplied to voltage sensitive loads is restored to the pre-disturbance level. Usually, an OLTC takes tens of seconds to minutes to respond to a low voltage event. OLTC action may have a negative impact on voltage stability, because a higher voltage at the load side may demand a higher reactive current, worsening the reactive power problem during a voltage instability event.

Shunt compensation in bulk power systems includes traditional technology such as capacitor banks and newer technologies such as the static var compensator (SVC) and the static compensator (STATCOM). An SVC consists of shunt capacitors and reactors connected via thyristors that operate as power electronics switches. SVCs can consume or produce reactive power at speeds in the order of milliseconds. One main disadvantage of the SVC is that its reactive power output varies with the square of the voltage it is connected to, similar to a capacitor. STATCOMs are power electronics based compensators: they use gate turn-off thyristors or insulated gate bipolar transistors (IGBTs) to convert a DC voltage input to an AC signal that is chopped into pulses, which are then recombined to correct the phase angle between voltage and current. STATCOMs have a response time in the order of microseconds.

Load shedding is performed only under extreme emergencies in modern electric power system operation, caused by events such as faults, loss of generation, switching errors, and lightning strikes. For example, when the system frequency drops due to insufficient generation after a large system disturbance, load shedding should be performed to bring the frequency back to normal. Also, if the bus voltage slides
down due to an insufficient supply of reactive power, load shedding should also be performed to bring the voltage back to normal. The former scheme can be realized via under-frequency load shedding (UFLS), while the latter can be realized via under-voltage load shedding (UVLS); a minimal staged UFLS sketch follows.
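The sketch below illustrates the staged UFLS logic just mentioned. The frequency thresholds, stage sizes, and the absence of intentional time delays are illustrative simplifications; actual settings are prescribed by each reliability coordinator.

```python
# Staged under-frequency load shedding (UFLS) sketch (illustrative).
UFLS_STAGES = [
    (59.3, 0.10),  # (frequency threshold in Hz, fraction of load to shed)
    (59.0, 0.10),
    (58.7, 0.10),
]

def ufls_action(frequency_hz, tripped_stages):
    """Return the additional fraction of load to shed at this frequency;
    tripped_stages records stage indices that have already operated."""
    shed = 0.0
    for i, (threshold, fraction) in enumerate(UFLS_STAGES):
        if frequency_hz <= threshold and i not in tripped_stages:
            shed += fraction
            tripped_stages.add(i)
    return shed

tripped = set()
for f in (59.8, 59.2, 58.9):  # a declining frequency trajectory
    print(f"{f} Hz -> shed {ufls_action(f, tripped):.0%}")
# 59.8 Hz -> shed 0%; 59.2 Hz -> shed 10%; 58.9 Hz -> shed 10%
```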
1.5.2 Centralized Protection and Control

Out-of-step (OOS) relaying provides blocking or tripping functions to separate the system when loss of synchronism occurs. Ideally, the system should be separated at points that maintain a balance between load and generation in each separated area. Moreover, separation should be performed quickly and automatically, via the OOS blocking and tripping relays, in order to minimize the disturbance to the system and to maintain maximum service continuity. During a transient swing, the OOS condition can be detected by using two relays having vertical (or circular) characteristics on an R-X plane, as shown in Fig. 1.5. If the time required for the apparent impedance locus to cross the two characteristics (OOS1 and OOS2) exceeds a specified value, the OOS function is initiated; otherwise, the disturbance is identified as a line fault (a minimal timing sketch follows Fig. 1.5). The OOS tripping relays should not operate for stable swings; they must detect all unstable swings and must be set so that normal load conditions are not picked up. The OOS blocking relays must detect the condition before the line protection operates. To ensure that line relaying is not blocked for fault conditions, the relays must be set so that normal load conditions are not in the blocking area.
Fig. 1.5. Tripping zones and out-of-step relay
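A minimal sketch of the two-characteristic timing logic in Fig. 1.5: if the apparent impedance takes longer than a set transit time to travel from the outer characteristic (OOS2) to the inner one (OOS1), the disturbance is classified as a power swing, otherwise as a fault. The 30 ms threshold is an illustrative assumption, not a setting from this book.

```python
# Out-of-step detection by impedance transit time (illustrative).

def classify_disturbance(t_cross_outer_s, t_cross_inner_s, swing_time_s=0.03):
    """Classify from the instants the impedance locus crosses OOS2 and OOS1."""
    transit = t_cross_inner_s - t_cross_outer_s
    return "power swing (OOS logic initiated)" if transit > swing_time_s else "line fault"

print(classify_disturbance(1.000, 1.120))  # slow transit  -> power swing
print(classify_disturbance(1.000, 1.004))  # near-instant  -> line fault
```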
Special protection systems (SPS), also known as remedial action schemes (RAS) or system integrity protection systems (SIPS), have become more widely used in recent years to protect power systems against problems that do not directly involve specific equipment fault protection. An SPS is applied to solve single and credible multiple contingency problems.
These schemes have become more common primarily because they are less costly and quicker to permit, design, and build than alternatives such as constructing major transmission lines and power plants. An SPS senses abnormal system conditions and (often) takes pre-determined or pre-designed actions to prevent those conditions from escalating into major system disturbances. SPS actions minimize equipment damage and prevent cascading outages, uncontrolled loss of generation, and interruptions to customer electric service. SPS remedial actions may be initiated by critical system conditions, which can be system parameter changes, events, responses, or a combination of them; the remedial actions include generation rejection, load shedding, controlling reactive units, and/or using braking resistors.

SCADA/EMS is the most typical application of centralized control in power systems. It is a hardware and software system used by operators to monitor, control, and optimize a power system. The monitoring and control functions are known as SCADA; the advanced analytical functions, such as state estimation, contingency analysis, and optimization, are often referred to as the EMS. Typical benefits of SCADA/EMS systems include improved quality of supply, improved system reliability, and better asset utilization and allocation. An area of increasing interest among the EMS functions is online security analysis software, which typically provides transient stability analysis, voltage security analysis, and small signal stability analysis. The latest developments in computer hardware and software and in power system simulation algorithms now deliver accurate results for these functions in real time, which could not be achieved online in the past.
1.5.3 Possible Coordination Problem in the Existing Protection and Control System

Fig. 1.6 summarizes the time delays, on a logarithmic scale, of the various protections and controls described above, based on a number of published sources. As shown in this figure, the time delays of many different control systems and strategies overlap considerably. The reason is historical: in the past, each control was designed with a single goal, to solve a particular problem. As modern power systems become more interconnected and more highly stressed, disturbances may cause multiple controls to respond, some of which may be undesired. This trend presents great challenges and risks in protection and control, as evidenced by the increasing occurrence of blackout events in North America. This challenge is illustrated with two case analyses in the next section.
Fig. 1.6. Time frame of the present protection and control system
1.5.4 Two Scenarios to Illustrate the Coordination Issues among Protection and Control Systems

1) Load Shedding or Generator Tripping

This case analysis shows a potential coordination problem in a two-area system with a generation center (the left part of Fig. 1.7) and a load pocket (the right part of Fig. 1.7).

Fig. 1.7. A two-area sample system

Assume the load pocket experiences a heavy load increase on a hot summer day. Meanwhile, a transmission contingency occurs on the tie-line between the generation center and the load pocket, reducing the power imported into the load pocket. The load in the load pocket may then be significantly greater than the sum of the total local generation, the (reduced) import over the tie-line, and the spinning reserves. This may lead to a decrease of both frequency and voltage. Under this scenario, excessive load is certainly the root cause of the imbalance, and load shedding in the load pocket is an effective short-term solution. However, there may be a potential risk of blackout if the local generators' under-frequency (UF) tripping scheme and the loads' under-voltage (UV) shedding scheme are not well coordinated. Most likely, the under-frequency generation tripping scheme will disconnect some generation from the system before the load shedding scheme is completed, since the present settings for generation tripping are usually very fast. This will worsen the imbalance between load and generation in the load pocket; hence, both voltage and frequency may decrease further. This may lead to more generation being quickly tripped, and the local load pocket will then lose a large amount of reactive power for voltage support, which may produce a sharp drop in voltage and eventually a fast voltage collapse. Even though this is initially a real power imbalance or frequency stability problem, the final consequence is a voltage collapse. Fig. 1.8 shows this gradual process.
Fig. 1.8. The process to instability
As previously mentioned, the root cause is the imbalance between generation and load in the load pocket. Generation tripping and load shedding are not optimized or coordinated well enough to perform load shedding
in order to avoid the generation tripping, which eventually causes a sharp voltage collapse.

2) Zone 3 Protection

The second example is from the July 2, 1996, WSCC blackout. At the very beginning of the blackout, two parallel lines were tripped due to a fault and a mis-operation, and consequently some generation was tripped as a correct SPS response. Then, a third line was disconnected due to bad connectors in a distance relay. More than 20 seconds after these events, the last straw of the collapse occurred: the Mill Creek-Antelope line was tripped by an undesired Zone 3 protective relay operation, and the system collapsed within 3 seconds. The relay of the Mill Creek-Antelope line did what it should have done based on its Zone 3 setting, which was to trip the line when the observed apparent impedance encroached upon the circle of the Zone 3 relay, as shown in Figs. 1.9 and 1.10. In this case, the low apparent impedance was a consequence of the power system conditions at that moment. Obviously, if the setting of the Zone 3 relay could be dynamically reconfigured to take the heavily loaded system condition into account, the system operators might have enough time to perform corrective actions and save the system from a fast collapse.

Fig. 1.9. The line tripping immediately leading to a fast, large-area collapse during the WSCC July 2, 1996, blackout

Fig. 1.10. Observed impedance encroaching the Zone 3 circle
1.6 Summary

Power systems have been experiencing dramatic changes over the past decade. Deregulation is one of the main changes occurring across the world; increased connectivity, and the resulting nonlinear complexity of power systems, is another trend. The consequences of such changes are various uncertainties and difficulties in power system analysis. Recent major power system blackouts also remind the power industry of the need for situational awareness and more effective tools to ensure more secure operation of the system. This chapter has reviewed these important aspects of power systems worldwide. It serves as an introduction and forms the basis for the discussion of emerging techniques in power system analysis in the following chapters.
2 Fundamentals of Emerging Techniques

Xia Yin, Zhaoyang Dong, and Pei Zhang
Following the new challenges of the power industry outlined in Chapter 1, new techniques for power system analysis are needed. These emerging techniques cover various aspects of power system analysis, including stability assessment, reliability, planning, cascading failure analysis, and market analysis. In order to better understand their functionalities and the needs they address, it is necessary to give an overview of these emerging techniques and compare them with traditional approaches. This chapter outlines such techniques; some of the key ones and their applications in power engineering will be detailed in the subsequent chapters. The main objective is to provide a holistic picture of the technological trends in power system analysis in recent years.
2.1 Power System Cascading Failure and Analysis Techniques
In 2003, there were several major blackouts, which were regarded as the results of cascading failures of power systems. The increasing number of system instability events is mainly due to the operation of market mechanisms, which have driven more generation investment but insufficient transmission expansion investment. With the increased demand for electricity, many power systems have become heavily loaded. As a result, power systems are running close to their security limits and are therefore vulnerable to disturbances (Dong et al., 2005). The blackout of 14 August 2003 (Michigan Public Service Commission 2003) in the USA has so far been the worst case; it affected Michigan, Ohio, New York City, Ontario, Quebec, northern New Jersey, Massachusetts, and Connecticut, according to a North American Electric Reliability Council (NERC) report.
Over 50 million people experienced that blackout, many of them for a considerable number of hours. The economic loss and political impact were enormous, and the event raised national security concerns about the power sector. The major reasons for the blackout were identified as (U.S.-Canada Power System Outage Task Force, 2004):
• failure to identify emergency conditions and communicate them to neighboring systems;
• inefficient communication and/or sharing of system-wide data;
• failure to ensure operation within secure limits;
• failure to assess system stability conditions in some affected areas;
• inadequate regional-scale visibility over the bulk power system;
• failure of the reliability organizations to provide effective real-time diagnostic support;
• a number of other reasons.
According to an EPRI report (Lee, 2003), in the 1990s electricity demand in the US grew by 30%, but over the same period new transmission capacity increased by only 15%. This imbalance continues to grow; it is estimated that from 2002 to 2011 demand will grow a further 20% with only a 3.5% increase in new transmission capacity. This has caused a significant increase in transmission congestion and has created many new bottlenecks in the flows of bulk power, further stressing the power system. Based on the information available so far, the 2003 blackout is a far more complex problem than a simple voltage collapse. As clearly indicated in the literature about this event, the causes of such large-scale blackouts are extremely complex and have yet to be fully understood. Although established system security assessment tools were in operation at the power companies across the blackout-affected region, the system operators were unable to identify the severity of emerging system signals and therefore unable to reach a timely remedial decision to prevent the cascading system failure. The state of the art in power system stability analysis leads to the following conclusions:
• many power systems are vulnerable to multiple contingency events;
• the current design approaches to maintain stability are deterministic and do not correctly include the uncertainty in power system parameters or the failures which can impact the system;
• explicit consideration of the uncertainties in disturbances and in power system parameters can affect decisions on the placement of corrective devices such as FACTS devices and on the control design of excitation controllers;
• explicit consideration of where the system breaks under multiple contingencies can be used to adjust the controllers and identify the links to be strengthened in power system design;
• the mechanism of cascading failure blackouts has not been fully understood;
• if timely information about system security were available even a short time beforehand, many severe system security problems such as blackouts could be avoided.
It can be seen that the information needed to properly assess the security of a power system is increasingly complex with open access and deregulation. New techniques are needed to handle such problems. Cascading failure is the main form of system failure leading to blackouts. However, its mechanism is still difficult to analyze, which hampers the development of reliable algorithms to monitor, predict, and prevent blackouts. To face the impending operational and planning challenges of cascading failure avoidance, power system reliability analysis needs new evaluation tools. So far, the widely recognized contingency analysis criterion for large interconnected power systems is the N-1 criterion (CIGRE, 1992). In some cases, N-1 can even be defined as the loss of a set of components of the system within a short time. The merits of the N-1 criterion are its flexibility, clarity, and simplicity of implementation. However, with the increasing risk of catastrophic failures and growing system complexity, this criterion may not provide sufficient information about the vulnerability and severity level of the system. Since catastrophic disruptions are normally caused by cascading failures of electrical components, the study of the inherent mechanism of cascading outages is attracting more and more attention. So far, many models have been documented for simulating cascading failures. In the article by Dobson et al., 2003, a load-dependent model is proposed from a probabilistic point of view (a toy re-implementation of this model is sketched at the end of this section). Initially, each system component is randomly allocated a virtual load. The model is then initiated by adding a disturbance load to all components. A component is tripped when its load exceeds its maximum limit, and each unfailed component receives a constant additional load from this failure. The cascading procedure terminates when no further component fails within a cascading scenario. This model can fully explore the possible cascading cases of the system. The model is further improved by incorporating a branching process approximation in the article by Dobson et al., 2004, so that the propagation of cascading failures can be demonstrated. However, neither of them addresses the joint interactions among system components during cascading scenarios. In the article by Chen et al., 2005, cascading dynamics is investigated under different system operating conditions via a hidden failure model. This model employs linear programming (LP) generation redispatch combined with DC load flow for power distribution and emphasizes the possible failures existing in the relay system. Chen et al. (2006) study the mechanism of cascading outages by estimating the probability distribution of
historical data of transmission outages. However, neither of the above methods considers failures of other network components, such as generators and loads. In the article by Stubna and Fowler, 2003, a highly optimized tolerance (HOT) model, developed to describe the statistics of robust complex systems under uncertain conditions, is introduced to simulate blackout phenomena in power systems. Simulation results show that this model fits the historical data of a realistic test power system reasonably well. Besides these models, the critical transitions of a system as a function of its loading condition during the cascading procedure have also been studied (Carreras et al., 2002). That paper finds that blackout size increases sharply once the system loading exceeds a critical transition point. Efforts have also been dedicated to understanding cascading faults from a global system perspective. Since the inherent differences between systems make it difficult to propose a generalized mathematical model for all networks, these analysis approaches are normally based on probabilistic and statistical theories. In the article by Carreras et al., 2004, from a detailed time series analysis of 15 years of North American Electric Reliability Council (NERC) historical blackout data, the authors find that cascading failures in the system exhibit self-organized criticality (SOC) dynamics. This work shows that cascading collapses may be caused by the global nonlinear dynamics of the power system rather than by weather or other external triggering disturbances. This evidence provides a global philosophy for understanding catastrophic failures in power systems. It has been recognized that the structure of a complex network always affects its function (Strogatz, 2001). Given the complexity inherent in power grids, the study of system topology is another interesting approach. In the article by Lu et al., 2004, the "small world" concept is introduced for analyzing and comparing the topological characteristics of power networks in China and the United States. The results show that many power grids fall within the "small world" category. The paper by Xu and Wang (2005) employs scale-free coupled map lattice (CML) models to investigate cascading phenomena. The results indicate that increasing the homogeneity of the network helps to enhance system stability. However, since topology analyses normally require networks to be homogeneous and non-weighted, approximations may be needed when dealing with power grids. Recent NERC studies of major blackouts (NERC US-Canada Power System Outage Task Force 2004) have shown that more than 70% of those blackouts involved hidden failures, i.e., incorrect relay operations that remove a circuit element(s) as a direct consequence of another switching event (Chen et al., 2005; Jun et al., 2006). When a transmission line trips, there is a small but significant probability that lines sharing a bus with the tripped line (such lines are said to be exposed to hidden failures) may incorrectly
trip due to relay malfunction. The Electric Power Research Institute (EPRI) and Southern Company jointly developed a cascading failure analysis software package, Transmission Reliability Evaluation of Large-Scale Systems (TRELSS), which has been applied to real systems for several years (Makarov and Hardiman, 2003). The model addresses the trips of loads, generators, and protection control groups (PCGs). In every cascading scenario, the load node voltages, generator node voltages, and circuit overloads are examined sequentially, and the next cascading fault is determined from the result. The model is, however, quite complex to apply (Makarov and Hardiman, 2003). The IEEE PES CAMS Task Force (2008, 2009) on Understanding, Prediction, Mitigation and Restoration of Cascading Failures provides a detailed review of the issues of cascading failure analysis. Research and development in this area continue with various techniques (Liu et al., 2007; Nedic et al., 2006; Kirschen et al., 2004; Dobson et al., 2005; Dobson et al., 2007; Chen et al., 2005; Sun and Lee, 2008; Huang and Nieplocha, 2008; Zhao et al., 2007; Mili et al., 2004; Kinney et al., 2005).
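The loading-dependent cascade model of Dobson et al. outlined above is simple enough to re-implement as a toy simulation. The sketch below is only in the spirit of that model, with invented parameter values: component limits are normalized to 1.0, an initial disturbance d is added to every component, and each failure transfers a fixed extra load p to every survivor.

```python
import random

def simulate_cascade(n=100, l_min=0.5, l_max=1.0, d=0.05, p=0.01, seed=None):
    """One run of a toy loading-dependent cascade (after Dobson et al., 2003).

    Returns the total number of failed components."""
    rng = random.Random(seed)
    # Random initial loads, plus the initial disturbance d on every component.
    loads = [rng.uniform(l_min, l_max) + d for _ in range(n)]
    alive = list(range(n))
    failed = 0
    while True:
        newly_failed = [i for i in alive if loads[i] > 1.0]
        if not newly_failed:
            return failed               # cascade has died out
        failed += len(newly_failed)
        alive = [i for i in alive if loads[i] <= 1.0]
        for i in alive:                 # fixed load transfer per failure
            loads[i] += p * len(newly_failed)

sizes = [simulate_cascade(seed=s) for s in range(1000)]
print("mean cascade size:", sum(sizes) / len(sizes))
print("largest cascade:", max(sizes))
```

Sweeping d upward in such a toy model reproduces, qualitatively, the observation of Carreras et al. (2002) that cascade sizes grow sharply past a critical loading level.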
2.2 Data Mining and Its Application in Power System Analysis
Data mining is the process of identifying hidden, potentially useful, and understandable information and patterns in large databases; in short, it is the process of discovering hidden patterns in databases. It is an important step in the process of knowledge discovery in databases (Olaru and Wehenkel, 1999). It has been used in a number of areas of power system analysis where large amounts of data are involved, such as forecasting and contingency assessment. It is well known that online contingency assessment or online dynamic security assessment (DSA) is a very complex task that requires significant computational effort for many real interconnected power systems. With increasing complexity in modern power systems, the corresponding system data are growing exponentially. Many companies store such data but are not yet able to fully utilize them. Under such emerging complexity, it is desirable to have reliable and fast algorithms to perform such duties instead of the traditional, time-consuming security assessment/dynamic simulation based ones. It should be noted that artificial intelligence (AI) techniques such as neural networks (NNs) have been used for similar purposes as well. However, AI-based methods suffer from a number of shortcomings which have so far prevented their wider application in realistic situations. The major shortcomings of
NN-based online dynamic security assessment are the opacity of their inference, the over-fitting problem, and limited applicability to large-scale systems. The lack of statistical information in NN outputs is also a major concern limiting their application. Data mining based real-time security assessment approaches are able to provide statistically reliable results and have been widely practiced in many complex systems, such as telecommunication systems and Internet security. In power engineering, data mining has been successfully employed in a number of areas, including fault diagnosis and condition monitoring of power system equipment, customer load profile analysis (Figueiredo et al., 2005), nontechnical loss analysis (Nizar, 2008), electricity market demand and price forecasting (Zhao et al., 2007a; Zhao et al., 2007b; Zhao et al., 2008), power system contingency assessment (Zhao, 2008c), and many other tasks for power system operations (Madan et al., 1995; Tso et al., 2004; Pecas Lopes and Vasconcelos, 2000). However, there is still a lack of systematic application of data mining techniques in some specific areas, such as large-scale power system contingency assessment and prediction (Taskforce 2009). For applications such as power system online DSA, it is critical to obtain assessment results within a very short time so that the system operator can take corresponding control actions to prevent serious system security problems. Data mining based approaches, with their mathematically and statistically reliable characteristics, open up a realistic solution for online DSA type tasks. They outperform the traditional AI-based approaches in many aspects. First, data mining was originally designed to discover useful patterns in large-scale databases, where AI approaches usually face unaffordable time complexity. Therefore, data mining based approaches are able to provide fast responses in user-friendly, efficient forms. Second, a variety of data cleaning techniques have been incorporated into data mining algorithms, giving them strong tolerance of noisy inputs. Most importantly, a number of data mining methods are derived from traditional statistical theory; for instance, the Bayesian classifier comes from Bayesian decision theory, and the support vector machine (SVM) is based on statistical learning theory. As a result, these techniques are able to handle large-scale data sets. Moreover, they have strong statistical robustness and the ability to overcome over-fitting problems compared with AI techniques. Statistical robustness means that if the system is assessed to have a security problem, it will experience such a problem with a given probability of occurrence if no actions are taken. This characteristic is very important for a system operator managing system security in a market environment, where any major action is associated with potentially huge financial risks. The operator needs to be sure that a costly remedial action (such as load shedding) is necessary before that action takes place. Data mining normally involves four types of tasks,
including classification, clustering, regression, and association rule learning (Han, 2006). Classification is an important task in data mining and is therefore presented in more detail here. According to Vapnik (1995), the classification problem belongs to the class of supervised learning problems, which can be described using three components:
• a generator of random vectors X, drawn independently from a fixed but unknown distribution P(X);
• a supervisor that returns an output value y for every input vector X (in classification problems, y is discrete and is called the class label of X), according to a conditional distribution function P(y|X), also fixed but unknown;
• a learning machine capable of implementing a set of functions f(X, α), α ∈ Λ.
The objective of a classifier is to find the function f(X, α), α ∈ Λ, that best approximates the supervisor's response. Predicting the occurrence of a system contingency is a typical binary classification problem. The factors relevant to contingencies (e.g., demand and weather) can be regarded as the dimensions of the input vector X = (x1, x2, ..., xn), where each xi, i ∈ [1, n], is a relevant factor. A number of classification algorithms are in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); online methods such as Winnow (Littlestone, 1998); example-based methods such as k-nearest neighbors (Duda and Hart, 1973); and the SVM (Cortes and Vapnik, 1995). Similar to classification, clustering also allocates similar data into groups, but the groups are not pre-defined. Regression is used to model a data series with the least error. Association rule learning is used to discover relationships between variables in a database (Han, 2006). More detailed discussion on data mining will be given in Chapter 3 of this book.
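To make the classification setup above concrete, the following sketch trains an SVM (Cortes and Vapnik, 1995) to predict contingency occurrence from two stand-in factors. The synthetic data, the labeling rule, and the use of the scikit-learn library are all illustrative assumptions, not material from the original text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in data: x1 = system demand (p.u.), x2 = temperature
# (deg C); a contingency (y = 1) is made more likely when both are high.
n = 2000
X = np.column_stack([rng.uniform(0.6, 1.2, n), rng.uniform(10, 45, n)])
risk = 3.0 * (X[:, 0] - 0.9) + 0.08 * (X[:, 1] - 27)
y = (risk + rng.normal(0, 0.3, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale features before the SVM, since SVMs are sensitive to magnitudes.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```

In a real DSA application, X would hold measured operating-point features and y the simulated or observed security outcome; the pipeline structure would stay the same.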
2.3 Grid Computing
With the deregulation and constant expansion of power systems, the demand for high performance computing (HPC) for power system adequacy and security analysis has increased rapidly. HPC also plays an important role in ensuring efficient and reliable communication for power system operation and
control. In the past few years, grid computing technology has been catching on and receiving much attention from power engineers and researchers (Ali et al., 2009; Irving et al., 2004). Grid computing is an infrastructure which can provide high performance computing and a communication mechanism for delivering services in these areas of the power system. It has been recognized that the commonly used Energy Management Systems (EMS) are unable to meet such requirements for HPC and for data and resource sharing (Chen et al., 2004). In the past, some efforts were made to enhance the computational power of EMS (Chen et al., 2004) in the form of parallel processing, but only centralized resources were used, and an equal distribution of computing tasks among participating computers was assumed. In parallel processing, the tasks are divided into a number of equally sized subtasks distributed to all machines. For this purpose, all machines need to be dedicated and homogeneous, i.e., they should have common configurations and capabilities; otherwise different computers may return results at different times, depending on their availability when the tasks were assigned. Parallel processing also requires the collaboration of data from different organizations, which is sometimes very hard to achieve due to various technical or security issues (Chen et al., 2004). Consequently, there should be a mechanism for processing distributed, multi-owner data repositories (Cannataro and Talia, 2001). Some distributed computing solutions have also been proposed for high-efficiency computation, but they demand homogeneous resources and are not scalable; in addition, parallel processing techniques involve tight coupling of the machines (Chen et al., 2004). The use of supercomputers is another solution, but it is very expensive and often not suitable, especially for a single organization constrained by resources. Grid computing is an infrastructure that can provide an integrated environment for all participants in the electricity market and power system operations by providing secured resources as well as data sharing and high performance computing for power system analysis. It can support essentially any computer-based activity, including communications, analysis, and organizational decision making. Grid computing involves the integrated and collaborative use of computers, networks, databases, and scientific instruments owned and managed by multiple organizations (Foster and Kesselman, 1997; Foster et al., 2001). It is able to provide HPC and access to remote, heterogeneous, and geographically separated data over a vast area. The technology was mainly developed by the e-science community (EUROGRID, NASA IPG, PPDG, GridPP) but is now widely used, with large contributions, in many fields such as oil and gas exploration, banking, and education. In the past few years, grid computing technology has gained much attention in the power engineering field, and significant research is being done at
numerous places to investigate the potential use of grid computing technology and to apply it in the power engineering field (Chen et al., 2004; Taylor et al., 2006; Ali et al., 2006; Wang and Liu, 2005; Ali et al., 2005; Axceleon and PTI, 2003). Grid computing can provide efficient and effective computing services to meet the increasing need for high performance computation in the power system reliability and security analyses facing today's power industry. It can also provide remote access to distributed resources of the power system, thereby providing effective and fast mechanisms for monitoring and control of power systems. Overall, it can provide efficient services in power system monitoring and control, scheduling, power system reliability and security analysis, planning, and electricity market forecasting (Chen et al., 2004; Ali et al., 2005). Grid computing is a form of parallel and distributed computing that involves the coordination and sharing of computing, application, data storage, and network resources across dynamic and geographically distributed organizations (Asadzadeh et al., 2004). This integration creates a virtual organization wherein a number of mutually distrustful participants with varying degrees of prior relationship want to share resources to perform some computational task (Foster and Kesselman, 1997; Foster et al., 2001). Commonly used grid computing tools include Globus (Foster and Kesselman, 1997) and EnFuzion (Axceleon). EnFuzion is a distributed computing tool developed by Turbolinux. It features strong robustness, high reliability, efficient network utilization, intuitive GUI interfaces, multi-platform and multi-core support, flexible scheduling with a lights-out option, and extensive administrative tools (Axceleon). Detailed discussion on grid computing will be given in Chapter 4 of this book.
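Production grid middleware such as Globus or EnFuzion is beyond the scope of a short snippet, but the task-farming pattern underlying grid-based security analysis can be sketched with Python's standard multiprocessing module standing in for grid workers. The contingency evaluation function below is a hypothetical placeholder, not a real assessment engine.

```python
from multiprocessing import Pool

def assess_contingency(outage_id):
    """Placeholder for one security assessment task (e.g., a time-domain
    simulation of a single N-1 outage). Here it just fakes a margin."""
    margin = 0.9 - 0.05 * (outage_id % 23)   # dummy stability margin
    return outage_id, margin

if __name__ == "__main__":
    contingencies = range(200)               # one task per credible outage
    with Pool(processes=8) as pool:          # workers stand in for grid nodes
        results = pool.map(assess_contingency, contingencies)
    insecure = [cid for cid, margin in results if margin < 0]
    print(f"{len(insecure)} insecure contingencies, e.g.:", insecure[:10])
```

A real grid deployment replaces the local pool with geographically distributed, heterogeneous nodes and adds the scheduling, data transfer, and security layers that middleware such as Globus provides.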
2.4 Probabilistic vs Deterministic Approaches
Power systems must be planned to supply electricity to end users with a high level of reliability and to meet security requirements. Fundamentally, these requirements conflict with economic concerns, and tradeoffs usually have to be made in system operation and planning. Moreover, because power systems operated for many years following similar patterns, system operators and engineers could predict future conditions with reasonable accuracy. With the changes of the past few years, however, especially deregulation and increased interconnection, it has become more and more difficult to predict system conditions, although forecasting remains an important task for system operators. Traditionally, system security and reliability are evaluated under the
deterministic framework. The deterministic approach basically studies system stability for a given set of network configurations, system loading conditions, and disturbances (Kundur, 1994). Since the operation of the power system is stochastic in nature, and so are the disturbances, engineers have to run thousands of time-domain simulations to determine system stability for a set of credible disturbances before dispatching. Under this deterministic regime, system operation and planning require experience and judgment from the system operators. Similarly, in the planning stage, planning engineers need to carry out such analyses to evaluate system reliability and adjust expansion plans if necessary. Despite its popularity with many research organizations and utilities, the time-domain simulation method requires intensive and time-consuming computation and has become feasible only in recent years with progress in computer engineering. This significant disadvantage has motivated engineers and scholars to develop new methods to account for the stochastic nature of system stability. Studying only the worst-case scenario is one solution, but the obtained result is too conservative and therefore impractical from an economic viewpoint in both operation and planning. In the articles by Billinton and Kuruganty, 1980; Billinton and Kuruganty, 1979; Hsu and Chang, 1988, probabilistic indices for transient stability have been proposed. These methods consider various uncertainties in power systems, such as loading conditions and fault locations and types. System stability can thus be assessed in a probabilistic framework which provides the system operator and planner with a clearer picture of the stability status. The idea of probabilistic stability assessment can be extended to small signal stability analysis via a Monte Carlo simulation approach. In probabilistic studies of power system stability, several methods, such as the cumulant and moment methods, can be applied. These methods use cumulant or moment models to calculate the statistics of system eigenvalues using mathematical tools such as the Gram-Charlier expansion (Hsu and Chang, 1988; Wang et al., 2000; Zhang and Lee, 2004; Da Silva et al., 1990). The advantage of these methods is their fast computational speed; however, approximations are usually needed (Wang et al., 2000; Zhang and Lee, 2004). The Monte Carlo technique is another option which is more appropriate for analyzing the complexities of large-scale power systems with high accuracy, though it may require more computational effort (Robert and Casella, 2004; Billinton and Li, 1994). The Monte Carlo method uses random numbers and probabilistic models to solve problems with uncertainties; reliability studies in power systems are a case in point (Billinton and Li, 1994). Simply speaking, it is a method for iteratively evaluating a deterministic model using sets of random numbers. Take power system small signal stability assessment as an example. The Monte Carlo method can be applied to probabilistic small signal stability analysis. The method starts from the probabilistic modeling of the system parameters of interest, such as the dispatching
of generators, electric loads at various nodal locations, and network parameters. Next, a set of uniformly distributed random numbers is generated. These random numbers are then fed into the probabilistic models to generate actual values of the parameters. Load flow analysis and system eigenvalue calculation can then be carried out, followed by small signal stability assessment via system modal analysis. (A minimal numerical sketch of this procedure is given after Table 2.1.) The Monte Carlo method can also be used for many other probabilistic system analysis tasks. For transmission system planning, deterministic criteria may ignore important system parameters which have significant impacts on system reliability. Deterministic planning also favors a conservative result based on the commonly used worst-case conditions. According to EPRI (EPRI, 2004), deterministic transmission planning fails to provide a measure of the reliability of the transmission system design. Techniques which can effectively consider uncertainties in the planning process have therefore been investigated by researchers and engineers for probabilistic transmission planning practice. Under the probabilistic approach, the reduction of system failure risk can be clearly illustrated, and the impact of system failure can be assessed and considered in the planning process. The probabilistic transmission planning methods developed enable quantification of the risks associated with different planning options, and they also provide useful insights into the design process. EPRI (EPRI, 2004; Zhang et al., 2004; Choi et al., 2005; EPRI-PRA, 2002) proposed probabilistic power system planning to account for the stochastic nature of the power system and compared the traditional deterministic approach with the probabilistic approach. A summary of deterministic and probabilistic system analysis approaches is given in Table 2.1.

Table 2.1. A Summary of Deterministic vs Probabilistic Approaches

Deterministic Approach                    Probabilistic Approach
Deterministic load flow                   Probabilistic load flow
Deterministic stability assessment        Probabilistic stability assessment
Deterministic small signal stability      Probabilistic small signal stability
Deterministic transient stability         Probabilistic transient stability
Deterministic voltage stability           Probabilistic voltage stability
Deterministic power system planning       Probabilistic power system planning
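The Monte Carlo procedure described above can be illustrated with a deliberately stylized sketch. Instead of a full load flow and network eigenvalue analysis, it uses an assumed single-machine swing linearization whose synchronizing torque weakens with loading; the distributions and coefficients are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def small_signal_stable(load_pu, damping, inertia=6.0, k_sync=1.2):
    """Stylized small-signal check: the eigenvalues of the linearized
    swing dynamics x' = A x must all lie in the left half plane."""
    k = k_sync * (1.4 - load_pu)      # assumed loading dependence of torque
    A = np.array([[0.0, 1.0],
                  [-k * 314.0 / inertia, -damping / inertia]])  # 314 ~ 2*pi*50
    return np.linalg.eigvals(A).real.max() < 0

n_trials = 10000
loads = rng.normal(1.0, 0.15, n_trials)      # probabilistic load model
dampings = rng.uniform(0.5, 2.0, n_trials)   # uncertain damping coefficient

stable = sum(small_signal_stable(l, d) for l, d in zip(loads, dampings))
print(f"estimated probability of small signal stability: {stable / n_trials:.3f}")
```

The structure (sample parameters, evaluate a deterministic stability model, accumulate statistics) carries over unchanged when this toy model is replaced by a real load flow plus modal analysis.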
For transmission system planning, generally speaking, the deterministic method uses simple rules compared with probabilistic methods. Deterministic methods have been implemented in computer software for easy analysis over years of system planning practice. Probabilistic methods, by contrast, normally require new software and higher computational costs in order to cope with the more comprehensive analysis tasks involved. Although the probabilistic method is more complex than the deterministic method and requires more computational power, its benefits outweigh those of the deterministic one because (1) it enables the tradeoff between reliability and economics in transmission planning; and (2) it is able to evaluate risks in the process, enabling risk management in planning. Transmission system planning easily involves tens of millions of dollars; these two advantages make the probabilistic approach a very attractive option for system planners. Detailed discussions on probabilistic vs deterministic methods will be given in Chapter 5.
2.5 Phasor Measurement Units
Conventionally, power system control and protection are designed to respond to large disturbances, mainly faults, in the system. Following the lessons learnt from the 2003 blackout, protection system failures have been identified as a major factor leading to cascading failures of power systems. Consequently, traditional system protection and control need to be reviewed, and new techniques are needed to cope with today's power system operational needs (EPRI, 2007). The phasor measurement unit (PMU) is a digital device which records the magnitudes and phase angles of currents and voltages in a power system. PMUs provide real-time power system information in a synchronized way, either as standalone devices or integrated into other protective devices. They have been installed in power systems across large geographical areas and offer valuable potential for improving the monitoring, control, and protection of power systems in many countries. Synchronized phasor measurement data provide highly useful information on system dynamics. Such information is particularly useful when the system is in a stressed operating state or subject to potential instability, and it can be used to support the situational awareness of control centre operators. In the article by Sun and Lee, 2008, a method is proposed that uses phase-space visualization and pattern recognition to identify abnormal patterns in system dynamics in order to predict cascading failures. By strategically selecting the locations of PMU installations in a transmission network, the real-time synchronized phasor measurement data can be used to calculate indices measuring the vulnerability of the system against possible cascading failures (IEEE PES CAMS Taskforce, 2009; Taylor et al., 2005; Zima and Andersson, 2004). The increasingly popular wide area monitoring, protection, and control schemes rely heavily on synchronized real-time system information; PMUs, together with advanced telecommunication techniques, are essential for such schemes. In summary, PMUs can be used to assist in state estimation, detect inter-area oscillations and assist in determining corresponding controls, provide system voltage stability monitoring and control, facilitate
load modeling and analysis tasks, and assist in system restoration and event analysis with the synchronized measurement data (EPRI, 2007). Detailed discussions on PMUs and their applications are given in Chapter 6 of this book.
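The core computation inside a PMU, estimating a phasor from time-tagged waveform samples, can be sketched with a one-cycle DFT. The sampling rate, test waveform, and 50 Hz assumption below are illustrative; a real PMU additionally aligns the estimate to a GPS time reference, which is what makes phasors comparable across the grid and is omitted here.

```python
import numpy as np

def phasor_estimate(samples, samples_per_cycle):
    """One-cycle DFT phasor estimate: returns the RMS magnitude and
    phase angle (degrees) of the fundamental component."""
    n = samples_per_cycle
    k = np.arange(n)
    # Correlate one cycle of samples with the fundamental frequency.
    fund = np.sum(samples[:n] * np.exp(-2j * np.pi * k / n)) * 2 / n
    return abs(fund) / np.sqrt(2), np.angle(fund, deg=True)

# Synthetic 50 Hz waveform, 32 samples per cycle: 230 V RMS at +20 degrees.
samples_per_cycle = 32
t = np.arange(samples_per_cycle) / samples_per_cycle
v = 230 * np.sqrt(2) * np.cos(2 * np.pi * t + np.radians(20))

mag, ang = phasor_estimate(v, samples_per_cycle)
print(f"|V| = {mag:.1f} V rms, angle = {ang:.1f} deg")
```

Running the sketch recovers the 230 V, +20 degree phasor used to synthesize the waveform, which is exactly the quantity a PMU streams to the control centre.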
2.6 Topological Methods
It is widely accepted that the power system is one of the most complex systems in the world: it is highly nonlinear, interactive, and interconnected. Correspondingly, complex system analysis methods and topological methods have been explored for system vulnerability analysis. The frequent occurrence of blackouts exposes potential problems of current mathematical models and analysis methodologies in power systems, which stimulates researchers to seek solutions by alternative means. Because this problem is relatively new, some of the analytical methods also extend beyond the scope of traditional power system analysis techniques. In the article by Chen et al., 2009a, the authors note that complex network theory and its error and attack tolerance methodology have drawn the link between the topological structure and the vulnerability of networks. The methodology was initially proposed by physicists, who mainly focused on abstract complex networks such as Erdos-Renyi (ER) random networks and Barabasi-Albert (BA) scale-free networks (Albert et al., 2000; Crucitti et al., 2003; Crucitti et al., 2004; Latora and Marchiori, 2001; Motter et al., 2002). Some physicists then employed the methodology to analyze the structural vulnerability of power networks because, mathematically, a power network can be described as a complex network with nodes connected by edges (Hill and Chen, 2006). Motter et al. (2002) discussed cascade-based attacks on real complex networks and pointed out that the Internet and power grids are vulnerable to attacks on important nodes but have evolved to be quite resistant to random node failures. The topological vulnerability of the European power grid was studied in the article by Casals et al., 2007, where it was found that power grids display patterns of reaction to node loss similar to those observed in scale-free networks, namely the robust-yet-fragile property. The Italian and North American power grids are studied in the articles by Crucitti et al., 2004 and Kinney et al., 2005, respectively, with similar findings. Chen et al. (2009a) proposed a hybrid algorithm to investigate the vulnerability of power networks. This algorithm employs DC power flow equations, hidden failures, and the error and attack tolerance methodology together to form a comprehensive approach for power network vulnerability assessment and modeling. More research in this area can be found in the
articles by Chen et al., 2009b; Ten et al., 2007; National Research Council, 2002; Holmgren et al., 2007.
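The error and attack tolerance methodology referred to above can be sketched with the networkx library (assumed available). A Barabasi-Albert scale-free graph stands in for a real grid topology, and the size of the largest connected component is used as a crude proxy for structural vulnerability.

```python
import random
import networkx as nx

random.seed(0)

def surviving_fraction(g, fraction=0.05, targeted=True):
    """Remove a fraction of nodes (highest degree first if targeted,
    uniformly at random otherwise) and return the share of remaining
    nodes still in the largest connected component."""
    g = g.copy()
    n_remove = max(1, int(fraction * g.number_of_nodes()))
    if targeted:
        by_degree = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
        victims = [node for node, _ in by_degree[:n_remove]]
    else:
        victims = random.sample(list(g.nodes), n_remove)
    g.remove_nodes_from(victims)
    giant = max(nx.connected_components(g), key=len)
    return len(giant) / g.number_of_nodes()

grid = nx.barabasi_albert_graph(300, 2, seed=42)   # stand-in topology
print("after targeted attack:", surviving_fraction(grid, targeted=True))
print("after random failures:", surviving_fraction(grid, targeted=False))
```

The robust-yet-fragile signature typically shows up as a lower surviving fraction under the targeted attack than under random failures, in line with the findings quoted above.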
2.7 Power System Vulnerability Assessment
In addition to the many system parameters closely related to power system vulnerability, over recent years terrorist attacks have been recognized as an emerging scenario that needs to be considered in system planning and operations. Given that the losses from large blackouts are usually huge, identifying the vulnerabilities of power grids and defending against terrorist attacks has become urgent and important work for governments and researchers. The power system is one of the most important critical infrastructures of a country. Severe blackouts can result in direct losses of up to billions of dollars, and the failures of power systems usually propagate into other critical infrastructures such as communications, water supply, natural gas, and transportation, causing large disturbances to society and panic among citizens. Ensuring system security and reliability is among the most important responsibilities of a system operator. The broad range of terrorism motives makes it likely that power systems, as one of the most important critical infrastructures, might become a highly desirable target of terrorists (National Research Council, 2002; Salmeron et al., 2004). The traditional security framework of power systems thus faces an immense challenge: terrorists are often considered highly intelligent and strategic actors, who can even hire scientists and power engineers to seek out the vulnerabilities of power systems and then deliberately trigger low-probability events which lack necessary protection but can cause serious damage. If this happens, the impact on and loss to society are immense (Chen et al., 2009c; Chen et al., 2009b; Chen et al., 2009a). Some researchers have studied power grid security problems under terrorist attacks; by studying how to attack power grids, they have tried to expose new vulnerabilities. Salmeron et al. (2004) first formalized the terrorism threat problem in power systems, in which the terrorists try to maximize the load shed. Arroyo and Galiana (2005) generalized Salmeron's model to a bilevel programming problem which is more flexible. Motto et al. (2005) transformed the problem into a mixed-integer bilevel programming model and presented a solution procedure. From the articles by Arroyo and Galiana,
2005; Motto et al., 2005; Salmeron et al., 2004, it can be observed that in the new context where terrorists come into play, traditionally robust power systems are vulnerable. Therefore, seeking new methodologies and security criteria for defending power systems under potential terrorist threat is urgent and important work. Game theory (Von Neumann and Morgenstern, 1944; Owen, 1995), by contrast, treats actors as fully strategic and has been successfully applied to many disciplines, including economics (Kreps, 1990), political science, and military science (Cornell and Guikema, 1991; Hohzaki and Nagashima, 2009), where multiple players with different objectives compete and interact with each other. Recently, Holmgren et al. (2007) proposed a static two-player zero-sum game model for studying the strategies of defending electric power systems against terrorist attacks. In this model, defenders deploy a strategy with a limited budget for protecting each element of the power system while, simultaneously, terrorists choose a target to attack. Furthermore, the authors studied a number of attack strategies and found that a dominant defense strategy, optimal for all attack scenarios, does not exist; for every attack strategy there exists an optimized defense strategy against it. This is a good start in the search for defense methodologies for power system protection under terrorism threats, and game theory opens a new direction for power system vulnerability research. However, successfully applying those optimized defense strategies obviously requires the defenders to know the terrorists' attack strategy beforehand; otherwise, the optimized strategies might not be effective. The defender-attacker model of electrical power systems, previously reported in the article by Holmgren et al., 2007, is described below with essentially the same notation. The defenders are governments (or power companies) who have a limited budget to protect power systems as much as possible. The attackers are terrorists who may operate at different scales, i.e., a single terrorist, a terrorist organization, or even an enemy country. Attackers of different scales are capable of successfully attacking targets of different sizes. For example, a single terrorist can break a transmission line or a transformer; a terrorist organization can disable several elements of a power system; an enemy country may even have the competence to destroy a whole power system. Holmgren et al. (2007) assumed that every combination of elements of a power system can be considered a target. A successful attack leads to the failure of a target, which may cause loss of supply to customers. According to the article by Chen et al., 2009c, there are basically three types of games between defenders and terrorists:
1) Static game (Owen, 1995). Simultaneously, defenders deploy a strategy c (an allocation plan of the budget over the N elements of a power system) to defend, and terrorists choose a strategy q to launch an attack. Simultaneity (Owen, 1995; Rustem and Howe, 2002) also covers the equivalent case in which the players do not move at the same
time, but the later-moving player is unaware of the earlier player's action. In the game between defenders and terrorists, the later mover must be the terrorists: if the terrorists attacked first, a subsequent defense would be useless, because an attack on an undefended power system would already have caused immense loss. That is, the terrorists would obtain a large payoff, and the subsequent defense could not change the result already gained by the terrorists.
2) Dynamic game. The static game assumes that terrorists know nothing about the defenders' strategy. In reality, however, terrorists can try their best to seek the information they need; for example, they can use threats, blackmail, torture, or bribery to acquire the power system protection information, namely the strategy of the defenders. The static game can therefore be extended to a dynamic version: defenders deploy a strategy c first; terrorists observe the action c and then choose a strategy q to launch an attack.
3) Manifold games. The static and dynamic models are two extreme cases in which terrorists know nothing or everything about the defenders' strategy. Sometimes terrorists only partly know the strategy, and many cases can be formed based on how much the terrorists know. Consequently, the game models between defenders and terrorists are manifold, which makes the problem quite complicated to discuss case by case. To facilitate the analysis, this diversity can be generalized into a comprehensive framework. Chen et al. (2009c) proposed such a comprehensive and quantitative mathematical framework for the new power system security problem under terrorism threat, in which the interactions between defenders and terrorists are formulated as several kinds of games. Game theory is a useful mathematical tool by which terrorists can be modeled as highly intelligent and strategic players. In particular, a reliable strategy for defenders against potential terrorist threats has been derived: when defenders deploy the strategy for power system protection before terrorists launch an attack, the loss can be predicted and minimized. Moreover, some new criteria have also been obtained which can provide useful guidance for governments or power companies to make rapid and correct decisions when confronting potential terrorist threats. More studies on power system vulnerability with respect to terrorist attacks can be found in the articles by Allanach et al., 2004; Chen et al., 2009; Wang, 2003; Powell, 2007.
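A toy numerical instance in the spirit of the static zero-sum game of Holmgren et al. can be solved with linear programming (scipy assumed available). The 3 x 3 loss matrix and the restriction of both players to single-element pure strategies are simplifying assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative loss matrix (MW of load shed): row = element the defender
# hardens, column = element the attacker strikes. Numbers are invented.
L = np.array([[ 20., 180., 150.],
              [160.,  30., 150.],
              [160., 180.,  40.]])
m, n = L.shape

# Defender's minimax LP: pick a mixed strategy x and value v minimizing v
# subject to (x^T L)_j <= v for every attack target j, sum(x) = 1, x >= 0.
c = np.r_[np.zeros(m), 1.0]                    # minimize v
A_ub = np.hstack([L.T, -np.ones((n, 1))])      # L^T x - v <= 0
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)   # probabilities sum to 1
b_eq = [1.0]
bounds = [(0, 1)] * m + [(None, None)]         # v is free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("defender mixed strategy:", np.round(x, 3))
print(f"guaranteed expected loss bound: {v:.1f} MW")
```

The randomized (mixed) defense caps the expected loss no matter which element is attacked, echoing the finding quoted above that no single pure defense strategy dominates all attack scenarios.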
2.8 Summary
The recent evolution of the power industry in most countries, including increased complexity and deregulation, has created many new challenges for researchers and engineers. In many aspects of power system analysis, from operations to planning, from monitoring to protection and control, and from power system stability to cascading failures, the new challenges require new techniques in order to perform reliable and accurate analysis. Given the wide variety of challenges and techniques, this book cannot cover all such emerging techniques; instead it summarizes only some of them. This chapter serves as a general overview of specific emerging techniques for power system analysis, including cascading failure analysis, data mining and its applications in power system analysis, grid computing, phasor measurement units, topological and complex system theory applications in power system vulnerability assessment, and issues with terrorist attacks on critical infrastructure such as power systems. Some of these techniques are covered in more detail in the following chapters, and comprehensive references are given for further reading on the applications not covered in detail in this book.
References

Albert R, Jeong H, Barabasi AL (2000) Error and attack tolerance of complex networks. Nature 406: 378 – 382
Ali M, Dong ZY, Li X et al (2005) Applications of grid computing in power systems. Proceedings of the Australasian Universities Power Engineering Conference, Hobart, Australia
Ali M, Dong ZY, Zhang P (2009) Adoptability of grid computing in power systems analysis, operations and control. IET Gener Transm Distrib
Ali M, Dong ZY, Li X et al (2006) RSA-Grid: a grid computing based framework for power system reliability and security analysis. IEEE PES General Meeting 2006, Montreal, Canada, 18 – 22 June 2006
Allanach J, Tu H, Singh S et al (2004) Detecting, tracking, and counteracting terrorist networks via hidden Markov models. Proceedings of the 2004 IEEE Aerospace Conference, Big Sky, Montana, 6 – 13 March 2004
Arroyo JM, Galiana FD (2005) On the solution of the bilevel programming formulation of the terrorist threat problem. IEEE Trans Power Syst 20(2): 789 – 797
Asadzadeh P, Buyya R, Kei CL et al (2004) Global grids and software toolkits: a study of four grid middleware technologies. Technical Report, Grid Computing and Distributed Systems Laboratory, University of Melbourne, Australia, 1 July 2004
Axceleon website. http://www.axceleon.com. Accessed 3 July 2009
Axceleon and Power Technologies Inc (2003) Partner to deliver grid computing solution. http://www.axceleon.com/press/release030318.html. Accessed 3 July 2009
Billinton R, Kuruganty PRS (1980) A probabilistic index for transient stability assessment. IEEE Trans PAS 99: 195 – 206
Billinton R, Kuruganty PRS (1979) Probabilistic evaluation of transient stability in a multimachine power system. Proc IEE 126: 321 – 326
Billinton R, Li W (1994) Reliability assessment of electric power systems using Monte Carlo methods. Plenum Press, New York
Cannataro M, Talia D (2003) The knowledge grid. Communications of the ACM 46(1): 89 – 93
Carreras B, Lynch V, Dobson I et al (2002) Critical points and transitions in an electric power transmission model for cascading failure blackouts. Chaos 12(4): 985 – 994
Carreras BA, Newman DE, Dobson I et al (2004) Evidence for self-organized criticality in a time series of electric power system blackouts. IEEE Trans Circ Syst 51(9): 1733 – 1740
Casals MR, Valverde S, Sole R (2007) Topological vulnerability of the European power grid under errors and attacks. Int J Bifurcation Chaos 17(7): 2465 – 2475
Chen J, Thorp JS, Dobson I (2005) Cascading dynamics and mitigation assessment in power system disturbances via a hidden failure model. Int J Electr Power Energy Syst 27(4): 318 – 326
Chen QM, Jiang CW, Qiu WZ et al (2006) Probability models for estimating the probabilities of cascading outages in high-voltage transmission networks. IEEE Trans Power Syst 21(3): 1423 – 1431
Chen Y, Shen C, Zhang W et al (2004) II-GRID: grid computing infrastructure for power systems. Proceedings of the 39th International Universities Power Engineering Conference (UPEC 2004): 1204 – 1208
CIGRE (1992) GTF 38-03-10, Power system reliability analysis, Vol 2: composite power system reliability evaluation, Paris
Chen G, Dong ZY, Hill DJ et al (2009a) Attack structural vulnerability of complex power networks. IEEE Trans Power Syst (submitted)
Chen G, Dong ZY, Hill DJ et al (2009b) An improved model for structural vulnerability analysis of power networks. Physica A 388: 4259 – 4266
Chen G, Dong ZY, Hill DJ et al (2009c) Exploring reliable strategies for defending power systems under terrorism threat. IEEE Trans Power Syst (submitted)
Chen YM, Wu D, Wu CK (2009) A game theory approach for the reallocation of security forces against terrorist diversionary attacks. IEEE International Conference on Intelligence and Security Informatics, 8 – 11 June 2009, pp 89 – 94
Choi J, Tran T, El-Keib AA et al (2005) A method for transmission system expansion planning considering probabilistic reliability criteria. IEEE Trans Power Syst 20(3): 1606 – 1615
Cornell E, Guikema S (1991) Probabilistic modeling of terrorist threat: a systems analysis approach to setting priorities among countermeasures. Military Oper Res 7(3)
Cohen R, Erez K, ben-Avraham D et al (2000) Phys Rev Lett 85: 4626
Cortes C, Vapnik V (1995) Support vector networks. Machine Learning 20: 273 – 297
Crucitti P, Latora V, Marchiori M (2004) Error and attack tolerance of complex networks. Physica A 340: 388 – 394
Crucitti P, Latora V, Marchiori M (2003) Efficiency of scale-free networks: error and attack tolerance. Physica A 320: 622 – 642
Crucitti P, Latora V (2004) A topological analysis of the Italian electric power grid. Physica A 338: 92 – 97
Dobson I, Carreras BA, Lynch VE et al (2007) Complex systems analysis of series of blackouts: cascading failure, critical points, and self-organization. Chaos 17(2): 026103
Dobson I, Carreras BA, Newman DE (2005) A loading-dependent model of probabilistic cascading failure. Probab Eng Inf Sci 19(1): 15 – 32
Dobson I, Carreras BA, Newman DE (2003) A probabilistic loading-dependent model of cascading failure and possible implications for blackouts. Proceedings of the 36th International Conference on System Sciences, Hawaii, 6 – 9 January 2003
Dobson I, Carreras BA, Newman DE (2004) A branching process approximation to cascading load-dependent system failure. Proceedings of the 37th Annual Hawaii International Conference on System Sciences, vol 37, pp 915 – 924
Dong ZY, Hill DJ, Guo Y (2005) A power system control scheme based on security visualisation in parameter space. Int J Electr Power Energy Syst 27(7): 488 – 495
Duda R, Hart P (1973) Pattern Classification and Scene Analysis. Wiley, New York
EPRI (2002) Probabilistic Reliability Assessment Software users' guide by EDF R&D, 1 December 2002
EPRI (2004) Probabilistic Transmission Planning: Summary of Tools, Status, and Future Plans. EPRI, Palo Alto, California, 1008612
EPRI (2007) PMU Implementation and Application. EPRI, Palo Alto
EUROGRID Project: Application Testbed for European GRID computing. http://www.eurogrid.org/. Accessed 18 July 2009
Figueiredo V, Rodrigues F, Vale Z et al (2005) An electric energy consumer characterization framework based on data mining techniques. IEEE Trans Power Syst 20(2): 596 – 602
Foster I, Kesselman C (1997) Globus: a metacomputing infrastructure toolkit. Int J Supercomput Appl 11(2): 115 – 128
Foster I, Kesselman C, Tuecke S (2001) The anatomy of the grid: enabling scalable virtual organizations. Int J High Perform Comput Appl 15(3): 200 – 222
GridPP, UK Computing for Particle Physics. http://www.gridpp.ac.uk. Accessed 8 July 2009
Han JW (2006) Data Mining: Concepts and Techniques. Morgan Kaufmann, San Francisco
Hill DJ, Chen GR (2006) Power systems as dynamic networks. Proceedings of the IEEE International Symposium on Circuits and Systems, Island of Kos, 21 – 24 May 2006
Holmgren AJ, Jenelius E, Westin J (2007) Evaluating strategies for defending electric power networks against antagonistic attacks. IEEE Trans Power Syst 22(1): 76 – 84
Hohzaki R, Nagashima S (2009) A Stackelberg equilibrium for a missile procurement problem. Eur J Operational Res 193
Hsu YY, Chang CL (1988) Probabilistic transient stability studies using the conditional probability approach. IEEE Trans Power Syst 3(4): 1565 – 1572
Huang Z, Nieplocha J (2008) Transforming power grid operations via high-performance computing. Proceedings of the IEEE Power and Energy Society General Meeting, Pittsburgh, 20 – 24 July 2008
42
2 Fundamentals of Emerging Techniques
IEEE PES CAMS Task Force on Understanding, Prediction, Mitigation and Restoration of Cascading Failures (2009) Vulnerability assessment for predicting cascading failures in electric power transmission systems. Proc of IEEE Power and Energy Society Power System Conference and Exposition, Seattle, 15 – 18 March 2009, IEEE PES CAMS Task Force on Understanding, Prediction, Mitigation and Restoration of Cascading Failures (2008) Initial review of methods for cascading failure analysis in electric power transmission systems. Proc IEEE Power and Energy Society General Meeting, Pittsburgh, 20 – 24 July 2008 Irving M, Taylor G, Hobson P (2004) Plug in to grid computing, moving beyond the web, a look at the potential benefits of grid computing for future power networks. IEEE Power Energy Mag, pp 40 – 44 Jie Chen, James S. Thorp and Ian Dobson. Cascading dynamics and mitigation assessment in power system disturbances via a hidden failure model. Electr Power Energy Syst 27 (2005): 318 – 326 Kreps DM (1990) Game, Theory and Economic Modeling. Oxford University Press, Oxford Kunder P (1994) Power System Stability and Control. McGraw-Hill, New York Kinney R, Crucitti P, Albert R (2005) Modeling cascading failures in the north American power grid. Eur Phys J B 46 Kirschen DS, Jawayeera D, Nedic DP et al (2004) A probabilistic indicator of system stress. IEEE Trans Power Syst 19: 1650 – 1657 Latora V, Marchiori M (2001) Efficient behavior of small-world networks. Phys Rev Lett 87: 198 – 701 Lee ST (2003) Factors related to the series of outages on august 14, 2003. EPRI Product ID 1009317. www.epri.com. Accessed 18 July 2009 Leite da Silva AM, Ribeiro SMP, Arienti VL et al (1990) Probabilistic load flow techniques applied to power system expansion planning. IEEE Trans Power Syst 5(4): 1047 – 1053 Lewis DD (1998) An na¨ıve (bayes) at forty: the independence assumption in information retrieval. Proc ECML-98, 10th European Conference on Machine Learning. Chemnitz, DE, 1998, Springer, Heidelberg, pp 4 – 15 Lewis DD (1998) An na¨ıve (bayes) at forty: the independence assumption in information retrieval. Proceedings of the 10th European Conference on Machine Learning, Chem-nitz, 21 – 24 April 1998, Springer, Heidelberg, p 415 Littlestone N (1998) Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning 2(4): 285 – 318 Liu CC et al (2007) Learning to Recognize the Vulnerable Patterns of Cascaded Events. EPRI Technical Report Madan S, Son W-K, Bollinger KE (1997) Applications of data mining for power systems. Proceedings of Canadian Conference on Electrical and Computer Engineering, 25 – 28 May 1997, pp 403 – 406 Makarov YV, Hardiman RC (2003) Risk, reliability, cascading, and restructuring. IEEE PES General Meeting, vol 3, pp 1417 – 1429 Michigan Public Service Commission (2003) Report on august 14th Blackout Mili L, Qui Q, Phadke AG (2004) Risk assessment of catastrophic failures in electric power systems, Int J crit infrastruct 1(1): 38 – 63 Motter AE, Nishikawa T, Lai YC (2002) Range-based attack on links in scale-free networks: are long-range links responsible for the small-world phenomenon. Phys Rev E 66, 065103 Motto A, Arroyo JM, Galiana FD (2005) A mixed-integer LP procedure for the analysis of electric grid security under disruptive threat. IEEE Trans Power Syst 20(3): 1357 – 1365 NASA Information Power Grid (IPG) Infrastructure. http://www.gloriad.org/glo-
References
43
riad/projects/project000053.html. Accessed 27 May 2009 National Research Council (2002) Committee on Science and Technology for Countering Terrorism, National Academy Press, Washington NERC, US-Canada Power System Outage Task Force (2004) Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations. http://www.nerc.com/filez/blackout.html. Accessed 3 July 2009 Nedic DP, Dobson I, Kirschen DS et al (2006) Criticality in a cascading failure blackout model. Int J Electr Power Energy Syst 28: 627 – 633 Nizar AH, Dong ZY, Wang Y (2008) Power utility nontechnical loss analysis with extreme learning machine method. IEEE Trans Power Syst 23(3): 946 – 955 Olaru C, Wehenkel L (1999) Data Mining. CAP Tutorial, pp 19 – 25 Owen G (1995) Game Theory, 3rd edn. Academic, New York Particle Physics Data Grid Collaboratory Pilot (PPDG). http://www.ppdg.net/. Accessed 3 July 2009 Pecas Lopes JA, Vasconcelos MH (2000) On-line dynamic security assessment based on kernel regression trees. Proceeding of IEEE PES Winter Meeting, 2: 1075 – 1080 Powell R (2007) Defending against terrorist attacks with limited resources. American political science review 101(3) Quinlan TR (1996) Improved use of continuous attributes in C4.5. J Art Int Res 4, 77 – 90 Robert CP, Casella G (1004) Monte Carlo Statistical Methods, 2nd Edn. Springer, New York Rumelhart DE, GE Hinton, RJ Williams (1986) Learning internal representations by error propagation. in: Rumelhart DE, McClelland JL eds, Parallel Distributed Processing. MIT press, Cambridge Rustem B, Howe M (2002) Algorithms for Worst-Case Design and Applications to Risk Management. Princeton University Press, Princeton Sebastiani F (2002) Machine Learning in Automated Text Categorization, ACM Computing Surveys (CSUR) 34(1): 1 – 47 Salmeron J, Wood K, Baldick R (2004) Analysis of electric grid security under terrorist threat. IEEE Trans Power syst 19(2): 905 – 912 Stubna MD, Fowler J (2003) An application of the highly optimized tolerance model to electrical blackouts. Bifurcation Chaos Appl, Sci Eng 13(1): 237 – 242 Strogatz SH (2001) Exploring complex networks. Nature 410 (6825): 268 – 276 Task Force on Understanding, Prediction, Mitigation and Restoration of Cascading Failures, IEEE PES Computer and Analytical Methods Subcommittee (2009), Vulnerability Assessment for Cascading Failures in Electric Power Systems. Proc IEEE Power and Energy Society Power Systems Conference and Exposition, 15 – 18 March 2009 Taylor GA, Irving MR, Hobson PR et al (2006) Distributed monitoring and control of future power systems via grid computing. IEEE PES General meeting 2006, Montreal, 18 – 22 June 2006 Taylor Cm, Erickson D, Martin K et al (2005) WACS —— wide area stability and voltage control system: R&D and online demonstration. Proceedings of the IEEE 93(5): 892 – 906 Ten C, Liu CC, Govindarasu M (2007) Vulnerability assessment of cybersecurity for SCADA systems using attack trees. Proceedings of PES General Meeting, Tampa, 24 – 28 June 2007 Tso SK, Lin JK, Ho HK et al (2004) Data mining for detection of sensitive buses and influential buses in a power system subjected to disturbances. IEEE Trans Power Syst 19(1): 563 – 568 Vapnik V (1995) The Nature of Statistical Learning Theory. Springer, New York U.S.-Canada Power System Outage Task Force (2004) Final report on the August
44
2 Fundamentals of Emerging Techniques
14, 2003 blackout in the united states and canada: causes and recommendations. http://www.nerc.com/filez/blackout.html. Accessed 9 May 2009 Von Neumann I, Morgenstern O (1944) Theory of Games and Economic Behavior. Princeton University Press, Princeton Wang KW, Chung CY, Tse CT et al (2000) Improved probabilistic method for power system dynamic stability studies. IEE Proc Gen, Trans Distr 147(1): 27 – 43 Wang HM (2003) Contingency planning: emergency preparedness for terrorist attacks. Proceedings of IEEE 37th Annual International Carnahan Conference on Security Technology, Taipei, 14 – 16 October 2003, pp 535 – 543 Wang H, Liu Y (2005) Power system restoration collaborative grid based on grid computing environment. Proceedings of IEEE Power Engineering Society General Meeting 2005, San Francisco, 12 – 16 June 2005 Wikipedia. Data Mining. http://en.wikipedia.org/wiki/Data mining. Accessed 3 July 2009 Xu J, Wang XF (2005) Cascading failures in scale-free coupled map lattices. Physica A: Statistica Mech Appl 349(3 – 4): 685 – 692 Xu Z, Dong ZY (2005) Probabilistic small signal analysis using monte carlo simulation. Proceedings of IEEE PES General Meeting, San Francisco, 12 – 16 June 2005 Yi Jun, Zhou Xiaoxin, Xiao Yunan (2006) Model of Cascading Failure in Power Systems. Proceedings of International Conference on Power System Technology, Chongqing, 22 –26 October 2006 Zhao JH, Dong ZY, Xu Z et al (2008) A statistical approach for interval forecasting of the electricity price. IEEE Trans Power Syst 23(2): 267 – 276 Zhao JH, Dong ZY, Li X (2007) Electricity market price spike forecasting and decision making, IET Gen Trans Dist 1(4): 647 – 654 Zhao JH, Dong ZY, Li X et al (2007) A framework for electricity price spike analysis with advanced data mining methods. IEEE Trans Power Syst 22(1): 376 – 385 Zhao JH, Dong ZY, Zhang P (2007) Mining complex power networks for blackout prevention. Proceedings of the Thirteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, 12 – 15 August 2007 Zhang P, Lee ST (2004) Probabilistic load flow computation using the method of combined cumulants and Gram-Charlier expansion. IEEE Trans Power Syst 19(1): 676 – 682 Zhang P, Lee ST, Sobajic D (2004) Moving toward probabilitic reliability assessment methods. 2004 International Conference on Probabilistic Methods Applied to Power Systems, Ames, 12 – 14 September 2004, pp 906 – 913 Zima M, Andersson G (2004) Wide area monitoring and control as a tool for mitigation of cascading failures. The 8th International Conference on Probabilistic Methods Applied to Power Systems Iowa State University, Ames, 12 – 16 September 2004
3 Data Mining Techniques and Its Application in Power Industry

Junhua Zhao, Zhaoyang Dong, and Pei Zhang
3.1 Introduction

With economic development and population growth, modern power systems are growing quickly in size, and the information systems of the power industry are becoming increasingly complex. A huge amount of data can be collected by the SCADA system and then transmitted to and stored in a central database. These data potentially contain a large quantity of information useful for system operation and planning. However, the sheer volume of the data and the complicated relationships within them make it very difficult to understand the data and extract useful knowledge manually.

The deregulation of the power industry has added to the difficulty of power system data analysis. Every day the market operator collects a large amount of market data, such as the spot price, load level, generation capacity, bidding information, temperature, and many other relevant market factors. These data are again difficult to understand and call for powerful data analysis tools.

Because of the potentially significant usefulness of power system data analysis, data mining has attracted increasing interest from both the power engineering research community and the power industry. Generally speaking, data mining is the technique of extracting useful information from a large amount of data. Applying data mining in the power industry can take advantage of the abundant information hidden in market and system data to assist system operation, planning, and risk management. Studies have been conducted to apply data mining techniques to a variety of topics, such as power system security assessment, load and price forecasting, bidding strategy design, and generation risk management.

In the following sections of this chapter, the fundamentals of data mining will be introduced. We will also discuss some important applications of data mining in the power industry.
3.2 Fundamentals of Data Mining

Data mining is the process of extracting hidden knowledge from a large amount of data. In data mining, knowledge is defined as novel, useful, and understandable patterns. In the computer science community, the term data mining is interchangeable with knowledge discovery in databases (KDD); both refer to the processes and tools used for transforming data into useful knowledge and information. The patterns or relations discovered by data mining should be novel, in the sense that they have not been discovered or assumed before the data mining process is performed. Data mining is thus a tool to discover new knowledge rather than to validate existing knowledge.

Data mining has recently attracted increasing interest from a variety of industries because of the growing volume and complexity of data. Before computers were widely available, humans extracted knowledge from data by conducting data analysis manually. Early methods of data analysis include widely used statistical methods such as regression analysis and hypothesis testing. In recent years, however, the data to be analyzed have grown significantly in size and complexity. Manual processing has therefore become infeasible, and there is an increasing need for automated data analysis tools. The emergence of data mining can also be attributed to the fast-growing power of computing technology: with faster computers and larger memory capacities, processing large amounts of data becomes possible.

Data mining has close relationships with statistics, since statistics also focuses on finding patterns in data. Some ideas of data mining are directly drawn from statistics, such as Bayes' theorem and regression. The development of data mining has also been aided by advances in other computing areas such as artificial intelligence, machine learning, neural networks, evolutionary computation, signal processing, parallel computing, and grid computing.

The process of data mining can usually be divided into the following major steps (Fayyad et al., 1996).

1) Data Preprocessing

The data should be pre-processed before the mining algorithms can be applied. Pre-processing can be further divided into the following sub-tasks: data cleaning, data selection, and data transformation. Data cleaning improves the quality of raw data by removing redundant data, filling in missing data, or correcting data inconsistency. Data selection selects the most relevant data for the objectives of data mining, so as to reduce the data size and improve the computational efficiency. Data transformation transforms the raw data into another format so that mining algorithms can be applied easily.

2) Data Mining

After the data have been processed, a variety of mining algorithms can be applied to identify different information according to the interests of customers. These algorithms can be mainly divided into four categories: correlation, classification, regression, and clustering. Each category has its own applications.

3) Results Validation and Knowledge Description

The patterns discovered in the mining process should be validated, since not all of them are necessarily valid and relevant to our interests. The discovered patterns are usually tested on a test data set that is different from the data used for training. A number of statistical criteria, such as ROC curves, can be applied to evaluate the results. Knowledge description is the process of converting the discovered patterns into understandable formats. For example, the patterns can be transformed into rules, or illustrated as pictures through a visualization process.
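To make the three steps concrete, the sketch below chains preprocessing, mining, and validation using the scikit-learn library. It is a minimal illustration, not the authors' own tool chain; the file name, the "label" column, the choice of k = 10 features, and the decision tree are all illustrative assumptions.

```python
# A minimal sketch of the three-step data mining process; file and column
# names are hypothetical, and at least 10 numeric features are assumed.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

data = pd.read_csv("samples.csv").drop_duplicates()      # data cleaning
X, y = data.drop(columns="label"), data["label"]

X = SimpleImputer(strategy="mean").fit_transform(X)      # fill missing data
X = SelectKBest(f_classif, k=10).fit_transform(X, y)     # data selection

# Validation uses a held-out test set, separate from the training data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_tr, y_tr)         # mining step
score = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Area under the ROC curve on test data: {score:.3f}")
```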
3.3 Correlation, Classification and Regression

Correlation, classification, and regression are three main research directions of data mining. Each of them solves different problems and has different applications in the power industry. We briefly introduce their main ideas in this section.

Correlation analysis (Zhao et al., 2007a) is a tool for studying the relationships between several variables. In statistics, the term correlation originally indicated the strength and direction of the linear relationship between random variables. It has since been extended, in a broader sense, to represent any linear or nonlinear relationship between random variables. The simplest measure of correlation is Pearson's correlation coefficient (Tamhane and Dunlop, 2000), defined as the covariance of two variables divided by the product of their standard deviations. This measure can only represent linear correlation and is not robust to outliers. Other nonparametric correlation coefficients, such as the chi-square correlation coefficient, point biserial correlation, and Spearman's correlation coefficient, have been proposed to handle datasets with outliers. These correlation measures have direct applications in data mining. For example, they can be used as a means of feature selection (Zhao et al., 2007a,b,c), which is a sub-task of data selection: correlation measures can determine the variables most relevant to the target variable of interest, and the irrelevant variables can then be discarded to reduce the size and dimension of the data.

Besides the above measures, association rule learning (Agrawal et al., 1993) is also a widely used method to study relationships between variables. Association rules describe co-occurrence relationships in an event database. For example, an association rule can tell with what probability a voltage collapse may occur if the load level exceeds a certain level. Association rule learning can automatically search for this kind of co-occurrence relationship in a large event database. It is useful for discovering previously unknown correlations between a number of phenomena. Its main weakness is that it lacks a sound mathematical justification. Moreover, it sometimes generates rules that are only applicable to the training dataset but have no statistical significance in greater populations; such rules are meaningless and misleading. This phenomenon is called data dredging.

Classification (Han and Kamber, 2006) is the process of dividing a dataset into several groups based on the quantitative information of each data item. Here each group is called a class with a pre-defined name (class label). In practice, the data items are usually represented as vectors whose components can be either discrete or continuous. A classifier is a functional mapping from a vector to a discrete variable (the class label). We usually estimate this mapping based on a training dataset in which the class labels of all data items have been given. Classification is therefore a supervised learning problem, in the sense that the estimation of the mapping is supervised by the training data.

Classification problems have also been studied in statistics. Two statistical methods for classification are linear discriminant analysis (LDA) and logistic regression. LDA assumes that the probability density function of a data item conditional on a class label follows a normal distribution. Under this assumption, the Bayes optimal solution assigns the class label by comparing the likelihood ratio with a certain threshold. Logistic regression calculates the probabilities of class labels by fitting a logistic curve. It can handle both discrete and continuous variables. The main drawback of these two methods is that they usually perform poorly when the mapping is significantly nonlinear. Many non-parametric classification methods have been proposed by the data mining community, including decision trees, the Naive Bayesian classifier (NBC), neural networks (NN), k-nearest neighbors, the support vector machine (SVM), and other kernel methods. Most of these methods can estimate complex nonlinear functional mappings and therefore usually perform well on large and nonlinear datasets.

Regression (Tamhane and Dunlop, 2000) is similar to classification in the sense that it also estimates a mapping between a data vector and a target variable. The main difference is that classification aims at determining a discrete target variable (the class label), while regression aims at determining a continuous target variable, usually named the dependent variable; the data item itself is usually called the independent variables, explanatory variables, or predictors. For example, in electricity price forecasting, the predictors can be the load level, temperature, and generation capacity, while the dependent variable is the spot price. Similar to classification, regression estimates the functional mapping based on a training set in which the values of the dependent variable of all data items have been given. Regression is therefore also a supervised learning problem.

Regression is an important research area of statistics. The most important statistical method is linear regression, which assumes that the dependent variable is determined by a linear function of the predictors. Moreover, linear regression usually assumes that the dependent variable has a normally distributed random error. There are also nonlinear statistical methods such as segmented linear regression and nonlinear least squares. Besides statistical methods, the data mining community has also proposed many other regression methods, such as neural networks and the support vector machine. Informally, a neural network is a set of information processing elements (neurons) connected with each other. The weights of the connections can be adapted based on the training data, and the output of the neural network is the dependent variable. The neural network is well known for its powerful capability to estimate nonlinear relationships; however, it often suffers from severe over-fitting, which means that the estimated mapping has poor generalization ability and poor predictive performance on test data. The support vector machine (SVM) belongs to a family of algorithms known as kernel methods. These use the "kernel trick" to implicitly map the data into a high-dimensional feature space, so that a linear mapping in that space can capture nonlinear relationships in the original data. Moreover, SVM employs structural risk minimization as its optimization objective to achieve a tradeoff between empirical accuracy and generalization ability. It is therefore usually more robust to over-fitting.

Correlation, classification, and regression all have important applications in the power industry, which will be introduced in the following sections. For example, correlation can be used to determine the system variables most relevant to security assessment, classification can be employed to predict the occurrences of price spikes, and regression has wide applications in load forecasting, price forecasting, and bidding strategy design. A small example of correlation-based feature selection is sketched below.
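As a concrete illustration of correlation-based feature selection, the sketch below ranks candidate predictors of the spot price by the absolute value of their Pearson correlation coefficients and keeps the strongest ones. The data file, column names, and the 0.3 cut-off are assumptions for illustration only.

```python
# A minimal sketch of correlation-based feature selection; the data set and
# column names are hypothetical.
import pandas as pd

data = pd.read_csv("market_history.csv")
candidates = ["load_level", "temperature", "generation_capacity",
              "net_interchange", "dispatchable_load"]

# Pearson correlation of each candidate predictor with the spot price.
scores = {c: abs(data[c].corr(data["spot_price"])) for c in candidates}

# Keep the predictors whose linear correlation with the target is strongest;
# the cut-off of 0.3 is an arbitrary illustrative threshold.
selected = [c for c, r in sorted(scores.items(), key=lambda kv: -kv[1])
            if r > 0.3]
print("Selected features:", selected)
```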
3.4 Available Data Mining Tools

A number of data mining systems, both commercial and free, are currently available:
1) RapidMiner

This system was initially developed by the Artificial Intelligence Unit of the University of Dortmund in 2001. It is distributed under a GNU license and has been hosted by SourceForge since 2004. The system is written in Java and allows experiments to be made up of a large number of arbitrarily nestable operators, described in XML files which can easily be created with RapidMiner's graphical user interface. It provides more than 500 functions covering almost all major data mining procedures.

2) Weka

Weka is also free software written in Java and distributed under a GNU license. It is developed and maintained by the University of Waikato. It includes a comprehensive collection of data mining techniques, covering preprocessing, clustering, classification, regression, and visualization. Weka provides access to SQL databases using Java Database Connectivity and can process the results returned by a database query.

3) SAS Enterprise Miner

This is a commercial data mining system developed by SAS, a leader in the statistics and data analysis software market. SAS Enterprise Miner is enterprise software with powerful functionality for large-scale data analysis, and it can be used to solve complex business problems. It is based on a Java client/server architecture, which can easily scale from a single user to large enterprise solutions. It also supports many databases, including Oracle, DB2, and Microsoft SQL Server. SAS Enterprise Miner has powerful tools for data modeling, mining, and visualization, and provides many popular algorithms such as association rules, linear and logistic regression, decision trees, neural networks, SVM, boosting, and clustering.

4) SPSS Clementine

Clementine is commercial data mining software developed by SPSS, another well-known statistics software provider; it has recently been renamed PASW Modeler 13 by SPSS. Clementine uses a three-tier design: users manipulate the front-end application, which communicates with a Clementine Server, or directly with a database or dataset. Clementine also supports a variety of popular mining algorithms.

There are also other data mining systems, such as IBM Intelligent Miner and the R project. Some Matlab toolboxes also provide data mining functionality.
3.5 Data Mining based Market Data Analysis

Data mining methods have wide applications in electricity market analysis problems, such as electricity price forecasting, bidding strategy design, and generation risk management. In this chapter, we mainly introduce two data mining based methods: one for predicting electricity price spikes and one for forecasting price intervals (Zhao et al., 2007b,c,d).
3.5.1 Introduction to Electricity Price Forecasting

Electricity price forecasting is essential for market participants in both daily operation and long-term planning analyses, such as designing bidding strategies and making investment decisions. It is very difficult to predict the exact value of the future electricity price, because the price is a stochastic process with very high volatility. In electricity markets, an extremely high price is usually called a price spike.

A number of techniques have been applied in electricity price forecasting, including time series models such as ARIMA and GARCH, neural networks, wavelet techniques, and SVM. These methods show good ability to forecast the expected electricity prices under normal market conditions (referred to as expected prices in this chapter). So far, however, none of these techniques can effectively deal with price spikes in an electricity market. In most cases, price spikes need to be removed as noise before the forecasting algorithms can be applied; otherwise, very large forecasting errors will be produced. Another existing approach to processing spikes is robust estimation, which gives spikes smaller weights instead of eliminating them. However, accurate spike prediction still cannot be achieved with this method. Because price spikes have a significant impact on the electricity market, an effective and accurate price spike prediction method is needed. In this chapter, a classification-based spike prediction framework is introduced. The framework can predict both the occurrence and the values of price spikes. Combined with an available expected price forecasting model, the new model can produce accurate price forecasts despite the extreme price volatility caused by spikes. It is, therefore, very useful for electricity market participants.

Besides predicting the value of the electricity price, it is also very important to predict the distribution, i.e. the prediction interval, of future prices, because price intervals can effectively reflect the uncertainties in the prediction results. Generally speaking, a prediction interval is a stochastic interval that contains the true value of the price with a pre-assigned probability. Because the prediction interval quantifies the uncertainty of the forecasted price, it can be employed to evaluate the risks of the decisions made by market participants. There are two major challenges for accurate interval forecasting of the electricity price:

• To estimate the prediction interval, the value of the future price should be accurately forecasted. This is difficult because the electricity price is a nonlinear time series, which is highly volatile and cannot be properly modeled by traditional linear time series models.
• In addition to the value, the variance of the price should also be accurately forecasted, because the variance is essential for estimating the price distribution and hence the prediction interval. In practice, the price distribution is usually unknown; however, an estimated distribution can be assumed for analysis, in which case knowing the variance is essential for predicting intervals. Unfortunately, forecasting the variance is even more challenging because the variance of the price can be time-varying. The electricity price is therefore a heteroscedastic time series (Garcia et al., 2005). Because of the heteroscedasticity, the variance of the price at each time point should be estimated individually. However, in forecasting the electricity price at each time point, we have only one observation of the price, which is obviously insufficient to estimate its variance.

In this chapter, a data mining based approach is presented to forecast the prediction interval of the electricity price series. To effectively handle the electricity price, the method is a nonlinear and heteroscedastic forecasting technique. SVM, a well-known data mining method, is employed to forecast the price value. SVM is chosen because it can accurately approximate nonlinear functions; in particular, it has excellent generalization ability to unseen data and is less prone to over-fitting than most NN techniques. To deal with the uncertainty in price forecasting, we use a statistical forecasting model for SVM to explicitly model the price variance and derive the maximum likelihood equation for the model. A gradient ascent based method for identifying the parameters of the model has also been developed. The established model can be used to forecast both the price value and the price variance. The prediction interval is then constructed from the forecasted value and variance.

In the remaining subsections of Section 3.5, the details of price spike prediction and interval price forecasting will be discussed, along with the corresponding experimental results (Zhao et al., 2007b,d).
3.5.2 The Price Spikes in an Electricity Market

Generally speaking, a price spike is an abnormal market clearing price at time t that differs significantly from the price at the previous time t−1.
Price spikes may last for several time units and are highly stochastic. Abnormal prices can be classified into three categories (Lu et al., 2005).

Definition 3.1: A price that is significantly higher than its expected value is defined as an abnormal high price.

Definition 3.2: If the difference between two neighbouring prices is larger than a threshold, the price is defined as an abnormal jump price.

Definition 3.3: A price lower than zero is defined as a negative price.

Among these three types, the abnormal high price is analysed in this chapter. In the Australian National Electricity Market (NEM), this price can be several hundred times higher than the expected price, up to $10,000/MWh.

Given the above definitions of spikes, a precise criterion is needed to determine how high a price should be in order to be considered a spike. Price spikes can usually be determined by a statistical method based on historical data. Let μ be the mean and δ the standard deviation of historical market prices; the spike threshold can be set as (Lu et al., 2005; Zhao et al., 2007b,c,d)

P_v = \mu \pm 2\delta.    (3.1)

All prices higher than the upper threshold are considered spikes. Different electricity markets may have different thresholds. It should be noted that the threshold also varies across seasons and months within the same market. More details on how to determine the threshold are given in the following sections.

The causes of price spikes have attracted extensive research in recent years. One common perception is that spikes are the result of market power exercised by suppliers (Borenstein, 2000). Borenstein and Bushnell (2000) also claimed that the vulnerability of the electricity market (difficulty in storing electricity, generation capacity constraints, and transmission capacity constraints) allows market power to be exploited to inflate the price. Guan et al. (2001) argued that suppliers withholding their capacity shift the supply-demand curve so as to cause spikes. Mount and Oh (2004) proposed that uncertainty about the system load could be an incentive for speculation, which in turn causes price spikes.

The above analysis indicates that spikes are usually caused by short-term events, accidents, or gaming behaviors, rather than by the long-term trends of market factors. The events causing spikes are usually subjective and difficult to forecast; spikes are therefore highly erratic. However, these events are not completely random: they statistically follow some patterns which can be discovered from data. For example, the probability of gaming behaviors increases significantly when demand is high. It will be shown in the following sections that although key market factors cannot directly determine the occurrence of spikes, they can significantly influence the statistical distribution of spikes. Spikes can, therefore, be forecasted with the statistical information hidden in historical data. It is important to note that some spikes may not be predictable because of the uncertainty introduced by gaming behaviors; game theoretical tools should be used to handle these spikes. A small sketch of the threshold calculation in Eq. (3.1) is given below.
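To make the criterion concrete, the following minimal sketch computes the spike threshold of Eq. (3.1) from a series of historical prices and flags the prices above the upper threshold; the file and column names are illustrative assumptions.

```python
# A minimal sketch of the spike threshold in Eq. (3.1); the data source and
# column name are hypothetical.
import pandas as pd

prices = pd.read_csv("nem_prices.csv")["rrp"]   # historical market prices

mu, delta = prices.mean(), prices.std()
upper_threshold = mu + 2 * delta                # upper threshold of Eq. (3.1)

spikes = prices[prices > upper_threshold]
print(f"Threshold: {upper_threshold:.2f} $/MWh, "
      f"{len(spikes)} of {len(prices)} prices flagged as spikes")
```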
3.5.3 Framework for Price Spike Forecasting

The objective of this price spike forecasting framework is to give reliable forecasts of both the occurrence and the values of price spikes. As discussed in Section 3.5.2, spikes are random events representing abnormal market conditions; better forecasting performance can therefore be achieved if spikes and expected prices are handled separately by two models. This observation leads to the price spike forecasting framework, whose major steps are as follows:

• Determining spikes. The first step of spike prediction is to determine whether a price should be considered a spike, based on Eq. (3.1).
• Identifying and selecting relevant factors. Among the large number of factors which can influence the electricity price, only a few need to be incorporated into the framework to improve its forecasting performance. Feature selection techniques can be applied to select the most relevant factors for use in the following steps.
• Training the expected price forecasting model and the spike occurrence and spike value predictors. Proper regression or time series models can be used as the forecasting model for expected prices and as the spike value predictor. A classification algorithm is used as the spike occurrence predictor; it is a classification model determining whether the market is in the abnormal conditions leading to spikes. Historical data of relevant features and prices can be used as training data.
• For each future time point t and its relevant feature vector Xt, determining whether a spike occurs with the spike occurrence predictor.
• If a spike is predicted at time t, using the spike value predictor to estimate the value of the spike.
• Otherwise, using the expected price model to forecast the price at time t.
• Combining the results of expected price forecasting and spike prediction to form the complete forecast.

The basic procedure of the framework is shown in Fig.3.1. We will conduct several experiments validating the effectiveness of the framework in the case studies section. Because classification techniques are essential for predicting the occurrence of spikes, we briefly review the classification fundamentals and then discuss two classification techniques, SVM and the probability classifier, in more detail.

Fig. 3.1. A flow chart of the new framework of price spike forecasting (Zhao et al., 2007b,d)

1) Feature Selection

Feature selection is used to choose the attributes relevant to a classification problem. Based on statistical analysis, the following seven attributes are chosen for the classification algorithms.

Demand

It is well known that demand is closely related to the market clearing price in most electricity markets. As shown in Fig.3.2, when demand is greater than 5 700 MW the probability of spike occurrence increases significantly. Note that demand cannot exactly determine the occurrence of spikes: Fig.3.2 shows that spikes may also happen even when system demand is at a lower level of 4 000 – 4 500 MW. Demand is chosen as an input of the approach because it provides useful statistical information. The two classification methods chosen in our framework, the Support Vector Machine and the Naive Bayesian Classifier, forecast spikes by estimating their occurrence probability. Therefore, the inputs of the system need not be determinants of spikes: any factor that is statistically correlated with spikes can be selected as a relevant factor. Nogales and Conejo (2006) gave a good discussion of the relationship between demand and price; their experiments also demonstrated that including demand in price forecasting models leads to significant improvement compared with the same model without demand (Zhao et al., 2007b,d).
Fig. 3.2. Demand vs RRP in QLD Market in September 2003 – May 2004
Supply

The relationship between supply and RRP is similar to that of demand – see Fig.3.3. This is because a power system requires supply and demand to be balanced constantly. Supply is also selected as a relevant factor for spike analysis.

Fig. 3.3. Supply vs RRP in QLD Electricity Market in September 2003 – May 2004

Existence

Existence is an attribute describing the relationship between a spike and its predecessors. It can only be 1 or 0.

Definition 3.4: Existence Index. The existence index of a time t is defined as

I_{ex}(t) = \begin{cases} 1, & \text{if spikes have occurred in the same day before time } t \\ 0, & \text{otherwise} \end{cases}    (3.2)

This attribute is introduced because spikes tend to occur together over a short period of time. According to the historical data, it is rarely observed that only one spike occurs within one day. Statistical analysis
shows that 96% of spikes occurred after some spikes had occurred within the same day. Therefore, the probability of spikes increases significantly if the existence index is 1.

Table 3.1. QLD market spike distribution in summer 2003 – 2004

Date      Spikes    Date      Spikes    Date      Spikes    Date      Spikes    Date      Spikes
11 Nov.   2         16 Dec.   2         5 Jan.    34        11 Feb.   37        16 Feb.   39
17 Nov.   4         17 Dec.   21        7 Jan.    44        12 Feb.   50        18 Feb.   11
9 Dec.    11        19 Dec.   7         21 Jan.   1         13 Feb.   65        19 Feb.   73
10 Dec.   9         3 Jan.    13        9 Feb.    25        14 Feb.   49        20 Feb.   71
11 Dec.   30        4 Jan.    57        10 Feb.   50        15 Feb.   79        21 Feb.   88
As shown in Table 3.1, spikes occurred on only 25 days out of the overall 121 days, and the number of spikes on a given day is mostly greater than 10. This can be easily explained. As discussed previously, spikes are usually caused by short-term events, such as contingencies and transmission congestion. These short-term events do not happen frequently, and usually last for several hours once they occur. Accordingly, spikes tend to occur together in a short period, which can be several hours long but no longer than a day (Zhao et al., 2007b,d).

Season and time

There are three types of seasons in each year in Australia: winter (May – Aug.), middle (Mar., Apr., Sept., Oct.), and summer (Nov. – Feb.). Figs. 3.4 to 3.6 show the relationships between RRP and time of day in summer, the middle season, and winter, respectively. In the Australian NEM, a time interval is 5 minutes and there are 288 time intervals in a business day. A business day for the NEM starts at 4:05 a.m. on one day and ends at 4:00 a.m. the next day. From Figs. 3.4 to 3.6, it is clear that spikes and time of day have different relationships in the three different seasons.
Fig. 3.4. Time of day vs RRP in QLD market in summer of 2003 – 2004
Fig. 3.5. Time of day vs RRP in QLD market in middle season of 2003 – 2004
Fig. 3.6. Time of day vs RRP in QLD market in winter of 2003 – 2004

Net interchange and dispatchable load

The net interchange of a state is the amount of electricity transported from other states; in QLD the net interchange is the electricity imported from NSW via the QNI interconnector. Dispatchable loads are the net consumers of electricity that register to participate in the central dispatch and pricing processes operated by NEMMCO before 1 July 2009 and AEMO thereafter. According to NEMMCO/AEMO, the following equation concerning dispatchable loads holds:

Supply = Total demand + Dispatchable generation + Net interchange.    (3.3)

These two factors are also selected as useful attributes for the classifier (attributes 6 and 7).

2) Classification Technique Fundamentals

Classification is an important task in data mining and is key for the price spike analysis. As discussed above, the classification problem is a supervised learning problem, where a fixed but unknown supervisor determines an output y for every input vector X (for classification, y should be discrete and is called the class label for a given X). The objective of a classifier is to obtain the function y = f(X, α), α ∈ Λ, which best approximates the supervisor's response. Predicting the occurrence of a spike is a typical binary classification problem. The factors relevant to spikes can be considered as the dimensions of the input vector X = (x_1, x_2, \ldots, x_n) at each time point t, where x_i, i = 1, 2, \ldots, n is the value of a relevant factor. The objective is to determine the label y for every input vector X, where

y = \begin{cases} 1, & \text{non-spike} \\ -1, & \text{spike} \end{cases}    (3.4)

and y denotes whether a spike will occur. SVM (Cortes and Vapnik, 1995) and a probability classifier (Han and Kamber, 2006) are selected in this chapter to predict the occurrence of spikes. A sketch of a spike occurrence classifier built on the seven attributes above follows.
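As an illustration of how the seven attributes and the labels of Eq. (3.4) feed a classifier, the sketch below trains an SVM spike occurrence predictor. The feature construction, file name, and column names are assumptions for illustration, not the authors' exact implementation, and all feature columns are assumed to be numerically coded.

```python
# A minimal sketch of a spike occurrence predictor using the seven attributes;
# data layout and names are hypothetical, and all features are numeric.
import pandas as pd
from sklearn.svm import SVC

data = pd.read_csv("qld_market.csv", parse_dates=["time"])

# Spike labels per Eqs. (3.1)/(3.4): -1 for spike, 1 for non-spike.
threshold = data["rrp"].mean() + 2 * data["rrp"].std()
data["y"] = (data["rrp"] > threshold).map({True: -1, False: 1})

# Existence index per Eq. (3.2): 1 if a spike already occurred earlier today
# (the calendar day is used here as a simplification of the NEM business day).
day = data["time"].dt.date
data["existence"] = (data.groupby(day)["y"].transform(
    lambda s: (s == -1).cumsum().shift(fill_value=0)) > 0).astype(int)

features = ["demand", "supply", "existence", "season", "time_of_day",
            "net_interchange", "dispatchable_load"]
X, y = data[features], data["y"]

# RBF-kernel SVM as the spike occurrence predictor; the last business day
# (288 five-minute intervals) is held out for prediction.
clf = SVC(kernel="rbf", gamma="scale").fit(X[:-288], y[:-288])
print("Predicted labels for the last day:", clf.predict(X[-288:]))
```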
Compared with other classification algorithms, SVM employs the structural risk minimization principle and has proven to have better generalization ability to unseen data in many classification problems (Cortes and Vapnik, 1995). Zhao et al. (2007b) showed that these two classification techniques were able to give good performance in spike occurrence prediction.

3) Support Vector Machine

SVM is a machine learning method first proposed by Vapnik et al. (Cortes and Vapnik, 1995; Vapnik, 1995). It provides reliable classification functionality for the price spike analysis method and is briefly reviewed here for completeness.

The simplest form of SVM classification is the maximal margin classifier. It solves the simplest classification problem: the binary classification case with linearly separable training data. Consider the training data {(X_1, y_1), \ldots, (X_l, y_l)} \subset R^n \times \{\pm 1\}; we assume that they are linearly separable, i.e. there exists a hyperplane \langle W, X \rangle + b = 0 which satisfies the following constraint: for every (X_i, y_i), i = 1, \ldots, l, y_i(\langle W, X_i \rangle + b) > 0, where \langle W, X \rangle is the dot product between W and X. The margin is defined as the distance from a hyperplane to the nearest point. The aim of the maximal margin classifier is to find the hyperplane with the largest margin, i.e. the maximal hyperplane. This problem can be represented as:

Minimize
\frac{\|W\|^2}{2}    (3.5)

Subject to
y_i(\langle W, X_i \rangle + b) \geq 1, \quad i = 1, \ldots, l.    (3.6)

The Lagrange multipliers method can be used to solve it. In most real-world problems, training data are not always linearly separable. There are two methods to modify linear SVM classification so as to handle nonlinear cases. The first is to introduce slack variables to tolerate some training errors, thereby decreasing the influence of noise in the training data; a classifier with slack variables is a soft margin classifier. The other method is to use a map Φ(X): R^n \to H to map the training data from the input space into some high-dimensional feature space, so that they become linearly separable in the feature space, where an SVM classifier can be applied. Note that the training data enter an SVM classifier only in the form of dot products; therefore, after mapping, the SVM algorithm depends only on the dot products of Φ(X). If a function K(X_1, X_2) = \langle Φ(X_1), Φ(X_2) \rangle can be found, Φ(X) need not be explicitly calculated. K(X_1, X_2) is called a kernel function or kernel. The radial basis kernel is used in this chapter:

K(X, Y) = \exp\left(-\frac{\|X - Y\|^2}{2\sigma^2}\right).    (3.7)
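For illustration, the sketch below is a direct transcription of the radial basis kernel of Eq. (3.7), computed between all row pairs of two point sets; it is a formula-level example rather than production SVM code.

```python
# A direct transcription of the RBF kernel in Eq. (3.7).
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all row pairs of X, Y."""
    # Squared Euclidean distances between every row of X and every row of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

# Tiny usage example on random feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
gram = rbf_kernel(X, X)
print(gram.shape, gram.diagonal())   # (5, 5); the diagonal is all ones
```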
4) An Advanced Price Spike Probability Classifier

A probability classifier is a classification algorithm based on statistical theory (Han and Kamber, 2006). Research has shown that, although simple, the probability classifier performs comparably to other popular classification methods such as decision trees and neural networks (Han and Kamber, 2006). It classifies input vectors based on the probability distribution of historical data. Basically, for a given input vector X = {x_1, x_2, \ldots, x_n} and its class label y \in \{c_1, c_2, \ldots, c_m\}, the probability classifier calculates the probability that X belongs to class c_i for i = 1, 2, \ldots, m, and X is labelled with the class c_i that has the largest probability. The most popular probability classifier is the Naive Bayesian Classifier, which is based on Bayes' theorem. Theoretically, a Naive Bayesian Classifier has the least prediction error (Han and Kamber, 2006).

An advanced classifier was proposed based on the basic Naive Bayesian Classifier to enhance the classification for price spike forecasting (Zhao et al., 2007b,c). For every input vector X, the probability of a spike (the probability of y = −1) is calculated and then compared with a threshold. If it is larger than the threshold, a spike is predicted to occur, regardless of whether this probability is larger than the probability of a non-spike (the probability of y = 1). This modification is made because price spike prediction is a seriously imbalanced classification problem (i.e., some classes have many more samples than others). In fact, the probability of a spike is less than the probability of a non-spike on most occasions. As will be shown in the case studies section, many spikes occur when their occurrence probabilities are smaller than 50%; without setting a threshold smaller than 50%, many spikes would be misclassified. The threshold can also be determined from historical data.

In summary, we assume that an input vector X has n attributes A_1, A_2, \ldots, A_n, and A_i can take values x_{i1}, x_{i2}, \ldots, x_{ij}, \ldots Let s(i, j) denote the number of input vectors which are spikes and have attribute A_i = x_{ij}, and let n(i, j) be the number of input vectors whose attribute A_i = x_{ij}. The probability classifier is summarised in Fig.3.7, and a small sketch is given after the figure. With the classification techniques and feature selection procedures described above, the price analysis model is ready to be tested with real market data from the Australian NEM. Some of the results are presented and discussed in the following section.
Fig. 3.7. The procedure of a probability classifier
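The sketch below implements the thresholded spike probability idea described above: it estimates a naive Bayes posterior for the spike class from per-attribute counts (with add-one smoothing) and flags a spike whenever the posterior exceeds a user-chosen threshold below 50%. It is a minimal reconstruction; the exact procedure in Fig. 3.7 may differ in detail.

```python
# A sketch of a thresholded naive-Bayes-style spike probability classifier,
# built from the per-attribute counts described in the text.
from collections import defaultdict

def train(samples, labels):
    """samples: list of attribute tuples; labels: -1 = spike, 1 = non-spike."""
    counts = {-1: defaultdict(int), 1: defaultdict(int)}
    totals = {-1: 0, 1: 0}
    for x, y in zip(samples, labels):
        totals[y] += 1
        for i, v in enumerate(x):
            counts[y][(i, v)] += 1
    return counts, totals

def spike_posterior(x, counts, totals):
    """Naive Bayes posterior P(spike | X), with add-one smoothing."""
    score = {}
    for y in (-1, 1):
        p = totals[y] / (totals[-1] + totals[1])            # class prior
        for i, v in enumerate(x):
            p *= (counts[y][(i, v)] + 1) / (totals[y] + 2)  # P(A_i = v | y)
        score[y] = p
    return score[-1] / (score[-1] + score[1])

# Toy usage: four labelled samples with two attributes each.
counts, totals = train([(1, "high"), (1, "high"), (0, "low"), (0, "high")],
                       [-1, -1, 1, 1])
threshold = 0.3   # deliberately below 0.5, as discussed in the text
p = spike_posterior((1, "high"), counts, totals)
print(f"P(spike | X) = {p:.3f}, spike predicted: {p > threshold}")
```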
3.5.4 Problem Formulation of Interval Price Forecasting

In this section, the concept of heteroscedasticity is introduced, together with the Lagrange Multiplier test, which can be used to examine the heteroscedasticity of a time series mathematically. The formal definition of the prediction interval is then presented. Finally, three measures are introduced to evaluate the performance of the method.

1) Heteroscedasticity and Prediction Interval

From the statistical point of view, a time series consists of the observations of a stochastic process. Generally, a time series {y_t} can be assumed to be generated by the following statistical model:

Y_t = f(X_t) + \varepsilon_t,    (3.8)

where Y_t is the random variable to forecast, and y_t denotes the observed value of Y_t at time t. X_t \in R^m is an m-dimensional explanatory vector. Each element X_{t,i} of X_t represents an explanatory variable which can influence Y_t. Note that X_t can also contain the lagged values of Y_t and \varepsilon_t, because Y_t is usually correlated with its predecessors Y_{t-1}, \ldots and the previous noises \varepsilon_{t-1}, \ldots The mapping f(X_t): R^m \to R can be any linear or nonlinear function. According to Eq. (3.8), the time series Y_t contains two components: f(X_t) is the deterministic component determining the mean of Y_t, and \varepsilon_t is the random component, also known as noise. \varepsilon_t is usually assumed to follow a normal distribution with a zero mean; we therefore have

\varepsilon_t \sim N(0, \sigma^2).    (3.9)

Because \varepsilon_t has a zero mean, the mean of Y_t is completely determined by f(X_t) and is usually selected as the forecasted value of Y_t (Enders, 2004). On the other hand, because f(X_t) is a deterministic function, the uncertainty of Y_t comes purely from the noise \varepsilon_t. Therefore, estimating \sigma^2 is essential for estimating the uncertainty of Y_t.

The statistical model of Eqs. (3.8) and (3.9) assumes that the variance \sigma^2 is constant; it is therefore called the homoscedastic model. In practice, the \sigma^2 of a time series is usually time-changing, a characteristic termed heteroscedasticity. The formal definition of a heteroscedastic time series is given as follows:

Definition 3.5: Assume a time series generating model

Y_t = f(X_t) + \varepsilon_t,    (3.10)
\varepsilon_t \sim N(0, \sigma_t^2),    (3.11)
\sigma_t^2 = g(\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots, X_t).    (3.12)

If a time series {y_t} is generated with the model of Eqs. (3.10) – (3.12), it is a heteroscedastic time series (Engle, 1982).
Similar to f(X_t), g(\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots, X_t) can also be either linear or nonlinear. Note that the definition of heteroscedasticity in this chapter is a generalization of that given by Engle (1982), because both f(\cdot) and g(\cdot) can be nonlinear in our model. According to Eq. (3.12), the variance of the heteroscedastic time series is time-changing and determined by the previous noises and the explanatory vector. A good example of a heteroscedastic time series is the electricity price of the Australian NEM, as plotted in Fig.3.8, where it can be observed that the uncertainty/variance changes significantly in different periods. This observation clearly indicates that, even using the same forecasting technique, market participants may still face different risks in different time periods. Measuring these different risks is essential for market participants to make proper decisions.
Fig. 3.8. Electricity prices of the Australian NEM in May 2005
To verify this visual observation, the Lagrange Multiplier (LM) test (Bollerslev, 1986) can be employed to test the heteroscedasticity of the NEM price series mathematically. In the experiments, we will verify that the electricity price is heteroscedastic by performing the LM test.

To quantify the uncertainty of predicting the heteroscedastic price at each time point, we construct a prediction interval, which contains the future value of the price with a pre-assigned probability. We give the following definition:

Definition 3.6: Given a time series {Y_t} generated with the model of Eqs. (3.10) – (3.12), an \alpha level prediction interval (PI) of Y_t is a stochastic interval [L_t, U_t] calculated from {Y_t} such that P(Y_t \in [L_t, U_t]) = \alpha.

Because the noise \varepsilon_t is usually assumed to be normally distributed, Y_t also follows a normal distribution. The \alpha level prediction interval can therefore be calculated as

L_t = \mu_t - z_{(1-\alpha)/2} \times \sigma_t,    (3.13)
U_t = \mu_t + z_{(1-\alpha)/2} \times \sigma_t,    (3.14)

where \mu_t is the conditional mean of Y_t, usually estimated with f(X_t). In Eqs. (3.13) and (3.14), \alpha is the confidence level and z_{(1-\alpha)/2} is the critical value of the standard normal distribution. To calculate the prediction interval, the only remaining problem is to estimate \sigma_t from historical data.

2) Performance Evaluation

Before developing the forecasting approach, several criteria are introduced for performance evaluation. Given T historical observations y_t, 1 \le t \le T, of a time series {Y_t} and the corresponding forecasted prices y_t^*, 1 \le t \le T, the mean absolute percentage error (MAPE) is defined as

\mathrm{MAPE} = \frac{1}{T} \sum_{t=1}^{T} \frac{|y_t - y_t^*|}{y_t}.    (3.15)

MAPE is a widely used criterion for time series forecasting and will be employed to evaluate the proposed method in the case studies. Two further criteria are introduced to evaluate the interval forecasting. Given T historical observations y_t, 1 \le t \le T, of a time series {Y_t} and the corresponding forecasted \alpha level prediction intervals [l_t, u_t], 1 \le t \le T, the empirical confidence \hat{\alpha} (Papadopoulos et al., 2001) and the absolute coverage error (ACE) are defined as

\hat{\alpha} = \frac{\mathrm{frequency}(y_t \in [l_t, u_t])}{T},    (3.16)
\mathrm{ACE} = |\alpha - \hat{\alpha}|,    (3.17)

where \hat{\alpha} is the number of observations falling into the forecasted PI, divided by the sample size. It should be as close to \alpha as possible.
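The sketch below transcribes the three evaluation criteria of Eqs. (3.15) – (3.17) directly into code; the arrays stand in for actual forecasts and are purely illustrative.

```python
# Direct transcriptions of Eqs. (3.15) - (3.17).
import numpy as np

def mape(y, y_hat):
    """Mean absolute percentage error, Eq. (3.15)."""
    return np.mean(np.abs(y - y_hat) / y)

def empirical_confidence(y, lower, upper):
    """Fraction of observations falling inside the forecasted PI, Eq. (3.16)."""
    return np.mean((y >= lower) & (y <= upper))

def ace(y, lower, upper, alpha):
    """Absolute coverage error, Eq. (3.17)."""
    return abs(alpha - empirical_confidence(y, lower, upper))

# Illustrative usage with made-up prices and symmetric prediction intervals.
y     = np.array([35.0, 42.0, 38.0, 120.0])
y_hat = np.array([33.0, 45.0, 37.0, 90.0])
lower, upper = y_hat - 15.0, y_hat + 15.0
print(mape(y, y_hat), ace(y, lower, upper, alpha=0.90))
```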
3.5.5 The Interval Forecasting Approach

1) Intuition behind Our Approach

As stated in the preceding section, the proposed approach should be able to handle nonlinear and heteroscedastic time series. It must be able to accurately forecast both the value and the variance of the price series, so as to forecast the PI (Zhao et al., 2008). To accomplish these objectives, we proposed the nonlinear conditional heteroscedastic forecasting (NCHF) model as follows:

Y_t = f(Y_{t-1}, \ldots, Y_{t-p}, X_t) + \sum_{i=1}^{q} \varphi_i \varepsilon_{t-i} + \varepsilon_t,    (3.18)
\varepsilon_t = \sigma_t \cdot v_t,    (3.19)
v_t \sim N(0, 1),    (3.20)
X_t = (X_{t,1}, X_{t,2}, \ldots, X_{t,m}),    (3.21)
\sigma_t^2 = \alpha_0 + \sum_{i=1}^{r} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{m} \beta_j X_{t,j}.    (3.22)
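To show how Eqs. (3.18) – (3.22) interact, the sketch below simulates a short NCHF series with p = q = r = m = 1. The function f and all parameter values are arbitrary illustrative choices (f is kept linear here for brevity, although the NCHF model allows any nonlinear f).

```python
# A minimal simulation of the NCHF model, Eqs. (3.18) - (3.22); f and all
# parameter values are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
T = 200
phi, a0, a1, beta = 0.3, 0.5, 0.4, 0.2

f = lambda y_lag, x: 10.0 + 0.8 * y_lag + 0.05 * x   # simple illustrative f
X = rng.uniform(0.0, 10.0, size=T)                   # one explanatory variable

Y, eps = np.zeros(T), np.zeros(T)
for t in range(1, T):
    sigma2 = a0 + a1 * eps[t - 1] ** 2 + beta * X[t]      # Eq. (3.22)
    eps[t] = np.sqrt(sigma2) * rng.standard_normal()      # Eqs. (3.19), (3.20)
    Y[t] = f(Y[t - 1], X[t]) + phi * eps[t - 1] + eps[t]  # Eq. (3.18)

# The noise variance changes over time, i.e. the series is heteroscedastic.
print("Noise variance, first/second half:",
      eps[:T // 2].var().round(2), eps[T // 2:].var().round(2))
```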
In the above model, the time series {Y_t} is a nonlinear function of its predecessors Y_{t-1}, \ldots, the previous noises \varepsilon_{t-1}, \ldots, and the explanatory variables X_t. In Eqs. (3.18) – (3.22), p, q, and r are user-defined parameters. Note that the X_t in Eq. (3.18) is slightly different from the X_t in Eq. (3.8): in Eq. (3.18), X_t no longer contains Y and \varepsilon. The variance \sigma_t^2 in the proposed model is assumed to be a linear function of \varepsilon and X_t. Given an observed time series y_t, 1 \le t \le T, and the corresponding observed explanatory variables x_t, 1 \le t \le T, the objective of the NCHF model is to estimate f(\cdot) and the parameters \varphi, \alpha, and \beta. Subsequently, the forecasted mean and variance of the price can be given as

Y_t^* = E(Y_t \mid \mathcal{F}_{t-1}) = \hat{f}(Y_{t-1}, \ldots, Y_{t-p}, X_t) + \sum_{i=1}^{q} \hat{\varphi}_i \varepsilon_{t-i},    (3.23)
\sigma_t^{2*} = \hat{\alpha}_0 + \sum_{i=1}^{r} \hat{\alpha}_i \varepsilon_{t-i}^2 + \sum_{j=1}^{m} \hat{\beta}_j X_{t,j},    (3.24)

where \hat{f}, \hat{\varphi}, \hat{\alpha}, and \hat{\beta} are the estimates of f, \varphi, \alpha, and \beta. Finally, the prediction interval can be calculated based on the forecasted mean and variance. By applying this method iteratively, we can easily obtain multiple-step forecasts.

Training of the NCHF model can be divided into two major steps:

• Any available nonlinear regression technique can be employed in the NCHF to estimate f(\cdot) from historical data. We select SVM because of its excellent ability in handling nonlinearity and over-fitting.
• \varphi, \alpha, and \beta cannot be estimated using a regression technique because the true value of \sigma_t^2 is unknown. Instead, we derive the likelihood function for the NCHF model and use the Maximum Likelihood Estimation (MLE) criterion to estimate \varphi, \alpha, and \beta. The Gradient Ascent method is used to find the optimal \varphi, \alpha, and \beta which maximize the likelihood function; the resultant values are used as the estimates of their actual counterparts.

With the NCHF model, the nonlinear patterns of the price can be well captured by a SVM. The heteroscedastic Eq. (3.22) is introduced to model the
time-changing variance. Therefore, the NCHF model can effectively handle both nonlinearity and heteroscedasticity, hence satisfying the requirements of interval forecasting of the electricity price. This will be justified in the experiments (Zhao et al., 2008).
2) Estimating the NCHF Model
As introduced in the previous section, constructing the NCHF model involves two steps: estimating f(·) and estimating φ, α, and β. If we consider Y_t as the response variable (the output of the SVM) and Y_{t−1}, ..., Y_{t−p}, X_t as the predictor variables (the inputs to the SVM), a nonlinear function f̂(·) can be well approximated by the SVM as the estimate of f(·). The remaining problem is how φ, α, and β can be estimated for the NCHF model. In practice, we never know the true values of σ_t². Therefore, data mining methods, such as SVM and regression trees, cannot be applied to estimate the relationship between σ_t² and ε_t, X_t. To estimate φ, α, and β, Maximum Likelihood Estimation (MLE), a statistical estimation method, is employed in our approach. The main idea of MLE is to firstly derive the likelihood function, which represents the probability that the historical data are observed given the NCHF model and a set of its parameters. The parameter values that maximize the likelihood function are then selected through an optimization process as the maximum likelihood estimates of φ, α, and β. Formally, let θ = (φ′, α′, β′)′ be the parameters to be estimated. Given the historical time series (y_1, y_2, ..., y_T), we denote the likelihood function of the NCHF model as

p_{Y_T,\ldots,Y_1}(y_T, \ldots, y_1; \theta).    (3.25)
Likelihood expression (3.25) is known as the unconditional likelihood function, which represents the probability that (y_1, y_2, ..., y_T) is observed given the NCHF model in expressions (3.18) – (3.22) and parameters θ. However, it is difficult to directly obtain expression (3.25) for the NCHF model. We therefore introduce the following lemma to decompose expression (3.25).
Lemma 3.1: Given a time series y_1, ..., y_T generated by the model expressions (3.18) – (3.22), we assume that p, q, and r are smaller than T and k = max(p, q, r). The following equation holds:

P_{Y_T,\ldots,Y_1}(y_T, \ldots, y_1; \theta) = P_{Y_k,\ldots,Y_1}(y_k, \ldots, y_1; \theta) \times \prod_{t=k+1}^{T} P(y_t \mid y_{t-1}, \ldots, y_{t-k}; \theta).    (3.26)
Lemma 3.1 follows from Bayes' theorem. According to Lemma 3.1, the unconditional likelihood (3.25) can be obtained by multiplying the unconditional joint distribution of the first k observations with the conditional distributions of the last T − k observations. For computational convenience, an
alternative likelihood function is employed instead of the unconditional likelihood function. By considering Y_1, ..., Y_k as deterministic values y_1, ..., y_k, the conditional likelihood function is

P_{Y_T,\ldots,Y_{k+1} \mid Y_k,\ldots,Y_1}(y_T, \ldots, y_{k+1} \mid y_k, \ldots, y_1; \theta) = \prod_{t=k+1}^{T} P(y_t \mid y_{t-1}, \ldots, y_{t-k}; \theta).    (3.27)
In Eq. (3.27), P(y_t | y_{t−1}, ..., y_{t−k}; θ) follows a normal distribution, because Y_{t−1}, ..., Y_{t−k} are treated as constants and the uncertainty is introduced only by ε_t, which is normally distributed. According to Eq. (3.18), we have

(Y_t \mid y_{t-1}, \ldots, y_{t-k}) \sim N\Big(f(y_{t-1}, \ldots, y_{t-p}, x_t) + \sum_{i=1}^{q} \phi_i e_{t-i},\ \sigma_t^2\Big),    (3.28)

where e_t is the estimate of ε_t. The conditional density function of Y_t can therefore be given as

P_{Y_t \mid Y_{t-1},\ldots,Y_{t-k}}(y_t \mid y_{t-1}, \ldots, y_{t-k}; \theta) = \frac{1}{\sqrt{2\pi\sigma_t^2}} \exp\Bigg[\frac{-\big(y_t - f(y_{t-1}, y_{t-2}, \ldots, y_{t-p}, x_t) - \sum_{i=1}^{q} \phi_i e_{t-i}\big)^2}{2\sigma_t^2}\Bigg].    (3.29)

Substituting Eq. (3.29) into Eq. (3.27), we reach the following theorem:
Theorem 3.1: Denote \mathcal{Y}_t = (y_t, ..., y_1, x_t, ..., x_1) as the observations of a time series {Y_t} and the relevant explanatory variables obtained until time t. The conditional log likelihood for the NCHF model is given as

L(\theta) = \sum_{t=k+1}^{T} \log f(y_t \mid x_t, \mathcal{Y}_{t-1}; \theta) = -\frac{T-k}{2}\log(2\pi) - \frac{1}{2}\sum_{t=k+1}^{T}\log(\sigma_t^2) - \sum_{t=k+1}^{T} \frac{\big(y_t - f(y_{t-1}, \ldots, y_{t-p}, x_t) - \sum_{i=1}^{q}\phi_i e_{t-i}\big)^2}{2\sigma_t^2}.    (3.30)
The MLE of θ can now be considered as the value that maximizes Eq. (3.30).
To calculate the log likelihood Eq. (3.30), the remaining problem is how to obtain σ_t and e_t. According to Eq. (3.18), we have

\varepsilon_t = y_t - f(y_{t-1}, y_{t-2}, \ldots, y_{t-p}, x_t) - \sum_{i=1}^{q} \phi_i \varepsilon_{t-i}.    (3.31)

Therefore, as the estimate of ε_t, e_t can be calculated as

e_t = y_t - \hat{f}(y_{t-1}, y_{t-2}, \ldots, y_{t-p}, x_t) - \sum_{i=1}^{q} \phi_i e_{t-i}.    (3.32)
Replacing the ε_t in Eq. (3.22) with e_t and substituting Eq. (3.32) into Eq. (3.22), the estimate of σ_t² can be given as

\hat{\sigma}_t^2 = \hat{\alpha}_0 + \sum_{i=1}^{r} \hat{\alpha}_i e_{t-i}^2 + \sum_{j=1}^{m} \hat{\beta}_j x_{t,j}.    (3.33)
Given the sample \mathcal{Y}_t = (y_t, ..., y_1, x_t, ..., x_1), the log likelihood Eq. (3.30) of the NCHF model can now be calculated via several steps. First, selecting an initial numerical value for θ = (φ′, α′, β′)′ and setting e_1, e_2, ..., e_k to 0, the sequence of conditional variances σ²_{k+1}, σ²_{k+2}, ..., σ²_T can be iteratively calculated with Eq. (3.33) and employed to calculate the conditional log likelihood Eq. (3.30). Second, an optimization algorithm should be performed to get the MLE of θ that maximizes Eq. (3.30). A simple optimization method is Gradient Ascent. To utilize this optimization method in our approach, we introduce the following lemma:
Lemma 3.2: Given a sample of a time series and explanatory variables (y_t, x_t), 1 ≤ t ≤ T, y_t is assumed to be generated with the model expressions (3.18) – (3.22). Denote

z_t(\theta) = \Big[1,\ \big(y_{t-1} - f(y_{t-2}, \ldots, y_{t-p-1}, x_{t-1}) - \sum_{i=1}^{q}\phi_i e_{t-1-i}\big)^2,\ \ldots,\ \big(y_{t-r} - f(y_{t-r-1}, \ldots, y_{t-r-p}, x_{t-r}) - \sum_{i=1}^{q}\phi_i e_{t-r-i}\big)^2\Big]',

so that z_t(φ) collects a constant and the r lagged squared residuals, i.e., ∂σ_t²/∂α. The derivative of the conditional log likelihood with respect to θ = (φ′, α′, β′)′ is thus given by

s_t(\theta) = \frac{\partial \log f(y_t \mid x_t, \mathcal{Y}_{t-1}; \theta)}{\partial \theta} = \frac{e_t^2 - \sigma_t^2}{2\sigma_t^4}\,\frac{\partial \sigma_t^2}{\partial \theta} + \frac{e_t}{\sigma_t^2}\,\frac{\partial \mu_t}{\partial \theta},    (3.34)

where μ_t denotes the conditional mean in Eq. (3.28), ∂σ_t²/∂α = z_t(φ), ∂σ_t²/∂β = x_t, and the φ-components of the derivatives involve the lagged residuals e_{t−1}, ..., e_{t−q}.
Consequently, based on Eq. (3.34) in Lemma 3.2, the gradient of the log likelihood function can be calculated analytically:

\nabla L(\theta) = \sum_{t=k+1}^{T} s_t(\theta).    (3.35)
Summarizing the discussions above, the main procedure of training the NCHF model is presented as follows.
Input: Training data (y_t, x_t), 1 ≤ t ≤ T; user-defined parameters p, q, r
Output: Forecasted time series ŷ_t, 1 ≤ t ≤ T; forecasted PI [l̂_t, û_t], 1 ≤ t ≤ T
Algorithm:
Train an SVM f̂(·) to approximate the function y_t = f(y_{t−1}, y_{t−2}, ..., y_{t−p}, x_t);
Randomly select initial values for the parameters θ, and set e_1, e_2, ..., e_k to 0;
Do
  Set the step length len, take a step of length len in the direction of the gradient (3.35), and obtain the new values of θ;
  Estimate σ_t² and e_t for k + 1 ≤ t ≤ T according to Eqs. (3.32) – (3.33);
  Calculate the log likelihood Eq. (3.30);
  Compare the new value of Eq. (3.30) with the value obtained in the last iteration;
While (the optimization termination condition is not satisfied);
After the estimates of f(·), φ, α, and β are obtained, forecasting the PI of a time series {y_t} using the NCHF model is straightforward. The estimated mean of Y_t, which is also the forecasted value of Y_t, can be calculated with Eq. (3.23). The forecasted variance is obtained with Eq. (3.24). Finally, to forecast the PI of Y_t, we can employ Y_t^*, σ_t^* as the estimates of μ_t, σ_t, and use Eqs. (3.13) and (3.14) to obtain the forecasted lower and upper bounds l_t, u_t of the PI (Zhao et al., 2008).
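A minimal Python sketch of this two-step training procedure is given below for concreteness. It assumes scikit-learn's SVR as the SVM regressor and, for simplicity, a finite-difference gradient in place of the analytic score of Lemma 3.2; all names (residuals, cond_var, train_nchf) are illustrative and not part of the original implementation, y is a one-dimensional price array, and x is a matching matrix of explanatory variables.

import numpy as np
from sklearn.svm import SVR

def residuals(y, f_hat, phi):
    # e_t = y_t - f_hat_t - sum_i phi_i e_{t-i}, Eq. (3.32); e_1, ..., e_k start at 0
    e = np.zeros(len(y))
    for t in range(len(y)):
        ar = sum(phi[i] * e[t - 1 - i] for i in range(len(phi)) if t - 1 - i >= 0)
        e[t] = y[t] - f_hat[t] - ar
    return e

def cond_var(e, x, alpha, beta):
    # sigma_t^2 = alpha_0 + sum_i alpha_i e_{t-i}^2 + sum_j beta_j x_{t,j}, Eq. (3.33)
    r = len(alpha) - 1
    s2 = np.full(len(e), np.var(e) + 1e-8)
    for t in range(r, len(e)):
        s2[t] = alpha[0] + sum(alpha[i + 1] * e[t - 1 - i] ** 2 for i in range(r)) \
                + float(x[t] @ beta)
    return np.maximum(s2, 1e-8)                 # keep the variance positive

def log_lik(theta, y, f_hat, x, q, r):
    # conditional log likelihood, Eq. (3.30)
    phi, alpha, beta = theta[:q], theta[q:q + r + 1], theta[q + r + 1:]
    e = residuals(y, f_hat, phi)
    s2 = cond_var(e, x, alpha, beta)
    k = max(q, r)
    return float(np.sum(-0.5 * (np.log(2 * np.pi) + np.log(s2[k:]) + e[k:] ** 2 / s2[k:])))

def train_nchf(y, x, p=2, q=2, r=2, lr=1e-6, iters=100):
    # Step 1: fit an SVR as the estimate of f(.)
    X = np.array([np.r_[y[t - p:t][::-1], x[t]] for t in range(p, len(y))])
    svr = SVR(kernel="rbf").fit(X, y[p:])
    f_hat, yy, xx = svr.predict(X), y[p:], x[p:]
    # Step 2: gradient-ascent MLE for theta = (phi, alpha, beta)
    theta = np.r_[np.zeros(q), np.var(yy), np.zeros(r), np.zeros(xx.shape[1])]
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for i in range(len(theta)):             # finite-difference gradient
            d = np.zeros_like(theta); d[i] = 1e-5
            grad[i] = (log_lik(theta + d, yy, f_hat, xx, q, r)
                       - log_lik(theta - d, yy, f_hat, xx, q, r)) / 2e-5
        theta = theta + lr * grad               # take a step up the likelihood
    return svr, theta

After training, the SVR and the fitted φ, α, β give the forecasted mean and variance through Eqs. (3.23) – (3.24), from which the PI bounds follow.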
3.6 Data Mining based Power System Security Assessment

Along with the worldwide market deregulation, the security of the power system has become a severe challenge, because the power system is currently
operating under more stressed conditions and with far greater uncertainties than in the past. Recently, severe blackouts have been observed in the USA, the UK, Italy, and several other countries. Blackouts are catastrophes with serious long-term consequences for the national economy and population; security assessment has therefore attracted great attention from both academia and industry. To assess system security and effectively prevent blackouts, it is essential to predict which system components will become unstable, so that corresponding measures can be taken to fix these components or separate them from the network. In practice, predicting system instability is a highly difficult task for two reasons:
1) Feature Extraction
No mature theory is currently established to identify the relevant factors of instability in a large-scale power system. Because a typical power system usually involves tens of thousands of system variables, building a prediction model incorporating all system variables is computationally infeasible.
2) Fast Prediction
In real power systems, the instability of a component can usually trigger a series of failures of other components, finally causing a blackout. To interrupt this cascading failure process, accurate prediction of the next unstable component is essential so that measures can be taken to prevent it from becoming unstable. Unfortunately, after an unstable component is observed in a real system, existing simulation-based analysis tools need hours to identify potentially unstable components, while in practice a blackout can occur only several minutes thereafter. This characteristic of blackout prevention implies the need for a method that can give in-time prediction of instability.
In this section, we report a data mining based tool developed to meet the above two major challenges. Our method consists of two major stages. In the first stage, a novel pattern discovering algorithm is implemented to search for Local Correlation Network Patterns (LCNPs). To accelerate the search process, we take advantage of two important properties: the upward closure property of correlation patterns and the locality property of the power system. These two properties ensure that LCNPs can be efficiently mined from large-scale power network data. The LCNPs consist of important system variables that are statistically correlated with instability. The instability predictor can be constructed based on LCNPs; the challenge of feature extraction is therefore met in the first stage. Based on the LCNPs identified in the first stage, a series of classifiers are constructed in the second stage as the instability predictor. The classifiers employ a graph kernel to explicitly take into account the topological information of the power network, so as to achieve state-of-the-art performance. When an unstable component occurs, the proposed method can immediately predict the potentially unstable components, thus satisfying the
“fast-response” requirement of security assessment. Based on the above design, we implemented a prediction tool for power system instability. The developed tool is tested with the New England system, and promising results demonstrate its effectiveness (Zhao et al., 2007a).
3.6.1 Background

In this section, the background of the graph theory based correlation analysis method is presented.
1) Brief Introduction to Graph Theory
Theoretically, a power system can be modeled as an undirected graph.
Definition 3.7: An undirected graph (Diestel, 2006) is a pair (V, E), where V is a finite set of vertices and E ⊆ {e ⊆ V : |e| = 2} is a set of edges.
From a power engineering point of view, the vertices of a power system are its buses, and the edges of a power network are its branches. In practice, loads, generators, and branches can all become unstable; therefore, the proposed method should be able to predict the instability of all these components. In the following sections, it is necessary to calculate the distance between two components, which can be both buses and branches. We therefore give several definitions that are slightly different from standard graph theory. In a power network (V, E), a path is a component sequence C_1, ..., v_i, e_{i,i+1}, v_{i+1}, e_{i+1,i+2}, ..., C_k, where v_i ∈ V, e_{i,i+1} ∈ E. Note that, differently from standard graph theory, the ends C_1, C_k of a path can be both vertices and edges in a power network. Two components are said to be connected if there is at least one path between them. The length of a path with k components is defined as k − 1. The distance between two components is defined as the length of the shortest path connecting these components.
Each component in the power system has many system variables that may be correlated with instability. Which of these variables are relevant to instability remains unclear. A real power system can contain more than ten thousand buses and many more branches, thus having tens of thousands of system variables. Building a model for stability analysis based on these variables is therefore a nontrivial task.
2) Correlation Analysis
The proposed LCNP is based on correlation analysis (Tamhane and Dunlop, 2000), which is a well-established methodology in classical statistics. We briefly introduce the basic ideas of correlation analysis as follows. Consider two random variables X ∈ {x_1, x_2} and Y ∈ {y_1, y_2}. We say two events X = x_1, Y = y_1 are independent if and only if P(X = x_1, Y = y_1) = P(X = x_1)P(Y = y_1). If any of the four event pairs (x_1, y_1), (x_1, y_2), (x_2, y_1),
(x_2, y_2) is dependent, X and Y are said to be correlated. Similarly, if another random variable Z ∈ {z_1, z_2} is included, the events X = x_1, Y = y_1, Z = z_1 are considered 3-way independent if and only if P(X = x_1, Y = y_1, Z = z_1) = P(X = x_1)P(Y = y_1)P(Z = z_1). X, Y, and Z are correlated if any of the eight combinations of their values is dependent. The above definition of correlation can be further generalized to random variables with k possible values.
Definition 3.8: Consider m random variables X_1, X_2, ..., X_m, where the ith variable X_i has k_i possible values. X_1, X_2, ..., X_m are said to be correlated if any of the k_1 × k_2 × ... × k_m combinations of their values is dependent (Brin et al., 1997).
In this study, each X_i represents a system variable that may be correlated with instability. Because most of these variables are continuous, several discrete values are defined on each of the variables by a domain expert to describe the actions or status of this variable. For example, the values of voltage may be defined as {rise, drop, oscillate}.
The correlation of a set of random variables can be statistically tested with the chi-squared test (Tamhane and Dunlop, 2000). Assume m random variables X_1, X_2, ..., X_m, where the ith variable X_i has k_i possible values. Let V be the space {x_{1,1}, ..., x_{1,k_1}} × {x_{2,1}, ..., x_{2,k_2}} × ... × {x_{m,1}, ..., x_{m,k_m}}, and let T denote the training data consisting of n instances. We describe each instance of T as a value v = (v_1, v_2, ..., v_m) ∈ V. Let n(v) be the number of training instances having value v, and n(v_i) be the number of instances whose X_i = v_i ∈ {x_{i,1}, ..., x_{i,k_i}}. The null hypothesis of the chi-squared test is called the hypothesis of independence, which states that

H_0: p(v) = p(v_1)p(v_2)\ldots p(v_m) \quad \text{for any } v \in V.    (3.36)
The basic idea of the chi-squared test is to determine whether the real frequency of a value v significantly differs from its expectation. The chi-squared test is performed based on the assumption of independence. We thus denote the expectation of p(v_i) as \hat{p}(v_i) = n(v_i), 1 ≤ i ≤ k, and \hat{p}(v) = n \times \frac{p(v_1)}{n} \times \ldots \times \frac{p(v_m)}{n}. The chi-squared statistic can then be calculated as

\chi^2 = \sum_{i=1}^{m} \sum_{j=1}^{k_i} \frac{\big(n(x_{i,j}) - \hat{p}(x_{i,j})\big)^2}{\hat{p}(x_{i,j})}.    (3.37)
It is proven that the chi-squared statistic Eq. (3.37) follows a chi-squared distribution (Tamhane and Dunlop, 2000). Therefore, at confidence level α (Tamhane and Dunlop, 2000), we can reject the null hypothesis and conclude that X_1, X_2, ..., X_m are correlated if

\chi^2 > \chi^2_{(k_1-1)(k_2-1)\ldots(k_m-1),\,\alpha}.    (3.38)
A set of variables X_1, X_2, ..., X_m that satisfies Expression (3.38) is called a correlation pattern. Theoretically, a huge number of correlation patterns can be mined if m is large. Two important properties are therefore introduced in the following sections to restrict the pattern space and identify the correlation patterns that are most interesting and relevant to instability prediction.
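As a concrete illustration of the test in Eqs. (3.36) – (3.38), the Python sketch below checks whether a single discretized system variable (a voltage coded as rise/drop/oscillate) is correlated with the stability status; the data are synthetic and all names are illustrative. SciPy's chi2_contingency performs the chi-squared test of independence on the table of counts.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# synthetic discretized observations: voltage action vs. stability status
voltage = rng.choice(["rise", "drop", "oscillate"], size=1000)
status = np.where((voltage == "oscillate") & (rng.random(1000) < 0.6),
                  "unstable", "stable")      # oscillation tends to precede instability

# build the contingency table of counts n(v) over all value combinations
values_v = ["rise", "drop", "oscillate"]
values_s = ["stable", "unstable"]
table = np.array([[np.sum((voltage == v) & (status == s)) for s in values_s]
                  for v in values_v])

chi2, p_value, dof, expected = chi2_contingency(table)
if p_value < 0.05:                           # reject H0 of independence
    print(f"correlation pattern found: chi2 = {chi2:.1f}, p = {p_value:.1e}")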
3.6.2 Network Pattern Mining and Instability Prediction

Network patterns include many important features correlated with system stability conditions. However, the number of features in a realistic large-scale power system can be huge, making it too complex and even computationally impossible to consider all these features in real-time analysis. In this section, methods to extract useful features so as to enable real-time prediction of the system stability condition are presented.
1) Intuition
The intuitions behind the research method involve two stages (Zhao et al., 2007a).
Stage 1 — Feature extraction
We have two important questions to answer. What kinds of factors should be determined as relevant to instability? How can these factors be efficiently mined from more than ten thousand system variables? Correlation analysis is selected because it is well established in statistics and has many successful applications. Furthermore, we prefer a correlation measure that is user-independent. The chi-squared statistic is selected because there is no need to choose ad hoc values of user-defined parameters, such as confidence and support. Mining correlation patterns from a power system is challenging because of its complexity. Two properties, the upward closure of correlation patterns and the locality of the power system, are introduced to enable us to search only a small proportion of the entire pattern space.
Stage 2 — Instability prediction
In this study, a crucial issue to consider is how to take into account the topological structure of the power system. Existing research (Deshpande et al., 2003) shows that considering the graph structure may significantly improve the performance of graph classifiers. Therefore, a kernel based method, which can explicitly model the network structure, is selected in the proposed method. The proposed method relies on the assumption that two linked vertices are likely to have similar class labels. This assumption, which is used to design a regularization condition, will be explored in the following sections.
2) Problem Setting
Consider a power system (V, E). We assume each bus and branch can be described by a set of system variables X ∈ R^d. Note that X can be different for different buses and branches. Suppose we observe the training data T with n instances. Each instance consists of the system variables X and class labels y of every network component (bus or branch). The problem of instability prediction can be separated into the following two sub-problems:
• Given a system (V, E) and training data T, determine the system variables correlated with the instability of each network component.
• Given a system (V, E) and training data T, train classifiers based on the system variables identified in stage 1. Then, for each future instance whose stability status is unknown, use the classifiers to predict which components will become unstable.
3) Mining Local Correlation Network Patterns
A large-scale power system contains more than ten thousand buses and far more system variables. For a given component, it is impossible to test every system variable in the network. To restrict the search space, two important properties are utilized in the proposed method (Zhao et al., 2007a).
The first property is that correlation patterns are upward closed, which can be formally stated as follows:
Proposition 3.1: Given m random variables X_1, X_2, ..., X_m and corresponding training data T, suppose (X_{i1}, X_{i2}, ..., X_{ik}) is a correlation pattern defined on X_1, X_2, ..., X_m. Then any superset of (X_{i1}, X_{i2}, ..., X_{ik}) defined on X_1, X_2, ..., X_m is also a correlation pattern.
The proof of Proposition 3.1 can be found in Brin et al. (1997). According to Proposition 3.1, if a set of variables is determined to be a correlation pattern, we no longer have to test any of its supersets, because they are all correlation patterns. Therefore, only the minimal correlation patterns should be mined. What we are searching for is essentially the border between correlated and uncorrelated variable sets.
Definition 3.9: Given m random variables X_1, X_2, ..., X_m and corresponding training data T, suppose (X_{i1}, X_{i2}, ..., X_{ik}) is a correlation pattern defined on X_1, X_2, ..., X_m. (X_{i1}, X_{i2}, ..., X_{ik}) is a minimal correlation pattern if and only if none of its subsets is a correlation pattern.
The second property comes from power system theory. Intuitively, in a power system, a component can only influence another component via its neighbouring components. As illustrated in Fig. 3.9(a), the system variables of Bus 1 can only influence Bus 2 directly, and then influence Bus 4 indirectly. In Fig. 3.9(b), when the system is separated into two electrical islands, Bus 1 is not correlated with the instability of Bus 4, because Bus 1 is not connected to any component that can influence Bus 4.
Proposition 3.2: Given two components C_1 and C_2 in a power system, the system variables of C_1 can be correlated with the instability of C_2 only if
Fig. 3.9. Illustration of locality property in a power system
C_1 connects to another component C_3 whose system variables are correlated with the instability of C_2.
Proposition 3.2 implies that, to predict the instability of a component, we need to consider only the local information (the information of its neighbouring components). Propositions 3.1 and 3.2 motivate one of the main ideas of this study: to predict instability, we only need to (1) search the local power network, and (2) mine minimal correlation patterns. These considerations lead to the problem of mining local correlation network patterns.
Definition 3.10: In a power network, a variable X is called a dth-order local variable of component C if the distance between its corresponding component C(X) and C is no greater than d.
Definition 3.11: Consider a variable set (X_1, X_2, ..., X_k) in a power system. (X_1, X_2, ..., X_k) is called a dth-order local correlation network pattern (LCNP) of a component C if (1) it is a minimal correlation pattern, and (2) X_i, 1 ≤ i ≤ k, are all dth-order local variables of C.
Intuitively, mining the LCNPs of C is equivalent to mining minimal correlation patterns only in the components that are close to C. This can be effective for instability prediction because Proposition 3.2 assures that the influence of the components far from C can finally be observed on the neighbouring components of C. Propositions 3.1 and 3.2 give rise to an efficient algorithm for mining LCNPs. The algorithm is conceptually illustrated as follows.
Algorithm: Mining LCNP
Input: Significance level α, order of LCNP d, power network (V, E), target component C, and training data T.
Output: A set of LCNPs from (V, E) and T.
Start:
VAR ← all 1st-order local variables; i ← 1;
Do
  For each variable X in VAR, add (X, S) to CAND;
  Do
    UNCOR ← ∅;
    Test each variable set in CAND with the chi-squared test; add the set to COR if the test statistic is significant, otherwise add it to UNCOR;
    Set CAND to be all sets P whose subsets of size |P| − 1 are all in UNCOR;
  While CAND is not ∅;
  i ← i + 1;
  VAR ← all ith-order local variables;
  Remove from VAR all variables X that satisfy: (i) no pattern in COR includes the variables of component C(X), or (ii) C(X) connects to C only through a component whose system variables are not included in any pattern in COR;
While i ≤ d;
Consider a target component C and denote its stability status as S ∈ {stable, unstable}. To mine the dth-order LCNPs, all the 1st-order local variables X_1, X_2, ..., X_m of C are firstly selected and stored in a list named VAR. The chi-squared test is then applied to determine whether the variable sets (X_1, S), (X_2, S), ..., (X_m, S) are correlation patterns. The correlated patterns and uncorrelated patterns are stored in two lists, COR and UNCOR, respectively. Afterwards, all sets P whose subsets of size |P| − 1 are all in UNCOR are selected and added to COR or UNCOR according to the result of the chi-squared test. This process continues until no new set can be added to COR or UNCOR.
All the 1st-order LCNPs are now stored in COR. We continue to mine 2nd-order LCNPs by adding all 2nd-order local variables of C to VAR. However, if the system variables of a component C′ are uncorrelated with C, all the components that connect to C only through C′ cannot be correlated with C according to Proposition 3.2 (see Fig. 3.10). These components, as well as C′, need not be considered in the process of mining 2nd-order LCNPs, and therefore their system variables are not added to VAR. Based on the new VAR, a procedure similar to that described above is repeated to identify all 2nd-order LCNPs. This process continues until all dth-order LCNPs are finally identified; a sketch of this level-wise search is given after Fig. 3.10.
4) Instability Predictor
Based on LCNPs, a kernel based classification method is proposed for instability prediction. In the proposed method, a classifier is constructed for each component that either (1) has previously become unstable in the historical data, or (2) is identified as an important component by a domain expert. Suppose l components are selected for constructing classifiers. For each component C_i, 1 ≤ i ≤ l, the system variables that are included in the LCNPs of C_i are selected to form its explanatory vector X_i ∈ R^{d_i}.
Fig. 3.10. Mining LCNP. In Fig. 3.10(a), all 1st-order local variables are firstly tested for correlation. In Fig. 3.10(b), all components that connect C only through an uncorrelated component are not tested.
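The Python sketch referenced above illustrates the level-wise search under simplifying assumptions: the chi-squared test of Section 3.6.1 is treated as a black-box predicate is_correlated, only one expansion order is shown, and the locality-based pruning of VAR is omitted. All names are illustrative.

from itertools import combinations

def mine_minimal_patterns(variables, target, is_correlated, max_size=3):
    # Level-wise search for minimal correlation patterns (Definition 3.9).
    # A candidate is tested only if all of its proper subsets are uncorrelated;
    # if any subset is already a pattern, the superset is a (non-minimal)
    # pattern by Proposition 3.1 and need not be tested.
    minimal, uncorrelated = [], set()
    # level 1: pairs (X, S)
    for x in variables:
        pattern = (x,)
        if is_correlated(pattern + (target,)):
            minimal.append(pattern)
        else:
            uncorrelated.add(pattern)
    # higher levels: grow only candidates whose subsets are all uncorrelated
    for size in range(2, max_size + 1):
        next_uncorrelated = set()
        for cand in combinations(variables, size):
            subsets_ok = all(sub in uncorrelated
                             for sub in combinations(cand, size - 1))
            if not subsets_ok:
                continue
            if is_correlated(cand + (target,)):
                minimal.append(cand)
            else:
                next_uncorrelated.add(cand)
        uncorrelated |= next_uncorrelated
        if not next_uncorrelated:
            break
    return minimal

In the full algorithm, this routine would be rerun as VAR is expanded order by order, with the components excluded by Proposition 3.2 never entering VAR.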
Given the training data T consisting of n training instances, each instance I_t = ((X_{t,1}, S_{t,1}), ..., (X_{t,l}, S_{t,l})), 1 ≤ t ≤ n, includes the explanatory vectors X_{t,i} and the corresponding class labels S_{t,i} ∈ {±1} for all l components. In the proposed method, the classifier of each component is designed to be a linear classifier

f_i(X) = \langle W_i, \phi(X) \rangle, \quad 1 \le i \le l,    (3.39)

where W_i is a weight vector and φ(X) is a feature map. If the network structure is not considered, constructing the instability predictor, which is essentially a cluster of l classifiers, can be formulated as a standard kernel learning problem:

W_i = \min_{W_i \in F} \frac{C}{n} \sum_{t=1}^{n} L[S_{t,i}, f_i(X_{t,i})] + \frac{\lambda}{2}\|W_i\|^2, \quad 1 \le i \le l.    (3.40)

Introduce the kernel function k(X_i, X_j) = ⟨φ(X_i), φ(X_j)⟩, and let the kernel Gram matrix be K_i = [k(X_{a,i}, X_{b,i})], a, b = 1, ..., n. Problem (3.40) can then be reformulated as

W_i = \min_{W_i \in F} \frac{C}{n} \sum_{t=1}^{n} L(S_{t,i}, f_i(X_{t,i})) + \frac{\lambda}{2} F_i^{\mathrm{T}} K_i^{-1} F_i, \quad 1 \le i \le l,    (3.41)

where F_i = [f_i(X_{1,i}), f_i(X_{2,i}), \ldots, f_i(X_{n,i})]^{\mathrm{T}}.
To take into account the network structure, a method known as the graph Laplacian (Zhang et al., 2006) is employed in the proposed instability predictor. Define N as the set of all pairs of neighboring components. The graph Laplacian term of a power network is defined as follows:

F^{\mathrm{T}} g F = \sum_{t=1}^{n} \sum_{(C_m, C_{m'}) \in N} [f_m(X_{t,m}) - f_{m'}(X_{t,m'})]^2.    (3.42)

In the graph learning setting, F^T gF should be minimized because the predicted class labels of two neighboring components are expected to be similar. Combining Eqs. (3.41) and (3.42), the instability predictor can be finally formulated as

W_i = \min_{W_i \in F} \sum_{i=1}^{l} \Big( \frac{C}{n} \sum_{t=1}^{n} L[S_{t,i}, f_i(X_{t,i})] + \frac{\lambda}{2} F_i^{\mathrm{T}} K_i^{-1} F_i \Big) + \frac{\lambda}{2} \sum_{t=1}^{n} \sum_{(C_m, C_{m'}) \in N} [f_m(X_{t,m}) - f_{m'}(X_{t,m'})]^2.    (3.43)
The implications of problem (3.43) are as follows: (1) a small L(S, f) implies that the classifiers have small errors on the training data; (2) a small F_i^T K^{-1} F_i indicates that f is approximately a linear function of its local features; (3) a small F^T gF implies that f is smooth on the network.
In practice, a power system can be considered statistically static, because any change of the network structure requires large investments and a long execution time, usually several months. Therefore, the instability predictor can be trained and maintained off-line. On the other hand, the trained instability predictor can give a fast response to instability queries because its classification time complexity is linear in the dimension of the explanatory vector X. Therefore, the proposed method well satisfies the requirements of instability prediction.
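To make the structure of problem (3.43) concrete, the sketch below minimizes it by plain gradient descent on the function values F under simplifying assumptions: a squared loss stands in for the original loss L, one RBF Gram matrix is built per component, and all names and shapes are illustrative. It shows how the three terms interact, not the original implementation.

import numpy as np

def rbf_gram(X, gamma=0.5):
    # K[a, b] = exp(-gamma * ||X_a - X_b||^2)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_graph_regularized(Xs, S, edges, C=1.0, lam=0.1, lam_g=0.1,
                          lr=0.01, iters=500):
    # Xs: list of (n, d_i) feature arrays, one per component
    # S:  (n, l) array of labels in {-1, +1}; edges: neighbour pairs (m, m')
    n, l = S.shape
    Ks = [rbf_gram(X) + 1e-6 * np.eye(n) for X in Xs]   # regularised Gram matrices
    Kinv = [np.linalg.inv(K) for K in Ks]
    F = np.zeros((n, l))                                # F[t, i] = f_i(X_{t,i})
    for _ in range(iters):
        G = 2 * C / n * (F - S)                         # gradient of the squared loss
        for i in range(l):
            G[:, i] += lam * (Kinv[i] @ F[:, i])        # smoothness in kernel space
        for (m, mp) in edges:                           # graph Laplacian term (3.42)
            diff = F[:, m] - F[:, mp]
            G[:, m] += lam_g * diff
            G[:, mp] -= lam_g * diff
        F -= lr * G
    return np.sign(F)                                   # predicted stability labels

The last loop is where the network topology enters: each edge pulls the predicted labels of its two end components toward each other, which is exactly the smoothness assumption behind the graph Laplacian regularizer.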
3.7 Case Studies

In this section, three case studies will be given to show the application of the data mining methods in price spike forecasting, interval price forecasting, and power system security assessment.
3.7.1 Case Study on Price Spike Forecasting

It is necessary to firstly define some measures to assess the case study results of price spike forecasting. The most popular measure of classification performance is the accuracy of the classifier (Han and Kamber, 2006):

classifier accuracy = (number of correctly classified vectors) / (number of vectors).    (3.44)
This measure provides a convenient indication of the prediction accuracy for many classification problems. In spike prediction problems, however, this measure is not very suitable because the data of our problem are seriously imbalanced. According to the numerical analysis, given that only about 1/70 of the input vectors are spikes, the classifier accuracy will be very high even if all spikes are misclassified. New measures are needed for evaluating the algorithms' ability to predict spikes.
Definition 3.12: Spike prediction accuracy.

spike prediction accuracy = (number of correctly predicted spikes) / (number of spikes).    (3.45)
Spike prediction accuracy is defined because the ability to correctly predict spikes is a major concern in the spike prediction problem. This measure provides an effective way to assess this ability.
Definition 3.13: Spike prediction confidence. Spike prediction confidence is a very important indicator of the confidence level of a prediction. Without a confidence level, the price spike prediction would only have very limited significance due to the large uncertainties and risks carried within the forecast. The spike prediction confidence is defined as

spike prediction confidence = (number of correctly predicted spikes) / (number of predicted spikes).    (3.46)
The classifier may misclassify some non-spikes as spikes. This definition is used to assess how often the classifier makes this kind of mistake. A good classifier should have both high spike prediction accuracy and high spike prediction confidence. Only when the spike prediction confidence is high is the spike prediction convincing.
Before the new framework can be applied, it is important to properly set the price threshold P_v, because the threshold can significantly influence the performance. As observed in the histogram of the price data from September 2003 to May 2004, the distribution of the RRP is very similar to a normal distribution. According to Eq. (3.1), the overall spike threshold for this QLD market data set can be calculated as $75/MWh.
Actually, the threshold is not fixed and should be different for different seasons. The means and standard deviations of the RRP in the three seasons are listed in Table 3.2.

Table 3.2. Means and standard deviations of three seasons and corresponding thresholds ($/MWh)

Season   Mean    Standard deviation   Threshold
Summer   23.66   27.29                78.24
Middle   17.93   17.03                51.99
Winter   29.97   21.91                73.79
An SVM with a radial basis kernel is used as the classifier to predict the occurrence of spikes. The radial basis kernel is chosen because it has the largest VC dimension. Basically, the VC dimension is the largest number of input vectors that can be correctly classified in all possible ways by a type of classifier; it is a measure of the learning ability of a type of classifier. Previous research has shown that, after model selection, an SVM with a radial basis kernel usually outperforms other popular kernels (Burges, 1998). When the width σ of a radial basis kernel is set to a small value, the VC dimension is nearly infinite (a detailed proof can be found in Burges, 1998). The training data include the RRP and other attributes as discussed earlier. The data from September 2003 to May 2004 are used as training data and those of June 2004 are used as testing data. The result obtained with the SVM is shown in Table 3.3.

Table 3.3. Accuracy of SVM classification on June 2004 data

Performance                   Value
Classifier accuracy           8595/8640 = 99.48%
Spike prediction accuracy     50/95 = 52.6316%
Spike prediction confidence   50/50 = 100%
Similarly, the data from September 2003 to May 2004 are chosen to train an SVM, and the January 2003 data are chosen as the test data. The result of the SVM is shown in Table 3.4.

Table 3.4. Accuracy of SVM classification on January 2003 data

Performance                   Value
Classifier accuracy           8537/8640 = 98.81%
Spike prediction accuracy     117/220 = 53.18%
Spike prediction confidence   117/117 = 100%
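A minimal sketch of how such results can be reproduced is shown below, using scikit-learn's SVC with an RBF kernel on a synthetic stand-in for the prepared market data. The class_weight="balanced" option is one common way of handling the heavy class imbalance and is an assumption here, not necessarily part of the original setup.

import numpy as np
from sklearn.svm import SVC

def spike_measures(y_true, y_pred):
    # classifier accuracy (3.44), spike prediction accuracy (3.45),
    # spike prediction confidence (3.46)
    accuracy = float(np.mean(y_true == y_pred))
    hits = np.sum((y_true == 1) & (y_pred == 1))
    spike_accuracy = hits / max(np.sum(y_true == 1), 1)
    confidence = hits / max(np.sum(y_pred == 1), 1)
    return accuracy, spike_accuracy, confidence

# synthetic stand-in for the prepared data (features, 0/1 spike labels)
rng = np.random.default_rng(1)
X = rng.normal(size=(8640, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 2.5).astype(int)    # rare positive class (~1/70)
split = 7000
clf = SVC(kernel="rbf", class_weight="balanced").fit(X[:split], y[:split])
print(spike_measures(y[split:], clf.predict(X[split:])))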
Tables 3.3 and 3.4 show that the spike occurrence prediction accuracy, Eq. (3.45), using the SVM is above 50%. This means that more than 50% of spikes can be predicted by the proposed method. Note that this accuracy is obtained with seriously insufficient spike data, and spikes are caused by many stochastic events which cannot be considered in the model; the result is therefore sufficiently good. Moreover, the spike prediction confidence, Eq. (3.46), of the
SVM is 100%. Compared with the result of the probability classifier, the SVM has an obvious advantage: it does not misclassify non-spikes, which means that the predicted spikes given by the SVM are 100% confident.
SVMs with two other popular kernels, the polynomial and sigmoid kernels (Burges, 1998), are also trained with the data from September 2003 to May 2004 and tested with the data of June 2004, as shown in Tables 3.5 and 3.6. The polynomial kernel performs much worse than the RBF kernel, while the sigmoid kernel performs close to the RBF kernel. Their overall performances are no better than that of the RBF kernel.
To further study the performance of the SVM, another SVM classifier is trained with the data from November 2003 to February 2004. The model is then tested on 12 consecutive months, from June 2004 to May 2005. Over these 12 months, the average confidence is over 80%. This degradation of performance occurs because there are only a few spikes in the middle seasons. In peak seasons (winter and summer), the confidence of the SVM is still over 90%. Therefore, we can still conclude that the SVM is highly reliable, especially in the peak months.

Table 3.5. SVM with polynomial kernel on June 2004 data

Performance                   Value
Classifier accuracy           8550/8640 = 98.9583%
Spike prediction accuracy     5/95 = 5.2632%
Spike prediction confidence   5/5 = 100%

Table 3.6. SVM with sigmoid kernel on June 2004 data

Performance                   Value
Classifier accuracy           8588/8640 = 99.3981%
Spike prediction accuracy     43/95 = 45.2632%
Spike prediction confidence   43/43 = 100%
Another phenomenon observed in the experiments is that confidence trades off against accuracy. In the middle season, with a lower threshold, the confidence dropped while the accuracy slightly increased. On the other hand, in peak seasons the performance of the SVM is similar to the results shown in Tables 3.3 and 3.4. In the experiments, we also observe that the proposed technique may miss some spikes in the middle season, which has far fewer spikes. These missed spikes are highly important, and novel approaches should be developed to detect them in our future research.
The same historical data are used to test the proposed probability classifier. According to the computational results of the classifier, the n(i, j) and s(i, j) (the number of all input vectors and the number of spikes when an attribute takes a specific value or falls in a specific range) of two key attributes are given in Tables 3.7 and 3.8. It can be clearly observed that most spikes occur when I_ex(t) = 1 (Table 3.7) and during the daytime (10:00 – 20:00, see Table 3.8).
Table 3.7. Distribution of spikes

Existence index I_ex(t)   0       1
All time points           62867   7405
Spikes                    44      1115
Table 3.8. Distribution of spikes in different time ranges of a day

Time            n(i, j)   s(i, j)
0:05 – 2:00     5856      0
2:05 – 4:00     5856      4
4:05 – 6:00     5856      0
6:05 – 8:00     5856      12
8:05 – 10:00    5856      11
10:05 – 12:00   5856      102
12:05 – 14:00   5856      340
14:05 – 16:00   5856      359
16:05 – 18:00   5856      222
18:05 – 20:00   5856      104
20:05 – 22:00   5856      4
22:05 – 24:00   5856      1
Similar to the SVM, the probability classifier is trained with the data from September 2003 to May 2004. The accuracy of the probability classifier on June 2004 is shown in Table 3.9. It can be seen that although the spike prediction accuracy of the probability classifier is higher than that of the SVM, its spike prediction confidence is lower. The result of the probability classifier can be combined with the SVM to give a better spike occurrence prediction. Because the predicted spikes given by the SVM are 100% confident, all of them can be accepted as spikes. We can then select the spikes that are predicted by the probability classifier but not by the SVM as candidate spikes. Together with their confidence levels given by the probability classifier, further analysis can be done on these candidate spikes to give market participants more information. Their confidence levels can also help market participants judge whether spikes will really occur at these time points (Zhao et al., 2007b, d).

Table 3.9. Accuracy of probability classifier on June 2004 data

Performance                   Value
Classifier accuracy           8552/8640 = 98.98%
Spike prediction accuracy     60/95 = 63.16%
Spike prediction confidence   60/(60+53) = 53.1%
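A small sketch of this combination strategy is shown below: spikes predicted by the SVM are accepted outright, while spikes flagged only by the probability classifier are kept as candidates together with their confidence levels. All names and example values are illustrative.

def combine_predictions(svm_spikes, prob_spikes, prob_confidence):
    # svm_spikes, prob_spikes: sets of time points predicted as spikes
    # prob_confidence: dict mapping a time point to the probability
    # classifier's confidence for it
    confirmed = set(svm_spikes)                      # SVM predictions: 100% confident
    candidates = {t: prob_confidence[t]
                  for t in prob_spikes - confirmed}  # flagged only by prob. classifier
    return confirmed, candidates

confirmed, candidates = combine_predictions(
    svm_spikes={10, 42}, prob_spikes={10, 42, 57, 80},
    prob_confidence={10: 0.9, 42: 0.8, 57: 0.6, 80: 0.55})
# candidates == {57: 0.6, 80: 0.55}: passed on for further analysis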
3.7.2 Case Study on Interval Price Forecasting

We firstly apply the LM test to study whether the electricity price is heteroscedastic. The LM test is the standard hypothesis test for heteroscedastic effects in a time series. The LM test gives two measures, the P-value and the LM statistic, which are indicators of heteroscedasticity. In particular, the smaller the P-value, the stronger the heteroscedastic effects present in the time series. Moreover, we can also conclude that the time series is heteroscedastic
when the LM statistic is greater than the critical value (Zhao et al., 2007c). The LM test is performed on five price datasets from the Australian NEM, and the results obtained are shown in Table 3.10. As illustrated in Table 3.10, setting the significance level to 0.05 and q to 20, the P-value of the LM test is zero in all five months. Moreover, the LM statistics are significantly greater than the critical value of the LM test on all occasions. These two facts strongly indicate that significant heteroscedasticity exists in the electricity price. In the test, q = 20 means that the variance σ_t² is correlated with its lagged values up to at least σ²_{t−20}. In other words, the electricity price 20 time units before time t can still influence the uncertainty of the price at time t.

Table 3.10. Results of LM test

Dataset   Season   P-value   LM statistic   Critical value   Order (q)   Significance level
Mar. 05   Middle   0         2662           31.41            20          5%
Apr. 05   Middle   0         1487           31.41            20          5%
May 05    Middle   0         2552           31.41            20          5%
Aug. 04   Winter   0         1225           31.41            20          5%
Dec. 04   Summer   0         1047           31.41            20          5%
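In Python, Engle's LM test is available as het_arch in statsmodels; the sketch below applies it with q = 20 lags to a synthetic ARCH(1) series standing in for the price data.

import numpy as np
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(2)
e = np.zeros(2000)
for t in range(1, 2000):                     # simulate an ARCH(1) series
    sigma2 = 0.2 + 0.7 * e[t - 1] ** 2
    e[t] = rng.normal() * np.sqrt(sigma2)

lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(e, nlags=20)
print(f"LM statistic = {lm_stat:.1f}, P-value = {lm_pvalue:.2e}")
# a P-value below the significance level rejects homoscedasticity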
To validate that the NCHF is able to handle the nonlinear pattern of the electricity price, we apply both NCHF and ARIMA to the price datasets of May 2005, August 2004, and December 2004, and compare their performances. The data of 1 – 10 May 2005, 1 – 10 August 2004, and 1 – 10 December 2004 are used as the training data for both NCHF and ARIMA. The rest of the data are used as the test data. The experimental results are shown in Fig. 3.11. As observed, NCHF significantly outperforms ARIMA in all three months. The averaged MAPE of NCHF in these three months is 6.32%, while the averaged MAPE of ARIMA is 14.37%. Moreover, we can clearly observe that the performance of ARIMA collapses when two spikes occur in December 2004. This is because ARIMA is a linear model and therefore cannot capture the nonlinear pattern of the electricity price in a volatile period. On the other hand, NCHF performs excellently given these spikes. This is strong proof of our claim that NCHF is able to accurately model the nonlinear pattern of the electricity price series.
The major objective of NCHF is to forecast the prediction interval. To prove that NCHF is effective in interval forecasting, we compare NCHF with GARCH on realistic NEM datasets. The GARCH model is a well-established heteroscedastic time series model. It has proven effective in modeling the time-changing variance and forecasting the PI of financial time series. The major drawback of GARCH is that it is also a linear model. We compare NCHF with GARCH to verify that NCHF is superior in forecasting the PI of nonlinear time series. Similarly, we apply both the NCHF and GARCH models to the price datasets of May 2005, August 2004, and December 2004. The data of 1 – 10
May 2005, 1 – 10 August 2004, and 1 – 10 December 2004 are still used as the training data for both models. The rest of the data are the test data. The expected confidence α, the empirical confidence α̂, and the ACE are shown in Table 3.11. As seen in Table 3.11, the NCHF consistently outperforms the GARCH on all occasions, regardless of the expected confidence level. The ACE of the NCHF is consistently within 4% and usually around 1% for all datasets, which indicates that the PI calculated by the NCHF is highly accurate. On the contrary, the performance of GARCH is far from satisfactory: its ACE is usually above 20%. These results clearly demonstrate that the NCHF is superior in handling heteroscedasticity and forecasting the PI of the electricity price. This superiority comes from the NCHF's capability of modeling both the heteroscedasticity and the nonlinearity of a time series.

Table 3.11. Performances of NCHF and GARCH

Model   Data        α     α̂        ACE
NCHF    May 2005    95%   95.20%   0.20%
NCHF    May 2005    90%   89.10%   0.90%
NCHF    Aug. 2004   95%   96.28%   1.28%
NCHF    Aug. 2004   90%   93.16%   3.16%
NCHF    Dec. 2004   95%   96.77%   1.77%
NCHF    Dec. 2004   90%   91.49%   1.49%
GARCH   May 2005    95%   74.82%   20.18%
GARCH   May 2005    90%   69.08%   20.92%
GARCH   Aug. 2004   95%   74.13%   20.87%
GARCH   Aug. 2004   90%   62.72%   27.28%
GARCH   Dec. 2004   95%   77.21%   17.79%
GARCH   Dec. 2004   90%   63.20%   26.80%
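The empirical confidence α̂ and the ACE in Table 3.11 can be computed from any interval forecaster's output as sketched below, assuming normal-quantile PI bounds built from the forecasted mean and variance (the role played by Eqs. (3.13) – (3.14) together with Eqs. (3.23) – (3.24)); ACE is taken here as the absolute difference between α̂ and α, consistent with the tabulated values.

import numpy as np
from scipy.stats import norm

def ace(y_true, mean, var, alpha=0.95):
    z = norm.ppf(0.5 + alpha / 2)            # e.g. 1.96 for a 95% interval
    lower = mean - z * np.sqrt(var)
    upper = mean + z * np.sqrt(var)
    alpha_hat = np.mean((y_true >= lower) & (y_true <= upper))  # empirical confidence
    return abs(alpha_hat - alpha)            # average coverage error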
The 95% level PIs given by both the GARCH and NCHF models are illustrated in Fig. 3.13. As clearly shown, in all three months the PIs given by the NCHF contain the true values of the electricity price almost perfectly, while GARCH performs much worse. It should be noted that GARCH fails to predict the two spikes in December 2004. On the contrary, these two spikes fall well within the PIs forecasted by the NCHF. This indicates that the NCHF is reliable even in the presence of large price volatility. This characteristic is very important for market participants. In periods of large price volatility, the uncertainty involved in the price is greater and increases the risks of market participants. Market participants are therefore more interested in estimating the uncertainty for decision making. NCHF provides an excellent tool for market participants to analyze the uncertainty of the price given large volatility.
The NCHF model has three user-defined parameters, p, q, and r. To further investigate the performance of the NCHF, another experiment is performed. In the experiment, the expected confidence is set to α = 95%. The price data of 1 – 10 May 2005 and 11 – 31 May 2005 are employed as the training and testing data, respectively. The ACE of the forecasting results against different
values of p, q, and r is plotted in Fig. 3.12.
Fig. 3.12. p, q, and r vs ACE in May 2005
According to Fig. 3.12, the performance of the NCHF is not significantly influenced by p. Changing p from 1 to 8, the ACE stays within 1%, which means that the ACE is not sensitive to the lagged values of Y_t in f(·). Different from p, the ACE rapidly jumps to 80% when a large q is set. This finding indicates that only the noises of the time points close to time t are correlated with Y_t. Incorporating more lagged values of ε_t can cause over-fitting and thus significantly degrade the performance of the NCHF. Similar to p, the ACE is also insensitive to r according to Fig. 3.12. However, the NCHF achieves a better performance when a small r is set. Based on the above observations, we suggest that small q and r, no greater than 4, should usually be selected. Careful selection of q is especially important for obtaining a good
performance. A thorough parameter selection may be performed to search for the best values of p, q, and r.
3.7.3 Case Study on Security Assessment

The proposed methods are implemented and tested with the IEEE 39-bus system, which is illustrated in Fig. 3.14.
Fig. 3.14. Simplified network structure of the New England system
In the experiment, the system data include more than 300 000 instances, each of which consists of all the system variables in the New England test system. The LCNPs of every bus in the system are mined first. Then, for each bus, the 10 LCNPs with the greatest χ² statistic values are selected as the classification features of the instability predictor. 200 000 instances are randomly selected as training data, while the other 100 000 remain for testing. We report some of our results as follows.
Table 3.12 shows the most significant LCNPs of 6 important buses in the New England system, which are identified by the domain expert. We denote S(i) as the instability of component i, and VAR(i) as a system variable of component i. Some interesting observations can be made from Table 3.12. For some buses, such as Bus 3 and Bus 4, only 2nd-order LCNPs are mined. This implies
that the local system variables of Bus 3 and Bus 4 directly correlate with their instability. On the other hand, most LCNPs of Bus 7 and Bus 8 have an order of 3. This means that most 2nd-order correlation patterns of Bus 7 and Bus 8 are insignificant, because an LCNP is a minimal correlation pattern. These observations clearly demonstrate the necessity of mining LCNPs. If we only calculated the correlations between instability and every system variable independently, we would miss the highly correlated variables that can only be identified by higher order LCNPs (Zhao et al., 2007a).

Table 3.12. Local correlation network patterns of the New England system

Bus 3                  χ² value
V(3),S(3)              4506.5
V(25),S(3)             4489.2
V(2),S(3)              4485.6
V(18),S(3)             4250.8
Q(18),S(3)             4093.1
Q(25),S(3)             4057
V(17),S(3)             4007
P(3),S(3)              3759
BSFREQ(5),S(3)         3633.6
P(18),S(3)             3614.6

Bus 4                  χ² value
V(3),S(4)              3317.9
V(2),S(4)              3316.3
V(6),S(4)              3169.8
Q(18),S(4)             3119.4
V(4),S(4)              2971.6
V(18),S(4)             2929.3
V(13),S(4)             2909.9
BSFREQ(8),S(4)         2777.6
V(14),S(4)             2753.4
P(1),Q(1),S(4)         1575.3

Bus 7                  χ² value
P(9),P(11),S(7)        10285
Q(9),P(11),S(7)        7037.2
P(9),Q(11),S(7)        6742.8
Q(7),BSFREQ(11),S(7)   3058.9
P(9),Q(9),S(7)         1561.7
Q(9),Q(11),S(7)        1055
P(11),Q(11),S(7)       1047
P(6),S(7)              132
Q(6),S(7)              130.3
P(9),BSFREQ(11),S(7)   127.9

Bus 8                  χ² value
P(14),Q(14),S(8)       11348
Q(11),Q(14),S(8)       10477
P(11),P(14),S(8)       10360
Q(9),Q(14),S(8)        9377.9
P(9),P(14),S(8)        9238.3
Q(6),Q(14),S(8)        8915.2
P(6),Q(9),S(8)         6149.2
Q(1),P(6),S(8)         5411.1
P(1),Q(14),S(8)        4610.5
P(1),Q(11),S(8)        4264.3

Bus 12                 χ² value
V(6),S(12)             3111.2
Q(12),S(12)            3090.7
P(12),S(12)            3064
V(12),S(12)            3058
V(11),S(12)            3042.9
V(10),S(12)            2914.6
V(11),S(12)            2905.1
V(14),S(12)            2787.1
BSFREQ(6),S(12)        2701.7
BSFREQ(11),S(12)       2672

Bus 15                 χ² value
V(19),S(15)            3454.1
V(17),S(15)            3446.6
V(13),S(15)            3426.8
V(16),S(15)            3399.2
V(15),S(15)            3392.8
V(14),S(15)            3360.1
BSFREQ(4),S(15)        3244.1
BSFREQ(14),S(15)       3241.2
BSFREQ(4),S(15)        3204.5
V(4),S(15)             3127.6
A surprising discovery is that the system variables of a bus may not be correlated with its own instability. For example, no variables of Bus 8 are correlated with its instability. This justifies constructing the instability predictor based on LCNPs rather than on a component's own system variables. Another use of LCNPs is to locate components with low influence. For example, Bus 5 is directly connected with Bus 4 and Bus 8; however, none of its variables are correlated with Bus 4 or Bus 8. Bus 5 can then be ignored in the later stability analysis.
To help understand the correlation patterns, some correlation rules can be further generated from the LCNPs. For instance, we can derive a correlation
rule from the LCNP {P(9), P(11), S(7)} as follows:
P(9) is converging ∧ P(9) stops oscillating ∧ P(11) is converging ∧ P(11) stops oscillating → Bus 7 is becoming unstable (96.7%).
This means that the probability that Bus 7 will become unstable given the conditions on the left-hand side of the arrow is 96.7%. Such correlation rules can be highly useful for the system operator to determine potentially unstable components and take necessary actions. Another way to make the LCNPs more understandable is to draw a histogram. For example, the histogram of {P(9), P(11), S(7)} is given in Fig. 3.15.
Fig. 3.15. The histogram of an LCNP
The proposed predictor is constructed based on the LCNPs mined in the first stage. As mentioned in the previous section, 200 000 instances are randomly selected as training data. We denote by TP the instances that are correctly classified as unstable, and by FP the instances that are incorrectly classified as unstable. The precision and recall of the proposed instability predictor can be defined as

precision = TP / (TP + FP),    (3.47)
recall = TP / (number of unstable instances).    (3.48)
We also train an SVM for each bus in the system. The SVM of Bus i is constructed entirely from the variables of Bus i. The precision and recall of the proposed method and the SVM are reported in Table 3.13. Obviously, the proposed instability predictor outperforms the SVM in terms of both precision and recall. This clearly demonstrates the effectiveness of the proposed method.
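These two measures can be checked directly, or with scikit-learn, given 0/1 labels where 1 denotes an unstable instance; the toy vectors below are purely illustrative.

import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))
fp = np.sum((y_true == 0) & (y_pred == 1))
print(tp / (tp + fp), precision_score(y_true, y_pred))         # Eq. (3.47): 0.75
print(tp / np.sum(y_true == 1), recall_score(y_true, y_pred))  # Eq. (3.48): 0.75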
Table 3.13. Precision and recall of the instability predictor

          Proposed instability predictor      SVM
Bus       Precision     Recall                Precision   Recall
Bus 3     89.67%        91.23%                79.6%       78.5%
Bus 4     90.2%         93.1%                 79.2%       79.9%
Bus 7     88.9%         94.09%                79.6%       77.9%
Bus 8     91.2%         92.8%                 80.8%       76.7%
Bus 12    90.4%         93%                   77.5%       82.7%
Average   90.07%        92.84%                79.34%      79.14%
3.8 Summary

In this chapter, we discussed the applications of data mining in the power industry. The fundamentals of data mining were first introduced, and the main steps and important research directions of data mining were discussed. We then introduced the main concepts of three data mining techniques: correlation analysis, classification, and regression. Some existing data mining software systems were also introduced.
We then discussed some important data mining applications in the power industry. The first problem we discussed is applying data mining to electricity price forecasting. A framework for price spike forecasting was introduced, in which two data mining algorithms, SVM and the probability classifier, are employed to forecast the occurrence of spikes. We also introduced a model that can forecast the prediction intervals of electricity prices. The model incorporates SVM as a nonlinear function estimator, and a maximum likelihood estimator is developed to estimate the model parameters. The second application is using data mining techniques for power system security assessment. Considering the characteristics of power systems, a graph mining based algorithm is developed to detect the system variables that are relevant to system stability. The detected system variables are then used to construct a predictor for system instability. Comprehensive case studies are presented to validate the proposed methods. The results demonstrate the usefulness of data mining in power engineering problems.
References

Agrawal R, Imielinski T, Swami A (1993) Mining association rules between sets of items in large databases. Proceedings of ACM SIGMOD Conference 1993, pp 207 – 216
Bollerslev T (1986) Generalized autoregressive conditional heteroscedasticity. Journal of Econometrics 31: 307 – 327
Borenstein S (2000) Understanding competitive pricing and market power in wholesale electricity markets. The Electricity Journal 13(6): 49 – 57
Borenstein S, Bushnell J (2000) Electricity restructuring: deregulation or reregulation? PWP-074. University of California Energy Institute. Available via DIALOG. http://www.ucei.berkeley.edu/ucei. Accessed 1 April 2009
Brin S, Motwani R, Silverstein C (1997) Beyond market baskets: generalizing association rules to correlations. Proceedings of the 1997 ACM SIGMOD Conference, Tucson, 13 – 15 May 1997
Burges JC (1998) A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2(2): 121 – 167
Cortes C, Vapnik V (1995) Support vector networks. Machine Learning 20: 273 – 297
Deshpande M, Kuramochi M, Karypis G (2003) Frequent sub-structure based approaches for classifying chemical compounds. Proceedings of IEEE International Conference on Data Mining, Melbourne, 19 – 22 November 2003
Diestel R (2006) Graph theory. Springer, Heidelberg
Enders W (2004) Applied econometric time series. Wiley, Hoboken
Engle RF (1982) Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50(4): 987 – 1008
Fayyad U, Piatetsky-Shapiro G, Smyth P (1996) From data mining to knowledge discovery in databases. AI Magazine 17(3): 37 – 54
Garcia RC, Contreras J, Akkeren M et al (2005) A GARCH forecasting model to predict day-ahead electricity prices. IEEE Trans Power Syst 20(2): 867 – 874
Guan X, Ho YC, Pepyne D (2001) Gaming and price spikes in electric power markets. IEEE Trans Power Syst 16(3): 402 – 408
Han JW, Kamber M (2006) Data mining: concepts and techniques, 2nd edn. Morgan Kaufmann, San Francisco
Lu X, Dong ZY, Li X (2005) Electricity market price spike forecast with data mining techniques. Electr Power Syst Res 73(1): 19 – 29
Mount T, Oh H (2004) On the first price spike in summer. Proceedings of the 37th Annual Hawaii International Conference on System Science, Big Island, Hawaii, 5 – 8 January 2004
Nogales FJ, Conejo AJ (2006) Electricity price forecasting through transfer function models. The Journal of the Operational Research Society 57(4): 350
Papadopoulos G, Edwards PJ, Murray AF (2001) Confidence estimation methods for neural networks: a practical comparison. IEEE Trans on Neural Networks 12(6): 1278 – 1287
Tamhane AC, Dunlop DD (2000) Statistics and data analysis. Prentice Hall, Upper Saddle River
Vapnik V (1995) The nature of statistical learning theory. Springer, New York
Zhang T, Popescul A, Dom B (2006) Linear prediction models with graph regularization for web-page categorization. Proceedings of the 12th ACM SIGKDD Conference, Philadelphia, 20 – 23 August 2006
Zhao JH, Dong ZY, Zhang P (2007a) Mining complex power networks for blackout prevention. Proceedings of the 13th ACM SIGKDD Conference, San Jose, 12 – 15 August 2007
Zhao JH, Dong ZY, Li X (2007b) Electricity market price spike forecasting and decision making. IET Gen Trans Distrib 1(4): 647 – 654
Zhao JH, Dong ZY, Li X (2007c) An improved naive bayesian classifier with advanced discretisation method. Int J Intell Syst Technol Appl 3(3 – 4): 241 – 256
Zhao JH, Dong ZY, Li X et al (2007d) A framework for electricity price spike analysis with advanced data mining methods. IEEE Trans Power Syst 22(1): 376 – 385
Zhao JH, Dong ZY, Xu Z et al (2008) A statistical approach for interval forecasting of the electricity price. IEEE Trans Power Syst 23(2): 267 – 276
4 Grid Computing

Mohsin Ali, Ke Meng, Zhaoyang Dong, and Pei Zhang
4.1 Introduction

Power systems have evolved from isolated plants into interconnected systems with interregional and international connections throughout the world since the 1990s (Das, 2002). Due to constant expansion and deregulation in many countries, future power systems will involve many participants, including generator owners and operators, generator maintenance providers, generation aggregators, transmission and distribution network operators, load managers, energy market makers, supplier companies, metering companies, energy customers, regulators, and governments (Irving et al., 2004). All these participants need an integrated and fair electricity environment in which to either compete or cooperate with each other in operation and maintenance, with secure resource sharing. Moreover, it has been widely recognised that Energy Management Systems (EMS) are unable to provide satisfactory services to meet the increasing requirements of high performance computing as well as data resource sharing (Chen et al., 2004). Although many efforts have been made to enhance the computational power of EMS in the form of parallel processing, only centralized resources were adopted, and equal distribution of computing tasks among participants was assumed. In parallel processing, tasks are equally divided into a number of subtasks and then simultaneously dispatched to all the computer nodes. Therefore, all these machines should be dedicated and homogeneous, i.e., they should have common configurations and capabilities; otherwise, different computers may return results in a non-synchronous manner depending on their availability at the time the tasks were assigned. Furthermore, in parallel processing, data from different organizations are required to collaborate, which is difficult due to technical or security issues. Consequently, a mechanism that can process distributed and multi-owner data repositories should be developed for better computing efficiency (Cannataro and Talia, 2003). In addition, the parallel
processing approaches involve tight coupling of machines (Chen et al., 2004). Although supercomputers are another solution, they are very expensive and unsuitable for small organizations with limited resources. The idea of grid computing was proposed by computer scientists in the mid-1990s. It is a technology that involves the integration and collaboration of computers, networks, databases, and scientific instruments owned or managed by multiple organizations (Foster et al., 2001). It can provide high performance computing by accessing remote, heterogeneous, or geographically separated computers. Although this technology was initially developed within the e-science community (EUROGRID, website; NASA, website; Particle, website; GridPP, website), it is now widely used in many other fields, such as the petrochemical industry, banking, and education. In the past few years, grid computing has attracted widespread attention from the power industry, and significant research has been carried out in different fields to investigate its potential use (Chen et al., 2004; Taylor et al., 2006; Ali et al., 2006a; Ali et al., 2006b; Wang and Liu, 2005; Axceleon and PTI, 2003). Its importance to the power industry has been further strengthened in recent years because it provides efficient computing services that meet the increasing requirement for high performance computation in power system analysis. Meanwhile, it provides remote access to distributed power system resources, which facilitates effective monitoring and control of modern power systems (Irving et al., 2004; Ali et al., 2005). This chapter is organized as follows: first, the fundamentals of grid computing are presented, followed by a summary of available grid computing packages and pioneering projects. After that, grid computing based power system security assessment, reliability assessment, and power market analysis are discussed, and case studies are then presented. Conclusions are given in the last section.
4.2 Fundamentals of Grid Computing
The fundamentals of grid computing are reviewed for completeness in this section. The architecture, features, and functionalities of grid computing are reviewed first, followed by a comparison of grid computing with parallel and distributed computing.
4.2.1 Architecture
Grid computing is a form of parallel and distributed computing that involves the coordination and sharing of computing facilities, data storage, and network resources across dynamic or geographically distributed organizations (Asadzadeh et al., 2004). It is a backbone infrastructure for web services. Just as the Internet allows information sharing, a grid provides sharing of computational power and other available resources. The basic architecture of grid computing is shown in Fig. 4.1. This integration creates a virtual organization in which a number of mutually distrustful participants with varying degrees of prior relationship share their respective resources to perform computational tasks (Foster et al., 2001).
Fig. 4.1. Basic Grid Computing Architecture
A typical grid computing framework forms a three-layer architecture. The first is the resource layer, which comprises the hardware of the computing grid. The second is the grid middleware. The third is the service layer, which uses the interfaces exposed by the middleware to execute applications.
Resource Layer
The resource layer consists of the physical architecture of the grid; all hardware resources belong to this layer. Normally, it consists of
computers, workstations, clusters of computers, communication media (LAN or Internet), and data resources (databases).
Grid Middleware
This layer provides the link between grid services and grid resources. It gives the services in the third layer access to, and information about, the grid resources. The main objective of this layer is to manage the heterogeneous computing resources and present them as a single virtual high performance machine.
Service Layer
This layer consists of grid services. The core services manage resources, communications, authorization and authentication, as well as system monitoring and control.
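To make the three-layer decomposition concrete, the sketch below models it in Python under stated assumptions: ComputeNode, Middleware, and GridService are hypothetical names invented here for illustration and do not correspond to any particular toolkit's API.

```python
from dataclasses import dataclass

# Hypothetical illustration of the three-layer grid architecture
# described above; all names are invented for exposition only.

@dataclass
class ComputeNode:              # resource layer: one physical machine
    name: str
    cpus: int
    busy: bool = False

class Middleware:               # middleware layer: virtualizes resources
    def __init__(self, nodes):
        self.nodes = nodes

    def acquire(self, cpus_needed):
        """Present heterogeneous nodes as one virtual machine:
        hand out the first idle node with enough CPUs."""
        for node in self.nodes:
            if not node.busy and node.cpus >= cpus_needed:
                node.busy = True
                return node
        return None

class GridService:              # service layer: runs user applications
    def __init__(self, middleware):
        self.mw = middleware

    def submit(self, job_name, cpus_needed):
        node = self.mw.acquire(cpus_needed)
        if node is None:
            return f"{job_name}: queued (no free resources)"
        return f"{job_name}: dispatched to {node.name}"

grid = GridService(Middleware([ComputeNode("ws-1", 4), ComputeNode("cluster-1", 32)]))
print(grid.submit("load_flow", 8))   # -> dispatched to cluster-1
```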
4.2.2 Features and Functionalities
Grid computing offers many advantages, depending on the nature of the requirements. It provides high computing power, sharing of resources across the network, and access to remote and distributed data. It provides highly reliable communication and different levels of security between nodes. It offers many services, such as remote process management, remote resource allocation, task distribution, and scheduling. It provides a standard component integration mechanism, active and real time system management, self-healing services, auto-provisioning, and a virtualized environment, and it also supports service level agreements. Specifically, the outstanding features of grid computing can be summarized as follows (Foster et al., 2005):
Parallel Processing
The computational power of modern computers and fast network communication techniques have facilitated the effective employment of network-based computing approaches. Parallel processing is one of the most attractive features of grid computing; it increases CPU processing capacity and thus makes more computational power available. In addition to pure scientific or research needs, such computing power is driving new developments in many industries, such as biomedicine, financial modeling, oil exploration, motion picture animation, and, of course, power engineering.
Grid Services
Many factors need to be considered when developing any grid-enabled application. All applications must be exposed as services in order to run on a grid (Ferreira and Berstis, 2002). However, not all applications can be transformed to run in parallel on a grid. There are also no practical tools for automatically transforming an arbitrary application to exploit the parallel
capabilities of a grid, although there are a number of practical tools that skilled designers can use to develop parallel grid-based applications. Automatic transformation of applications is a science in its infancy; it requires top mathematical and programming talent, where it is possible at all.
Virtual Organizations and Collaboration
Grid users can form virtual organizations across the world around common interests (Foster et al., 2001). Although each user may operate under the rules and regulations of its own organization, they can come together for collaborative work. These virtual organizations can share their resources collectively as a larger grid. Sharing is not limited to data and files; it also includes other available resources, such as equipment, software, services, and licenses. These resources are virtualized to offer more uniform interoperability among the heterogeneous grid participants (Ferreira and Berstis, 2002).
Resource Sharing
In addition to CPU and storage resources, a grid can provide access to a number of additional shared resources. For example, if a user needs to transfer data over the Internet, more than one connection can be shared to increase the total bandwidth. Similarly, if a user wants to print a large document, more than one printer can be used to reduce the printing time.
Efficient Use of Idle Resources
Normally, a computer is used, at peak, for about eight hours a day, and may remain idle for the rest. Heavy processing jobs can be shifted to idle systems to maximize the computer utilization ratio. The simplest function of grid computing is to run existing applications on different machines. Moreover, if organizations in different parts of the world are connected on the grid, they can exploit time-zone diversity at peak hours and use idle resources in different time zones across the world (Foster et al., 2001).
Load Balancing and Task Scheduling
Grid computing offers load balancing by scheduling the jobs of grid-based applications to machines with low utilization, a feature that is very useful for handling occasional peak load in any part of a large organization (Ferreira and Berstis, 2002); see the scheduling sketch at the end of this section. There are two options: the unexpected load can be shifted to comparatively idle machines in the grid, or, if the grid is already fully utilized, the lowest priority jobs can be temporarily suspended or even cancelled and rerun later to make room for higher priority tasks. In the past, each project was responsible only for its own resources and associated expenses, but the grid now offers priority
management among different projects.
Reliability of Computing Grids
Many important computing systems use expensive hardware to increase reliability. They are built with redundant chips and circuits and contain complex logic to achieve graceful recovery from an assortment of hardware failures. Such machines also use duplicate processors, power supplies, and cooling systems with hot-plug capability, so that a failed component can be replaced without shutting down the others. They are fed from special power sources that can start generators if utility power is interrupted. A reliable system can be built along these lines, but at high cost due to the duplication of system components. Grid computing provides an attractive alternative because of its physically distributed structure and efficient task management mechanism.
Security
Security becomes very important when resources and data are shared among a large number of organizations. Data flowing across different grid nodes is valuable to its owner and should reach only those authorized to receive it; hence there are serious concerns about data and application security when data flows across the Internet. The first concern is that someone may intercept the data and modify it in transit. The second is that, when computers in the grid are used, the owners of those computers may be able to read the data. These issues can be addressed by encryption, both during transmission and during storage on external resources. The secure sockets layer (SSL) encryption system can be used to authenticate users, and the grid security infrastructure (GSI) (Foster et al., 2002) employs SSL certificates for authentication. Operating systems already provide means to control data access authorization.
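The load balancing and task scheduling behaviour described above can be illustrated with a minimal sketch. The node loads, the single-number load model, and the eviction policy are all simplifying assumptions chosen for exposition; a production scheduler tracks far more state.

```python
import heapq

# Minimal priority-scheduling sketch: jobs go to the least-loaded node;
# if the grid is saturated, the lowest-priority running job is suspended
# to make room. Loads are a single utilization fraction per node, an
# illustrative assumption rather than a real scheduler's model.

nodes = {"n1": 0.5, "n2": 0.9, "n3": 0.6}        # fraction of CPU in use
running = []                                      # min-heap of (priority, job name)

def dispatch(job, priority, load=0.3):
    target = min(nodes, key=nodes.get)            # least utilized node
    if nodes[target] + load > 1.0:                # grid effectively full
        if running and running[0][0] < priority:
            _, evicted = heapq.heappop(running)   # suspend lowest-priority job
            print(f"suspending {evicted} to admit {job}")
            # (freeing the evicted job's capacity is elided in this sketch)
        else:
            print(f"{job} queued until capacity is available")
            return
    nodes[target] = min(1.0, nodes[target] + load)
    heapq.heappush(running, (priority, job))
    print(f"{job} -> {target}")

dispatch("contingency_batch", priority=5)         # n1: 0.5 -> 0.8
dispatch("report_render", priority=1)             # n3: 0.6 -> 0.9
dispatch("market_forecast", priority=7)           # all nodes loaded: evicts the priority-1 job
```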
4.2.3 Grid Computing vs Parallel and Distributed Computing
In parallel processing, the participating systems must have identical configurations and capabilities; otherwise the final results may not be returned simultaneously. In grid computing, by contrast, the collaborating machines need not be homogeneous: heterogeneous computers can take part in processing together. A load balancing mechanism assigns workloads to each node according to CPU availability, and the grid should also be able to transfer load to other idle machines. Furthermore, parallel processing involves a tightly coupled mechanism, whereas grid computing involves loose coupling. Distributed computing solutions also
demand homogeneous resources and, furthermore, are not scalable (Shahidehpour and Wang, 2003), whereas grid computing is essentially a plug-and-play technology in which resources can be added and removed during processing (Foster et al., 2002).
4.3 Commonly used Grid Computing Packages
A number of grid computing packages are available, either commercially or as freeware/shareware. The most commonly used packages are presented in this section.
4.3.1 Available Packages
A number of available grid computing packages are listed as follows.
1) Globus
The Globus toolkit includes software services and libraries for resource monitoring, discovery, management, and security (Globus, website). All of them are packaged as a set of components that can be used either independently or together. The Globus toolkit was conceived to remove obstacles that prevent seamless collaboration. Its core services, interfaces, and protocols allow users to access remote resources as if they were located in their own room, while preserving local control over who can use the resources and when (Globus, website). Moreover, the Globus toolkit has grown through an open-source strategy similar to that of Linux, distinct from proprietary attempts at resource-sharing software; this encourages broader, more rapid adoption and leads to greater technical innovation, as the open-source community provides continual enhancements to the product (Globus, website).
2) EnFuzion
EnFuzion, a grid computing tool developed by Turbolinux, has been deployed in a wide range of areas, including energy, finance, bioinformatics, 3D rendering, telecommunications, scientific research, and the engineering sector, where it has helped users obtain results faster (Axceleon, website). Its key features can be summarized as follows: strong robustness, high reliability, efficient network utilization, an intuitive GUI, multi-platform and multi-core support, flexible scheduling with a lights-out option, and extensive administrative tools (Axceleon, website).
The power of EnFuzion lies in its easy program implementation and efficient computer management. It provides users with a friendly environment for executing programs over multiple computers. It allows users to specify experiment parameters and codes and to generate executable files using a simple Java-based GUI. After the input files and commands are specified, EnFuzion produces job lists, disperses them to the individual computers, monitors overall progress, and then reassembles the results from each batch run automatically. Jobs can be dispatched to computers over a local area network or over the Internet. To the users, this appears as if the programs were executing on their own machines, only faster, while maintaining a high degree of accuracy. Operating status, including information on jobs, nodes, users, performance, and errors, can be viewed through a web interface.
3) Sun Grid Engine
Sun Grid Engine (SGE) is an open source batch-queuing system developed and supported by Sun Microsystems. It is typically used on a computer farm or high performance computing cluster and is responsible for accepting, scheduling, dispatching, and managing the remote and distributed execution of large numbers of standalone, parallel, or interactive user jobs (Sun, website). It also manages and schedules the allocation of distributed resources such as processors, memory, disk space, and software licenses. A typical grid engine cluster consists of one master host and one or more execution hosts; multiple shadow masters can also be configured as hot spares that take over the master role if the original master host crashes (Sun, website). A job submission sketch for such an engine follows this list.
4) NorduGrid
The NorduGrid middleware (or Advanced Resource Connector, ARC) is an open source software solution distributed under the GPL license, enabling production-quality computational and data grids (NorduGrid, website). ARC provides a reliable implementation of fundamental grid services, such as information services, resource discovery and monitoring, job submission and management, brokering, and data and resource management (NorduGrid, website). This middleware integrates computing resources and storage elements, making them available through a secure common grid layer (NorduGrid, website).
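As an illustration of how a job reaches a batch engine such as SGE, the sketch below uses the DRMAA Python binding, which Sun Grid Engine supports. The script path and input file name are hypothetical placeholders, and a DRMAA-enabled SGE installation is assumed on the submitting host.

```python
import drmaa  # Python DRMAA binding; assumes a DRMAA-enabled SGE installation

s = drmaa.Session()
s.initialize()

jt = s.createJobTemplate()
jt.remoteCommand = "/home/user/run_case.sh"   # hypothetical job script
jt.args = ["case_39bus.raw"]                  # hypothetical input file

job_id = s.runJob(jt)                         # hand the job to the master host
info = s.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
print("job", job_id, "finished with exit status", info.exitStatus)

s.deleteJobTemplate(jt)
s.exit()
```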
4.3.2 Projects
Grid computing has developed greatly in the last decade. A number of pioneering projects, such as Condor (Condor, website), Legion (LEGION, website), and Unicore (UNICORE, website), provide high performance grid solutions.
Nowadays, grid projects have been developed in many fields, including earth science, biomedicine, physics, astronomy, engineering, and multimedia. Some prominent projects are listed below.
1) Biomedical Informatics Research Network
Life science is a new and active area of scientific research. To better understand biological networks, it is necessary to introduce biological interpretations that explain how molecules, proteins, and genes work. Computer-based mathematical simulations can give a clearer representation of the real biological world. The Biomedical Informatics Research Network is a popular example of grid computing applications: a geographically distributed virtual community of shared resources offering tremendous potential to advance the diagnosis and treatment of disease (Biomedical, website). It enhances the scientific discoveries of biomedical scientists and clinical researchers across research disciplines.
2) NASA Information Power Grid
Grid computing provides a platform for engineering applications that require high performance computing resources. One example is the NASA Information Power Grid (IPG), which provides grid access to heterogeneous computational resources managed by several independent research laboratories (NASA, website; Global, website). The computational resources of IPG can be accessed from any location with grid interfaces providing security, uniformity, and control.
3) TeraGrid
TeraGrid is an open scientific discovery infrastructure combining leadership-class resources at eleven partner sites to create an integrated, persistent computational resource (TeraGrid, website). Many scientific areas benefit from TeraGrid, such as real-time weather forecasting, bio-molecular electrostatics, and electric and magnetic molecular properties.
4) Data Grid for High Energy Physics
The GriPhyN project is developing grid technologies for scientific and engineering projects that must collect and analyze distributed petabyte-scale datasets (Grid Physics Network, website). GriPhyN research will enable the development of peta-scale Virtual Data Grids (PVDGs).
5) Particle Physics Data Grid
The Particle Physics Data Grid Collaboratory Pilot (PPDG) is developing and deploying production grid systems that vertically integrate experiment-specific applications, grid technologies, grid and facility computation, and storage resources to form effective end-to-end capabilities (Particle, website). PPDG is a collaboration of computer scientists with a strong record in grid
technology and physicists with leading roles in the software and network infrastructures for major high energy and nuclear physics experiments.
6) OurGrid
OurGrid is an open, free-to-join, cooperative grid in which labs donate their idle computational resources in exchange for access to other labs' idle resources when needed (OurGrid, website). It uses peer-to-peer technology that makes it in each lab's best interest to collaborate, since people do not use their computers all the time (OurGrid, website). Even when actively using computers as research tools, researchers alternate between job execution and result analysis. Currently, the platform can run any application whose tasks do not communicate with one another during execution, such as most simulations, data mining, and searching (OurGrid, website).
4.3.3 Applications in Power Systems
With market deregulation and constantly increasing energy demand, power systems are expanding rapidly, resulting in large interconnected systems with large amounts of generation. Power system engineers in many countries face growing computational demands in handling power system data. Because of the complex structure and the large number of component variables in an actual power network, many existing analytical tools fail to perform accurate and efficient power system analysis. Power market participants now need more efficient computing systems and reliable communication systems to process data for system operations and to make decisions on future investments. They also need to collaborate and share data for different purposes, especially in deregulated environments. Fortunately, the computational power of modern computers and the application of network technology can significantly facilitate large-scale power system analysis. High performance computing plays an important role in ensuring efficient and reliable communication for power system operation and control. In the past few years, grid computing technology has attracted much attention from power engineers and researchers, since it provides inexpensive and efficient solutions to power system computational problems. Grid computing based power system applications are presented in Fig. 4.2. The following sections present a state-of-the-art survey of recent research on applying grid computing technology to power system security assessment, reliability assessment, and power market analysis.
Fig. 4.2. A Grid Computing Framework for Power System Analysis
4.4 Grid Computing based Security Assessment
Power system security assessment aims to find out whether, and to what extent, a power system is reasonably safe from serious disturbances to its operation. In power system simulation, security assessment is based on contingency analysis, which runs in the Energy Management System to give operators an indication of what might happen to the power system in the event of an unplanned or unscheduled equipment outage (Balu et al., 1992). Until recently, the mainstream understanding of computer simulation of power systems was to run a simulation program, such as a load flow, transient stability, or voltage stability program, or a combination of such programs, on a single computer (InterPSS, website). Due to the complex structure and the large number of component variables in an actual power network,
many existing analytical tools fail to perform the accurate and efficient security analyses required for effective system operation and management. The key problem is low computing efficiency, caused by the large number of system data and contingencies to be analyzed. Recent advances in computer networks and communication protocols have made it possible to perform grid computing, in which a large number of computers forming a connected network are used to solve computationally intensive problems. Grid computing provides a secure mechanism for data sharing and distributed processing for system security assessment. In this section, grid computing based power system security assessments are discussed. The load flow plays an important role in power system operation, providing an effective technique for addressing operation and management problems. The bottleneck of load flow is computational speed, which is why it is normally used offline. Furthermore, when "N-1" static security constraints are considered, the computational load increases greatly. With a grid computing approach, the computational speed constraint can be relaxed considerably, making real-time online application of load flow for power system stability analysis possible. A grid computing based power system simulation has been developed and implemented in InterPSS (InterPSS, website). The goal of the InterPSS grid computing solution is to provide a foundation for creating a computational grid to perform computationally intensive power system simulations, such as contingency analysis, security assessment, and transfer capacity analysis, using conventional, inexpensive computers in a local area network with minimal administration overhead (InterPSS, website). However, the applications of grid computing are not limited to steady state assessment; it also effectively facilitates transient stability and small signal stability analysis. Although time domain simulation based transient stability assessment provides satisfactory accuracy, it has some limitations. First, the process becomes slower as a smaller time step is chosen for more accurate results. Second, large numbers of simulations are required. To reduce the overall time, there are two main options: traditional deterministic stability criteria can be replaced with probabilistic transient stability analysis, or a grid computing based framework can be adopted as a platform for transient stability analysis. Several grid computing based transient stability analyses have been proposed in previous research. A grid computing framework for dynamic security assessment is proposed by Jing and Zhang (2006); based on this framework, an application has been developed that helps power system operators anticipate and prevent potential stability problems before they lead to cascading outages. A new method has also been proposed for transient stability analysis, measuring the critical clearing time using grid computing technology (Ali et al., 2007). It presents a grid computing based approach for probabilistic transient stability analysis, which measures the critical clearing time through time
domain simulation. Results show that the method is capable of providing accurate results with better performance. In the article by Meng et al. (2009), power system simulator for engineering (PSS/E) dynamic simulations are accelerated with an EnFuzion based computing technique. The approach is shown to be effective through tests on the 39-bus New England power system under "N-1", "N-2", and "N-1-1" contingency analysis, including redispatch after a disturbance with optimal power flow. The results show that the simulation process can be sped up dramatically, with the total elapsed time reduced roughly in proportion to the number of computer nodes. In the deregulated electricity market, small signal stability analysis can be used to provide a comprehensive visualization of the system under various uncertainties, which is essential for system operation and planning. However, along with deregulation, more system uncertainties have surfaced, requiring more computing power and storage in the analysis process. Traditional methods are no longer adequate for the size of expanding and interconnected systems, so studying probabilistic small signal stability is of high importance for the secure and healthy operation of deregulated systems, which motivates the research in this area. In the article by Xu et al. (2006), a grid computing based approach is applied to probabilistic small signal stability analysis of electric power systems. The application developed on this approach was successfully implemented to carry out probabilistic small signal stability analysis. Compared with traditional approaches, the grid computing based method gives better performance in terms of computing capacity, speed, accuracy, and stability. Overall, power systems are operating under stressed conditions following the introduction of deregulation, further complicated by ever-increasing demand. It is therefore very important to consider security assessment for secure power system operation. Grid computing provides satisfactory services to meet the increasing requirements for high performance computing and data resource sharing, and results show that grid computing techniques greatly increase computational efficiency. A simplified sketch of farming out N-1 contingency cases is given below.
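The sketch only illustrates the farming-out pattern behind such studies: solve_case is a stub standing in for a real load flow or time-domain engine (such as the PSS/E runs described above), the branch list and the security verdict are placeholders, and local processes stand in for grid nodes.

```python
from multiprocessing import Pool

# Sketch of grid-style N-1 contingency screening: each outage case is an
# independent simulation, so cases can be farmed out to workers (here,
# local processes stand in for grid nodes). solve_case is a placeholder
# for a real load-flow or time-domain engine run.

BRANCHES = [f"line_{i}" for i in range(1, 47)]    # e.g., the 46 branches of the 39-bus system

def solve_case(outaged_branch):
    """Stand-in for one contingency simulation; returns (case, secure?)."""
    # ... remove the branch, re-solve, check limits / stability ...
    secure = not outaged_branch.endswith("0")      # placeholder verdict
    return outaged_branch, secure

if __name__ == "__main__":
    with Pool(processes=10) as pool:               # 10 workers, mirroring a 10-node grid
        results = pool.map(solve_case, BRANCHES)
    insecure = [case for case, ok in results if not ok]
    print(f"{len(BRANCHES)} cases screened; insecure: {insecure}")
```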
4.5 Grid Computing based Reliability Assessment
Reliability is a key aspect of power system design and planning, and traditionally it was assessed using deterministic methods. However, deterministic analysis does not recognize the unequal probabilities of events that may lead to operating security limit violations. Moreover, such techniques do not capture the probabilistic nature of power systems and result in excess operating costs due to the selection of worst case scenarios
(Billinton and Li, 1994). Power system reliability analysis therefore warrants more effective approaches. Responding to this need, probabilistic approaches have appeared that can offer much more information on system behavior, enabling a better allocation of economic and technical resources than the deterministic methods. Although probabilistic techniques have been extensively studied and successfully applied in many fields, they require large amounts of computational resources and memory (Zhang et al., 2004). As a result, grid computing can provide excellent platforms for probabilistic power system reliability assessment. A grid computing framework for the reliability and security analysis of large and complex power systems has been proposed (Ali et al., 2006a). Grid computing provides economical and efficient solutions to meet the computational needs of obtaining fast and comprehensive results for complex systems. The framework also provides the infrastructure for secure data sharing and a mechanism for collaboration between different entities in the electricity market. In addition, Monte Carlo simulation is an important technique for probabilistic load flow analysis in system reliability assessment. However, it relies heavily on computational resources for high performance computing as well as large memory for handling huge amounts of data. Based on the grid computing framework discussed above, a grid service has been developed for performing Monte Carlo simulation based probabilistic load flow analysis (Ali et al., 2006b); a minimal sketch of the sampling loop follows. Results show that this approach gives better accuracy, reliability, and performance compared with traditional computing techniques.
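The sketch follows the approach just described in spirit only. A real study would solve a full AC load flow for every sample and dispatch batches of samples to grid nodes; here a linear one-line surrogate and all numerical values are assumptions chosen to keep the example self-contained.

```python
import random
import statistics

# Minimal Monte Carlo probabilistic load flow sketch: sample the uncertain
# load, "solve" the network for each sample, and collect flow statistics.
# The linear surrogate and all numbers below are illustrative assumptions.

random.seed(1)

def sample_load():
    return random.gauss(mu=300.0, sigma=30.0)     # MW demand, assumed normal

def line_flow(load_mw):
    return 0.6 * load_mw                          # surrogate for the solved flow

N = 10_000
flows = [line_flow(sample_load()) for _ in range(N)]
limit = 200.0                                     # MW thermal limit (assumed)
p_overload = sum(f > limit for f in flows) / N

print(f"mean flow {statistics.mean(flows):.1f} MW, "
      f"std {statistics.stdev(flows):.1f} MW, "
      f"P(overload) = {p_overload:.3f}")
```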
4.6 Grid Computing based Power Market Analysis
With the deregulation of power systems, competitive electricity markets have been formed all over the world. Deregulation and competitive markets have changed the original characteristics and structures of power systems: operations have been transformed from centralized into coordinated, decentralized decision-making (Zhou et al., 2006). Moreover, industry restructuring has brought two major changes to the structure of control centres. The first is the expansion of control centre functions from traditional energy management to business management in the market, primarily for reliability reasons; the second is the change from the monolithic control centre of traditional utilities to a variety of control centres of ISOs or RTOs, transmission companies, generation companies, and load serving entities that differ in market functions. Corresponding to these changes, grid computing based control centres have been proposed
for power system management and market analysis (Zhou and Wu, 2006; Wu et al., 2005). A grid computing approach for forecasting electricity market prices with a neural network time-series model was proposed by Sakamoto et al. (2006); the results show improvements in computational speed and accuracy compared with forecasting using existing application programs designed for single-computer processing. A workflow based bidding system was suggested earlier (Ali et al., 2005). It can accelerate the bidding process and decision making and reduce the workload of the ISO by providing additional information, such as available transfer capacity, ancillary services, and congestion information related to current bids. Large amounts of data are required in system planning simulation and modeling. A grid computing based framework has been proposed for the reliability and security analysis of large power systems for future expansion planning (Huang et al., 2006; Sheng et al., 2004). Moreover, planning future power systems requires the combined efforts of many companies; the sharing of accurate information and reliable forecasting mechanisms facilitates this process. Grid computing can provide an integrated environment for all the companies and individuals involved in power system planning.
4.7 Case Studies
Many examples of grid computing applications in power system analysis can be found in the referenced literature. Some application examples are given in this section.
4.7.1 Probabilistic Load Flow
Probabilistic load flow analysis provides very useful statistical information for power system planning. Monte Carlo simulation is used for such computation and is normally computationally expensive. In the article by Ali et al. (2006b), a probabilistic load flow analysis using the IEEE 30-bus system was given. The system one-line diagram is shown in Fig. 4.3. Representing a part of the American Electric Power system, this sample system includes 6 generator buses and 41 transmission lines. The process of probabilistic load flow can be found in Chapter 5 of this book, together with other probabilistic analysis techniques. The same
Fig. 4.3. IEEE 30-bus system (Power system test case archive, 1993)
computational task was performed using different numbers of computers in a LAN environment, and the computational time was recorded to compare performance, as shown in Fig. 4.4. Clearly, the computational time decreases as the number of computers in the computational grid increases: the task completes on a 10-computer grid in around 4 minutes, whereas it takes more than 42 minutes on a single computer.
Fig. 4.4. Comparison of the computational time for probabilistic load flow with different numbers of computers
It should be noted that this was done in a prototype grid only; the computational efficiency can be improved further with more advanced grids. The near-ideal scaling implied by these figures can be checked with a quick calculation, sketched below.
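Using the round figures quoted above (more than 42 minutes on one computer versus about 4 minutes on ten), the apparent speedup and per-node efficiency follow directly; an efficiency near 1.0 indicates almost ideal scaling, and the slight excess here is simply an artifact of the rounded timings.

```python
# Speedup and efficiency implied by the reported probabilistic load flow
# timings: > 42 min on one computer vs. ~ 4 min on a 10-computer grid.
t_serial, t_grid, n_nodes = 42.0, 4.0, 10

speedup = t_serial / t_grid                # 10.5x
efficiency = speedup / n_nodes             # ~ 1.05, i.e. near-ideal scaling

print(f"speedup {speedup:.1f}x, per-node efficiency {efficiency:.2f}")
```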
4.7.2 Power System Contingency Analysis
Power system contingency assessment is an essential procedure in power system operations and planning. Normally, the N-1 criterion is used for contingency analysis, which involves a large number of power system simulations. PSS/E is used extensively in many power companies for contingency assessment as well as other system analysis applications. In the article by Meng et al. (2009), the authors developed a parallel computing framework for PSS/E based simulations, providing a very useful tool for industrial applications. The 39-bus New England system (see Fig. 3.14) was used to test the efficiency of the framework. The average computational costs of contingency assessment are given in Fig. 4.5. On a single computer, the simulation took approximately 73 minutes using iPLAN, or about 11 minutes with idv code. Furthermore, increasing the number of computational nodes further increases the efficiency.
Fig. 4.5. Computational Costs of N–1 Contingency Analysis with Different Numbers of Nodes
4.7.3 Performance Comparison In the article by Ali et al., 2009, a comprehensive review on grid computing
is given. Grid computing can provide high performance computing for power system analysis needs, and the technology is increasingly popular where probabilistic analysis tasks are required. Grid computing supports data and resource sharing among different machines, networks, and organizations, with secure access governed by defined policies. Large computations can be performed in a short time to meet various power system operational and planning needs. Moreover, it can provide services for distributed monitoring and control of a power system efficiently and economically, especially after the introduction of renewable energy resources and their integration in the form of micro-grid systems. Ali et al. (2009) compared the computational efficiency improvements of grid computing against computation on a single computer. Fig. 4.6 presents the performance comparison in processing time for different power system analysis tasks, including probabilistic load flow analysis (Ali et al., 2006b), probabilistic small signal analysis (Xu et al., 2006), probabilistic transient stability analysis (Ali et al., 2007; Ali et al., 2006a), and load forecasting computation (Al-Khannak and Bitzer, 2007). The grid used in this comparison consists of 10 computers.
Fig. 4.6. Computing performance comparison
4.8 Summary
Grid computing has been identified as a significant new technique in scientific and engineering fields as well as in commercial and industrial enterprises. It provides economical and efficient solutions, on existing IT infrastructure, to meet the computational needs of obtaining fast and comprehensive results for complex systems. This chapter has highlighted the advantages and potential of applying grid computing techniques in power engineering. Several important topics in power system analysis were introduced, in which research has been done or is in progress, and future trends were presented. Results show that grid computing methods greatly enhance the available computational power. However, many open issues remain to be addressed and missing functionality to be developed; the potential of grid computing needs to be explored further to meet the challenges of a deregulated power industry. Much work remains in various fields to realize the full advantages of this technology for enhancing the efficiency of electricity market investment, accurate and efficient system analysis, and distributed monitoring and control, especially for power systems with renewable energy resources.
References
Ali M, Dong ZY, Zhang P (2009) Adoptability of grid computing in power systems analysis, operations and control: a review on existing and future work. IET Gener Transm Distrib 3(10): 949 – 959
Ali M, Dong ZY, Zhang P et al (2007) Probabilistic transient stability analysis using grid computing technology. Proceedings of IEEE Power Engineering Society General Meeting, Tampa, 24 – 28 June 2007
Ali M, Dong ZY, Li X et al (2006a) RSA-Grid: a grid computing based framework for power system reliability and security analysis. Proceedings of IEEE PES General Meeting, Montreal, 6 – 10 June 2006
Ali M, Dong ZY, Li X et al (2006b) A grid computing based approach for probabilistic load flow analysis. Proceedings of the 7th IEE International Conference on Advances in Power System Control, Operation and Management, Hong Kong, 30 October – 2 November 2006
Ali M, Dong ZY, Li X et al (2005) Applications of grid computing in power systems. Australasian Universities Power Engineering Conference, Hobart, 25 – 28 September 2005
Al-Khannak R, Bitzer B (2007) Load balancing for distributed and integrated power systems using grid computing. Proceedings of International Conference on Clean Electrical Power, Capri, 21 – 23 May 2007
Asadzadeh P, Buyya R, Kei CL et al (2004) Global grids and software toolkits: a study of four grid middleware technologies. Technical Report GRIDS-TR-2004-4, Grid Computing and Distributed Systems Laboratory, University of Melbourne
Axceleon and Power Technologies Inc. (PTI) (2003) Partner to deliver grid
computing solution for top global electricity transmission company. http://www.axceleon.com/press/release030318.html. Accessed 2 April 2009
Axceleon website. http://www.axceleon.com. Accessed 2 April 2009
Balu N, Bertram T, Bose A et al (1992) Online power system security analysis. Proceedings of the IEEE 80(2): 262 – 282
Billinton R, Li W (1994) Reliability assessment of electric power systems using Monte Carlo methods. Plenum Press, New York
Biomedical Informatics Research Network (BIRN). http://www.nbirn.net/index.shtm. Accessed 13 February 2009
Chen Y, Shen C, Zhang W et al (2004) Φ GRID: grid computing infrastructure for power systems. International Conference on Power System Technology 2: 1090 – 1095
Cannataro M, Talia D (2003) Semantics and knowledge grids: building the next-generation grid. IEEE Intelligent Systems 19(1): 56 – 63
Condor, High Throughput Computing. The University of Wisconsin, Madison. http://www.cs.wisc.edu/condor. Accessed 1 February 2009
Das JC (2002) Power system analysis: short-circuit load flow and harmonics. Marcel Dekker, New York
EUROGRID Project: Application Testbed for European GRID computing. http://www.eurogrid.org. Accessed 1 February 2009
Ferreira L, Berstis V (2002) Fundamentals of grid computing. IBM Redbooks
Foster I, Kishimoto H, Savva A et al (2005) The open grid services architecture. http://forge.gridforum.org/projects/ogsa-wg. Accessed 1 February 2009
Foster I, Kesselman C, Nick JM (2002) The physiology of the grid: an open grid services architecture for distributed systems integration. Argonne National Laboratory, University of Chicago, University of Southern California, and IBM, Globus Project
Foster I, Kesselman C, Tuecke S (2001) The anatomy of the grid: enabling scalable virtual organizations. Int J Supercomput Appl 15(3)
Global Ring Network for Advanced Applications Development website. http://www.gloriad.org/gloriad/index.html. Accessed 1 February 2009
Globus Alliance. http://www.globus.org. Accessed 1 February 2009
GridPP UK Computing for Particle Physics. http://www.gridpp.ac.uk. Accessed 1 February 2009
Grid Physics Network website. http://www.griphyn.org. Accessed 1 February 2009
Huang Q, Qin K, Wang W (2006) A software architecture based on multi-agent and grid computing for electric power system applications. International Symposium on Parallel Computing in Electrical Engineering, pp 405 – 410
Irving M, Taylor G, Hobson P (2004) Plug in to grid computing. IEEE Power and Energy Magazine 2(2): 40 – 44
InterPSS Community. http://sites.google.com/a/interpss.org/interpss. Accessed 9 April 2009
Jing C, Zhang P (2006) Online dynamic security assessment based on grid computing architecture. Proceedings of the 7th IEE International Conference on Advances in Power System Control, Operation and Management, Hong Kong, 30 October – 2 November 2006
LEGION, Worldwide Virtual Computer. University of Virginia, VA. http://legion.virginia.edu. Accessed 1 February 2009
Meng K, Dong ZY, Wong KP (2009) Enhancing the computing efficiency of power system dynamic analysis with PSS/E. Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, San Antonio, 11 – 14 October 2009
NASA Information Power Grid (IPG) Infrastructure. http://www.gloriad.org/gloriad/projects/project000053.html. Accessed 1 February 2009
OurGrid website. http://www.ourgrid.org. Accessed 1 February 2009
Particle Physics Data Grid Collaboratory Pilot. http://www.ppdg.net. Accessed 1 February 2009
Power Systems Test Case Archive (1993) hosted by The University of Washington. http://www.ee.washington.edu/research/pstca/pf30/pg_tca30bus.htm. Accessed 15 January 2009
Shahidehpour M, Wang Y (2003) Communication and control in electric power systems: applications of parallel and distributed processing. IEEE Press
Sakamoto N, Ozawa K, Niimura T (2006) Grid computing solutions for artificial neural network-based electricity market forecasts. International Joint Conference on Neural Networks, Vancouver, 16 – 21 July 2006, pp 4382 – 4386
Sheng S, Li KK, Zen XJ et al (2004) Grid computing for load modeling. IEEE International Conference on Electric Utility Deregulation, Restructuring and Power Technologies, Hong Kong, April 2004, pp 602 – 605
Sun Grid Engine website. http://gridengine.sunsource.net. Accessed 1 February 2009
Taylor GA, Irving MR, Hobson PR et al (2006) Distributed monitoring and control of future power systems via grid computing. IEEE PES General Meeting, Montreal, 6 – 10 June 2006
TeraGrid. http://www.teragrid.org. Accessed 1 February 2009
NorduGrid middleware. http://www.nordugrid.org/middleware. Accessed 1 February 2009
UNICORE (Uniform Interface to Computing Resources) Distributed computing and data resources. Distributed Systems and Grid Computing, Juelich Supercomputing Centre, Research Centre Juelich. http://www.unicore.eu. Accessed 1 February 2009
Wang H, Liu Y (2005) Power system restoration collaborative grid based on grid computing environment. Proceedings of IEEE Power Engineering Society General Meeting, San Francisco, 16 – 16 June 2005
Wu FF, Moslehi K, Bose A (2005) Power system control centers: past, present, and future. Proceedings of the IEEE 93: 1890 – 1908
Xu Z, Ali M, Dong ZY et al (2006) A novel grid computing approach for probabilistic small signal analysis. IEEE PES 2006 General Meeting, Montreal, 6 – 10 June 2006
Zhang P, Lee ST, Sobajic D et al (2004) Moving toward probabilistic reliability assessment methods. Proceedings of the 8th International Conference on Probabilistic Methods Applied to Power Systems, Ames, 12 – 16 September 2004
Zhou HF, Wu FF, Ni YX (2006) Design for grid service-based future power system control centers. Proceedings of the 7th IEE International Conference on Advances in Power System Control, Operation and Management, Hong Kong, 30 October – 2 November 2006
Zhou HF, Wu FF (2006) Data service in grid-based future control centers. Proceedings of the 7th IEE International Conference on Advances in Power System Control, Operation and Management, Hong Kong, 30 October – 2 November 2006
5 Probabilistic vs Deterministic Power System Stability and Reliability Assessment Pei Zhang, Ke Meng, and Zhaoyang Dong
5.1 Introduction
The power industry has undergone significant restructuring throughout the world since the 1990s. In particular, its traditional, vertically monopolistic structure has been reformed into competitive markets in pursuit of increased efficiency in electricity production and utilization. Along with the introduction of competitive and deregulated electricity markets, some power system problems have become difficult to analyse with traditional methods, especially where power system stability, reliability, and planning are involved. Traditionally, power system analysis was based on deterministic frameworks, which consider only specific configurations and ignore the stochastic or probabilistic nature of real power systems. Moreover, many external constraints as well as growing system uncertainties now need to be taken into consideration, making existing challenges even more complex. One consequence is that more effective and efficient power system analysis methods are required in the deregulated, market-oriented environment. A mature theoretical background has enabled the effective employment of probabilistic analysis methods, and the study of probabilistic power system analysis has become highly important. In this chapter, the reported research is directed at introducing probabilistic techniques to solve several power system problems in deregulated electricity markets. This chapter is organized as follows: after this introduction, the needs for probabilistic approaches are identified, followed by the available tools for probabilistic analysis. Probabilistic stability assessment, probabilistic reliability assessment (PRA), and probabilistic system planning are then discussed, and two case studies are presented. Conclusions are drawn in the last section.
5.2 Identify the Needs for the Probabilistic Approach
The main application fields of probabilistic approaches in power systems fall into three areas: stability assessment, reliability assessment, and system planning. In this section, the importance of, and need for, introducing probabilistic approaches into power system analysis is discussed.
5.2.1 Power System Stability Analysis
A power system is said to be stable if it has the capacity to retain a state of equilibrium under normal operating conditions and to regain an acceptable state of equilibrium after being subjected to a disturbance (Kundur et al., 2004; Kundur, 1994). The categories of power system stability proposed in this classification are shown in Fig. 5.1 (Kundur et al., 2004). The following sections focus on transient stability and small signal stability.
Fig. 5.1. Classification of power system stability
1) Transient Stability
Transient stability is the ability of a power system to maintain synchronism under a severe transient disturbance, such as a fault on transmission lines, generating unit outages, or load outages (Kundur, 1994). It has been widely applied in power system dynamic security analysis for years. Traditionally, power system transient stability was studied using deterministic stability criteria, in which several extreme operating conditions and critical contingencies, such as load levels, fault types, and fault locations, are manually selected from expert experience. The designed system should withstand all the extreme conditions after the most severe disturbances. Although the deterministic method has served the power industry well, with satisfactory performance, it ignores the stochastic or probabilistic nature of a real power system, which is unrealistic in complex system analysis. Moreover, in addition to the probabilistic characteristics of system loads, power generation,
network topologies and component faults all contribute to the uncertainties in modern power system analysis (Ali et al., 2007). In a deregulated environment, these uncertainties greatly influence the performance of power system transient stability analysis. The traditional deterministic methods are therefore no longer suitable for sophisticated system stability assessment, and the study of probabilistic transient stability analysis has become highly important.
2) Small Signal Stability
Small signal stability analysis explores power system security conditions in the space of power system parameters of interest, including load flow feasibility, saddle node and Hopf bifurcations, and maximum and minimum damping conditions, in order to determine suitable control actions to enhance power system stability (Dong et al., 1997; Makarov and Dong, 1998; Makarov et al., 2000). Studying small signal stability is therefore of great importance for the secure and healthy operation of power systems with growing uncertainties. To investigate the small signal stability of a power system, the dynamic components (e.g., generators) and relevant control systems (such as excitation control systems and speed governor systems) should be modelled in detail (Dong et al., 2005). The accuracy of small signal stability analysis depends on the accuracy of the models used; more accurate models can translate into increased overall power system transfer capability and associated economic benefits. Traditionally, system security was evaluated under a deterministic framework based on given network configurations, system loading conditions, disturbances, etc. Given the stochastic nature of a real power system, it is important to model and analyse these parameters probabilistically. To obtain a comprehensive picture of small signal stability, probabilistic small signal stability assessment is attracting more and more attention over the traditional deterministic approaches; a minimal Monte Carlo sketch is given below.
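The flavour of such probabilistic small signal studies can be conveyed with a minimal Monte Carlo sketch. The second-order single-machine swing model and every numerical value below are assumptions for illustration; published studies use full multi-machine models with detailed control systems.

```python
import random
import numpy as np

# Hedged sketch of probabilistic small signal stability assessment:
# an uncertain parameter (here, a single damping coefficient) is sampled,
# the small-disturbance state matrix is formed per sample, and the
# probability of an unstable eigenvalue is estimated. The 2nd-order
# single-machine model and all parameter values are assumptions.

random.seed(0)
M = 10.0                                  # inertia constant (assumed)
Ks = 1.2                                  # synchronizing torque coefficient (assumed)

def state_matrix(damping):
    """Linearized swing equation: x = [delta_dev, speed_dev]."""
    return np.array([[0.0, 1.0],
                     [-Ks / M, -damping / M]])

unstable = 0
N = 5000
for _ in range(N):
    D = random.gauss(1.0, 0.6)            # uncertain damping coefficient
    eigs = np.linalg.eigvals(state_matrix(D))
    if max(eigs.real) > 0.0:              # any eigenvalue in the right half-plane
        unstable += 1

print(f"estimated probability of small signal instability: {unstable / N:.3f}")
```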
5.2.2 Power System Reliability Analysis
The reliability of a bulk power system is a measure of its ability to deliver electricity to all points of utilization within accepted standards and in the amount desired (Ringlee et al., 1994). Reliability is a key aspect of power system design and planning and can be assessed using deterministic methods, the most common of which is the N-1 criterion. It deems a power system reliable if the system can withstand any prescribed outage or contingency within acceptable constraints (Zhang et al., 2004). However, the situation considered is only a single state for a specific combination of bus loads and
generating unit outages, which is theoretically not suitable for a restructured and deregulated electricity market. Along with the deregulation process, a variety of challenges appear in reliability analysis, namely the uncertainties of new power generation projects, of future power demand and scope, and of regulatory constraints and external rules (Zhang et al., 2004). The traditional deterministic contingency analysis does not recognize the unequal probabilities of events that lead to potential operating security limit violations; power system reliability analysis therefore requires more effective and reliable methods. Responding to this need, probabilistic approaches have appeared that can offer much more information on system behavior and enable better allocation of economic and technical resources than the deterministic methods. Because probabilistic evaluations model the random nature of the problem, they can efficiently handle numerous sets of possible alternatives, with different outcomes and chances of occurrence, for which individual evaluations would be infeasible (Zhang et al., 2004). A minimal generation adequacy sketch in this spirit is given below.
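The sketch shows how a probabilistic evaluation weighs outcomes by their chances of occurrence. The unit fleet, forced outage rates, and demand level are invented for illustration; texts such as Billinton and Li (1994) develop the full method.

```python
import random

# Illustrative Monte Carlo generation adequacy sketch: each generating
# unit is a two-state component sampled by its forced outage rate (FOR);
# loss-of-load probability is the fraction of samples in which available
# capacity falls below demand. All data below are assumptions.

random.seed(42)
UNITS = [(200, 0.04)] * 5 + [(100, 0.08)] * 4     # (MW, FOR) fleet: 1400 MW total
DEMAND = 1000.0                                    # MW system load (assumed)

def sampled_capacity():
    """Each unit is available with probability (1 - FOR)."""
    return sum(mw for mw, f in UNITS if random.random() > f)

N = 100_000
lolp = sum(sampled_capacity() < DEMAND for _ in range(N)) / N
print(f"estimated loss-of-load probability: {lolp:.4f}")
```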
5.2.3 Power System Planning
Power system planning is an important topic in modern power system analysis and a general problem of energy and economic development planning. The fundamental objective of system planning is to determine a minimum cost strategy for the expansion of generation, transmission, and distribution systems adequate to supply the load forecast within a set of technical, economic, and political constraints (Xu et al., 2006a; Xu et al., 2006b; Zhao et al., 2009). Power system behavior is stochastic in nature, so system planning should, in theory, be carried out with probabilistic techniques. However, most present planning, design, and operating criteria are based on deterministic techniques that have been widely used for decades. Along with market deregulation, the operation of large-scale power systems needs more careful study, usually guided by safety and environmental requirements, legal and social obligations, present and future power demands, and maximizing the value of generating resources (Operation, website). Deterministic planning methods usually consider worst case situations selected on subjective judgement, which are therefore difficult to justify as part of an economic decision-making process. Moreover, with deterministic planning, systems are often designed or operated to withstand severe situations that have a low probability of occurrence, which greatly reduces the economy and efficiency of power system operation. Furthermore, it is difficult to address all transmission challenges and uncertainties with deterministic methods. In other words, the essential weakness of deterministic approaches is that they do not and cannot recognize the probabilistic or stochastic nature of system behavior, of
customer demands, or of component failures. Fortunately, probabilistic system planning provides a practical and effective alternative. Probabilistic planning, through qualified reliability assessment, can capture both single and multiple component failures and recognize not only the severity of events but also the likelihood of their occurrence (Li and Choudhury, 2007). Probabilistic techniques consider the factors that may affect system performance and provide a quantified risk assessment using performance indices that are sensitive to the factors affecting system reliability. Quantified descriptions of system performance, together with other relevant factors, support a sound estimate of the expected value of energy at risk (Probabilistic System Planning, 2004).
5.3 Available Tools for Probabilistic Analysis This section surveys state-of-the-art probabilistic methods for power system stability, reliability, and planning in turn.
5.3.1 Power System Stability Analysis Power system stability is essential for power system operations as well as planning. Probabilistic methods have been proposed for power system stability analysis to provide more information on system stability than deterministic stability assessment. Transient stability and small signal stability analysis are discussed in this section. 1) Transient Stability Transient stability analysis aims at finding out whether the synchronous machines will regain or lose synchronism in the new steady-state equilibrium. In general, there are two main classes of probabilistic techniques for transient stability assessment, namely conditional probability theorem based methods and Monte Carlo simulation based approaches. The use of probabilistic methods in transient stability studies was first proposed by Billinton and Kuruganty (Billinton and Kuruganty, 1980; Billinton and Kuruganty, 1981; Kuruganty and Billinton, 1981), which laid the groundwork for further application of probabilistic techniques to transient stability assessment. Their research mainly focused on the probabilistic aspects of fault type, fault location, fault clearing phenomena, and system operating conditions, all of which can affect transient stability. Anderson and Bose then carried this research forward, considering a complex analytical transformation
(Anderson and Bose, 1983). Hsu and Chang conducted a transient stability analysis deriving the joint probability distribution function (PDF) of the critical clearing time (CCT) (Hsu and Chang, 1988). Aboreshaid et al. introduced a bisection algorithm which reduces the computation time required to conduct probabilistic transient stability studies (Aboreshaid et al., 1995). McCalley et al. presented a new risk based security index for determining the operating limits in stability limited electric power systems (McCalley et al., 1997). Ali et al. presented a new technique for probabilistic transient stability analysis using grid computing technology, which significantly improved computing efficiency (Ali et al., 2005; Ali et al., 2007). Nowadays, probabilistic approaches are considered the more comprehensive and rational techniques for addressing transient stability problems. 2) Small Signal Stability A complex pattern of oscillations can result from system disturbances; linear, time invariant state-space models are widely accepted as a useful means of studying perturbations of the system state variables from their nominal values at a specific operating point (Burchett and Heydt, 1978; Makarov and Dong, 1998). Sensitivity analysis is then typically undertaken by examining the change in the system state matrix, or the eigenvalue sensitivity, for a variation in the system parameter in question (Van Ness and Boyle, 1965). With the sensitivity analysis results, further probabilistic stability properties of the power system can be obtained. Probabilistic eigenvalue analysis of power system dynamics is often applied with the advantage of determining the probabilistic distributions of critical eigenvalues, and hence providing an overall probability of system dynamic instability (Wang et al., 2000; Wang et al., 2003). The probabilistic approach to dynamic power system analysis first appeared in 1978. Wang et al. proposed a hybrid use of central moments and cumulants, in order to account for both the dependence among the input random variables and the correction of the probability densities of eigenvalues (Wang et al., 2000). Wang et al. also used a 2-machine test system at a particular load level to determine the eigenvalue probabilities derived from the known statistical attributes of variations of system parameters (Wang et al., 2003). Dong et al. investigated power system state matrix sensitivity characteristics with respect to system parameter uncertainties with analytical and numerical approaches and identified those parameters that have great impacts on system eigenvalues (Dong et al., 2005; Pang et al., 2005). The Monte Carlo technique is another option which is more appropriate for analysing the complexities in large-scale power systems with higher accuracy, though it may require more computational effort (Robert and Casella, 2004; Xu et al., 2005).
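To make the eigenvalue sensitivity idea concrete, the following minimal sketch numerically estimates dλ/dp using the standard left/right eigenvector relation; the 2x2 state matrix and its parameter dependence are hypothetical illustrations, not taken from the studies cited above.

```python
import numpy as np

def state_matrix(p):
    # Hypothetical 2x2 state matrix whose entries depend on a parameter p
    return np.array([[-0.5 * p, 1.0],
                     [-2.0, -0.1 * p]])

def eigenvalue_sensitivities(p, dp=1e-6):
    """Approximate dlambda_i/dp = w_i^H (dA/dp) v_i / (w_i^H v_i)."""
    dA = (state_matrix(p + dp) - state_matrix(p - dp)) / (2 * dp)  # central difference
    lam, V = np.linalg.eig(state_matrix(p))   # right eigenvectors in columns of V
    W = np.linalg.inv(V).conj().T             # left eigenvectors in columns of W
    sens = np.array([(W[:, i].conj() @ dA @ V[:, i]) / (W[:, i].conj() @ V[:, i])
                     for i in range(len(lam))])
    return lam, sens

lam, sens = eigenvalue_sensitivities(p=1.0)
for l, s in zip(lam, sens):
    print(f"eigenvalue {l:.4f}, sensitivity {s:.4f}")
```

Sensitivities with large real parts flag the parameters that most affect damping, which is exactly the information the probabilistic eigenvalue studies above aggregate over parameter distributions.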
5.3.2 Power System Reliability Analysis The concept of power system reliability was first proposed in 1978 (Endrenyi et al., 1988). Since then, many efforts have been made to develop various reliability assessment approaches. Although probabilistic techniques have been extensively studied and maturely applied in many fields with satisfactory performance, reliability assessments have historically been based on deterministic criteria. The introduction of probabilistic methods to bulk power system evaluation is a comparatively new development and requires further study. The slow development in this area can be explained by the following difficulties (Zhang et al., 2004; Zhang and Lee, 2004): • Concept: difficulties associated with clearly defining the goals and purposes of reliability evaluations, and selecting appropriate indices and failure criteria. • Modeling: difficulties associated with finding mathematical models that describe the failure and repair processes, load and weather effects, remedial actions, and generation scheduling in bulk systems with acceptable fidelity. • Computation: difficulties associated with finding solution methods whose accuracy and computational efficiency can be considered acceptable. • Data collection: difficulties due to the unavailability of sufficient observed failures. In a project endorsed by NERC, EPRI sponsored a Power Delivery Reliability Initiative that focused on the development of reliability assessment methods for operators. One important outcome of this work was the PRA methodology. This methodology offers a practical hybrid approach to reliability assessment that combines probabilistic and deterministic methods, allowing users to incorporate the probability of an event within feasible data limitations. EPRI has the vision of developing next-generation probabilistic reliability assessment methods and tools for both operators and planners to address reliability issues under an open access environment. A detailed description of the PRA methodology is given in the next section.
5.3.3 Power System Planning Probabilistic system planning has become increasingly popular and significant in recent years; it is not a substitute for traditional methods but an effective complement to them. Traditionally, in a vertically integrated power system, the deterministic load flow (DLF) was applied to power system planning. The DLF provides an effective technique to address power system security and re-
liability problems, such as the future expansion planning of power systems and the determination of the best operating state of existing systems. For specified load and generator real power and voltage conditions, the principal information obtained from the DLF is the magnitude and phase angle of the voltage at each bus as well as the active and reactive power flowing in each line. However, it only represents the system condition at a given time instant or for a series of deterministic values selected by the planner. As a result, the DLF ignores some power system uncertainties, such as loss of generating units, variations of load demands, and branch or circuit outages within the system. Carrying out DLF computations for every possible combination of bus loads and generating unit outages of a modern power system is completely impractical because of the extremely large computational effort required. Moreover, the restructuring and deregulation of the power industry have given rise to more and more uncertainties in power system operation and planning. Traditionally, the system operator was solely responsible for system operation and planning. To some extent, power system engineers knew with some certainty where power plants and transmission facilities were going to be built and with what capacity. It was therefore relatively easy to forecast the necessary generation and transmission capacities. In an open access environment, however, some business confidential information about generation and distribution companies cannot be accessed. One consequence of these changes is that power systems require more effective design, management, and direction techniques, given the ever expanding large-scale interconnection of power networks. These techniques should not only consider the traditional constraints, but should also promote fair competition in the electricity market while ensuring certain levels of security and reliability. The application of probabilistic analysis to power system load flow was first proposed by Borkowa in 1974 (Billinton and Allan, 1996). Since then, two approaches have been adopted to study load flow problems probabilistically: stochastic load flow (SLF) and probabilistic load flow (PLF). Because of its extensive mathematical background, the PLF has been widely used in power system operation and planning. Instead of obtaining the point estimate produced by the deterministic load flow, the PLF algorithm evaluates probability density functions and/or statistical moments of all state variables and output network quantities to indicate the possible ranges of the load flow result (Su, 2005). Therefore, the PLF study gives power system engineers a better and more effective way to analyze future system conditions and provides more confidence in making judgments and planning decisions concerning investments.
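To contrast the two approaches in miniature, the following sketch runs a Monte Carlo style probabilistic load flow for a hypothetical single tie line, where a DLF would return one number; the load distribution, local generation size, and outage rate are illustrative assumptions, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 5000

# Uncertain inputs: normally distributed load, and a local 200 MW generator
# with an assumed 2% forced outage rate
load_mw = rng.normal(loc=400.0, scale=40.0, size=n_samples)
local_gen = np.where(rng.random(n_samples) > 0.02, 200.0, 0.0)

# Simplified balance: a remote slack supplies the deficit over the tie line
line_flow = load_mw - local_gen

print(f"mean flow          = {line_flow.mean():.1f} MW")
print(f"standard deviation = {line_flow.std():.1f} MW")
print(f"P(flow > 300 MW)   = {(line_flow > 300).mean():.3f}")
```

The output is a distribution of the line flow rather than a single operating point, from which violation probabilities can be read off directly.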
5.4 Probabilistic Stability Assessment Probabilistic stability assessment gives the distribution of system stability indices. It also studies the impact of different system contingencies, which may have significantly different probabilities of occurrence. In this section, probabilistic transient stability and small signal stability assessments are presented.
5.4.1 Probabilistic Transient Stability Assessment Methodology Traditional transient stability studies follow a step-by-step process in which factors such as the load composition, fault type, and fault location are selected beforehand, usually in accordance with the "worst-case" philosophy described earlier (Vaahedi et al., 2000). Furthermore, in order to ensure that the most severe disturbance is selected, the contingency types and locations must also be provided in advance. Probabilistic studies, in contrast, take into account the stochastic and probabilistic nature of the real power system. The procedures for deterministic and probabilistic transient stability studies are compared in Figs. 5.2 and 5.3 (Vaahedi et al., 2000).
Fig. 5.2. Procedures for deterministic transient stability studies
Fig. 5.3. Procedures for probabilistic transient stability studies
For deterministic transient stability analysis methods, only one network topology is selected in the assessment, while in probabilistic studies a determination has to be made, for each sample, of the forced transmission outages. Also, in the probabilistic studies, the disturbance sequence becomes dynamic, since it is driven by the operating status of the circuit breakers. The sample selection in the probabilistic studies is derived using the Monte Carlo method. Also, in Fig.5.2, barely stable means a case whereby increasing the stability parameter by the threshold will result in an unstable case (Vaahedi et al., 2000).
5.4.2 Probabilistic Small Signal Stability Assessment Methodology The Monte Carlo method involves using random numbers and probabilistic models to solve problems with uncertainties, such as risk and decision making analysis in science and engineering research. Simply speaking, it is a method for iteratively evaluating a deterministic model using sets of random numbers. For application to probabilistic small signal stability analysis, the method starts from the probabilistic modeling of the system parameters of interest, such as the dispatch of generators, electric loads at various nodal locations, network parameters, etc. Next, a set of random numbers with uniform distribution is generated. Subsequently, these random numbers are fed into the probabilistic models to generate actual values of the parameters. The load flow analysis and system eigenvalue calculation can then be carried out, followed by the small signal stability assessment via system modal analysis.
Fig. 5.4. Procedure for Monte Carlo based small signal stability studies
The overall structure of the Monte Carlo based small signal stability analysis is presented in Fig.5.4 (Xu et al., 2005). The scheme starts from the initial
stage of the random number generation, followed by a loop of random input variable generation, load flow and system eigenvalue calculation, and the final stage of eigenvalue analysis. The random numbers generated in the first stage must follow the uniform distribution. To ensure the accuracy of the Monte Carlo simulation, the probabilistic models of input variables for the subsequent power system analysis must be built as realistically as possible. More details of the probabilistic modeling of the random variables of interest will be discussed in the next section. By continuously feeding random numbers into the probabilistic models built, sets of system input variables are obtained. Subsequently, the power flow calculation can be carried out to determine the initial system state for each group of inputs. Next, the small signal stability of the system can be analyzed based on eigenvalue analysis. Finally, the statistics of system parameters, such as the eigenvalues and damping ratios, are calculated from the results stored in the previous stage. Based on the resultant statistics, further studies of stability-related topics can be carried out.
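The loop just described can be condensed into the following minimal sketch, in which a toy two-state matrix parameterized by the sampled load stands in for the load flow and linearization stages; all numbers are illustrative assumptions rather than data from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs = 2000
damping_ratios = []

for _ in range(n_runs):
    # Random input stage: sampled load drawn from an assumed N(3, 0.3) model
    load = rng.normal(3.0, 0.3)
    # Placeholder for the load flow and linearization stages: a toy state matrix
    A = np.array([[0.0, 377.0],
                  [-0.12 * load, -1.2 * load]])
    eig = np.linalg.eigvals(A)
    lam = eig[np.argmax(eig.imag)]          # dominant oscillatory mode
    sigma, omega = lam.real, lam.imag
    damping_ratios.append(-sigma / np.hypot(sigma, omega))

damping_ratios = np.array(damping_ratios)
print(f"mean damping ratio = {damping_ratios.mean():.3f}")
print(f"P(zeta < 0.05)     = {(damping_ratios < 0.05).mean():.3f}")
```

The final statistics (here, the distribution of the damping ratio and the probability of falling below a damping threshold) are exactly the kind of output the eigenvalue analysis stage produces in the full procedure.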
5.5 Probabilistic Reliability Assessment In this section, probabilistic reliability assessment methods are discussed. They are important parts of probabilistic power system planning. For the purpose of completeness, we first review the traditional system reliability assessment methods.
5.5.1 Power System Reliability Assessment Power system reliability refers to a power system's capability to provide an adequate supply of electrical energy to customers. It has a wide meaning and includes both system adequacy and system security. Power system adequacy is a measure of the existence of sufficient facilities in the power system to meet the consumer load demand. System security is the ability of the system to respond to disturbances and maintain stable operating conditions. In this chapter, as in much of the literature, reliability is used to represent adequacy only. Because a power system is a large scale complex system, reliability assessment is a very complex process as well. According to the functionalities of different subsystems within a power system, hierarchical levels have been introduced for reliability assessment (Billinton and Allan, 1984). Hierarchical level I (HLI) includes the generation facilities of a power system, hierarchical level II (HLII) includes the transmission facilities as well as the generation facilities, and further inclusion of the distribution facilities
to represent the complete system composes hierarchical level III (HLIII). A number of key reliability criteria are described briefly below for completeness (Allan and Billinton, 2000): 1) Loss of load probability (LOLP) The LOLP is the probability that the load will exceed the available generation throughout the year. It is the most basic probabilistic index for system reliability assessment. 2) Loss of load expectation (LOLE) The LOLE is extensively used in generation capacity planning. It is the annual average time, in days or hours, during which the daily peak load or load is expected to exceed the available generation capacity. 3) Loss of energy expectation (LOEE) The LOEE is the expected energy that will not be supplied due to occasions when the system load exceeds the available generation. It is a more realistic measure given the increasing number of energy limited occasions in power systems today. The expected energy not supplied (EENS) and expected unserved energy (EUE) are of a similar nature to LOEE. 4) Energy index of reliability (EIR) The EIR is 1 minus the normalized loss of energy expectation. It enables comparison of power systems of different scales. For power transmission system expansion planning, EUE and the value of lost load (VoLL) are extensively used (AEMO web). These reliability assessment indices are further extended to and used for power system risk assessment. Li (Li, 2005) summarized power system risk assessment, covering detailed outage models, probabilistic reliability assessment methods, and their applications in power systems. Utility application experience with probabilistic risk assessment methods is reported by Zhang et al., 2007. EPRI's tools and plans concerning probabilistic power system risk assessment techniques for power transmission planning are reported in an EPRI technical report (EPRI, 2004). It is also necessary to mention probabilistic security assessment as an essential part of system reliability assessment. The EPRI technical report on the probabilistic dynamic security region (EPRI, 2007) gave a summary of probabilistic system security assessment. The method presented in the report is based on the cumulants and Gram-Charlier expansion method, which in turn is based on PLF analysis (Zhang and Lee, 2004). The Monte Carlo simulation method is also used. Power system uncertainties include generation forced outages, transmission unit failures, and forecasted loads. Using a single dynamic security index, the probabilistic dynamic security assessment (PDSA) provides a measure of the dynamic security region's boundary. PDSA can be used to identify the critical potential generator or
grid failures and therefore to locate the corresponding effective prevention and mitigation actions. It can also provide useful input to the following questions (EPRI, 2007): • Which component failure would most affect system dynamic security? • Which components are the most affected by the failures of other components? • What are the weak points in the power system under study? The PDSA method can be summarized by the flowchart (EPRI, 2007) given in Fig.5.5.
Fig. 5.5. Monte Carlo method for dynamic security assessment and system planning (EPRI, 2007)
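As a concrete illustration of the LOLP and LOLE indices defined earlier in this section, the following Monte Carlo sketch evaluates them for a hypothetical system of five identical 100 MW units with an assumed 4% forced outage rate and a normally distributed daily peak load; none of these figures comes from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, unit_mw, for_rate = 5, 100.0, 0.04   # forced outage rate per unit
n_days = 100_000                              # simulated days

# Independent availability of each unit on each day
available = rng.random((n_days, n_units)) > for_rate
capacity = available.sum(axis=1) * unit_mw          # MW available each day
peak_load = rng.normal(380.0, 30.0, size=n_days)    # MW daily peak load

lolp = (peak_load > capacity).mean()   # loss of load probability per day
lole = lolp * 365.0                    # expected loss-of-load days per year
print(f"LOLP = {lolp:.4f}, LOLE = {lole:.1f} days/year")
```

The same sampled deficits (peak_load minus capacity, where positive) could be accumulated to approximate EENS-type energy indices.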
5.5.2 Probabilistic Reliability Assessment Methodology Probabilistic reliability assessment is a concept which was originally used effectively in the nuclear power industry to determine the risk to the general public from the operation of nuclear power plants (Zhang et al., 2004). Further developed and applied to power systems, this technique provides an effective way to evaluate the probability of an undesirable event and its impacts on the power system. The probabilistic reliability index (PRI) is a reliability index that combines a probabilistic measure of the possibility of undesirable events with a criterion of the consequence of those events. The PRI can be defined as follows,

PRI = \sum_{i=1}^{Index} Out\_probability_i \times impact_i ,    (5.1)
where Out_probability_i is the probability of the ith simulated outage situation and impact_i is the seriousness of the situation. Generally, there are four distinct types of indices, namely the APRI (amperage or thermal overload), VPRI (voltage violation), VSPRI (voltage instability), and LLPRI (load loss) (Zhang et al., 2004; Maruejouls et al., 2004): 1) Overload Reliability Index

APRI = \sum_{i=1}^{Index} Out\_probability_i \times Aimpact_i ,    (5.2)
where Aimpact_i is the thermal overload above the branch thermal rating caused by the ith critical situation. The impact is measured in MVA. 2) Voltage Reliability Index

VPRI = \sum_{i=1}^{Index} Out\_probability_i \times Vimpact_i ,    (5.3)
where Vimpact_i is the voltage deviation beyond the bus upper and lower limits caused by the ith critical situation. The impact is measured in kV. 3) Voltage Stability Reliability Index

VSPRI = \sum_{i=1}^{Index} Out\_probability_i \times VSimpact_i ,    (5.4)
where VSimpact_i is the voltage stability impact caused by the ith critical situation. The impact takes the value "1" or "0", representing whether this
situation causes the system voltage to become unstable or to remain stable, respectively. 4) Load Loss Reliability Index

LLPRI = \sum_{i=1}^{Index} Out\_probability_i \times LLimpact_i ,    (5.5)
where LLimpact_i is the total load loss caused by the ith critical situation. The load loss impact is measured in MW. Here the probability of a certain situation is the likelihood that the power system changes to this specific situation at any time in the infinite future. If there are two sets of possible states for the components in the system, namely available (A) and unavailable (U), then the probability of one specific situation can be defined as

Out\_probability = \prod_{c_i \in U} u(c_i) \prod_{c_j \in A} a(c_j) ,    (5.6)
where u(c_i) is the unavailability of component c_i, and a(c_j) is the availability of component c_j. Because of the complex structure and the large number of system components of an actual power network, it is unrealistic to analyse all the outage situations individually. If every situation needed to be analysed individually, the handling process would be very complicated because of the vast amount of data involved. Fortunately, it is noticeable that the outages of several components may share an identical cause. In PRA, a group of components that simultaneously experience outages due to a common cause can be defined as a common mode failure, which can be modeled with a single availability rate. Therefore, the reliability indices are actually estimates, because only a reduced set of situations is simulated, and they are thus approximations of the system's reliability. In other words, the PRA methodology is a combination of a purely probabilistic method and a purely deterministic approach, which overcomes their individual disadvantages and benefits from each one's advantages. Generally, the PRA includes five types of analysis criteria (Zhang et al., 2004): Interaction Analysis; Situation Analysis; Root Cause Analysis; Weak Point Analysis; and Probabilistic Margin Analysis. 1) Interaction Analysis The cause and effect relationship among user defined zones can be revealed by the interaction analysis. A zone interaction is defined by a zone "cause" where the outage is located and a zone "affected" where the violations are experienced. Each interaction is named as "by Zone-Cause on Zone-Affected"; for example, "by Zone1 on Zone2" means that the violations encountered in Zone 2 are
caused by outages in Zone 1 (Zhang et al., 2004).

PRI(Zone1\ on\ Zone2) = \sum_{Situation \in Zone1} \; \sum_{Component \in Zone2} PRI(Situation,\ Component) .    (5.7)

2) Situation Analysis The events or situations that have high probabilities or high impacts on the system can be analysed by the situation analysis. The analysis results can be displayed in the probability/impact space, as shown in Fig.5.6.
Fig. 5.6. Probabilistic risk indices in impact/probability space
3) Root Cause Analysis The key components that may cause critical situations can be indicated by the root cause analysis. A root cause facility is a facility that experiences an outage and creates a violation, whether or not it is combined with other outages (Zhang et al., 2004). The root cause reliability index can be defined as follows

PRI(Root\ Cause) = \sum_{k} \frac{PRI(Situation_k)}{order(Root\ Cause)} ,    (5.8)

where PRI(Situation_k) is the PRI of each of the critical situations that involve this root-cause component, and the sum runs over the k situations that involve the root cause component. 4) Weak Point Analysis The buses and branches which are sensitive to disturbances can be identified with the weak point analysis.
These components experience at least one violation. The weak point analysis can be defined as

PRI(Weak\ Point) = \sum_{Situation \in Situations\ affecting\ the\ Weak\ Point} PRI(Situation,\ Weak\ Point) .    (5.9)
The index is associated with a list of critical situations that cause violations on the weak point components. 5) Probabilistic Margin Analysis The relationship between the reliability indices and the system stress level can be revealed by the probabilistic margin analysis, which is a criterion of system robustness and a measure of the distance to system danger zones, as shown in Fig.5.7. The stress direction could be load level, transfer level, or generation output, etc. Normally the deterministic margin corresponds to the maximum level of load increase that the system can withstand without any reliability problems. The probabilistic margin extends the concept of the deterministic margin by adopting a tolerance criterion.
Fig. 5.7. PRA method expresses reliability margin as a function of load/transfer increment
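To show how Eqs. (5.6) and (5.1) fit together, the following sketch computes an overload index for a few simulated situations of a hypothetical three-component system; the unavailability figures and MVA impacts are illustrative assumptions.

```python
# Assumed component unavailabilities (illustrative)
unavailability = {"line_A": 0.02, "line_B": 0.01, "xfmr_C": 0.005}

def out_probability(failed):
    """Eq. (5.6): product of the unavailabilities of the failed components
    and the availabilities of the remaining ones."""
    p = 1.0
    for comp, u in unavailability.items():
        p *= u if comp in failed else (1.0 - u)
    return p

# Simulated critical situations with their thermal overload impacts in MVA
situations = [({"line_A"}, 35.0),             # single outage
              ({"line_B"}, 12.0),
              ({"line_A", "xfmr_C"}, 80.0)]   # double / common mode outage

apri = sum(out_probability(f) * impact for f, impact in situations)  # Eq. (5.2)
print(f"APRI = {apri:.3f} MVA")
```

The other indices (VPRI, VSPRI, LLPRI) follow the same pattern with their respective impact measures.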
By incorporating probabilities, the PRA analysis provides an extended dimension over a deterministic method, enabling interpretations based on simulated situations that correspond to the likelihood of various scenarios (Zhang et al., 2004). To aid this interpretation, the results reflect both the situation probability and its severity. EPRI Tool for Probabilistic Risk Assessment The PRA methodology offers a more effective method than the traditional deterministic approaches for assessing power grid reliability in today's uncertain and deregulated environment. It helps identify the most critical
potential component failures, evaluate their relative impacts, and provide effective mitigation alternatives. Together with a number of energy companies, EPRI developed a PRA program to help system operators and planners perform risk-based reliability assessment. It offers the energy industry a more accurate tool for assessing grid reliability under restructured market conditions. The PRA method calculates a measure of the probability of undesirable events and a measure of their severity or impact on system operations. Operating a transmission system is like navigating a ship. System operators need to know where the system problem is located, how likely it is to happen, and how much operating margin the system has. Risk assessment gives information on potential danger and the proximity to that danger. PRA combines a probabilistic measure of the likelihood of undesirable events with a measure of the consequence of the events into a single reliability index, the probabilistic risk index (PRI) (Zhang et al., 2007). The basic methodology of PRA can be found in Zhang et al., 2004. The collaborative PRA study achieved the following (EPRI, 2007; Zhang et al., 2007): • Assessed overall system reliability; • Unveiled the cause-and-effect relationship among user-defined areas; • Ranked the contingencies according to their contribution to reliability indices; • Identified the transmission system components most likely to contribute to critical situations; • Identified the specific branches and buses most susceptible to interruption.
5.6 Probabilistic System Planning Power system planning is a complex procedure in which many factors must be carefully considered, especially under a restructured and deregulated environment. In the planning process, the available options are first generated, then undergo stability, reliability, and cost assessments, and finally the optimal options are selected. The specific procedures of probabilistic planning can be summarized as follows.
5.6.1 Candidate Pool Construction The planning process starts by generating an initial candidate pool, which is constructed based on the given and forecasted system information. Also,
expert knowledge is used in this stage to ensure the rationality of the candidates with respect to practical engineering and management concerns. Furthermore, other unpredictable factors such as new generation capacity, fuel prices, changes of market rules, and so on should also be considered. The candidate pool should cover as many of the uncertainties that might affect the planning as possible.
5.6.2 Feasible Options Selection This step can be regarded as a first filtering process according to initial criteria. Based on the candidate pool formulated with practical and management experience, the selection process usually starts by deciding the planning horizon and performing market forecasting correspondingly. Market simulations can be conducted to examine the system stability and reliability and to identify potential locations that need new branches. For example, planning alternatives that meet the N-1 principle are selected from the candidate pool using analysis tools such as load flow and contingency analysis. A portion of the candidates can be eliminated by examining the required investments and construction times. Some other options may also be dropped if environmental criteria or government policies are violated.
5.6.3 Reliability and Cost Evaluation This step is the key procedure of the whole planning process. Probabilistic reliability evaluation is conducted for the selected alternatives, and those with the lowest reliability levels are discarded. Then the overall costs of investment, operation, and unreliability are calculated for the selected alternatives over the planning time period. The objective of this process is to select a reduced set of alternatives from a large number of options according to minimum cost.
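A minimal sketch of the cost comparison in this step is given below: each surviving alternative is scored by annualized investment, operating cost, and an unreliability cost taken here as EENS multiplied by an assumed value of lost load; all figures are hypothetical.

```python
VOLL = 30_000.0   # assumed value of lost load, $/MWh

# (name, annualized investment $M/yr, operating cost $M/yr, EENS MWh/yr)
alternatives = [("Option 1", 12.0, 4.0, 150.0),
                ("Option 2",  9.0, 5.5, 420.0),
                ("Option 3", 15.0, 3.0,  60.0)]

def total_cost(alt):
    name, invest, operate, eens = alt
    return invest + operate + eens * VOLL / 1e6   # total $M/yr

for alt in sorted(alternatives, key=total_cost):
    print(f"{alt[0]}: total cost {total_cost(alt):.2f} $M/yr")
```

Pricing unreliability in the same currency as investment is what allows a cheaper but less reliable option to be compared directly against a more expensive, more reliable one.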
5.6.4 Final Adjustment The final adjustment to the planning process is to select an appropriate criterion (Li and Choudhury, 2007; Mansoa and Leite da Silva, 2004) and conduct an overall probabilistic economic analysis. A general procedure for probabilistic planning is shown in Fig.5.8.
Fig. 5.8. Procedures for probabilistic system planning studies
5.7 Case Studies In this section some examples of probabilistic power system analysis are given. These include probabilistic power system stability and load flow assessments.
5.7.1 A Probabilistic Small Signal Stability Assessment Example The 39 bus New England test system (see Fig.3.14) is used for a probabilistic small signal stability assessment with grid computing techniques. Except for generator number 10, all generators are modeled using 7 differential equations, which include both generator and excitation system dynamics. Only generator dynamics are used to model generator 10, which is connected to the slack bus. The exciter model used is the IEEE DC exciter type I (Chow, 2000). In order to perform small signal stability analysis, the system dynamic equations are linearised around an operating point, as shown in Eq. (5.10) (Kundur, 1994). The simulation process is given in Fig.5.9 (Xu et al., 2006).
Fig. 5.9. Flowchart of Monte Carlo based small signal analysis (Xu et al., 2006)
\begin{cases}
\Delta\dot{\delta} = \Delta\omega, \\
\Delta\dot{\omega} = -\dfrac{1}{M}\, D\,\Delta\omega, \\
\Delta\dot{E}'_q = \dfrac{1}{T'_{d0}}\left(\Delta E_{fd} - \dfrac{x_d}{x'_d}\,\Delta E'_q\right), \\
\Delta\dot{E}'_d = -\dfrac{1}{T'_{q0}}\,\dfrac{x_q}{x'_q}\,\Delta E'_d, \\
\Delta\dot{V}_A = -\dfrac{1}{T_A}\,(K_A\,\Delta V_F + \Delta V_A), \\
\Delta\dot{E}_{fd} = -\dfrac{1}{T_E}\,(K_E + S_E)\,\Delta E_{fd}, \\
\Delta\dot{V}_F = \dfrac{1}{T_F}\,(K_F\,\Delta E_{fd} - \Delta V_F),
\end{cases}    (5.10)

where δ, ω denote generator rotor angle and speed;
M denotes the moment of inertia of the generator; D denotes the damping coefficient; x_d, x_q denote the steady state reactances, and x'_d, x'_q the transient reactances; E'_d, E'_q denote the voltages behind x'_d and x'_q respectively, and E_fd the field voltage; V_A, V_F denote the output voltages of the regulator amplifier and stabilizer respectively; T'_d0, T'_q0 denote the transient time constants of the d and q axes; T_A, T_F, T_E denote the time constants of the regulator, stabilizer, and exciter circuits respectively; K_A, K_F, K_E denote the gains of the regulator amplifier, stabilizer, and exciter respectively; S_E denotes the exciter saturation function. The uncertainties of the process include loads and generator outputs. The load variations are applied at buses 15 – 29 with the means and standard deviations of the real power loads shown in Table 5.5. A total of 6 000 simulations were run for this case study using the Monte Carlo approach. The resultant (mean) eigenvalue distribution is given in Fig.5.10. The distributions of the real and imaginary parts of one eigenvalue are given in Fig.5.11; they are clearly not normal distributions.

Table 5.5. Mean and standard deviations of real power loads

Bus    15    16    17    18    19    20    21    22
Mean   3     3     2     1     4.5   2     4     1
Std    0.5   0.2   0.3   0.1   0.5   0.5   0.4   0.1

Bus    23    24    25    26    27    28    29    30
Mean   4     4     2.1   4     2.2   4     5     2
Std    0.4   0.6   0.4   0.4   0.3   0.2   0.4   0.1
It is further observed that among the resultant 67 eigenvalues, only one unstable mode exists. This unstable mode (No. 2) has a positive real part with a mean of 0.0069 and a standard deviation of 0.001. This means that the system may be unstable in this particular mode, while all other modes are stable for the cases considered in the Monte Carlo simulation. Moreover, all oscillation modes show damping ratios greater than 0.05, which means the system is well damped for most operating conditions.
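For reference, the damping ratio used in the 0.05 threshold above follows from the standard identity for an oscillatory mode; this is a textbook relation added here for clarity, not a formula stated in the original text. For a mode \lambda = \sigma \pm j\omega,

\zeta = \dfrac{-\sigma}{\sqrt{\sigma^2 + \omega^2}} ,

so a hypothetical mode with \sigma = -0.5 and \omega = 9.95 rad/s, for example, has \zeta \approx 0.05, right at the quoted boundary.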
Fig. 5.10. Distribution of system eigenvalues (mean)
Fig. 5.11. Distribution function of real and imaginary parts of a complex eigenvalue
5.7.2 Probabilistic Load Flow The Queensland, Australia transmission network is used as an example to compare the performance of different probabilistic load flow computation methods. The system includes more than 8 400 km of high-voltage trans-
mission lines from far north Queensland to the south-east, bordering New South Wales. The total installed generating capacity as of 2006 is about 10.6 GW. The original system is grouped into 10 regions: Far North (FN), Ross, North, Central West (CW), Gladstone, Wide Bay, South West (SW), Moreton North, Moreton South, and Gold Coast (Powerlink, 2006). Four different PLF methods are applied to this system: (1) the combined cumulants and Gram-Charlier expansion (CGC) method (Zhang and Lee, 2004); (2) the CGC method considering network outages (CGCN); (3) the CGC method considering uncertainty factors from both the electricity market and the physical power system; and (4) Monte Carlo simulation (MCS) considering generation dispatch, generation forced outages, and network contingencies. A Weibull distribution is used to model generation in (2) and (3). The conditional probability concept is used to represent network contingencies in (2). Method (3) is the new approach proposed by Miao et al., 2009. Reconstructions using the Gram-Charlier A expansion can be performed with any number of cumulants. As the cumulant order increases, accuracy improves but the computational expense of the reconstruction also grows. In order to display the graphs clearly with accurate but not time consuming results, only the 6th order Gram-Charlier expansion is recorded and shown here. The results of MCS with 5 000 simulations are used as the reference for comparison purposes. The resultant PDF and CDF of the power flow magnitude of the circuits between the CW and SW regions are given in Figs. 5.12 and 5.13.
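The core reconstruction step of the CGC-type methods can be sketched as follows: given the first four cumulants of a line flow, a Gram-Charlier A series corrects a Gaussian base density with Hermite polynomial terms. The cumulant values below are illustrative assumptions, not the Queensland results.

```python
import numpy as np

def hermite_prob(n, z):
    """Probabilists' Hermite polynomial He_n via the standard recurrence."""
    h0, h1 = np.ones_like(z), z
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, z * h1 - k * h0   # He_{k+1} = z*He_k - k*He_{k-1}
    return h1

def gram_charlier_pdf(x, kappa):
    """kappa = [k1, k2, k3, k4]: mean, variance, 3rd and 4th cumulants."""
    mu, var, k3, k4 = kappa
    sigma = np.sqrt(var)
    z = (x - mu) / sigma
    phi = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))   # Gaussian base
    series = (1.0
              + k3 / (6 * sigma**3) * hermite_prob(3, z)       # skewness term
              + k4 / (24 * sigma**4) * hermite_prob(4, z))     # kurtosis term
    return phi * series

x = np.linspace(100, 300, 201)
pdf = gram_charlier_pdf(x, kappa=[200.0, 400.0, 800.0, 12000.0])  # assumed MW cumulants
print(f"approx. total probability: {pdf.sum() * (x[1] - x[0]):.3f}")
```

Because the cumulants of the output quantities can be obtained analytically from the input cumulants, this reconstruction avoids the thousands of load flow solutions a Monte Carlo run requires, which is the source of the speed advantage reported below.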
Fig. 5.12. PDF of the active power of the transmission line between CW and SW regions obtained by different methods
Fig. 5.13. Comparison of the active power CDFs of the transmission line between CW and SW obtained by different methods
From these density curves, it is simple to deduce the confidence levels and the probability of any quantity being greater than or less than a certain value. It can be seen that visible differences still exist in some local parts. However, the errors between the results are fairly small and can be considered acceptable. Compared with the results of MCS, the proposed method considering the dispatch strategies gives an expected output closer to the reference. Both the expected range of the output quantities and the probability distribution are more accurate than the results of CGC and CGCN. The computational time is comparable among methods (2) – (4), ranging between 2 – 4 seconds. Compared with the computation time of over 240 seconds for the Monte Carlo approach, these cumulant based approaches are more efficient while giving sufficiently good results (Miao et al., 2009). More examples of probabilistic methods, especially probabilistic reliability assessment and probabilistic planning, can be found in (Zhang and Lee, 2004; Maruejouls et al., 2004; Zhang et al., 2004; Zhang et al., 2007).
5.8 Summary Probabilistic power system analysis methods, including load flow, stability, reliability, and planning, provide a valuable approach to handle the increasing
uncertainties associated with power system operations and planning nowadays. The key concepts of the move toward probabilistic reliability assessment and planning, as initiated by EPRI, are reviewed in this chapter. Specific techniques for load flow analysis and stability assessment are also discussed. Results of some probabilistic analysis examples, including a probabilistic load flow calculation considering generation dispatch uncertainties in a market environment, are given in the case studies section. These analytical approaches, compared with the Monte Carlo based approach, significantly improve computational efficiency while giving sufficiently good results. Probabilistic planning of a power system provides the system planner with more confidence in selecting expansion planning options which are more economically attractive. How to model the uncertainties in a power system for probabilistic analysis remains an interesting problem that still needs further research. This is particularly important in cases where events with low probability but high impact need to be studied in the planning process.
References

Aboreshaid S, Billinton R, Fotuhi-Firuzabad M (1995) Probabilistic evaluation of transient stability studies using the method of bisection. IEEE Trans Power Syst 11(4): 1990 – 1995
AEMO (Australian Energy Market Operator) website. http://www.aemo.com.au/. Accessed 25 May 2009
Ali M, Dong ZY, Zhang P et al (2007) Probabilistic transient stability analysis using grid computing technology. Proceedings of IEEE Power Engineering Society General Meeting, Tampa, 24 – 28 June 2007
Ali M, Dong ZY, Li X et al (2005) Applications of grid computing in power systems. Proceedings of the Australian Universities Power Engineering Conference, Hobart, 25 – 28 September 2005
Allan R, Billinton R (2000) Probabilistic assessment of power systems. Proceedings of the IEEE 88(2): 140 – 162
Anderson PM, Bose A (1983) A probabilistic approach to power system stability analysis. IEEE Trans Power App Syst PAS-102(8): 2430 – 2439
Billinton R, Kuruganty PRS (1980) A probabilistic index for transient stability. IEEE Trans Power App Syst PAS-99(1): 195 – 206
Billinton R, Kuruganty PRS (1981) Probabilistic assessment of transient stability in a practical multimachine system. IEEE Trans Power App Syst PAS-100(7): 3634 – 3641
Billinton R, Allan RN (1996) Reliability Evaluation of Power Systems. Plenum, New York
Burchett RC, Heydt GT (1978) Probabilistic methods for power system dynamic stability studies. IEEE Trans Power App Syst PAS-97(3): 695 – 702
Chow J (2000) Power System Toolbox 2.0: Dynamic Tutorial and Functions. Cherry Tree Scientific Software
Dong ZY, Makarov YV, Hill DJ (1997) Genetic algorithms in power systems small signal stability analysis. Proceedings of the 1997 International Conference on Advances in Power System Control, Operation and Management, 342 – 347
Dong ZY, Pang CK, Zhang P (2005) Power system sensitivity analysis for probabilistic small signal stability assessment in a deregulated environment. Int J Cont Aut Syst 3(2): 355 – 362
Endrenyi J, Bhavaraju MP, Clements KA et al (1988) Bulk power system reliability concepts and applications. IEEE Trans Power Syst 3(1): 109 – 117
EPRI (2007) Utility Application Experiences of Probabilistic Risk Assessment. Palo Alto
Hsu Y, Chang CL (1988) Probabilistic transient stability studies using the conditional probability approach. IEEE Trans Power Syst 3(4): 1565 – 1572
Kundur P, Paserba J, Ajjarapu V et al (2004) IEEE/CIGRE Joint Task Force on Stability Terms and Definitions: definition and classification of power system stability. IEEE Trans Power Syst 19(2): 1387 – 1401
Kundur P (1994) Power System Stability and Control. McGraw-Hill, New York
Kuruganty PRS, Billinton R (1981) Protection system modeling in a probabilistic assessment of transient stability. IEEE Trans Power App Syst PAS-100(5): 2163 – 2170
Li W (2005) Risk Assessment of Power Systems: Models, Methods, and Applications. IEEE Press, Wiley Interscience
Li WY, Choudhury P (2007) Probabilistic transmission planning. IEEE Power Energy Mag 5(5): 46 – 53
McCalley JD, Fouad AA, Agrawal BL et al (1997) A risk based security index for determining operating limits in stability limited electric power systems. IEEE Trans Power Syst 12(4): 1210 – 1219
Makarov YV, Dong ZY (1998) Eigenvalues and eigenfunctions. In: Encyclopedia of Electrical and Electronics Engineering, Computational Science & Engineering volume. Wiley, London
Makarov YV, Dong ZY, Hill DJ (1998) A general method for small signal stability analysis. IEEE Trans Power Syst 13(3): 979 – 985
Makarov YV, Hill DJ, Dong ZY (2000) Computation of bifurcation boundaries for power systems: a new Δ-plane method. IEEE Trans Circuits Syst 47(4): 536 – 544
Maruejouls N, Sermanson V, Lee ST et al (2004) A practical probabilistic reliability assessment using contingency simulation. Proceedings of IEEE Power Systems Conference and Exposition, New York, 10 – 14 October 2004
Mansoa LAF, Leite da Silva AM (2004) Probabilistic criteria for power system expansion planning. Electr Power Syst Res 69(1): 51 – 58
Miao L, Dong ZY, Zhang P (2009) A cumulant based probabilistic load flow calculation method considering generator dispatch uncertainties in an electricity market. IEEE Trans Power Syst (submitted)
Operation planning. BC Hydro for Generations. http://www.bchydro.com. Accessed 25 May 2009
Probabilistic system planning: Comparative Options & Demonstration (2004) Parsons Brinckerhoff Associates
Pang CK, Dong ZY, Zhang P et al (2005) Probabilistic analysis of power system small signal stability region. Proceedings of International Conference on Control and Automation, Budapest, 26 – 29 June 2005
Powerlink Queensland (2006) Annual planning report 2006. http://www.powerlink.com.au/asp/index.asp?sid=5056&page=Corporate/Documents&cid=5250&gid=476. Accessed 25 May 2009
Ringlee RJ, Albrecht P, Allan RN et al (1994) Bulk power system reliability criteria and indices trends and future needs. IEEE Trans Power Syst 9(1): 181 – 190
Robert CP, Casella G (2004) Monte Carlo Statistical Methods, 2nd edn. Springer, New York
Su CL (2005) Probabilistic load-flow computation using point estimate method. IEEE Trans Power Syst 20(4): 1843 – 1851
Vaahedi E, Li WY, Chia T et al (2000) Large scale probabilistic transient stability assessment using B.C. Hydro's on-line tool. IEEE Trans Power Syst 15(2): 661 – 667
Van Ness JE, Boyle JM (1965) Sensitivities of large multiple-loop control systems. IEEE Trans Automatic Control AC-10: 308 – 315
Wang KW, Chung CY, Tse CT et al (2000) Improved probabilistic method for power system dynamic stability studies. IEE Proceedings: Generation, Transmission and Distribution 147(1): 37 – 43
Wang KW, Tse CT, Bian XY et al (2003) Probabilistic eigenvalue sensitivity analysis and PSS design in multimachine systems. IEEE Trans Power Syst 18(1): 1439 – 1445
Xu Z, Ali M, Dong ZY (2006) A novel grid computing approach for probabilistic small signal analysis. Proceedings of IEEE Power Engineering Society General Meeting
Xu Z, Dong ZY, Wong KP (2006a) A hybrid planning method for transmission networks in a deregulated environment. IEEE Trans Power Syst 21(2): 925 – 932
Xu Z, Dong ZY, Wong KP (2006b) Transmission planning in a deregulated environment. IEE Proceedings: Generation, Transmission and Distribution 153(3): 326 – 334
Xu Z, Dong ZY, Zhang P (2005) Probabilistic small signal analysis using Monte Carlo simulation. Proceedings of IEEE Power Engineering Society General Meeting, San Francisco, 12 – 16 June 2005, 2: 1658 – 1664
Zhang P, Lee ST, Sobajic D (2004) Moving toward probabilistic reliability assessment methods. Proceedings of International Conference on Probabilistic Methods Applied to Power Systems, Ames, 12 – 16 September 2004, 906 – 913
Zhang P, Min L, Hopkins L et al (2007) Utility experience performing probabilistic risk assessment for operational planning. Proceedings of International Conference on Intelligent Systems Applications to Power Systems, Kaohsiung, 5 – 8 November 2007
Zhang P, Lee ST (2004) Probabilistic load flow computation using the method of combined cumulants and Gram-Charlier expansion. IEEE Trans Power Syst 19(1): 676 – 682
Zhao JH, Dong ZY, Lindsay P et al (2009) Flexible transmission expansion planning with uncertainties in an electricity market. IEEE Trans Power Syst 24(1): 479 – 488
6 Phasor Measurement Unit and Its Application in Modern Power Systems Jian Ma, Yuri Makarov, and Zhaoyang Dong
The introduction of phasor measurement units (PMUs) in power systems significantly improves the possibilities for monitoring and analyzing power system dynamics. Synchronized measurements make it possible to directly measure phase angles between corresponding phasors at different locations within the power system. Improved monitoring and remedial action capabilities allow network operators to utilize the existing power system in a more efficient way. Improved information allows fast and reliable emergency actions, which reduces the need for the relatively high transmission margins required against potential power system disturbances. In this chapter, the applications of PMUs in modern power systems are presented. Specifically, the topics touched on in this chapter include state estimation, voltage and transient stability, oscillation monitoring, event and fault detection, situational awareness, and model validation. A case study using the characteristic ellipsoid method based on PMU measurements to monitor power system dynamics is presented.
6.1 Introduction Synchrophasors are precise measurements of the power systems and are obtained from PMUs. PMUs measure voltage, current, and frequency in terms of magnitude and phasor angle at a very high speed (usually 30 measurements per second). Each phasor measurement recorded by PMU devices is time-stamped based on universal standard time, such that phasors measured by different PMUs installed in different locations can be synchronized by aligning time stamps. The phasor measurements are transmitted either via dedicated links between specified sites, or over a switched link that is established for the purpose of the communication (Radovanovic, 2001). These synchronized phasor measurements allow the operators to monitor dynamics,
identify changes in system conditions, and better maintain and protect the reliability of power systems. Many new promising concepts, such as the wide-area measurement/monitoring system (WAMS), are directly related to PMU techniques. PMUs bring great potential for upgrading the supervision, operation, protection, and control of modern power systems. Modern synchronized phasor measurement technology dates back to the article by Phadke et al., 1983, in which the importance of positive-sequence voltage and current phasor measurements was identified and some of the uses of these measurements were presented. The Global Positioning System (GPS) provides the most effective means to measure synchronized phasors in power systems over great distances. In the early 1980s, Virginia Polytechnic Institute and State University (Virginia Tech) in the USA led the effort to build the first prototypes of the modern GPS-based PMU. The IEEE finished a standard in 1995 (Martin et al., 1998) and released a revised version in 2005 (IEEE Power Engineering Society, 2006) to standardize the data format used by PMUs. The North American SynchroPhasor Initiative (NASPI) (NASPI, 2009a) was launched in 2005 in the hope of improving power system reliability and visibility through wide area measurement, monitoring, and control. The major goal of the NASPI is to create a robust, widely available and secure synchronized data measurement infrastructure for the interconnected North American electric power system, with associated analysis and monitoring tools for better planning and operation and improved reliability. The increased utilization of electric power systems is of major concern to most utilities and grid operators today. Advanced control and supervision systems allow the power system to operate closer to its technical limits by increasing power flow without violating reliability constraints. The introduction of phasor measurement units is the first step towards more efficient and reliable network operation. PMUs provide relevant phasor data for off-line studies and post-event analysis. Typically, each PMU has 10 or 20 analog input channels for voltages and currents and, in addition, is capable of handling a practically unlimited number of binary information signals. The terminals transmit information to the data concentrator up to 60 times a second. Based on the stored data in the data concentrator, extensive off-line studies and post-event analyses can be performed. Phasor measurements obtained from PMUs have a wide variety of applications in support of maintaining and improving power system reliability. PMUs have been applied in North America, Europe, China, and Russia for post-disturbance analysis, stability monitoring, thermal overload monitoring, power system restoration, and model validation (Chakrabarti et al., 2009a). Applications of PMUs for state estimation, real-time control, adaptive protection, and wide area stabilizers are in either the testing phase or the planning stage in these countries. India and Brazil are in either the planning or testing phase of using PMUs in their power grids. Some important potential applications of PMUs in power systems in-
clude (Phadke, 1993): • improvement of the static state estimation function in a power system control center; • robust, two-ended transmission line fault location; • emergency control during large disturbances in a power system; • voltage control in a power system; • synchronized event recording. According to NASPI's synchrophasor applications table (NASPI, 2009b), actual and potential phasor data application areas include reliability operations, market operation, planning, and others. A detailed description of each application area is provided in Table 6.1.

Table 6.1. NASPI's Synchrophasor Applications Table (NASPI, 2009b)

Reliability Operations
• Wide-area grid monitoring and visualization: Use phasor data to monitor and alarm for metrics across an entire interconnection (frequency stability, voltage, angle differences, MW and MVAR flows).
• Power plant monitoring and integration: Use real-time data to track and integrate power plant operation (including intermittent renewables and distributed energy resources).
• Alarming for situational awareness tools: Use real-time data and analysis of system conditions to identify and alert operators to potential grid problems.
• State estimation: Use actual measured system condition data in place of modeled estimates.
• Inter-area oscillation monitoring, analysis and control: Use phasor data and analysis to identify frequency oscillations and initiate damping activities.
• Automated real-time control of assets: Use phasor data and analysis to identify frequency oscillations and initiate damping activities.
• Wide-area adaptive protection and system integrity protection: Real-time phasor data allow identification of grid events and adaptive design, execution and evaluation of appropriate system protection measures.
• Planned power system separation: Improve planned separation of the power system into islands when instability occurs, and dynamically determine appropriate islanding boundaries for island-specific load and generation balances.
• Dynamic line ratings and VAR support: Use PMU data to monitor or improve transmission line ratings in real time.
• Day-ahead and hour-ahead operations planning: Use phasor data and improved models to understand current, hour-ahead, and day-ahead system operating conditions under a range of normal and potential contingency operating scenarios.
• Automatically manage frequency and voltage response from load: System load response to voltage and frequency variations.
• System reclosing and power system restoration: Use phasor data to bring equipment back into service without risking stability or unsuccessful reclosing attempts.
• Protection system and device commissioning.

Market Operation
• Congestion analysis: Synchronized measurements make it possible to operate the grid according to true real-time dynamic limits, not conservative limits derived from off-line studies for worst-case scenarios.

Planning
• Static model benchmarking: Use phasor data to better understand system operations, identify errors in system modeling data, and fine-tune power system models for on-line and off-line applications (power flow, stability, short circuit, OPF, security assessment, modal frequency response, etc.).
• Dynamic model benchmarking: Phasor data record actual system dynamics and can be used to validate and calibrate dynamic models.
• Generator model validation, stability model validation, and performance validation: Use phasor data to validate planning models, to understand observed system behavior and predict future behavior under assumed conditions.

Others
• Forensic event analysis: Use phasor data to identify the sequence of events underlying an actual system disturbance, to determine its causes.
• Phasor applications vision, road mapping & planning.
PMUs are used in many electrical power engineering applications, including measurement, protection, control, and observation. In measurement, they have the unique ability to provide synchronized phasor measurements of voltages and currents from widely dispersed locations in an electric power grid, collected at a control center for analysis. PMUs revolutionize the process of power system monitoring and control, a revolution that can also benefit from WAMS technology. In addition, in protection and control, PMUs are used in many applications to measure the synchronized phasor parameters needed to take a decision or an action. PMU-based measurements are extensively used for a wide range of applications including state estimation, situational awareness for operational decision making, and model validation. A number of novel applications that utilize phasor measurements from PMUs to determine small signal oscillatory
modes, identify model parameters, and perform post-scenario system analysis have also been developed (Balance et al., 2003). With the initiation of the Eastern Interconnection Phasor Project (EIPP) (Cai et al., 2005; Donnelly et al., 2006), new opportunities have arisen to incorporate phasor measurements from PMUs in real time analysis to evaluate system dynamic performance. Recent efforts involving the use of PMU measurements for voltage stability analysis and for monitoring power system dynamic behavior have also been reported (Corsi and Taranto, 2008; Sun et al., 2007; Liu et al., 1999a; Liu et al., 1999b; Liu et al., 1998; Milošević and Begović 2003a, b).
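As background for the estimation methods that follow, the sketch below shows one common way a PMU-style phasor can be extracted from a cycle of waveform samples with a single-bin DFT; the sampling rate and signal parameters are illustrative assumptions, not a description of any particular PMU design.

```python
import numpy as np

f0 = 60.0                 # nominal frequency, Hz
n = 48                    # samples per cycle (assumed)
t = np.arange(n) / (n * f0)

# Simulated voltage samples: 1.0 p.u. RMS magnitude, 30 degree phase angle
v = 1.0 * np.sqrt(2) * np.cos(2 * np.pi * f0 * t + np.radians(30.0))

# Single-bin DFT at the fundamental, scaled to give the RMS phasor
phasor = np.sqrt(2) / n * np.sum(v * np.exp(-1j * 2 * np.pi * f0 * t))
print(f"magnitude = {abs(phasor):.4f} p.u., "
      f"angle = {np.degrees(np.angle(phasor)):.2f} deg")
```

In an actual PMU the same computation runs continuously, and each resulting phasor is stamped with GPS-synchronized time so that angles from different substations are directly comparable.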
6.2 State Estimation

The accuracy of state estimation can be improved with the synchronized measurements from PMUs. This section gives an overview of PMU applications in this area.
6.2.1 An Overview

The results of state estimation are the basis for a great number of power system applications, including Automatic Generation Control (AGC), load forecasting, optimal power flow, corrective real and reactive power dispatch, stability analysis, security assessment, contingency analysis, etc. Fast and accurate determination of the system state is critically important for the secure and safe operation of power systems. Therefore, modern Energy Management Systems (EMSs) in electric energy control centers are usually equipped with state estimation solvers. The major goal of a state estimation solver is to provide optimal estimates of the system's current operating state based on a group of redundant conventional measurements and on the assumed system model (Abur and Exposito, 2004). These available measurements are traditionally provided by supervisory control and data acquisition (SCADA) systems and usually include voltage magnitudes, real and reactive power injections, line real and reactive power flows, etc. With their growing use in recent years, PMUs have received great interest as a means to improve state estimation, owing to their synchronized measurements and high data transmission speed (Thorp et al., 1985; Phadke et al., 1986; Zivanovic and Cairns, 1996). Because PMUs obtain measurements synchronously, they are more accurate than traditional SCADA measurements; consequently, the performance of state estimation can be dramatically improved by PMUs.
Conventional state estimation approaches assume that a sufficient number of properly placed traditional SCADA measurements can deal with bad data and provide complete observability without PMU measurements. Besides increasing the accuracy of state estimation, PMU measurements can also improve network observability (Nuqui and Phadke, 2005), help in bad data processing (Chen and Abur, 2005), and help in determining the network topology. The objective of applying PMUs to the state estimation problem is to take advantage of the highly accurate measurements of magnitude and phase for both bus voltages and branch currents. If enough PMUs exist to guarantee the observability of the entire system, the state estimation problem can be formulated in a slightly simpler manner: the relation between measured phasors and system states becomes linear, yielding a linear measurement model (Baldwin et al., 1993).
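To make this concrete, in the fully observable PMU-only case, and using the notation of the WLS method introduced below, the measurement model reduces to z = Hx + v with a constant matrix H, so the weighted least-squares estimate is obtained in a single linear step, with no iteration required (a standard result, stated here for illustration):

x̂ = (H^T R^{-1} H)^{-1} H^T R^{-1} z.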
6.2.2 Weighted Least Squares Method

Many different methods have been developed to solve the state estimation problem; least-squares-based algorithms are among the most popular. Among them, the weighted least squares (WLS) method is commonly used in power systems. Its objective is to minimize the weighted sum of the squares of the differences between the estimated and measured values. A brief description of the WLS method follows. Due to the existence of measurement errors, the measurements can be expressed as

z = h(x) + v,  (6.1)

where z is the measurement vector containing the real and imaginary parts of the measured voltage and current phasors, x refers to the state variables containing the real and imaginary parts of the bus voltage phasors, v stands for the measurement error vector, and h(·) denotes the non-linear relation between measurements and state variables. It is assumed that v is Gaussian with

E(v) = 0,  (6.2)

E(v v^T) = R,  (6.3)

where R is the covariance matrix of the measurement errors. The maximum likelihood estimate of x is the value that minimizes the weighted least-squares performance index

J(x) = [z − h(x)]^T R^{-1} [z − h(x)].  (6.4)
If the white noises associated with the measurements are considered independent, then we have

R = diag(σ_1^2, σ_2^2, ..., σ_m^2),  (6.5)
where m is the number of measurements. Eqs. (6.4) and (6.5) show that the weights are set as the inverses of the measurement noise variances: higher quality measurements have lower noise and larger weights, while lower quality measurements have higher noise and smaller weights. The minimum of J(x) satisfies

∂J(x)/∂x = 0.  (6.6)

Combining Eqs. (6.4) and (6.6), we get

H^T(x) R^{-1} (z − h(x)) = 0,  (6.7)

where H(x) is the Jacobian matrix of the measurement function h(x) (in a purely PMU-based linear formulation, H is constant and a function of the network model parameters only):

H(x) = [ ∂h_1/∂x_1  ∂h_1/∂x_2  ...  ∂h_1/∂x_n
         ∂h_2/∂x_1  ∂h_2/∂x_2  ...  ∂h_2/∂x_n
         ...
         ∂h_m/∂x_1  ∂h_m/∂x_2  ...  ∂h_m/∂x_n ],  (6.8)

where n is the number of state variables. The non-linear function h(x) can be linearized by expanding its Taylor series around a point x_0 and omitting the second- and higher-order terms, i.e.,

h(x) ≈ h(x_0) + H(x_0)Δx.  (6.9)

Eqs. (6.7) and (6.9) can be solved by iterative methods such as the Newton-Raphson method. At the (k + 1)-th iteration, the state variables are updated from their k-th values:

Δx^(k) = [H^T(x^(k)) R^{-1} H(x^(k))]^{-1} H^T(x^(k)) R^{-1} [z − h(x^(k))],  (6.10)

x^(k+1) = x^(k) + Δx^(k).  (6.11)
The iteration is stopped when the following criterion is satisfied:

sqrt( Σ_{i=1}^{n} (x_i^(k+1) − x_i^(k))^2 ) < ε,  (6.12)

or

|J(x^(k+1)) − J(x^(k))| < ε,  (6.13)

where ε is a predetermined convergence factor.
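As an illustration of the iteration in Eqs. (6.10) – (6.12), the following is a minimal Python sketch of the WLS solver for a toy problem; the measurement function h, its Jacobian H, and the noise levels are hypothetical stand-ins for a real network model, so this is a sketch rather than a production estimator.

import numpy as np

# Toy measurement model: two state variables, three measurements.
# h(x) and H(x) are hypothetical stand-ins for the network
# power-flow relations of Eq. (6.1).
def h(x):
    return np.array([x[0] ** 2, x[0] * x[1], x[1]])

def H(x):
    return np.array([[2 * x[0], 0.0],
                     [x[1], x[0]],
                     [0.0, 1.0]])

def wls_state_estimation(z, R, x0, eps=1e-8, max_iter=20):
    """Gauss-Newton WLS iteration of Eqs. (6.10)-(6.11)."""
    x = x0.copy()
    Rinv = np.linalg.inv(R)
    for _ in range(max_iter):
        Hx = H(x)
        gain = Hx.T @ Rinv @ Hx                      # gain matrix H'R^-1 H
        dx = np.linalg.solve(gain, Hx.T @ Rinv @ (z - h(x)))
        x += dx
        if np.linalg.norm(dx) < eps:                 # criterion (6.12)
            break
    return x

# Simulated measurements: true state plus Gaussian noise, cf. Eq. (6.1).
x_true = np.array([1.02, 0.97])
sigma = np.array([0.01, 0.01, 0.005])                # better channels -> larger weights
R = np.diag(sigma ** 2)                              # covariance, Eq. (6.5)
rng = np.random.default_rng(0)
z = h(x_true) + rng.normal(0.0, sigma)

print(wls_state_estimation(z, R, x0=np.ones(2)))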
6.2.3 Enhanced State Estimation

Usually, the PMU measurements used for state estimation include bus voltage phasors and branch current phasors. Sometimes, traditional measurements are also used alongside them to build a hybrid state estimator (Bi et al., 2008; Xu and Abur, 2004). The voltage and current phasors used in PMU-based state estimation can be obtained from PMUs installed at any voltage level; some applications assume that the PMUs are installed on the extra-high-voltage side of the substation of a power plant (Angel et al., 2007). To calculate the covariance matrix of PMU measurements, the branch current phasors are converted from polar coordinates into rectangular coordinates (Bi et al., 2008), which may introduce an indirect measurement transformation error.

Usually, to obtain the relative phase angles of all buses in a traditional SCADA system, one slack bus has to be chosen as the reference bus; based on this reference, WLS can obtain the voltage phase angles in the state vector. In WAMS, however, synchronized phasor measurements may use a different reference, determined by the instant at which synchronized sampling is initiated. The reference problem, if not properly handled, may lead to incorrect results in PMU-based state estimation (Bi et al., 2008). Usually, a PMU needs to be installed at the reference bus of the traditional state estimation model, so that the same bus can be chosen as the reference in WAMS. Moreover, the reference bus should be equipped with two PMUs to protect against the failure of a single reference measurement (Bi et al., 2008).

1) Optimal PMU Locations for State Estimation

The problem of finding optimal PMU locations (or minimum PMU placement) is one of the important problems associated with PMU-based state estimation (Baldwin et al., 1993; Nuqui and Phadke, 2005; Milošević and Begović, 2003a; Xu and Abur, 2005; Xu and Abur, 2004; Rakpenthai et al., 2007; Chakrabarti and Kyriakides, 2008; Chakrabarti et al., 2009b). The purpose of minimum PMU placement is to minimize the number of PMUs installed in a power system under the constraint that the system is topologically observable (all of the bus voltage
phasors can be estimated) during its normal operation and following any single-line contingency (Milošević and Begović, 2003a):

min_{x∈R} {N, −S},
subject to M = 0,  (6.14)

where R is the search space, N denotes the total number of PMUs to be placed in the system, M is the total number of unobservable buses, and S is the number of buses that remain observable following any single-line outage, reflecting the single-line-outage redundancy of the system. This is a typical multi-criteria combinatorial optimization problem requiring the simultaneous optimization of two conflicting objectives with different individual optima: minimization of the number of PMUs and maximization of the measurement redundancy. Three criteria need to be considered when solving the minimum PMU placement problem (Rakpenthai et al., 2007): the accuracy of the estimation, the reliability of the estimated state under measurement failures and changes of network topology, and the investment cost. The solution space is defined over the domain that consists of all placement sets of PMUs (Baldwin et al., 1993). Because the minimum PMU placement problem is NP-complete (Brueni and Heath, 2005), no polynomial-time algorithm is known that finds the exact solution. Therefore, Pareto-optimal solutions, offering a set of optimal tradeoffs between the objectives, can be sought instead of a unique optimal solution (Milošević and Begović, 2003a). Metaheuristic techniques, including simulated annealing (SA) (Baldwin et al., 1993), genetic algorithms (GA) (Milošević and Begović, 2003a), Tabu search (Peng et al., 2006), the adaptive clonal algorithm (Bian and Qiu, 2006), etc., have been applied to the problem. Abur et al. pioneered solving the optimal PMU placement problem with Integer Linear Programming (ILP) (Abur and Magnago, 1999), and an approach based on complete enumeration trees was applied in (Nuqui and Phadke, 2005). In the article by Baldwin et al., 1993, graph-theoretic analysis combined with a modified bisecting search and a simulated annealing-based method is applied to solve the PMU placement problem; however, possible contingencies in the power system are not considered, so the measurement set is not robust to loss of measurements or branch outages. In the article by Milošević and Begović, 2003a, the nondominated sorting GA is used for the optimal PMU placement problem: each individual optimum of the objective functions is estimated by graph theory and a simple GA, and the best tradeoff between the competing objectives is then searched by the nondominated sorting GA. Since this method is computationally complex, it is limited by the size of the problem. In addition, integer programming based on network observability and the cost of PMUs has been applied to find PMU placements (Xu and Abur, 2004); this method can handle a mixed measurement set in which both PMUs and conventional measurements are employed. Furthermore, the minimum condition number of the normalized measurement matrix is used as a criterion for numerical observability (Rakpenthai et al., 2007): sequential elimination is used to find the essential measurements for the completely determined condition, sequential addition is used to select redundant measurements under contingencies, and binary integer programming is applied to select the optimal redundant measurements.

2) Distributed State Estimation

The ever-growing real-time requirements for very large power systems lead to an increasing size and complexity of balancing authorities (BAs), which imposes an increased computational burden and more severe constraints on the state estimation solvers in energy control centers. Distributed state estimation is an effective approach to alleviate the computational burden by distributing the computation across the system rather than centralizing it at the control center (Jiang et al., 2007). To utilize the natural divisions of large power systems and form subsystems, two major procedures are commonly used: decomposition and aggregation. In this way, gross measurement errors and ill-conditioning are localized. The purpose of the synchronized phasor measurements in distributed state estimation is to aggregate the voltage phase angles of each decomposed subsystem of the large-scale power system (Jiang et al., 2007).

In distributed state estimation, the entire power system is decomposed into a number of non-overlapping subsystems based on their geographical locations. Each subsystem performs its own distributed state estimation using its local computing resources and provides the local state estimation solution. Each subsystem has a slack bus where a PMU is installed, and the state estimation solutions of the subsystems are coordinated through the PMU measurements. The impact of a neighboring subsystem is assumed to affect only the boundary buses. A sensitivity analysis based on updates at chosen boundary buses can be used to obtain the distributed solution for the aggregated state estimation: sensitive internal buses within each subsystem are identified by the sensitivity analysis, which evaluates the degree of impact from the neighboring subsystems, and the boundary-bus and sensitive internal-bus state variables can then be re-estimated at the aggregation level to enhance the aggregated state estimation solution.

In some distributed state estimation approaches (Zhao and Abur, 2005), no special requirements on the boundary measurements are imposed on the multi-area measurement configuration. Even though the area state estimation solvers may use different solution algorithms, data structures, and post-processing functions for bad data, they are required to provide only their phasor measurements and state estimation solutions to the central coordinator; thus, network data sharing and other information exchange are not required between the areas and the coordinator. When there are a large number of tie-line measurements, the measurements in each subsystem will have a larger impact on the state estimation solution at the internal buses of neighboring subsystems, rather than at just the boundary buses. Therefore, this approach
can be improved when applied to a large-scale power system with a large number of tie lines among the subsystems (Jiang et al., 2007). The tie-line measurements can be removed during the process of dividing the power system into subsystems (Jiang et al., 2008); however, they must be considered in the subsequent steps, when the intermediate subsystem state estimation results are sent to a central coordinator for completion. PMU measurements are used to make each sub-problem solvable and to coordinate the voltage angles of each subsystem's state estimation solution. In the article by Zhou et al., 2006, the authors addressed the inclusion of PMU data in the state estimation process: PMU measurements are used in a post-processing step through a mathematical equivalent, so the results of the traditional state estimate and the phasor measurements, with their respective error covariance matrices, are treated as a set of measurements that are linear functions of the state vector. The quality of the estimated state improves progressively as the number of phasor measurements in the power system increases.

3) Uncertainty in PMU Measurements for State Estimation

Analysis of the uncertainties in the estimated states of a power system is important for PMU-based state estimation (Al-Othman and Irving, 2005a; Al-Othman and Irving, 2005b). Classical uncertainty propagation theory and random fuzzy variables have been used to compute the PMU measurement uncertainties (Chakrabarti et al., 2007; Chakrabarti and Kyriakides, 2009). In the article by Chakrabarti and Kyriakides, 2009, the authors presented an approach to evaluate the uncertainties in the final estimated states based on PMU measurements: the uncertainties in the angles and magnitudes of the voltage phasors measured or computed by the PMU, arising from the A/D converter and the associated computational logic, are considered, while errors due to transmission line parameters are neglected. A distributed-parameter model of the transmission lines is used to obtain more accurate expressions for the uncertainties associated with the direct and pseudo-measurements obtained by PMUs (Chakrabarti and Kyriakides, 2009). The propagation of the measurement uncertainty for different line lengths and conductors provides a basis for weighting PMU measurements in a WLS state estimation.
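Returning to the placement formulation of Eq. (6.14): a greedy heuristic gives a quick, if suboptimal, feel for the problem. The Python sketch below uses the standard observability rule that a PMU placed at a bus observes that bus and all of its neighbors; the 7-bus adjacency data are hypothetical, and a greedy pass only approximates the combinatorial optimum discussed above (the exhaustive check is practical only for tiny systems).

import itertools

# Hypothetical 7-bus test network: adjacency list (bus -> neighbors).
ADJ = {
    1: {2}, 2: {1, 3, 6}, 3: {2, 4, 6}, 4: {3, 5, 7},
    5: {4}, 6: {2, 3, 7}, 7: {4, 6},
}

def observed(pmu_buses):
    """A PMU observes its own bus and all adjacent buses."""
    seen = set()
    for b in pmu_buses:
        seen.add(b)
        seen |= ADJ[b]
    return seen

def greedy_placement():
    """Greedy heuristic for the minimum placement of Eq. (6.14):
    repeatedly add the PMU that observes the most unobserved buses."""
    placed, unseen = set(), set(ADJ)
    while unseen:
        best = max(ADJ, key=lambda b: len(({b} | ADJ[b]) & unseen))
        placed.add(best)
        unseen -= {best} | ADJ[best]
    return placed

print("greedy placement:", sorted(greedy_placement()))

# Exhaustive check of the minimum for this small system (enumeration,
# cf. Nuqui and Phadke, 2005):
for k in range(1, len(ADJ) + 1):
    opt = [c for c in itertools.combinations(ADJ, k)
           if observed(c) == set(ADJ)]
    if opt:
        print("minimum size:", k, "e.g.", opt[0])
        break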
6.3 Stability Analysis

Modern power systems are operating ever closer to their stability and security limits. Based on fast and reliable state estimation (or, more precisely, state calculation), a variety of system stability indices can be made available on-line to
the system operator. Besides fast, efficient, and reliable state estimation, PMUs allow on-line derivation and monitoring of a variety of system stability indices. Operating decisions concerning actual system conditions, including load flow patterns, voltage levels, security enhancement through early detection of emergency conditions, and optimal preventive/corrective or remedial control actions, can then be made adaptive with the support of wide-area measurements (Tiwari and Ajjarapu, 2007). PMUs make it possible to measure the dynamic performance of power systems and have been widely used in many aspects of power system stability analysis (Taylor et al., 2005), such as on-line voltage stability analysis, transient stability assessment, and oscillation monitoring, prediction, and control (Vu et al., 1999). PMU measurements offer an approach to analyze and predict voltage stability problems by mathematical simulation: different stability programs for a number of contingencies can then be run to evaluate risks and margins. Such applications contribute to optimizing the power system operation process, provide an attractive opportunity to reconfigure the power system before it reaches the voltage collapse point, and ultimately help to mitigate power system blackouts (Liu et al., 2008).
6.3.1 Voltage and Transient Stability

Static bifurcation models are often used for investigating voltage instability. In recent years, using direct parametric (load) dependence to evaluate the proximity of a power system to voltage collapse has attracted significant attention (Milošević and Begović, 2003b). Voltage stability indices indicate the distance between the current operating point and the voltage instability point. Voltage security is the ability of the power system to maintain voltage stability following a credible event, such as a line or generator outage, a load ramp, or any other event stressing the power system.

Some researchers distinguish the real-time stability prediction problem from on-line dynamic security assessment (Liu and Thorp, 1995; Liu et al., 1999b). In conventional dynamic security assessment (Fouad et al., 1988; Pai, 1989; Sobajic and Pao, 1989), a power system goes through three stages characterized by the critical clearing time (CCT): the prefault, fault-on, and postfault stages. The CCT is not the major concern for the prediction problem. PMUs allow monitoring the transient process in real time, where protection devices act extremely fast for faulted transmission lines so that the fault is cleared almost immediately after its inception. By ignoring the short fault-on stage (in which the transient phasor measurements are discarded) in real time, the prediction problem involves only the prefault and postfault stages.

PMUs have been widely utilized to monitor voltage stability (El-Amary et al., 2008), and a great deal of research has been conducted to apply PMUs to voltage and transient stability assessment. Because PMUs provide syn-
chronized, real-time measurements of voltage and incident current phasors at the system buses (Phadke, 1993), and the voltage phasors contain enough information to detect the voltage stability margin directly, several algorithms based on phasor measurements have been proposed to determine voltage collapse proximity (Gubina and Strmcnik, 1995; Verbic and Gubina, 2000; Vu et al., 1999). The concept of insensitivity of the apparent power at the receiving end of the transmission line has been used to infer voltage instability proximity (Verbic and Gubina, 2004), whereas the Thevenin equivalent concept and Tellegen's theorem are used to identify the Thevenin parameters (Smon et al., 2006). The status of the overexcitation limiters (OELs) of nearby generators is also monitored as a voltage instability proximity indication. A new algorithm for fast-tracking the Thevenin parameters (voltage and reactance) based on local voltage and current phasor measurements is proposed by Corsi and Taranto, 2008; traditional identification methods based on least squares need a large data window to suppress oscillations, while the proposed algorithm can filter these oscillations without significantly delaying the identification process.

An on-line dynamic security assessment scheme based on phasor measurements and decision trees is described by Sun et al., 2007. Decision trees are built and periodically updated offline to identify critical attributes as security indicators; they then provide on-line security assessment and preventive control guidelines based on real-time measurements of the indicators from PMUs. A new classification method uses the whole path of a decision tree instead of only the classification results at its terminal nodes, so more reliable security assessment results can be obtained when the system conditions change.

A piecewise constant-current load equivalent (PCCLE) technique is proposed to provide fast transient stability swing prediction for use with high-speed control based on PMUs (Liu and Thorp, 1995). The PCCLE technique can speed up the integration of the differential/algebraic equation (DAE) description of the post-fault transient dynamics model. The approach is to eliminate the algebraic equations by approximating the load flow solution piecewise, such that only the internal generator buses are preserved while retaining the characteristics of the static composite loads.

A method to detect voltage instability, and the corresponding control in the presence of voltage-dependent loads, is proposed by Milošević and Begović, 2003b. The proximity of the current operating condition to the voltage collapse point is determined from the VSLBI indicator, calculated from the local voltage and current phasor measurements and from system-wide information on reactive power reserves. Because reactive power limitations can result in sudden changes in the VSLBI and prevent the operator from acting in time, control actions are deployed when the stability margin is small and the reactive power reserves are nearly exhausted.
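The Thevenin-tracking idea referenced above (Vu et al., 1999; Corsi and Taranto, 2008) can be sketched simply: from a sliding window of local PMU voltage and current phasors, estimate the Thevenin source E and impedance Z_th by least squares, and flag proximity to collapse when the apparent load impedance |V/I| approaches |Z_th|. The Python below is a minimal illustration with synthetic phasors, not the authors' exact algorithm.

import numpy as np

def thevenin_fit(V, I):
    """Least-squares fit of E and Z_th from phasor samples satisfying
    V_k = E - Z_th * I_k (complex unknowns E, Z_th)."""
    A = np.column_stack([np.ones_like(I), -I])
    sol, *_ = np.linalg.lstsq(A, V, rcond=None)
    return sol[0], sol[1]

# Synthetic load bus: a 1.0 pu source behind 0.02 + j0.2 pu,
# with the load impedance shrinking over the window.
E_true, Z_true = 1.0 + 0j, 0.02 + 0.2j
Zload = np.array([1.0, 0.8, 0.6, 0.5]) * np.exp(1j * 0.3)
I = E_true / (Z_true + Zload)
V = E_true - Z_true * I

E_est, Z_est = thevenin_fit(V, I)
margin = np.abs(V[-1] / I[-1]) / np.abs(Z_est)   # -> 1 at voltage collapse
print(f"E={E_est:.3f}, Zth={Z_est:.3f}, |Zload|/|Zth|={margin:.2f}")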
In the article by Liu et al., 2008, an equivalent model based on PMUs is proposed to analyze and predict the voltage stability of a transmission corridor. This equivalent model retains all transmission lines in the corridor and is therefore more detailed and accurate than the traditional Thevenin equivalent model. To estimate the parameters of the proposed model, the Newton method or least-squares estimation is applied to multiple consecutive samples of PMU measurements. Based on the new model, the actual load increase direction is estimated in real time from PMU measurements, and the load margin is calculated along it. The load margin is equal to the Available Transfer Capacity (ATC) of the transmission corridor and is used as a voltage stability index.

The traditional Equal Area Criterion (Monchusi et al., 2008), the Extended Equal Area Criterion (EEAC) (Wang et al., 1997; Xue et al., 1998; Xue et al., 1989), the critical clearing time, and energy functions (Lyapunov's direct method) (Meliopoulos et al., 2006) are usually used for transient stability analysis. The generators' internal angles and the maximum electrical power can be calculated using voltage and current measurements obtained from PMUs placed at the terminals of the generator buses. In (Liu et al., 2002), the generator rotor angles measured by PMUs shortly after the fault are used as inputs to produce stability classification results for a multi-machine system. In the midterm stability evaluation method proposed by Ota et al., 2002, the power system is clustered and aggregated into coherent generator groups; the stability margin of each coherent group is then quantitatively evaluated on the basis of a one-machine infinite-bus system. A two-layer fuzzy hyper-rectangular composite neural network (FHRCNN) is applied for real-time transient stability prediction based on PMUs (Liu et al., 1999b). The neuro-fuzzy approach learns from a training set off-line and predicts the future behavior of new data on-line. A class of FHRCNNs based on phasor angle measurements is also utilized to provide fast transient stability prediction for use with high-speed control (Liu et al., 1998; Liu et al., 1999b).
6.3.2 Small Signal Stability — Oscillations

Electromechanical oscillations have been a challenging research topic for many years. Power oscillations can be quantitatively characterized by several parameters in the frequency and time domains, such as modal frequency and damping, amplitude, and phase. In weak power systems with remote generation, power oscillations caused by insufficient damping often limit the available transmitted power. There are two ways to increase the transmission capacity and thereby fully exploit the generation resources. The traditional way is to build new power lines, but this is costly and increasingly difficult due to environmental constraints. A more attractive alternative is to move the stability limit closer to the thermal limits of the
power lines by introducing extended power system control, thus improving the utilization of the entire transmission system. In the case of insufficient damping, stability is usually improved by continuous feedback controllers. The most common type of controller is the power system stabilizer (PSS), which controls generator output by influencing the set point of the voltage regulator. Traditionally, only locally available input signals such as shaft speed, real power output, and network frequency have been used for closed-loop control purposes. Advanced communication technology has made it feasible to enhance the performance of power system stabilizers with remotely available information. This type of information, e.g., active and reactive power flows, frequency, and phasors, is provided by PMUs. Synchronized measurement provides system-wide data sets in time frames appropriate for damping purposes, and system-wide communication makes it possible to decide where to measure and where to control. The actuator and the measuring points can then be selected independently, so that modal controllability and modal observability are maximized.

Present SCADA/EMS systems still require valuable and potentially critical functions, such as the ability to provide operators with sufficient information about increasing oscillations or reduced stability. Efficient detection and monitoring of power oscillations was identified as one of the most important functions to be included in the WAMS application (Leirbukt et al., 2006), and modal analyses were used as a basis for selecting the locations of the PMU installations (Uhlen et al., 2008). Once alerted to a potential stability problem, the operators should easily be able to examine the details of the phasor measurements in order to identify the root cause of the incident and, if necessary, take corrective actions (Uhlen et al., 2008).

Direct observation of inter-area oscillation modes using phasor measurements is more convenient than computing eigenvalues from a detailed model of a specific system configuration (Rasmussen and Jørgensen, 2006). A model-based approach that has been implemented as part of the WAMS utilizes carefully selected PMU measurements, an autoregressive model, and Kalman Filtering (KF) techniques for identification of the optimal model parameters (Uhlen et al., 2008). Traditional methods include modal analysis (Kundur, 1994) and Prony analysis (Trudnowski et al., 1999). With the wide deployment of phasor measurement technology, it is now possible to monitor the oscillations in real time (Liu and Venkatasubramanian, 2008). Besides the post-disturbance type methods in the article by Liu and Venkatasubramanian, 2008, system-identification type methods also give eigenvalue estimates, but they need probing signals (Liu and Venkatasubramanian, 2008). For oscillation monitoring, the ambient (or routine) measurement type methods are more attractive. Ambient-type methods are applied to ambient measurements of power systems, i.e., measurements taken when the system is in a normal operating condition. The main advantages of ambient data are that they are obtained in a non-intrusive manner and
they are also always available. For small signal stability monitoring, the eigenvalues and eigenvectors of an inter-area mode can be monitored by measuring voltages in the areas and comparing their phase angles (Kakimoto et al., 2006); a Fourier spectrum arising from the random nature of the load is used to estimate the eigenvalue. The inclusion of PMUs in the generator control loop, in the form of inputs to a PSS installed in a two-area, four-machine test system, is examined in the article by Snyder et al., 1998. A frequency-domain approach called Frequency Domain Decomposition (FDD) is applied to ambient PMU measurements for real-time electromechanical oscillation monitoring in the article by Liu and Venkatasubramanian, 2008. Even though FDD gives a larger variance in damping ratio estimates for well-damped systems, it is still sufficient for oscillation monitoring as long as good estimates are obtained for the poorly damped cases. The strength of FDD lies in its suitability for real-time PMU measurements, owing to its improved noise performance and its handling of correlated inputs and closely spaced modes. PMUs are also applied to prevent power system blackouts due to a sequence of relay trip events by monitoring the generators and the major EHV transmission lines of a power system (Wang et al., 2005). An instability prediction algorithm for initiating a PSS is applied to avoid a sequence of relay trip events whenever necessary, with real-time phasor measurements used to estimate the parameters of the one-machine infinite-bus (OMIB) equivalent.
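As a minimal illustration of ringdown modal estimation of the kind performed by Prony analysis (Trudnowski et al., 1999), the following Python sketch fits a single damped sinusoid to a synthetic post-disturbance signal via second-order linear prediction; practical tools handle multiple modes, noise, and model-order selection far more carefully, so this is only a sketch.

import numpy as np

# Synthetic ringdown: one 0.5 Hz inter-area mode, 5% damping, 20 samples/s.
fs, f0, zeta = 20.0, 0.5, 0.05
w = 2 * np.pi * f0
t = np.arange(0, 15, 1 / fs)
y = np.exp(-zeta * w * t) * np.cos(w * np.sqrt(1 - zeta**2) * t)

# Prony-style linear prediction: fit y[k] = a1*y[k-1] + a2*y[k-2],
# then read the mode from the roots of the predictor polynomial.
Y = np.column_stack([y[1:-1], y[:-2]])
a1, a2 = np.linalg.lstsq(Y, y[2:], rcond=None)[0]
z = np.roots([1.0, -a1, -a2])            # discrete-time mode pair
lam = np.log(z[0]) * fs                  # continuous-time eigenvalue
freq = abs(lam.imag) / (2 * np.pi)       # damped modal frequency (Hz)
damp = -lam.real / abs(lam)              # damping ratio
print(f"estimated mode: {freq:.3f} Hz, damping ratio {damp:.3f}")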
6.4 Event Identification and Fault Location

PMUs can be widely used for event identification, which is one of the major challenges for an operator. Modern power systems constantly experience different types of disturbances, such as generation or load trips, whose type, size, and location must be identified. The voltage/current phasors and frequency obtained from PMUs can be used to identify the nature of disturbances through logic-based techniques such as a sequence-of-events recorder (SER) (Tiwari and Ajjarapu, 2007) or decision trees. Logic-based event identification has a relatively fast execution speed because it does not require complex mathematical calculations, and is thus suitable for real-time decisions.

Faults, mostly transmission line faults, occur frequently in modern power systems and usually cause heavy economic or social losses. To repair the faulted section, restore power delivery, and reduce outage time as soon as possible, it is necessary to locate transmission line faults quickly and accurately. The more accurate the fault detection
and location, the easier the inspection, maintenance, and repair of the line (Jiang et al., 2000b). Therefore, the development of robust and accurate PMU-based fault location techniques under various normal and fault conditions has been an important research and application area (Lien et al., 2005; Fan et al., 2007; Din et al., 2005).

A number of algorithms for two-terminal fault location using phasor measurements have been proposed (Jiang et al., 2000b; Chen et al., 2002; Yu et al., 2002; Lin et al., 2002). These PMU-based techniques can determine the fault location with high accuracy from synchronized voltage and current phasors obtained by PMUs. However, they are limited to locating faults in a transmission network with PMUs installed on every bus. To achieve fault-location observability over the entire network, it is important to examine minimal PMU placement, considering the installation cost of PMUs, in the PMU-based fault-location scheme (Lien et al., 2006).

The line parameters used in existing fault location algorithms are usually those provided by the manufacturer, and parameter uncertainty is not considered (Yu et al., 2001). In reality, the environment and operating conditions have dramatic effects on the actual line parameters, and estimating the change of transmission line parameters is very difficult. PMUs provide an approach to calculate transmission line parameters, such as line impedance and capacitance, on-line from the voltages and currents of the transmission line. On this basis, many fault detection and location methods based on phasor measurements have been proposed (Lin et al., 2004a; Lin et al., 2004b; Brahma and Girgis, 2004; Yu et al., 2002; Jiang et al., 2000b; Jiang et al., 2000a). A typical fault location algorithm contains two steps: first identify the faulted section, and then locate the fault on this section. These techniques make use of local fault measurements (synchronized voltages and currents at the two terminals of a transmission line) to estimate the fault location.

Usually, a WAMS/PMU-based fault location technique needs voltage measurements at all nodes of the power network, which makes it difficult to use when PMUs are not available at all nodes. Early PMU-based fault event location techniques (Burnett et al., 1994) used some of the first field measurements of positive-sequence voltage phasors at key system buses to identify faults. A technique proposed by Wang et al. (Wang et al., 2007) uses only the fault voltages at the two nodes of the faulted line and their neighboring nodes, rather than at all nodes in the whole network. The line currents between the two nodes of the faulted line can be calculated from the fault node voltages measured by PMUs, and the node injection currents at the two terminals of the faulted line are formed from the line currents. The faulted node can then be deduced; meanwhile, the fault location on the transmission line can be calculated accurately from the calculated fault node injection currents.

Kezunovic et al. proposed a fault location algorithm based on synchro-
nized sampling, using a time-domain model of a transmission line as the basis for the development of the algorithm (Kezunovic et al., 1994). Although the accuracy of the proposed algorithm is within 1% error, the data must be acquired at a sufficiently high sampling rate, because an adequate approximation of the derivatives depends heavily on the selection of the line model and on the system itself. A fault detection/location index based on Clarke components of PMU measurements was applied in the adaptive fault detection/location technique proposed by Jiang et al. (Jiang et al., 2000b); a parameter estimation algorithm and the Smart Discrete Fourier Transform (SDFT) method are used in the development of this method.

Mei et al. proposed a clustering-based on-line dynamic event location technique using wide-area generator rotor frequency measurements (Mei et al., 2008). Based on an angle (frequency) coherency measure, generators are clustered into several coherent groups during an offline hierarchical clustering process. Based on the closeness of its frequency to the center-of-inertia frequency of all the generators, one representative generator is selected from each group. The rotor frequencies of the representative generators are used to identify the cluster with the largest initial swing, and the event location is then formulated as finding the most likely group from which the event originated.

In the fault location algorithm proposed by Samantaray et al., 2009, a differential equation is used to locate faults on a transmission line equipped with a unified power flow controller (UPFC). In the development of the method, a detailed model of the UPFC and its control is integrated into the transmission system to accurately simulate fault transients. A wavelet-fuzzy discriminator is used to identify the faulted section of a transmission line with a UPFC. Once the faulted line is identified, control shifts to the differential equation-based fault locator, which determines the fault location in terms of the line inductance up to the fault point from the relaying end. The instantaneous fault current and voltage samples obtained by PMUs at the sending and receiving ends are fed to the proposed algorithm. Brahma proposed an iterative method to locate a fault on a single multi-terminal transmission line using synchronized voltage and current measurements obtained by PMUs at all terminals (Brahma, 2006). The positive-sequence components of the prefault and postfault waveforms and the positive-sequence source impedances are used to form the positive-sequence bus impedance matrix.
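The core idea behind the two-terminal algorithms cited above can be illustrated with a lumped (short-line) model: with synchronized phasors at both ends, the fault-point voltage written from each end must agree, giving the fault distance in closed form. The Python below is a minimal sketch with synthetic phasors and a purely illustrative network, not any specific published algorithm (which typically rely on distributed-parameter line models).

import numpy as np

def two_terminal_fault_distance(Vs, Is, Vr, Ir, Z):
    """Per-unit fault distance d from the sending end of a short line
    with total series impedance Z, using synchronized end phasors:
    Vs - d*Z*Is = Vr - (1-d)*Z*Ir at the fault point."""
    d = (Vs - Vr + Z * Ir) / (Z * (Is + Ir))
    return d.real  # imaginary part ~ 0 for consistent data

# Synthetic test: line Z = 0.01 + j0.1 pu, fault at d = 0.3 through Rf.
Z, d_true, Rf = 0.01 + 0.1j, 0.3, 0.05
Vs, Vr = 1.00 + 0j, 0.98 * np.exp(-1j * 0.05)
# Solve the faulted network for the two end currents:
# Vf = Rf*(Is+Ir), Vs = d*Z*Is + Vf, Vr = (1-d)*Z*Ir + Vf.
A = np.array([[d_true * Z + Rf, Rf],
              [Rf, (1 - d_true) * Z + Rf]])
Is, Ir = np.linalg.solve(A, np.array([Vs, Vr]))

print(f"estimated distance: {two_terminal_fault_distance(Vs, Is, Vr, Ir, Z):.3f} pu")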
6.5 Enhance Situation Awareness

In power system operation, operators must monitor a large and complex set of operational data while operating all kinds of control equipment and devices
to maintain system reliability and stability within particular constraints; violating these constraints could lead to misoperation, although some tools can help operators avoid such situations. With the interconnection of power systems, grid operators must be able to analyze and evaluate a large amount of data and make cognitively demanding, critical, and timely decisions under high pressure, all while evaluating ever larger, more complex, and changing data streams and staying fully aware of changing system operational conditions. They must focus on individually demanding and precise tasks while maintaining an overall understanding of a large amount of dynamic data affecting their perception and operation; that is, they need to maintain awareness of the overall situation while evaluating an overwhelming amount of critical and ever-changing information.

In power grid operation, situation awareness describes the performance of an operator during operation of the power grid. From the point of view of human factors, situation awareness is defined as (Endsley, 1995): “An expert’s perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future”. An example is the visualization tools that enable operators and other decision makers to operate a power grid effectively without cognitive overload.

The importance of situation awareness grows as power system complexity and dynamics increase. Elements in the complex and dynamic power grid vary across time, possibly at different rates, and are interdependent. In addition, current situation awareness affects the way new information is perceived and interpreted, so incomplete or inaccurate situation awareness now leads to poorer situation awareness later. Grid operators, therefore, must continuously maintain high situation awareness. In critical circumstances, where the grid operator must react correctly within a limited amount of time, incomplete or inaccurate situation awareness can result in serious decision-making errors with disastrous consequences. The decisions an operator must make often require the acquisition and integration of a substantial amount of information and, in certain situations, also call for a prompt response. An increase in situation awareness could significantly increase the frequency with which the grid operator makes optimal decisions and decrease the time needed to reach them; any resulting decrease in workload could further improve the chances of accurate decision making.

PMUs can help maintain and enhance situation awareness when PMU data are incorporated into power system analysis and visualization tools. PMUs provide operators with real-time measurement data indicating power system dynamics at a very high rate. Effective and efficient PMU-based analysis and visualization tools can assist operators at control centers in maintaining situation awareness and performing time-critical decision-making tasks under critical conditions. Visualization tools have been one critical part
of modern EMS/SCADA systems. One of the important goals in power grid visualization is to convey a relevant abstract view of the power grid to an operator, in such a way that the operator can understand the situation of the power grid with minimal cognitive effort. Examples of such abstract views include bar charts, contour maps, pie charts, etc. SCADA measurements provide a picture of the steady-state health of the system, and the information presented is static, based on the steady state of the system; PMUs, in contrast, capture the faster variations that may indicate small signal stability problems. Because PMUs provide more accurate and faster data, more powerful and effective visualization tools are needed to convert these data into information useful for operators' decisions. Since the PMU is a relatively new technology, tools to increase situation awareness based on PMUs are limited, and it is therefore important to study and develop such tools. Other analysis and visualization tools are also needed to enable operators to make appropriate decisions concerning system operations and data management rapidly and correctly.

The primary concern in real-time operation of a power system has been real-time information visualization, simply because visualization is the operator's main reference to the power grid. A key component of a situation awareness tool is access to current and historical PMU data; the historical information helps operators quickly evaluate the priority of a given event compared to other potential events when a remedial action or decision must be made. Many applications are deemed successful because of their use of situation awareness, employing pie charts, power flow animations, flow charts, text messaging, etc., to deliver it; a unique feature of such systems is their reliance on visual or aural information for efficient delivery of situation awareness.

The Situation Awareness Board is a software tool that brings the analysis capability of the power system control center to the operator and helps maintain situation awareness (Donnelly et al., 2006). The tool gives the operator an indication of the importance of information and actions within a “board”, an area designated by the operator; the importance indication is displayed on the configurable status board. The goal of the Situation Awareness Board is to make the complex PMU environment intuitive by providing situation awareness.

Another promising technique that can be applied in PMU-based visualization tools is virtual reality (VR). Virtual reality provides a bird's-eye view into a simulated power grid by overlaying entities and information onto a 2D view of the power grid and visual databases. The overlaid information includes tracking of individual measurements and groups of measurements, providing visual information that helps an operator build situation awareness. The main functionality of the system is displaying concentrated information about power grid entities. The virtual reality system can use a grid-based approach to compute the concentration, and it constructs iso-surfaces of concentration by treating the concentration value as vertical data in a three-
dimensional space. Depending on the viewpoint, concentration is shown by height or by color intensity; the vertical data thereby serve as a redundant encoding of the concentration.

Analytical tools based on PMUs help operators make decisions by converting raw data into meaningful information with which to evaluate the current operating situation and predict future operating states; more advanced analytical tools can help convert information into knowledge. Usually the power grid is represented by large datasets with various attributes, which make it difficult for an operator to assess the situation in the power grid in a timely manner. This becomes even more troublesome with multi-attribute or multi-dimensional datasets, since the operator has to spend more time building up a single comprehensive view of the power grid. An appropriate visual interpretation of the power grid helps an operator build a comprehensive view effortlessly and rapidly, and as a result make strategic decisions accurately. One of the keys to successful decision making is a clear and reasonably accurate understanding of the environmental context of a decision.
6.6 Model Validation

Model validation is used to critically assess the validity of a model before the model is used for prediction purposes. In most existing efforts, model validation is viewed as verifying model accuracy by comparing model predictions with physical experiment observations. Most existing model validation work is rooted in computational science, where validation is viewed as a measure of the agreement between computational and experimental results. Sometimes, because of a lack of resources, validation metrics are assessed at a limited number of test points, without considering the predictive capability at untested but potentially critical points in the design space or the various sources of uncertainty. Therefore, the existing approaches for validating analysis models are not directly applicable for assessing the confidence of using analytical models in power systems.

Validation is concerned with determining whether the model is an accurate representation of the system under study. Model validation is part of the total model development process, and it consists of performing a series of tests and evaluations within that process. This validation process is multifaceted and involves, at a minimum, taking a set of real-system observations and reconciling these observations with an assumed mathematical model, or vice versa. The process involves estimating the parameters of the model so as to obtain the model that best reflects the real-system behavior.
Fig. 6.1 shows a general flowchart of model validation. As shown, the comparison between the physical experiments and the computational outputs is the key element in model validation. Model verification is usually expected to be carried out before a model is validated. Verification is the assessment of the accuracy of the solution to a computational model, involving code verification and solution verification: code verification deals with errors due to computer programming, while solution verification (also referred to as “numerical error estimation”) deals with errors that can occur in a computer model. While model verification deals with building the model right, model validation deals with building the right model.
Fig. 6.1. A flow chart of PMU-based model validation.
PMU-based model validation provides a connection between theoretical knowledge and power system operation reality. The model validation procedure evaluates the applicability of a specified power system component model with respect to an input/output experiment quantified by PMU measurements. It determines whether or not there is an element of the power system component model set which accounts for the experimental observations and PMU measurements; the model validation test therefore provides a necessary condition for a model to describe a physical power system component. A formal statement of the model validation problem can be given as follows: let P be a robustly stable plant model with uncertainty block structure Δ. Given measurements (u, y), do there exist a Δ ∈ BΔ and signals d and n satisfying ||d|| ≤ 1 and
||n|| ≤ 1, such that

y = W_n n + (Δ ∗ P) (d, n)^T.  (6.15)
Any (Δ, d, n) satisfying these conditions is referred to as admissible. The existence of an admissible (Δ, d, n) is a necessary condition for the model to be able to describe the system. On the other hand, if no such Δ, d, and n exist, the model cannot account for the observation, and we say that y and u invalidate the model. Model validation gives conclusive information only when there is no model in the set that is consistent with y and u; there is no way of proving that a model is valid, simply because there is no way of testing every experimental condition.

If the system is in the continuous time domain, the model validation test should be performed on continuous-time measurements y and u. In practice, however, data are taken by sampling. Then, at each frequency, P and Δ are complex-valued matrices and n and d are complex-valued vectors. In this case, the statement of the model validation problem remains the same, except that all signals and operators are now vectors and matrices. Since no assumptions are made about the nature of the physical power system component, PMU measurements are taken and the assumption that the model describes the component is tested directly. Model validation determines whether the model of the component could have produced the experimental observations and gives a means of checking the adequacy of a given model with respect to an experimental system; it determines whether the PMU measurement data are inconsistent with the model structure. The term model validation is somewhat misleading, as a data set does not validate the model but rather attempts to falsify, or invalidate, it. Since it is impossible to completely capture the dynamics in practice, it always remains plausible that a different set of data will invalidate the model.
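As a loose numerical illustration of this invalidation logic (not the LFT test of Eq. (6.15) itself), consider the scalar additive-uncertainty special case y = (P + W_Δ Δ)u + W_n n, with |Δ| ≤ 1 and |n| ≤ 1 at each frequency: the model is invalidated at a frequency whenever the residual y − P u cannot be explained by the allowed uncertainty plus noise. All transfer-function values and weights in the Python sketch below are hypothetical.

import numpy as np

def invalidated(P, Wd, Wn, u, y):
    """Frequency-wise necessary-condition test for the scalar model
    y = (P + Wd*Delta)*u + Wn*n, |Delta| <= 1, |n| <= 1. Returns True
    at frequencies where no admissible (Delta, n) can reproduce y."""
    residual = np.abs(y - P * u)
    budget = np.abs(Wd) * np.abs(u) + np.abs(Wn)
    return residual > budget

# Hypothetical frequency-response data at three frequencies:
P  = np.array([1.0 - 0.2j, 0.8 - 0.5j, 0.3 - 0.6j])      # nominal model
Wd = np.array([0.05, 0.10, 0.20])                        # uncertainty weight
Wn = np.array([0.01, 0.01, 0.01])                        # noise weight
u  = np.array([1.0 + 0j, 1.0 + 0j, 1.0 + 0j])            # measured input
y  = np.array([1.02 - 0.22j, 0.85 - 0.48j, 0.9 - 0.1j])  # measured output

print(invalidated(P, Wd, Wn, u, y))   # last frequency invalidates the model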
6.7 Case Study

In this section, examples of PMU applications are given, including both an overview of the techniques used in the case study and detailed simulation results.
6.7.1 Overview

Despite the progress achieved in developing visualization, alarming, modal analysis, and statistical analysis tools, more real-time PMU-based applications are critically needed. There is also a need to develop relatively simple, easy-to-implement, and easy-to-use, yet more informative and actionable, approaches to applying PMUs to improve the situation awareness of power grid operators. In this section, a case study of applying the characteristic ellipsoid (CELL) method to PMU data to monitor power system dynamic behavior is presented. The CELL method was initially proposed by Yuri Makarov (Makarov et al., 2007; Makarov et al., 2008). The ellipsoid is a powerful tool for extracting the major features contained in a set of measurement or observation vectors; in power systems, these measurement vectors can contain voltage magnitudes and angles, active power, reactive power, and frequency information. When a power system experiences disturbances, such as a voltage dip, a frequency change, or a power flow drop, the recorded PMU data reflect the system dynamics and other power quality information before and after the disturbances. The characteristic ellipsoid method uses multi-dimensional minimum volume enclosing ellipsoids (MVEE) to enclose a given set of PMU data, so that changes of the PMU data sets indicate disturbances. This approach provides the ability to increase the situation awareness of power grid operators. The material presented in this section is based on the articles by Ma et al., 2008 and Ma, 2008.
6.7.2 Formulation of Characteristic Ellipsoids

The CELL is a multi-dimensional minimum volume second-order closed surface (“an egg”) that contains a certain limited part of the system trajectory, for example, a 1-second set of subsequent phasor data. The system trajectory and the ellipsoid are represented in the phasor data space. The shape, volume, orientation, and rate of change of the CELL parameters in time provide a new view of the essential information about the system status and dynamic behavior, including such characteristics as system stress, generalized damping, the magnitude of disturbances, the mode of motion of some parts of the system against the other parts during a disturbance (mode shape), and so on. During the past few decades, extensive research efforts have been devoted to computing the MVEE in an n-dimensional space R^n containing m given points p_1, p_2, ..., p_m ∈ R^n, and several algorithms have been developed for solving the MVEE problem. Generally, these algorithms can be loosely classified into three categories: first-order algorithms based on gradient-descent techniques (Silverman and Titterington, 1980), second-order algorithms based
on interior-point techniques (Sun and Freund, 2004), and algorithms combining first-order and second-order techniques (Khachiyan, 1996). Our concern is with covering m selected PMU measurement points p_1, p_2, ..., p_m ∈ R^n with a CELL of minimum volume. Here n refers to the dimension of the problem, i.e., the number of different PMU measurement sequences, such as voltage magnitude, voltage angle, or frequency; m refers to the number of data points in a sequence of PMU measurements. Let P denote the n × m matrix whose columns are the vectors p_1, p_2, ..., p_m ∈ R^n:

P := [p_1 | p_2 | ... | p_m].  (6.16)
For c ∈ R^n and a symmetric matrix A ∈ S^n, the CELL can be defined as (Kumar and Yildirim, 2005):

E_{A,c} := { x ∈ R^n | (x − c)^T A (x − c) ≤ 1 }.  (6.17)

Here n is the dimension of the problem, the vector c is the center of the CELL, and the positive definite matrix A determines the general shape and orientation of the CELL. The volume of the CELL is given by the formula

Vol(E_{A,c}) = π^{n/2} / ( Γ((n+2)/2) √(det A) ),  (6.18)

where Γ(·) is the standard gamma function of calculus. The matrix P contains the original PMU data points, which are usually fairly dense, so A^T A as well as A A^T will be completely dense. Thus, the problem of finding the minimum volume CELL containing the points of P is equivalent to determining a vector c ∈ R^n and an n × n positive definite symmetric matrix A that minimize the volume in Eq. (6.18), i.e., the optimization problem

min_{A,c}  1/√(det A),
s.t.  (p_i − c)^T A (p_i − c) ≤ 1,  i = 1, ..., m,
      A ≻ 0.  (6.19)

The procedure of finding a solution of problem (6.19) is repeated automatically for each new data point. The analyzed parameters include voltage magnitudes, local frequencies, and power flows; these parameters may be normalized to make parameters of different physical nature and dimension comparable in R^n. This section describes combinations of different phasor measurements that help to identify and locate such events and physical phenomena as generator trips, inter-area oscillations, and static system stress.
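Problem (6.19) can be solved with a first-order (Khachiyan-type) algorithm of the kind cited above. The following Python sketch implements a standard form of that iteration for a small point set; the tolerance and the sample "PMU" points are arbitrary, and production implementations add scaling and numerical safeguards.

import numpy as np

def mvee(points, tol=1e-7, max_iter=1000):
    """Minimum volume enclosing ellipsoid (x-c)'A(x-c) <= 1 of a set
    of points (rows), via Khachiyan's first-order algorithm."""
    P = np.asarray(points, dtype=float).T           # d x m data matrix
    d, m = P.shape
    Q = np.vstack([P, np.ones(m)])                  # lift to homogeneous coords
    u = np.full(m, 1.0 / m)                         # weights on the points
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = np.argmax(M)
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = P @ u                                       # ellipsoid center
    A = np.linalg.inv(P @ np.diag(u) @ P.T - np.outer(c, c)) / d
    return A, c

# Example: 2-D "PMU" points (e.g., voltage magnitude vs. frequency samples).
pts = np.array([[1.00, 50.00], [1.01, 49.98], [0.99, 50.02],
                [1.02, 50.01], [0.98, 49.99]])
A, c = mvee(pts)
print("center:", c)
print("all points inside:",
      all((p - c) @ A @ (p - c) <= 1 + 1e-6 for p in pts))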
6.7.3 Geometry Properties of Characteristic Ellipsoids

For an n × n real-valued matrix A with rank r (r ≤ n), the singular value decomposition of A is given as

A = U D V^T,  (6.20)

where U and V^T are n × n matrices, U = [u_1, u_2, ..., u_n], V^T = [v_1, v_2, ..., v_n]^T, and D = diag(λ_1, λ_2, ..., λ_n) is an n × n diagonal matrix containing the eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_n of the matrix A. U is the rotation matrix that gives the orientation of the characteristic ellipsoid. Because the matrix A is symmetric, U and V are identical in the SVD. The vectors u_1, u_2, ..., u_n (or v_1, v_2, ..., v_n) are unit vectors representing the directions of the axes of the characteristic ellipsoid, and 1/√λ_1, 1/√λ_2, ..., 1/√λ_n are the lengths of the corresponding semi-axes. Together, the scaled vectors u_i/√λ_i form the components of the axes of the characteristic ellipsoid in the global coordinates. The orientation matrix U of the characteristic ellipsoid, defined in global coordinates, is

U = [ u_1^1  u_2^1  ...  u_n^1
      u_1^2  u_2^2  ...  u_n^2
      ...
      u_1^n  u_2^n  ...  u_n^n ],  (6.21)

where u_1, u_2, ..., u_n are unit vectors in the directions of the axes of the characteristic ellipsoid. The set of vectors [u_1, u_2, ..., u_n]^T together with the center of the characteristic ellipsoid defines the local coordinate system, while the set of column vectors [u_1, u_2, ..., u_n] contains the orientation information of the characteristic ellipsoid in the global coordinates. The unit vector u_i (i = 1, ..., n) can be written as [cos(k_i1), cos(k_i2), ..., cos(k_in)], where k_ij (j = 1, ..., n) is the angle between the i-th axis of the characteristic ellipsoid and the j-th global coordinate axis. The projection matrix of the axes of the characteristic ellipsoid onto the global coordinates can then be given as

E = [ u_1^T/√λ_1
      u_2^T/√λ_2
      ...
      u_n^T/√λ_n ],  (6.22)
The semi-axis lengths of the characteristic ellipsoid can be calculated directly from the eigenvalues of the matrix $A$:
$$r_i = \frac{1}{\sqrt{\lambda_i}}, \quad i = 1, \ldots, n, \qquad (6.23)$$
where $\lambda_i$ are the eigenvalues of the matrix $A$. Furthermore, the normalized semi-axes are given by
$$\bar{r}_i = \frac{r_i}{\sum_{i=1}^{n} r_i}. \qquad (6.24)$$
Let $r_{\max}$ and $r_{\min}$ be the lengths of the major (longest) and minor (shortest) semi-axes. The second eccentricity of the characteristic ellipsoid can then be given as
$$e = \sqrt{\left(\frac{r_{\max}}{r_{\min}}\right)^2 - 1}. \qquad (6.25)$$
If $r_{\max} = r_{\min}$, then $e = 0$ and the characteristic ellipsoid becomes a multidimensional sphere. If the length of one semi-axis of the ellipsoid approaches 0, $e$ grows without bound, which means the characteristic ellipsoid degenerates and becomes singular. Thus $e$ reflects the degree of anisotropy of the characteristic ellipsoid.
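Continuing the numerical sketch above, the normalized radii of Eq. (6.24) and the second eccentricity of Eq. (6.25) follow directly from the semi-axis lengths:

```python
# Normalized radii, Eq. (6.24), and second eccentricity, Eq. (6.25),
# computed from the `radii` returned by ellipsoid_geometry above.
r_norm = radii / radii.sum()
e = np.sqrt((radii.max() / radii.min()) ** 2 - 1.0)
# e == 0 for a sphere; e grows without bound as the ellipsoid degenerates.
```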
The volume of the characteristic ellipsoid is given by the formula:
$$\mathrm{Vol}(E_{A,c}) = \frac{\pi^{n/2}}{\Gamma\left(\frac{n+2}{2}\right)\sqrt{\det A}}, \qquad (6.26)$$
where $\Gamma(\cdot)$ is the standard gamma function. For odd $n$, $\Gamma(n/2)$ can be calculated as
$$\Gamma\left(\frac{n}{2}\right) = \frac{\sqrt{\pi}\,(n-2)!!}{2^{(n-1)/2}}, \qquad (6.27)$$
(for even $n$, simply $\Gamma(n/2) = (n/2 - 1)!$), where $n!!$ is the double factorial defined by
$$n!! \equiv \begin{cases} n \times (n-2) \times \cdots \times 5 \times 3 \times 1, & n > 0 \text{ odd}, \\ n \times (n-2) \times \cdots \times 6 \times 4 \times 2, & n > 0 \text{ even}, \\ 1, & n = -1, 0. \end{cases} \qquad (6.28)$$
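The gamma-function evaluation in Eq. (6.27) is easy to verify against a library implementation; a small sketch (function names are ours):

```python
from math import gamma, sqrt, pi

def double_factorial(n):
    """n!! as defined in Eq. (6.28)."""
    if n in (-1, 0):
        return 1
    result = 1
    while n > 0:
        result *= n
        n -= 2
    return result

def gamma_half(n):
    """Gamma(n/2) for integer n >= 1: Eq. (6.27) for odd n,
    (n/2 - 1)! for even n."""
    if n % 2 == 0:
        return gamma(n // 2)    # Gamma(m) = (m - 1)! for integer m
    return sqrt(pi) * double_factorial(n - 2) / 2 ** ((n - 1) / 2)

assert abs(gamma_half(5) - gamma(2.5)) < 1e-12   # n = 5: both give 3*sqrt(pi)/4
```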
6.7.4 Interpretation Rules for Characteristic Ellipsoids

Characteristic ellipsoids project the measurement points along the directions in which the data varies the most. These directions are determined by the eigenvectors of the matrix $A$ corresponding to the smallest eigenvalues. The magnitude of the eigenvalues corresponds to the variance of the data along the eigenvector directions, and the eigenvectors with the smallest eigenvalues are the principal axes. Each principal axis has an associated eigenvalue, $\lambda_p$, equal to the fraction of the variance of the entire measurement set that falls along that axis. The multi-dimensional characteristic ellipsoid rotates the measurement points such that the maximum variability becomes visible, and the SVD yields the radii and orientation of the multi-dimensional characteristic ellipsoid along the most important gradients among the measurement points. A multi-dimensional characteristic ellipsoid shows different variability along different axes: some axes may exhibit little variation while others vary strongly, which means that along the latter the shape of the characteristic ellipsoid changes significantly. In a multi-dimensional space, typically only a few dimensions show dramatic changes in axis length that affect the shape of the characteristic ellipsoid, while most axes remain unchanged or change only slightly. The principal eigenvectors point to the principal directions of the distribution of the measurement data, which define the orientation of the characteristic ellipsoid. Moreover, the eigenvectors describe the spatial distribution of the projected measurement data as it evolves in time according to the aforementioned projection. Once an event happens in the power system, the first $m$ principal axes account for a large share of the total variance of the measurement data; the contribution of each axis to the measurement data, in terms of variance, can then be assessed by ranking the eigenvalues. Arrange the eigenvalues of the correlation matrix $A$ in increasing order $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_n$, with the corresponding orthogonal eigenvectors $u_1, u_2, \ldots, u_n$.

Several insights into the behavior of the characteristic ellipsoids can be used to analyze and understand the dynamic behavior of a power system. The characteristic ellipsoid's volume $V(\varepsilon)$ is a measure of system stress reflecting the spatial magnitude of the system trajectory: a relatively small volume indicates that the system motion is not stressed, whereas a large $V(\varepsilon)$ points to a disturbed state of the system. The derivative $V' = \Delta V / \Delta t$, calculated numerically over a certain number of subsequent measurements, measures the generalized damping of the system motion. A positive $V'$ signals an increasing spatial magnitude of the system trajectory; a negative $V'$ implies that the system trajectory is stabilizing. A sudden increase of $V(\varepsilon)$ signifies a disturbance.
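These volume-based rules translate into a very simple monitoring loop. The sketch below (the function name and the sensitivity setting `jump_factor` are assumptions of ours, not values from the text) computes $V'$ numerically over a sequence of CELL volumes and flags sudden increases:

```python
import numpy as np

def volume_alarm(volumes, dt, jump_factor=2.0):
    """Numerical derivative V' = dV/dt of successive CELL volumes and a
    simple disturbance flag on sudden volume increases."""
    v = np.asarray(volumes, dtype=float)
    dv = np.diff(v)
    v_prime = dv / dt          # V' > 0: trajectory growing; V' < 0: stabilizing
    baseline = np.median(np.abs(dv)) + 1e-12
    alarms = np.where(dv > jump_factor * baseline)[0] + 1   # sample indices
    return v_prime, alarms
```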
The characteristic ellipsoids are able to detect such disturbances as voltage sags and swells caused by power system faults, equipment failures, and control malfunctions; momentary interruptions, which result from a momentary loss of voltage in a power system; and oscillatory transient disturbances, which occur when a sudden, non-power-frequency change of positive and negative polarity appears in the steady-state voltage, current, or both. The shape and orientation of the characteristic ellipsoids are also informative. The orientation of the ellipsoid's axes is specified by the eigenvectors $u_i$, $i = 1, \ldots, n$, of matrix $A$, while the lengths of the semi-axes are determined by the eigenvalues $\lambda_i$, $i = 1, \ldots, n$, through $r_i = 1/\sqrt{\lambda_i}$. The eigenvector $u_{\max} = u_i$ corresponding to the smallest $\lambda_i$ indicates the dominating direction of the system motion. The angles between $u_{\max}$ and the coordinates of $\mathbb{R}^n$ help to identify the phasors (and system locations) involved in the system's dominating motion, and the orientation of $u_{\max}$ also helps to determine whether the phasors move in phase or out of phase.
6.7.5 Simulation Results

In order to evaluate the performance of the CELL method in more practical situations, real PMU measurement data were used to test the method. Three sequences of PMU measurements were used to conduct the analysis. Fig. 6.2 shows the voltage magnitude responses at three locations recorded by PMUs. From Fig. 6.2, one can see that the events resulted in significant variations in the voltage magnitude.
Fig. 6.2. Voltage magnitudes
For the purpose of demonstration, three-dimensional ellipsoids were built from the three selected sequences of PMU data (see Fig. 6.2); if more PMU data records were used, higher-dimensional ellipsoids would be built. Based on the formed 3-dimensional ellipsoids, the normalized radii of the
ellipsoids, the volume of the ellipsoids, and the projections of the radii on the global coordinates (in the PMU data space) are analyzed. Fig. 6.3 shows the three normalized radii of the ellipsoids. From Fig. 6.3, one can see that the three events cause significant variations in all three normalized radii, which implies that when an event happens, the radii of the ellipsoids experience significant changes over the time steps. Therefore, by monitoring the change of the normalized radii lengths, the dynamic behavior of the system can be detected.
Fig. 6.3. Normalized radii of 3-D ellipsoids
Fig. 6.4 illustrates the volume change of the ellipsoids. Similar to the normalized radii, the volume of the ellipsoids experiences significant variation when the events happen in the system, which suggests that the volume can be viewed as a good indicator of the dynamic behavior of the system; the dynamic behavior can thus be monitored in turn by tracking the volume change of the ellipsoids. Figs. 6.5 through 6.7 show the projection of each axis on each global coordinate. Figs. 6.5 and 6.6 show the projections of the first (shortest) axis and the second axis on the global (PMU measurement) coordinates, respectively. The spikes in these two diagrams clearly indicate the occurrence of the events. However, no obvious spikes are observed in the projection of the third (longest) axis on the global coordinates (see Fig. 6.7). This is because the longest axis is so long compared with the other axes that the variation in it caused by the events is
Fig. 6.4. Volume of the 3-D ellipsoids
Fig. 6.5. Projection of the 1st axis (shortest) on global coordinates.
Fig. 6.6. Projection of the 2nd axis on global coordinates
Fig. 6.7. Projection of the 3rd axis (longest) on global coordinates
hidden. This case study illustrates that the proposed ellipsoid method can be a good approach for monitoring the dynamic behavior of power systems. It also demonstrates that some of the geometric properties of the ellipsoids can serve as effective and efficient indicators for system dynamic behavior monitoring.
6.8 Conclusion

In this chapter, applications of PMUs in modern power systems are discussed. The topics include state estimation, stability analysis, oscillation monitoring, event detection and fault location, enhanced situational awareness, and model validation. A case study using a characteristic ellipsoid method based on PMU data to monitor the dynamic behavior of power systems is presented. The theoretical background of the proposed method is explored, and real PMU data are used to illustrate the effectiveness of the characteristic ellipsoid method.
References Abur A, Exposito AG (2004) Power system state estimation: theory and implementation. Marcel Dekker, New York Abur A, Magnago FH (1999) Optimal meter placement for maintaining observability during single branch outages. IEEE Trans Power Syst 14(4): 1273 – 1278 Al-Othman AK, Irving MR (2005a) A comparative study of two methods for uncertainty analysis in power system state estimation. IEEE Trans Power Syst 20(2): 1181 – 1182 Al-Othman AK, Irving MR (2005b) Uncertainty modeling in power system state estimation. IEE Proceedings Generation, Transm Distrib 152(2): 233 – 239 Angel AD, Geurts P, Ernst D et al (2007) Estimation of rotor angles of synchronous machines using Artificial Neural Networks and local PMU-based quantities. Neurocomputing 70(16 – 18): 2668 – 2678 Baldwin TL, Mili L, Boisen MB et al (1993) Power system observability with minimal phasor measurement placement. IEEE Trans Power Syst 8(2): 707 – 715 Balance JW, Bhargava B, Rodriguez GD (2003) Monitoring power system dynamics using phasor measurement technology for power system dynamic security assessment. Proceedings of IEEE Bologna PowerTech Conference, Bologna, 22 – 26 June 2003 Bi TS, Qin XH, Yang QX (2008) A novel hybrid state estimator for including synchronized phasor measurements. Electr Power Syst Res 78(8): 1343 – 1352
Bian X, Qiu J (2006) Adaptive clonal algorithm and its application for optimal PMU placement. Proceedings of IEEE International Conference on Communication, Circuits and Systems, Island of Kos, 21 – 24 May 2006 Brahma S, Girgis AA (2004) Fault location on a transmission line using synchronized voltage measurements. IEEE Trans Power Deliv 19(4): 1619 – 1622 Brahma SM (2006) New fault-location method for a single multiterminal transmission line using synchronized phasor measurements. IEEE Trans Power Deliv 21(3): 1148 – 1153 Brueni DJ, Heath LS (2005) The PMU placement problem. SIAM J on Discr Math 19(3): 744 – 761 Burnett ROJ, Butts MM, Cease TW et al (1994) Synchronized phasor measurements of a power system event. IEEE Trans Power Syst 9(3): 1643 – 1650 Cai JY, Huang Z, Hauer J et al (2005) Current status and experience of WAMS implementation in North America. Proceedings of IEEE/PES Transmission and Distribution Conference and Exhibition: Asia Pacific, Dalian, 23 – 25 August 2005 Chakrabarti S, Eliades D, Kyriakides E et al (2007) Measurement uncertainty considerations in optimal sensor deployment for state estimation. Proceedings of IEEE Symposium on Intelligent Signal Processing, Alcala de Henares, 3 – 5 October 2007 Chakrabarti S, Kyriakides E (2008) Optimal placement of phasor measurement units for power system observability. IEEE Trans Power Syst 23(3): 1433 – 1440 Chakrabarti S, Kyriakides E (2009) PMU measurement uncertainty considerations in WLS state estimation. IEEE Trans Power Syst 24(2): 1062 – 1071 Chakrabarti S, Kyriakides E, Bi T (2009a) Measurements get together. IEEE Power Energy Mag 7(1): 41 – 49 Chakrabarti S, Kyriakides E, Eliades DG (2009b) Placement of synchronized measurements for power system observability. IEEE Trans Power Deliv 24(1): 12 – 19. Chen CS, Liu CW, Jiang JA (2002) A new adaptive PMU based protection scheme for transposed/untransposed parallel transmission lines. IEEE Trans Power Deliv 17(2): 395 – 404 Chen J, Abur A (2005) Improved bad data processing via strategic placement of PMUs. 2005 IEEE Power Engineering Society General Meeting, 12 – 16 June 2005 Corsi S, Taranto GN (2008) A real-time voltage instability identification algorithm based on local phasor measurements. IEEE Trans Power Syst 23(3): 1271 – 1279 Din ESTE, Gilany M, Aziz MMA et al (2005) An PMU double ended fault location scheme for aged power cables. Proceedings of IEEE Power Engineering Society General Meeting, San Francisco, 12 – 16 June 2005 Donnelly M, Ingram M, Carroll JR (2006) Eastern interconnection phasor project. Proceedings of the 39th Annual Hawaii International Conference on System Sciences, Hawaii, 4 – 7 January 2006 El-Amary NH, Mostafa YG, Mansour MM et al (2008) Phasor Measurement Units’ allocation using discrete particle swarm for voltage stability monitoring. 2008 IEEE Canada Electric Power Conference, Vancouver, 6 – 7 October 2008 Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Human Factors 37(1): 32 – 64 Fan C, Du X, Li S et al (2007) An adaptive fault location technique based on PMU for transmission line. 2007 IEEE Power Engineering Society General Meeting, Tempa, 24 – 28 June 2007 Fouad AA, Aboytes F, Carvalho VF et al (1988) Dynamic security assessment practices in North America. IEEE Trans Power Syst 3(3): 1310 – 1321
Gubina F, Strmcnik B (1995) Voltage collapse proximity index determination using voltage phasors approach. IEEE Trans Power Syst 10(2): 788 – 794 IEEE Power Engineering Society. (2006) IEEE Std C37.118TM – 2005: IEEE standard for synchrophasors for power systems, New York Jiang JA, Lin YH, Yang JZ et al (2000a) An adaptive PMU based fault detection/location technique for transmission lines Part-II: PMU implementation and performance evaluation. IEEE Trans Power Deliv 15(4): 1136 – 1146 Jiang JA, Yang JZ, Lin YH et al (2000b) An adaptive PMU based fault detection/location technique for transmission lines Part-I: Theory and algorithms. IEEE Trans Power Deliv 15(2): 486 – 493 Jiang W, Vittal V, Heydt GT (2007) A distributed state estimator utilizing synchronized phasor measurements. IEEE Trans Power Syst 22(2): 563 – 571 Jiang W, Vittal V, Heydt GT (2008) Diakoptic state estimation using phasor measurement units. IEEE Trans Power Syst 23(4): 1589 – 1589 Kakimoto N, Sugumi M, Makino T et al (2006) Monitoring of inter-area oscillation mode by synchronized phasor measurement. IEEE Trans Power Syst 21(1): 260 – 268 Kezunovic M, Mrkic J, Perunicic B (1994) An accurate fault location algorithm using synchronized sampling. Electr Power Syst Res 29(3): 161 – 169 Khachiyan LG (1996) Rounding of polytopes in the real number model of computation. Math Oper Res 21(2): 307 – 320 Kumar P, Yildirim EA (2005) Minimum-volume enclosing ellipsoids and core sets. J Optim Theory Appl 126(1): 1 – 21 Kundur P (1994) Power System Stability And Control. McGraw-Hill, New York Leirbukt A, Gjerde JO, Korba P et al (2006) Wide area monitoring experiences in Norway. Proceedings of Power Systems Conference & Exposition, Atlanta, 29 October – 1 November 2006 Lien KP, Liu CW, Jiang JA et al (2005) A novel fault location algorithm for multiterminal lines using phasor measurement units. Proceedings of the 37th Annual North American Power Symposium, Ames, 23 – 25 October 2005 Lien KP, Liu CWk, Yu CS et al (2006) Transmission network fault location observability with minimal PMU placement. IEEE Trans Power Deliv 21(3): 1128 – 1136 Lin YH, Liu CW, Chen CS (2004a) A new PMU-based fault detection/location technique for transmission lines with consideration of arcing fault discriminationPart I: Theory and algorithms. IEEE Trans Power Deliv 19(4): 1587 – 1593 Lin YH, Liu CW, Chen CS (2004b) A new PMU-based fault detection/location technique for transmission lines with consideration of arcing fault discriminationPart II: Performance evaluation. IEEE Trans Power Deliv 19(4): 1594 – 1601 Lin YH, Liu CW, Yu CS (2002) A new fault locator for three-terminal transmission line-using two-terminal synchronized voltage and current phasors. IEEE Trans Power Deliv 17(2): 452 – 459 Liu CW, Chang CS, Su MC (1998) Neuro-fuzzy networks for voltage security monitoring based on synchronized phasor measurements. IEEE Trans Power Syst 13(2): 326 – 332 Liu CW, Su MC, Tsay SS et al (1999a) Application of a novel fuzzy neural network to real-time transient stability swings prediction based on synchronized phasor measurements. IEEE Trans Power Syst 14(2): 685 – 692 Liu CW, Thorp J (1995) Application of synchronised phasor measurements to realtime transient stability prediction. IEE Proceedings Generation, Transm Distr 142(4): 355 – 360 Liu CW, Tsay SS, Wang YJ (1999b) Neuro-fuzzy approach to real-time transient stability prediction based on synchronized phasor measurements. Electr Power Syst Res 49(2): 123 – 127
Liu G, Venkatasubramanian V (2008) Oscillation monitoring from ambient PMU measurements by frequency domain decomposition. 2008 IEEE International Symposium on Circuits and Syst, Seattle, 18 – 21 May 2008 Liu M, Zhang B, Yao L et al (2008) PMU based voltage stability analysis for transmission corridors. Proceedings of the 3rd International Conference on Electric Utility Deregulation and Restructuring and Power Technologies, Nanjing, 6 – 9 April 2008 Liu Y, Lin F, Chu X (2002) Transient stability prediction based on PMU and FCRBFN. Proceedings of the 5th International Conference on Power System Management and Control, London, 17 – 19 April 2002 Ma J (2008) Advanced techniques for power system stability analysis. PhD Dissertation, The University of Queensland, Brisbane, Australia Ma J, Makarov YV, Miller CH et al (2008) Use multi-dimensional ellipsoid to monitor dynamic behavior of power systems based on PMU measurement. Proceedings of IEEE Power and Energy Society General Meeting – Conversion and Delivery of Electrical Energy in the 21st Century, Pittsburgh, 20 – 24 July 2008 Makarov YV, Miller CH, Nguyen TB (2007) Characteristic ellipsoid method for monitoring power system dynamic behavior using phasor measurements. 2007 iREP Symposium-Bulk Power System Dynamics and Control - VII Revitalizing Operational Reliability, Charleston, 19 – 24 August 2007 Makarov YV, Miller CH, Nguyen TB et al (2008) Monitoring of power system dynamic behavior using characteristic ellipsoid method. The 41th Hawaii International Conference on System Sciences, Hawaii, USA, 7 – 10 January 2008 Martin KE, Benmouyal G, Adamiak MG et al (1998) IEEE standard for synchrophasor for power systems. IEEE Trans Power Deliv 13(1): 73 – 77 Mei K, Rovnyak SM, Ong CM (2008) Clustering-based dynamic event location using wide-area phasor measurements. IEEE Trans Power Syst 23(2): 673 – 679 Meliopoulos APS, Cokkinides GJ, Wasynczuk O et al (2006) PMU data characterization and application to stability monitoring. Proceedings of IEEE Power Engineering Society General Meeting, Piscataway, 18 – 22 June 2006 Miloˇsevi´c B, Begovi´c M (2003a) Nondominated sorting genetic algorithm for optimal phasor measurement placement. IEEE Trans Power Syst 18(1): 69 – 75 Miloˇsevi´c B, Begovi´c M (2003b) Voltage-stability protection and control using a wide-area network of phasor measurements. IEEE Trans Power Syst 18(1): 121 – 127 Monchusi BB, Mitani Y, Changsong L et al (2008) PMU based power system stability analysis. Proceedings of IEEE Region 10 Conference, Hyderabad, 19 – 21 November 2008 NASPI (2009a) North American Synchrophasor Initiative, http://www.naspi.org/. Accessed 22 June 2009 NASPI (2009b) Actual and potential phasor data applications. Avilable at: http:// www.naspi.org/phasorappstable.pdf. Accessed 22 June 2009 Nuqui RF, Phadke AG (2005) Phasor measurement unit placement techniques for complete and incomplete observability. IEEE Trans Power Deliv 20(4): 2381 – 2388 Ota Y, Ukai H, Nakamura K et al (2002) PMU based midterm stability evaluation of wide-area power system. 2002 IEEE/PES Transmission and Distribution Conference and Exhibition: Asia Pacific, Yokohama, 6 – 10 October 2002 Pai MA (1989) Energy Function Analysis for Power System Stability. Kluwer, Boston Peng J, Sun Y, Wang HF (2006) Optimal PMU placement for full network observability using Tabu search algorithm. Electr Power Energy Syst 28(4): 223 – 231 Phadke AG (1993) Synchronized phasor measurements in power systems. IEEE
Comput Appl Power 6(2): 10 – 15 Phadke AG, Thorp JS, Adamiak MG (1983) A new measurement technique for tracking voltage phasors, local system frequency, and rate of change of frequency. IEEE Trans Power App Syst, PAS-102(5): 1025 – 1038 Phadke AG, Thorp JS, Karimi KJ (1986) State estimation with phasor measurements. IEEE Trans Power Syst 1(1): 233 – 241 Radovanovic A (2001) Using the internet in networking of synchronized phasor measurement units. Inte J Electr Power Energy Syst 23(3): 245 – 250 Rakpenthai C, Premrudeepreechacharn S, Uatrongjit S et al (2007) An optimal PMU placement method against measurement loss and branch outage. IEEE Trans Power Deliv 22(1): 101 – 107 Rasmussen J, Jørgensen P (2006) Synchronized phasor measurements of a power system event in eastern Denmark. IEEE Trans Power Syst 21(1): 278 – 284 Samantaray SR, Tripathy LN, Dash PK (2009) Differential equation-based fault locator for unified power flow controller-based transmission line using synchronised phasor measurements. IET Generation, Trans Distrib 3(1): 86 – 98 Silverman BW, Titterington DM (1980) Minimum covering ellipses. SIAM J Statist Sci Comput 1(4): 401 – 409 Smon I, Verbic G, Gubina F (2006) Local voltage-stability index using Tellegen’s theorem. IEEE Trans Power Syst 21(3): 1267 – 1275 Snyder AF, Hadjsaid N, Georges D et al (1998) Inter-area oscillation damping with power system stabilizers and synchronized phasor measurements. Proceedings of International Conference on Power System Technology, Beijing, 18 – 21 August 1998 Sobajic DJ, Pao YH (1989) Artificial Neural-Net based dynamic security assessment for electric power systems. IEEE Trans Power Syst 4(1): 220 – 228 Sun K, Likhate S, Vittal V et al (2007) An online dynamic security assessment scheme using phasor measurements and decision trees. IEEE Trans Power Syst 22(4): 1935 – 1943 Sun P, Freund RM (2004) Computation of minimum-volume covering ellipsoids. Oper Res 52(5): 690 – 706 Taylor CW, Erickson DC, Martin KE et al (2005) WACS-wide-area stability and voltage control system: R & D and online demonstration. Proceedings of the IEEE 93(5): 892 – 906 Thorp JS, Phadke AG, Karimi KJ (1985) Real time voltage-phasor measurements for static state estimation. IEEE Trans Power App Syst, PAS-104(11): 3098 – 3106 Tiwari A, Ajjarapu V (2007) Event identification and contingency assessment for voltage stability via PMU. Proceedings of the 39th North American Power Symposium, Las Cruces, 30 September – 2 October 2007 Trudnowski DJ, Johnson JM, Hauer JF (1999) Making Prony analysis more accurate using multiple signals. IEEE Trans Power Syst 14(1): 226 – 231 Uhlen K, Warland L, Gjerde JO et al (2008) Monitoring amplitude, frequency and damping of power system oscillations with PMU measurements. 2008 IEEE Power and Energy Society General Meeting — Conversion and Delivery of Electrical Energy in the 21st Century, Pittsburgh, 20 – 24 July 2008 Verbic G, Gubina F (2000) A new concept of protection against voltage collapse based on local phasors. Proceedings of International Conference on Power System Technology, Perth, 4 – 7 December 2000 Verbic G, Gubina F (2004) A new concept of voltage-collapse protection based on local phasors. IEEE Trans Power Deliv 19(2): 576 – 581 Vu K, Begovic MM, Novosel D et al (1999) Use of local measurements to estimate voltage-stability margin. IEEE Trans Power Syst 14(3): 1029 – 1035 Wang C, Dou CX, Li XB (2007) A WAMS/PMU-based fault location technique.
Electr Power Syst Res 77(8): 936 – 945 Wang L, Wang X, Morison K (1997) Quantitative search of transient stability limits using EEAC. Proceedings of IEEE PES Summer Meeting, Berlin, 20 – 24 July 1997 Wang YJ, Liu CW, Liu YH (2005) A PMU based special protection scheme: A case study of Taiwan power system. Int J Electr Power Energy Syst 27(3): 215 – 223 Xu B, Abur A (2004) Observability analysis and measurement placement for systems with PMUs. Proceedings of IEEE PES Power Systems Conference and Exposition, New York, 10 – 13 October 2004 Xu B, Abur A (2005) Optimal placement of phasor measurement units for state estimation. PSERC, Final Project Report Xue Y, Custem TV, Ribbens-Pavella M (1989) Extended equal area criterion justifications, generalizations, applications. IEEE Trans Power Syst 4(1): 44 – 52 Xue Y, Yu Y, Li J et al (1998) A new tool for dynamic security assessment of power systems. Control Eng Pract (6): 1511 – 1516 Yu CS, Liu CW, Yu SL et al (2001) A new PMU-based fault location algorithm for series compensated lines. IEEE Power Eng Rev 21(11): 58 – 58 Yu CS, Liu CW, Yu SL et al (2002) A new PMU based fault location algorithm for series compensated lines. IEEE Trans Power Deliv 17(1): 33 – 46 Zhao L, Abur A (2005) Multiarea state estimation using synchronized phasor measurements. IEEE Trans Power Syst 20(2): 611 – 617 Zhou M, Centeno VA, Thorp JS et al (2006) An alternative for including phasor measurements in state estimators. IEEE Trans Power Syst 21(4): 1930 – 1937 Zivanovic R, Cairns C (1996) Implementation of PMU technology in state estimation: An overview. Proceedings of the 4th IEEE AFRICON Conference, Stellenbosch, 25 – 27 September 1996
7 Conclusions and Future Trends in Emerging Techniques

Zhaoyang Dong and Pei Zhang
A number of emerging techniques for power system analysis have been described in the previous chapters of this book. However, given the complexity and ever-increasing uncertainties of the power industry, new challenges keep arising, and consequently new techniques are needed. The major initiatives in the power industry this decade are no doubt renewable energy and, more recently, the smart grid. These new challenges have already encouraged engineers and researchers to explore further emerging techniques, and given the fast-changing environment, some of them may become increasingly established for power system analysis. The rapid changes also result in a wide diversity of emerging techniques; consequently, this book can only cover some of them. Nevertheless, the techniques discussed in the book are expected to provide a general overview of the recent advances in power system analysis. As technology advances, continuous study in this area is expected. This chapter summarizes some of the key techniques discussed in the book. The trends in emerging techniques are also given, followed by a list of topics for further reading.
7.1 Identified Emerging Techniques

The following key emerging techniques have been covered in this book:
• data mining techniques and their applications in power system analysis;
• grid computing techniques and their applications in power system analysis;
• probabilistic methods for power system stability assessment and planning;
• phasor measurement units and their applications in power system analysis.
Other emerging techniques, which are also important but only briefly introduced in this book, are:
• power system load modeling;
• topological methods for system stability and vulnerability analysis;
• power system cascading failure analysis;
• power system vulnerability analysis;
• power system control and protection.
Detailed descriptions of the techniques listed above have been given throughout Chapters 1 – 6. Together with conventional methods, they provide the power industry with much needed tools for system operation, control, and planning tasks. Many of the emerging characteristics of today's power systems have been considered in these techniques; however, not all of the needs of the power industry have been addressed satisfactorily, and the emerging techniques themselves are evolving to keep pace with the rapid development of the power industry. It is therefore necessary to recognize the trends in power industry development, which help to define the new challenges and opportunities as well as the scope of corresponding new emerging techniques.
7.2 Trends in Emerging Techniques

In the past few years, the power industry worldwide has been experiencing rapid changes which lead to new opportunities as well as challenges. Among the external factors driving these changes are government policies. The growing awareness of and practice in renewable energy and sustainable development have introduced a significant amount of renewable energy into the electricity supply sector. Along with the technical challenges associated with renewable generators such as wind power generators and solar power generation units, emissions trading and carbon reduction policies also contribute significantly to reshaping the power industry. From 2009, the move towards a smart grid, which combines the physical power system with information and communications technology (ICT), has attracted huge investments in several major countries including the USA and China. Although the definition and scope of a smart grid remain largely vague and vary from government to government, the overall trend towards a more intelligent power system is clear. Techniques such as self-healing systems, power quality improvement techniques, ultra-high-voltage DC and AC transmission systems, and associated ICT techniques will be among the key techniques facilitating the move towards the smart grid.
7.3 Further Reading

Following the major trends in power engineering development, further reading is recommended in the areas of emission trading impacts on power system operations and planning, renewable technology developments and their impacts on power systems, and the smart grid.
7.3.1 Economic Impact of Emission Trading Schemes and Carbon Pollution Reduction Schemes

As global warming and climate change threaten the ecosystems and economies of the world, many countries have realized the urgent need to reduce greenhouse gas (GHG) emissions and achieve sustainable development. Many efforts towards emission reduction have already been made in the form of government policies and international agreements. In the scientific and engineering literature, traditional command-and-control regulations have been criticized, and the call for more effective environmental policies for sustainable development never stops. Jordan et al. (2003) argued that even the most sophisticated forms of environmental regulation cannot alone achieve sustainable development. Schubert and Zerlauth (1999) argued that the cost of complying with command-and-control regulations excessively limits business profitability and competitiveness, throttling back technological and environmental innovation and consequently economic growth. According to Janicke (1997) and Mol (2000), new and more novel approaches such as voluntary agreements and market-based instruments are needed by governments and non-legislative organizations for emission reduction purposes. Partly in view of these arguments, a Europe-wide Emission Trading Scheme (ETS) was introduced by the European Union (EU) from 1 January 2005, obligating major stationary sources of GHGs to participate in a cap-and-trade scheme. Emission trading is designed to achieve cost-efficient emission reduction through the equalization of marginal abatement costs. The EU-ETS is currently the major policy instrument across Europe for managing emissions of carbon dioxide (CO2) and other greenhouse gases. Since its introduction, the EU-ETS has remained a hot topic for discussion, with the debate mainly focused on emission right allocations: whether emission allowances should be provided free of charge or through purchase (auction) is the centre of the debate. Economists argue, based on the assumption of profit maximization, that the existence of a carbon price implies an extra cost for every fossil-fuelled generator, and that in a competitive market the generator will pass this extra cost through to consumers via the electricity price. Because of this, free allocation of emission allowances represents a large windfall to generation
companies. Burtraw et al. (1998) compared three different allocation options for the electricity sector in the US and found that the costs to society under auctioning are about half of those under the two free-of-charge options, i.e., emission-based allocation and production-based allocation. Zhou et al. (2009) presented an overview of emission trading schemes and the carbon reduction scheme impacts on the Australian National Electricity Market (NEM). Quirion (2003) suggested that to achieve profit neutrality only 10 – 15% of allowances need to be freely allocated. Bovenberg and Goulder (2000) likewise proposed, based on their research on the coal, oil, and gas industries in the US, that no more than 15% of allowances need to be freely allocated to secure profits and equity values. Sijm et al. (2006) suggested that overall auctioning seems to be a better option than free allocation, because auctioning avoids windfall profits among producers, internalizes the costs of carbon emission into the power price, raises public revenue to mitigate rising power prices, and avoids potential distortions of new investment decisions. Emission allocation is also a political issue: free allocation needs to be weighed against auctioning in view of the additional financial costs imposed on emitters, notably power producers and other carbon-intensive industries covered by the EU-ETS. The generation sector is among those contributing the most to greenhouse gas emissions (Sijm et al., 2006; Zhou et al., 2009); consequently, the ETS has been introduced mainly targeting the generation sector, following the Kyoto protocol. The exact impacts of an ETS on the generation composition, profitability, dispatch order, and new generation entry into the market are yet to be clearly depicted, but it can be quite confidently anticipated that the generators in an electricity market will be affected. Should an ETS be implemented, there will be more renewable and combined-cycle generators and fewer, if any, coal-fired power stations entering the market. Take the Australian National Electricity Market (NEM) for instance: the Australian government signed the Kyoto protocol in 2008 and encourages renewable resources into the NEM (Garnaut, 2008). Zhou et al. (2009) studied the emission trading scheme impacts on the NEM and compared the profits and costs of generators under different emission allocation schemes against business as usual, i.e., no-ETS scenarios. The study indicates that the impact on the profitability of generators and on the reduction of GHG in the Australian NEM is small if the carbon price is low; the pricing of carbon has yet to be determined in Australia. Currently, generation connection inquiries to the transmission network service providers by wind generators have been increasing rapidly in SA, VIC, and TAS, where wind resources are abundant. Another important factor to be considered in this respect is the Carbon Pollution Reduction Scheme (CPRS) promoted by the Australian government (Yin, 2009). The CPRS is expected to commence on 1 July 2011, and the Australian government expects it to ensure that emissions in Australia are reduced by 25% from 2000 levels by 2020. The ETS and CPRS impacts will have to be considered after 2010 in operations and planning across the whole power sector. For generation companies,
this means that the impacts must be considered in forming optimal bidding strategies and selecting optimal portfolios. For transmission network service providers (TNSPs), it means that transmission network expansion planning will have to deal with an increasing number of connection requests from generators using renewable sources. For distribution network service providers, distributed generation using renewable resources will become more widespread, and distribution network operation, control, and planning will have to accommodate such changes as well.
7.3.2 Power Generation Based on Renewable Resources such as Wind

Increasing power generation from renewable sources such as wind helps reduce carbon emissions and hence mitigate global warming. Wind energy is one of the fastest growing industries worldwide, and various actions have been taken by utilities and government authorities across the world to achieve this objective. Most states in the USA have a Renewable Portfolio Standard (a state policy aiming at obtaining a certain percentage of their power from renewable energy sources by a certain date), typically ranging from 10% to 20% of total capacity by 2020 (US Department of Energy, 2007). This increasing penetration of renewable sources of energy, in particular wind energy conversion systems (WECS), into the conventional power system has posed tremendous challenges to power system operators and planners, who have to ensure reliable and secure grid operation. As power generation from WECS increases significantly, it is of paramount importance to study the effect of wind-integrated power systems on overall system stability. One of the key technologies for wind power is the modeling and control of wind generator systems. The doubly fed induction generator (DFIG) is the main type of generator in variable-speed wind energy generation systems, especially for high-power applications, because of its higher energy transfer capability, reduced mechanical stress on the wind turbine, relatively low power rating of the connected power electronics converter, low investment, and flexible control (Eriksen et al., 2005; Wu et al., 2007; Yang et al., 2009a). A DFIG differs from the conventional induction generator in that it employs a series voltage-source converter to feed the wound rotor. The feedback converters consist of a rotor-side converter (RSC) and a grid-side converter (GSC), and the control capability of these converters gives the DFIG an additional advantage of flexible control and stability over other induction generators (Mishra et al., 2009a). With an increasing penetration level of DFIG-type wind turbines into the grid, there is genuine concern that the stability of DFIG-connected systems needs proper investigation. A DFIG wind turbine system, including an induction generator, two-mass drive train, power converters, and feedback controllers, is a
multivariable, nonlinear, and strongly coupled system (Kumar et al., 2009). In order to assess the stability of such a system, the dynamics of the DFIG system, including the generator and its controls, as well as the power system to which the DFIG is connected, need to be analyzed as one overall complex system (Yang et al., 2009a; Mishra et al., 2009b), and the interaction between system dynamics and DFIG dynamics needs to be considered carefully. The characteristics of DFIG systems and the increased complexity of DFIG-connected power systems also call for new control methodologies (Yang et al., 2009b). DFIG control normally consists of decoupled control of the active and reactive power of the DFIG, and a vector control strategy based on proportional-integral (PI) controllers has been used in industry to realize this decoupled control objective (Yamamoto and Motoyoshi, 1991; Pena et al., 1996; Muller et al., 2002; Miao et al., 2009; Xu and Wang, 2007; Brekken and Mohan, 2007).
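As a rough sketch of the decoupled PI control idea only (not the controller of any cited paper), two independent PI loops can regulate the active- and reactive-power errors through the rotor voltage components; the gains, time step, and per-unit values below are placeholders, and the axis assignment depends on the chosen reference frame.

```python
class PI:
    """Textbook PI regulator; kp and ki are illustrative, untuned gains."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Decoupled control: one loop drives a rotor-voltage component from the
# active-power error, the other from the reactive-power error.
pi_p = PI(kp=0.5, ki=20.0, dt=1e-3)
pi_q = PI(kp=0.5, ki=20.0, dt=1e-3)
p_ref, p_meas = 1.00, 0.95        # placeholder per-unit measurements
q_ref, q_meas = 0.00, 0.05
v_qr = pi_p.step(p_ref - p_meas)  # q-axis rotor voltage reference
v_dr = pi_q.step(q_ref - q_meas)  # d-axis rotor voltage reference
```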
7.3.3 Smart Grid

Following the initiative of greenhouse gas emission reduction, and also aiming at reducing energy costs, the smart grid has been promoted since 2009 as the most important development for the power industry in a number of major economies. For example, in the USA, the smart grid project is expected to attract US$150 billion in investments; clearly, in addition to the original objective of sustainable and reliable energy supply, it also serves as a major investment to stimulate economic development. Similarly, huge amounts of investment are expected in the development of the smart grid in China and in European nations. In the USA, the 2007 Energy Independence and Security Act (EISA) gives the US Department of Commerce's National Institute of Standards and Technology (NIST) the responsibility for issues related to smart grid development. In June 2009, the Electric Power Research Institute (EPRI, 2009) submitted a report detailing the interoperability standards of the smart grid, gaps in current standards, and priorities for new standards. In this document, EPRI summarized the high-level architecture development of the smart grid, including conceptual models, architectural principles and methods, and cyber security strategies, as well as the implementation of the conceptual model of the smart grid and the principles for enabling the smart grid to support new technologies and business models. According to the EISA of 2007 and EPRI's IntelliGrid initiative (2001 – 2009), the smart grid refers to a power grid that links itself with communications and computer control so that it can monitor, protect, and automatically optimize the operation of its components, including generation, transmission, distribution, and consumers of electricity. It also coordinates in an optimal way the operation of energy storage systems and other appliances such as electric vehicles and air conditioners. According
to EPRI (2009), the smart grid is characterized by “a two-way flow of electricity and information to an automated, widely distributed energy delivery network”. The benefits of the smart grid (EPRI, 2009) can be summarized as: (1) reliability and power quality improvement; (2) enhanced grid safety and cyber security; (3) higher energy efficiency; (4) more sustainability in energy supply; and (5) a wider range of economic benefits to participants of the smart grid on both the supplier and consumer sides. Along the line of smart grid development, a group of techniques needs to be further explored, including automated metering infrastructure (AMI), demand-side participation, plug-in electric vehicles, wide-area measurement based monitoring and control techniques, communications, distributed generation, and energy storage techniques. Moreover, transporting renewable and alternative electricity generation to the end user may require more interconnections in a power system. Given the increasing interconnection of power systems in many countries, electricity transmission techniques, especially ultra-high-voltage AC and DC transmission, are another important issue for the development of a very large scale smart grid.
7.4 Summary

The power industry in many countries has been experiencing various developments that lead to continuously emerging challenges, and power system analysis techniques need to advance as well in order to meet them. This book presents an overview of some key emerging techniques being developed and implemented over the past decades, and summarizes the trends in the power industry and in emerging technology development. The authors hope to provide readers with a picture of the technological advances of the past decade. However, as stated throughout the book, technological development will not stop: new challenges keep emerging, and the research and development of power system analysis techniques will continue.
References Bovenberg AL, Goulder LH (2000) Neutralizing the adverse industry impacts of CO2 abatement policies: what does it cost. NBER Working Paper No. W7654.
Available at SSRN: http: //ssrn. com/abstract=228128. Accessed 1 June 2009 Brekken TKA, Mohan N (2007) Control of a doubly fed induction wind generator under unbalanced grid voltage conditions. IEEE Trans Energy Conversion 22(1): 129 – 135 Burtraw D, Harrision KW, Turner P (1998) Improving efficiency in bilateral emission trading. Environ Resour Econ 11(1): 19 – 33 EPRI IntelliGridSM Initiative (2001 – 2009). http://intelligrid.epri.com. Accessed 8 July 2009 EPRI (2009) Report to NIST on the Smart Grid Interoperability Standards Roadmap, 17 June 2009 Eriksen PB, Ackermann T, Abildgaard H, et al (2005) System operation with high wind penetration. IEEE Power Energy Manag 3(6): 65 – 74 Kumar V, Kong S, Mishra Y, et al (2009) Doubly fed induction generators: overview and intelligent control strategies for wind energy conversion systems. Chapter 5, Metaxiotis edt. Intelligent Information Systems and Knowledge Management for Energy: Applications for Decision Support, Usage, and Environmental Protection, IGI Global publication Janicke M (1997) The political system’s capacity for environmental policy. In National Environmental Policies: a Comparative Study of Capacity-Building. Janicke M, Weidner H. Springer, Heidelberg, pp 1 – 24 Garnaut R (2008) Garnaut climate change review, emissions trading scheme discussion paper. Melbourne. http://www.garnautreview.org.au. Accessed 2 July 2009 Miao Z, Fan L, Osborn D, et al (2009) Control of DFIG-based wind generation to improve interarea oscillation damping. IEEE Trans Energy Conversion 24(2): 415 – 422 Mishra Y, Mishra S, Li F, et al (2009) Small signal stability analysis of a DFIG based wind power system with tuned damping controller under super/subsynchronous mode of operation. IEEE Trans Energy Conversion Mishra Y, Mishra S, Tripathy M, et al (2009) Improving stability of a DFIG-based wind power system with tuned damping controller. IEEE Trans on Energy Conversion Mol APJ (2000) The environmental movement in an era of ecological modernization. Geoforum 31(1): 45 – 56 Muller S, Deicke M, De Doncker RW (2002) Doubly fed induction generator system for wind turbines. IEEE Industry Appl Mag 8(3):26 – 33 Pena R, Clare JC, Asher GM (1996) Doubly fed induction generator using backto-back PWM converters and its application to variable speed wind-energy generation. IEE Proceedings on Electric Power Applications, 143(3):231 – 241 Quirion P (2003) Allocation of CO2 allowances and competitiveness: A case study on the european iron and steel industry. European Council on Energy Efficient Economy (ECEEE) 2003 Summer Study proceedings. http://www.eceee.org/ conference proceedings/eceee/2003c/Panel 5/5060quirion/. Accessed 28 April 2008 Schubert U, Zerlauth A (1999) Innovative regional environmental policy: the RECLAIM-emission trading policy. Environ Manag and Health 10(3): 130 – 143 Sijm JPM, Bakker SJA, Chen Y, et al (2006) CO2 price dynamics: the implications of EU emissions trading for electricity prices & operations. IEEE PES General Meeting, Montreal, 18 – 22 June 2006 US Deptment of Energy (2007) EERE state activities and partnerships. http:// apps1.eere.energy.gov/states/maps/renewable portfolio states.cfm. Accessed 2 July 2009 Wu F, Zhang XP, Godfrey K, et al (2007) Small signal stability analysis and optimal
control of a wind turbine with doubly fed induction generator. IET Gener Transm Distrib 1(5): 751 – 760 Xu L, Wang Y (2007) Dynamic modeling and control of DFIG-based wind turbines under unbalanced network conditions. IEEE Trans Power Syst 22(1): 314 – 323 Yamamoto M, Motoyoshi O (1991) Active and reactive power control for doubly-fed wound rotor induction generator. IEEE Trans Power Electron 6(4): 624 – 629 Yang LH, Xu Z, Østergaard J, Dong ZY, et al (2009) Oscillatory stability and eigenvalue sensitivity analysis of a doubly fed induction generator wind turbine system. IEEE Trans Power Syst (submitted). Yang LH, Yang GY, Xu Z, et al (2009) Optimal controller design of a wind turbine with doubly fed induction generator for small signal stability enhancement. In Wang et al ed. Wind Power Systems: Applications of Computational Intelligence. Springer, New York Yin X (2009) Building and investigating generators’ bidding strategies in an electricity market. PhD thesis, Australian National University, Canberra Zhou X, James G, Liebman A, et al (2009) Partial carbon permits allocation of potential emission trading scheme in australian electricity market. IEEE Trans Power Syst
Appendix

Zhaoyang Dong and Pei Zhang
A.1 Weibull Distribution

Other than the often-used normal distribution, the Weibull distribution has been used in many applications to model different distributions of power system parameters in probabilistic analysis. Some important properties of this distribution are reviewed here. The Weibull probability density function is defined as
$$f(T) = \frac{\beta}{\eta}\left(\frac{T-\gamma}{\eta}\right)^{\beta-1} e^{-\left(\frac{T-\gamma}{\eta}\right)^{\beta}},$$
where $\eta$ is the scale parameter ($\eta > 0$), $\gamma$ is the location parameter ($-\infty < \gamma < \infty$), and $\beta$ is the shape parameter ($\beta > 0$); $f(T) \geq 0$ for $T \geq 0$ (or $T \geq \gamma$). The mean $\bar{T}$ of the Weibull pdf is given by (Abernethy, 1996; Dodson, 1994):
$$\bar{T} = \gamma + \eta\, \Gamma\!\left(\frac{1}{\beta} + 1\right).$$
The $k$-th raw moment $\mu'_k$ of a distribution $f(x)$ is defined by
$$\mu'_k = \begin{cases} \displaystyle\sum x^k f(x), & \text{discrete distribution}, \\[2mm] \displaystyle\int x^k f(x)\,dx, & \text{continuous distribution}. \end{cases}$$
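A minimal numerical sketch of these definitions (function names are ours); `gamma_loc` renames the location parameter $\gamma$ to avoid clashing with `math.gamma`:

```python
from math import exp, gamma

def weibull_pdf(t, eta, beta, gamma_loc=0.0):
    """Three-parameter Weibull density as defined above."""
    z = (t - gamma_loc) / eta
    return (beta / eta) * z ** (beta - 1) * exp(-z ** beta)

def weibull_mean(eta, beta, gamma_loc=0.0):
    """Mean T-bar = gamma + eta * Gamma(1/beta + 1)."""
    return gamma_loc + eta * gamma(1.0 / beta + 1.0)
```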
A1.1 An Illustrative Example

Since the variable $T > 0$, according to the moment definition,
$$\mu'_k = \int_0^{+\infty} f(x)\, x^k\, dx.$$
The $k$-th raw moment of the two-parameter Weibull probability density function ($\gamma = 0$) is
$$\mu'_k = \int_0^{+\infty} e^{-\left(\frac{T}{\eta}\right)^{\beta}} T^k\, \frac{\beta}{\eta}\left(\frac{T}{\eta}\right)^{\beta-1} dT = \eta^k \int_0^{+\infty} \left(\frac{T}{\eta}\right)^k e^{-\left(\frac{T}{\eta}\right)^{\beta}} d\!\left(\frac{T}{\eta}\right)^{\beta}. \qquad (A.1)$$
Let $\left(\frac{T}{\eta}\right)^{\beta} = x$; then
$$\frac{T}{\eta} = x^{\frac{1}{\beta}}, \qquad (A.2)$$
and therefore $\left(\frac{T}{\eta}\right)^k = x^{\frac{k}{\beta}}$. Substituting Eq. (A.2) into Eq. (A.1),
$$\mu'_k = \eta^k \int_0^{+\infty} x^{\frac{k}{\beta}} e^{-x}\, dx. \qquad (A.3)$$
Setting $\frac{k}{\beta} = n$,
$$\mu'_k = \eta^k \int_0^{+\infty} x^n e^{-x}\, dx. \qquad (A.4)$$
Since the gamma function is defined as
$$\Gamma(n) = \int_0^{+\infty} x^{n-1} e^{-x}\, dx, \qquad (A.5)$$
the $k$-th raw moment of the two-parameter Weibull probability density function is
$$\mu'_k = \eta^k\, \Gamma(n+1) = \eta^k\, \Gamma\!\left(\frac{k}{\beta} + 1\right). \qquad (A.6)$$
The central moments are defined as
$$\mu_k = \int_0^{+\infty} f(x)\left(x - \bar{T}\right)^k dx,$$
where $\bar{T}$ is the mean of the Weibull distribution, $\bar{T} = \gamma + \eta\,\Gamma\!\left(\frac{1}{\beta}+1\right)$. The central moments $\mu_k$ can be expressed in terms of the raw moments $\mu'_k$ using the binomial transform
$$\mu_k = \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j}\, \mu'_j\, \bar{T}^{\,k-j}, \qquad (A.7)$$
with $\mu'_0 = 1$ and $\mu'_1 = \bar{T}$ (Ni et al., 2003). For the two-parameter Weibull distribution ($\gamma = 0$, $\bar{T} = \eta\,\Gamma(1/\beta+1)$),
$$\mu_k = \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j}\, \eta^j\, \Gamma\!\left(\frac{j}{\beta}+1\right) \left(\eta\,\Gamma\!\left(\frac{1}{\beta}+1\right)\right)^{k-j} = \eta^k \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j}\, \Gamma\!\left(\frac{j}{\beta}+1\right) \left(\Gamma\!\left(\frac{1}{\beta}+1\right)\right)^{k-j}.$$
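The closed form of Eq. (A.6) is easy to check numerically; the sketch below assumes SciPy is available, and the parameter values are illustrative:

```python
from math import gamma, exp
from scipy.integrate import quad

eta, beta, k = 2.0, 1.5, 3
analytic = eta ** k * gamma(k / beta + 1.0)      # Eq. (A.6): 8 * Gamma(3) = 16
pdf = lambda t: (beta / eta) * (t / eta) ** (beta - 1) * exp(-(t / eta) ** beta)
numeric, _ = quad(lambda t: t ** k * pdf(t), 0.0, float('inf'))
assert abs(analytic - numeric) < 1e-6
```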
A.2 Eigenvalues and Eigenvectors

Power system small signal stability analysis is based on linearised system analysis, which requires eigenvalue analysis. This section gives an overview of eigenvalues and eigenvectors. Consider a square matrix $A = [a_{ij}]_{n \times n}$, which can be the state matrix of a linearised dynamic power system model. The eigenvalue calculation is to find a nonzero vector $x = [x_i]_{1 \times n}$ and a scalar $\lambda$ such that
$$Ax = \lambda x, \qquad (A.8)$$
where $\lambda$ is an eigenvalue (also known as a characteristic value or proper value) of matrix $A$, and $x$ is the corresponding right eigenvector (also known as a characteristic vector or proper vector) of matrix $A$.

The necessary and sufficient condition for the above equation to have a non-trivial solution for the vector $x$ is that the matrix $(\lambda I - A)$ is singular. This can be represented by the characteristic equation of $A$:
$$\det(\lambda I - A) = 0, \qquad (A.9)$$
where $I$ is the identity matrix. The eigenvalues $[\lambda_1, \lambda_2, \ldots, \lambda_n]$ are the roots of this characteristic equation. The characteristic polynomial of $A$ is
$$S(\lambda) = a_n \lambda^n + a_{n-1}\lambda^{n-1} + \ldots + a_1 \lambda + a_0, \qquad (A.10)$$
where $\lambda^k$, $k = 1, \ldots, n$, are the $k$-th powers of $\lambda$, and $a_k$ are coefficients determined by the elements $a_{ij}$ of $A$. Eq. (A.10) is obtained by expanding $\det(\lambda I - A)$ as a scalar function of $\lambda$. Each eigenvalue also corresponds to a left eigenvector $y$, which is the right eigenvector of the transpose of $A$:
$$(\lambda I - A^T)\, y = 0. \qquad (A.11)$$
For power system analysis, singular values are used in some stability studies. They can be obtained through singular value decomposition. Consider an $m \times n$ matrix $B$; if $B$ can be transformed as
$$U^* B V = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix}, \quad \text{where } S = \mathrm{diag}[\sigma_1, \sigma_2, \ldots, \sigma_r], \qquad (A.12)$$
where $U_{m \times m}$ and $V_{n \times n}$ are orthogonal matrices and all $\sigma_k > 0$, then Eq. (A.12) is called the singular value decomposition, the singular values of $B$ are $\sigma_1, \sigma_2, \ldots, \sigma_r$, and $r$ is the rank of $B$. If $B$ is a symmetric matrix, then the matrices $U$ and $V$ coincide, and the $\sigma_k$ are the absolute values of the eigenvalues of $B$. Eq. (A.12) is often used in the least squares method, especially when $B$ is ill-conditioned (Deif, 1991).
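Numerically, the right and left eigenvectors of Eqs. (A.8) and (A.11) and the singular values of Eq. (A.12) can be obtained with standard routines; the toy state matrix below is our own example:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                   # toy state matrix
lams, X = np.linalg.eig(A)                     # eigenvalues, right eigenvectors (A.8)
_, Y = np.linalg.eig(A.T)                      # left eigenvectors via A^T, Eq. (A.11)
sigmas = np.linalg.svd(A, compute_uv=False)    # singular values, Eq. (A.12)
print(lams)                                    # -> [-1., -2.]
```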
A.3 Eigenvalues and Stability

Power system small signal stability is based on modal analysis of the linearised system around an operating point. The time domain characteristic of a mode corresponding to an eigenvalue $\lambda_i$ is $e^{\lambda_i t}$; correspondingly, the stability of the system is determined by the eigenvalues of the (linearised) system state matrix (Makarov and Dong, 2000; Kundur, 1994; Dong, 1998).
• Nonoscillatory modes: real eigenvalues of a system correspond to nonoscillatory modes. A positive real eigenvalue leads to aperiodic instability, while a negative real eigenvalue represents a decaying mode.
• Oscillatory modes: conjugate pairs of complex eigenvalues correspond to oscillatory modes. The real and imaginary parts of the eigenvalues define the damping and frequency of the corresponding oscillations. Let $\sigma$ and $\omega$ represent the real and imaginary parts of a complex pair of eigenvalues, $\lambda = \sigma \pm j\omega$; the frequency of oscillation in hertz is
$$f = \frac{\omega}{2\pi}, \qquad (A.13)$$
and the damping ratio is
$$\xi = \frac{-\sigma}{\sqrt{\sigma^2 + \omega^2}}. \qquad (A.14)$$
A dynamic system such as a power system can be modeled by differential and algebraic equations (DAEs):
$$\begin{cases} \dot{x} = f(x, y, p), & f: \mathbb{R}^{n+m+q} \to \mathbb{R}^n, \\ 0 = g(x, y, p), & g: \mathbb{R}^{n+m+q} \to \mathbb{R}^m, \end{cases} \qquad (A.15)$$
where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$, $p \in \mathbb{R}^q$; $x$ is the vector of dynamic state variables, $y$ is the vector of static or instantaneous state variables, and $p$ is a vector of system parameters which may change and therefore affect the system's small disturbance stability properties. The system is in an equilibrium condition if it satisfies
$$\begin{cases} 0 = f(x, y, p), \\ 0 = g(x, y, p). \end{cases} \qquad (A.16)$$
Solutions of Eq. (A.16) are the equilibrium points of system (A.15), which can be linearised at an equilibrium point subject to small disturbances:
$$\begin{cases} \Delta\dot{x} = \dfrac{\partial f}{\partial x}\Delta x + \dfrac{\partial f}{\partial y}\Delta y, \\[1mm] 0 = \dfrac{\partial g}{\partial x}\Delta x + \dfrac{\partial g}{\partial y}\Delta y, \end{cases} \qquad (A.17)$$
or in a simpler form,
$$\begin{cases} \Delta\dot{x} = A\Delta x + B\Delta y, \\ 0 = C\Delta x + D\Delta y. \end{cases} \qquad (A.18)$$
If $\det D \neq 0$, the state matrix $A_s$ can be obtained by
$$A_s = A - B D^{-1} C. \qquad (A.19)$$
It can then be analyzed for system small disturbance stability studies using eigenvalues and eigenvectors.
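A small sketch tying Eqs. (A.13), (A.14), and (A.19) together (the function name, and the assumptions $\det D \neq 0$ and no zero eigenvalues, are ours):

```python
import numpy as np

def modal_summary(A, B, C, D):
    """Reduced state matrix A_s = A - B D^{-1} C (Eq. (A.19)) and the
    frequency/damping of each mode (Eqs. (A.13) and (A.14))."""
    As = A - B @ np.linalg.solve(D, C)
    lams = np.linalg.eigvals(As)
    freq = np.abs(lams.imag) / (2.0 * np.pi)   # Hz
    damping = -lams.real / np.abs(lams)        # xi = -sigma / sqrt(sigma^2 + omega^2)
    return As, lams, freq, damping
```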
References Abernethy RB (1996) The New Weibull Handbook. Gulf Publishing, Houston Deif AS (1991) Advanced Matrix Theory for Scientists and Engineers, 2nd edn. Abacus, New York Dodson B (1994) Weibull Analysis. Amer Society for Quality, Milwaukee Dong ZY (1998) Advanced Technique for Power System Small Signal Stability and Control, PhD thesis, Sydney University, Sydney Kundur P (1994) Power System Stability and Control. McGraw-Hill, New York Makarov YV, Dong ZY (2000) Eigenvalues and eigenfunctions. Computational Science & Engineering, Encyclopedia of Electrical and Electronics Engineering, Wiley, pp 208 – 320 Ni M, McCalley JD, Vittal V et al (2003) Online risk-based security assessment. IEEE Trans Power Syst 18(1): 258 – 265