RESERVOIR CAPACITY AND YIELD
DEVELOPMENTS IN WATER SCIENCE, 9
advisory editor
VEN TE CHOW Professor of Hydraulic Engineering, Hydrosystems Laboratory, University of Illinois, Urbana, Ill., U.S.A.
FURTHER TITLES IN THIS SERIES
1 G. BUGLIARELLO AND F. GUNTER COMPUTER SYSTEMS AND WATER RESOURCES
2 H. L. GOLTERMAN PHYSIOLOGICAL LIMNOLOGY
3 Y. Y. HAIMES, W. A. HALL AND H. T. FREEDMAN MULTI OBJECTIVE OPTIMIZATION IN WATER RESOURCES SYSTEMS: THE SURROGATE WORTH TRADE-OFF METHOD
4 J. J. FRIED GROUNDWATER POLLUTION
5 N. RAJARATNAM TURBULENT JETS
6 D. STEPHENSON PIPELINE DESIGN FOR WATER ENGINEERS
7 V. HALEK AND J. SVEC GROUNDWATER HYDRAULICS
8 J. BALEK HYDROLOGY AND WATER RESOURCES IN TROPICAL AFRICA
RESERVOIR CAPACITY AND YIELD THOMAS A. McMAHON & RUSSELL G. MEIN Department of Civil Engineering, Monash University, Clayton, Vic., Australia
ELSEVIER SCIENTIFIC PUBLISHING COMPANY 1978
Amsterdam - Oxford - New York
ELSEVIER SCIENTIFIC PUBLISHING COMPANY 335 Jan van Galenstraat P.O. Box 211, Amsterdam, The Netherlands
Distributors for the United States and Canada: ELSEVIER NORTH-HOLLAND INC. 52, Vanderbilt Avenue New York, N.Y. 10017
Library of Congress Cataloging in Publication Data
McMahon, Thomas Aquinas. Reservoir capacity and yield. (Developments in water science; v. 9) Bibliography: p. Includes index. 1. Reservoirs. I. Mein, Russell G., joint author. II. Title. III. Series. TD395.M24 1978 628.1'3 77-18704 ISBN 0-444-41670-6
ISBN 0-444-41670-6 (Vol. 9) ISBN 0-444-41669-2 (Series) © Elsevier Scientific Publishing Company, 1978 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher, Elsevier Scientific Publishing Company, P.O. Box 330, Amsterdam, The Netherlands
Printed in The Netherlands
PREFACE
The text for this book has evolved from the notes written for a workshop held at Monash University in May 1975.
The format of the workshop, the
first of a series on specific topics in water engineering, was about one half lectures supported by printed notes, and one half exercises involving both manual and computer applications of the theory. For this text, the printed notes have been revised and expanded, and the exercises have been replaced by worked examples.
Most of the latter have
been worked using the streamflow data of one river, the Mitta Mitta (Appendix E), chosen because of its median value of variability with respect to other Australian streams;
compared to North American and European data
it would be classed in the high range of variability. The aim of the text is to provide a comprehensive review and classification of most of the currently used storage-estimation procedures.
The
essential features of each method are presented and limitations inherent in the assumptions are discussed.
Recommendations based on the results of
considerable research effort in this department over a period of several years are made.
The book is written for the practising engineer involved
in storage estimation and for graduate level study in this field. The authors are indebted for the contributions of several postgraduate students who have worked, or are presently working, in the Department under the supervision of the senior author.
These are Dr. C. Joy,
Dr. G. Codner, Mr. G. Philips, Mr. C. Teoh, Mr. S. Fletcher and Mr. R. Srikanthan.
Discussions with Dr. R. Phatarfod of the Mathematics
Department at Monash University have also provided a strong source of stimulation.
The authors' colleague, Professor E. M. Laurenson, the third
of the original workshop instructors, has given unfailing support and assistance in this and many other areas.
Finally, for the production of the
manuscript itself, many people have contributed;
in particular, Mrs. J. Helm
typed the final draft most ably, Mr. R. Alexander drafted the diagrams, and Mr. D. Holmes completed the photographic reproduction.
We are very
appreciative of their efforts. T. A. McMahon, R. G. Mein, Department of Civil Engineering, Monash University. September, 1977.
CONTENTS
Chapter 1  INTRODUCTION
  1.1  THE DESIGN PROCESS
  1.2  CLASSIFICATION OF RESERVOIR CAPACITY-YIELD PROCEDURES
  1.3  PROCEDURES IN CURRENT USE
Chapter 2  DEFINITION OF TERMS
  2.1  TIME INTERVAL
  2.2  INFLOW DATA
    2.2.1  Measures of Central Tendency
    2.2.2  Measures of Variability
    2.2.3  Measures of Skewness
    2.2.4  Measure of Persistence
    2.2.5  Typical Parameter Values
    2.2.6  Standard Errors of Parameters
  2.3  STORAGE TERMS
    2.3.1  Active Storage
    2.3.2  Within-year Storage
    2.3.3  Carryover Storage
    2.3.4  Conceptual Storages
  2.4  RELEASE
  2.5  RELEASE RULE OR OPERATING RULE
  2.6  PROBABILITY OF FAILURE AND RELIABILITY
  2.7  NOTATION
Chapter 3  CRITICAL PERIOD TECHNIQUES
  3.1  CRITICAL PERIOD
  3.2  METHODS WHICH INDICATE RESERVOIR FULLNESS WITH TIME
    3.2.1  Mass Curve Method (Rippl Diagram)
    3.2.2  Residual Mass Curve Method
    3.2.3  Behaviour (or Simulation) Analysis
    3.2.4  Semi-Infinite Reservoir
  3.3  METHODS BASED ON RANGE
    3.3.1  Hurst's Procedure
    3.3.2  Fathy and Shukry
    3.3.3  Sequent Peak Algorithm
  3.4  METHODS BASED ON LOW FLOW SEQUENCES
    3.4.1  Minimum Flow Approach
    3.4.2  Alexander's Method
    3.4.3  Dincer's Method
    3.4.4  Gould's Gamma Method
    3.4.5  Carryover Frequency Mass Curve Analysis
      3.4.5.1  Overlapping Sequence Approach
      3.4.5.2  Independent Series Approach
      3.4.5.3  Independent versus Overlapping Series
    3.4.6  Within-year Frequency Mass Curve Analysis
    3.4.7  Regional Within-year Storage Estimates
    3.4.8  Bias in Mass Curve Frequency Analysis
    3.4.9  Combining Carryover and Seasonal Storages - Hardison's Approach
  3.5  OTHER CRITICAL PERIOD METHODS
  3.6  SUMMARY
  3.7  NOTATION
Chapter 4  PROBABILITY MATRIX METHODS
  4.1  GENERAL CLASSIFICATION OF MORAN DERIVED METHODS
  4.2  A SIMPLE MUTUALLY EXCLUSIVE MODEL
    4.2.1  The Discrete Equations for the Mutually Exclusive Model - General Case
  4.3  A SIMPLE SIMULTANEOUS MODEL
  4.4  COMPUTATION OF STEADY STATE CONDITION
  4.5  DISCUSSION - MORAN TYPE MODELS
    4.5.1  Further Modifications
  4.6  GOULD'S PROBABILITY MATRIX METHOD
    4.6.1  Procedure
    4.6.2  Practical Considerations
  4.7  RELATED PROBABILITY MATRIX METHODS
    4.7.1  McMahon's Empirical Equations
    4.7.2  Probability Routing
    4.7.3  Hardison's Generalized Method
  4.8  OTHER MODELS
    4.8.1  Melentijevich
    4.8.2  Klemes
    4.8.3  Phatarfod
  4.9  SUMMARY
  4.10  NOTATION
Chapter 5  USE OF STOCHASTICALLY GENERATED DATA
  5.1  TIME-SERIES COMPONENTS
  5.2  HISTORICAL DEVELOPMENTS TO 1960
  5.3  ANNUAL MARKOV MODEL
    5.3.1  Practical Considerations
  5.4  THOMAS AND FIERING SEASONAL MODEL
  5.5  MODIFICATIONS FOR NON-NORMAL STREAMFLOWS
    5.5.1  Modifying ti
    5.5.2  Moment Transformation Equations
    5.5.3  Normalizing Flows
  5.6  TWO TIER MODEL
  5.7  OTHER CONSIDERATIONS
  5.8  MODEL VERIFICATION AND PERFORMANCE
    5.8.1  Unrepresentative Streamflow Data
  5.9  SIMULATION
    5.9.1  When and How to Use Generated Data
  5.10  GENERALIZED RESERVOIR CAPACITY-YIELD RELIABILITY RELATIONS
    5.10.1  Gould's Synthetic Data Procedure
    5.10.2  Guglij's and Svanidze's Synthetic Data Procedures
  5.11  NOTATION
Chapter 6  QUANTITATIVE ASSESSMENT OF CAPACITY-YIELD TECHNIQUES FOR SINGLE RESERVOIRS
  6.1  CRITICAL PERIOD AND PROBABILITY MATRIX METHODS
    6.1.1  Mass Curves and Minimum Flow (Waitt)
    6.1.2  Alexander's Method
    6.1.3  Overlapping Series Frequency Mass Curve Method (Thompson)
    6.1.4  Independent Series Frequency Mass Curve Method (Stall)
    6.1.5  Gould's Probability Matrix Method
    6.1.6  Further Comparison of Gould and Behaviour Methods
    6.1.7  Summary
  6.2  CAPACITIES BASED ON STOCHASTIC DATA GENERATION
  6.3  RAPID RESERVOIR CAPACITY-YIELD PROCEDURES
  6.4  SAMPLING ERROR OF STORAGE AND DRAFT ESTIMATES
  6.5  RECOMMENDATIONS
  6.6  NOTATION
Chapter 7  MULTI-RESERVOIR SYSTEMS
  7.1  A TYPICAL PROBLEM
    7.1.1  Traditional Solution
    7.1.2  Recycled Historical Sequences
  7.2  STOCHASTICALLY GENERATED FLOWS
    7.2.1  Key Station Approach
    7.2.2  Principal Component Approach
    7.2.3  Regression Method
    7.2.4  Residual Approach
    7.2.5  Multi-site Model Performance
    7.2.6  Use of Generated Data
    7.2.7  Application to Multi-storage Systems
  7.3  TRANSITION MATRIX APPROACH
  7.4  OTHER ALTERNATIVES
  7.5  NOTATION
REFERENCES
APPENDIX A  Procedure to Adjust Storage Estimate for Net Evaporation Loss
APPENDIX B  Adjustment for Assumption of Independence of Annual Flows
APPENDIX C  Theoretical Justification of a Non-Seasonal Markov Model
APPENDIX D  Newton-Raphson Method for Solving an Inexplicit Variable
APPENDIX E  Flow Tables for Mitta Mitta River
AUTHOR INDEX
SUBJECT INDEX
CHAPTER 1
INTRODUCTION
The storage required on a river to meet a specific demand depends on three factors:
the variability of the river flows, the size of the demand,
and the degree of reliability of this demand being met.
As this and subsequent chapters will show, a large number of procedures have been proposed to estimate storage requirements.
This text is concerned with examining
and classifying these procedures with the aim of recommending the ones most suitable for particular requirements. In its simplest form the problem being tackled is shown in Fig. 1.1. It is required to divert water from the stream with flow sequence Q(t) to meet the demand of perhaps an urban area or of a rural irrigation scheme. Alternatively it may be necessary to augment the low flow periods of the river.
In any event, the question being posed is:
"How large does the
reservoir capacity (C) need to be to provide for a given controlled release or draft D(t) with an acceptable level of reliability?"
Other variations
of this question are possible (such as determining release for a given capacity) but the basic problem remains unaltered;
the relationship between
inflow characteristics, reservoir capacity, controlled release, and reliability must be found. Following definition of terms in the next chapter, Chapters 3-5 examine in detail all the common and some relatively unknown procedures for
FIG. 1.1  An idealized view of the reservoir capacity-yield problem: a stream with flow sequence Q(t), a reservoir with active storage capacity C and spill, and a controlled release sequence D(t) supplying a demand area.
solution of the single reservoir problem.
The performance of several of
the methods is assessed in Chapter 6, where recommendations for use of particular procedures are made. The use of more than one reservoir storage to satisfy the demand adds a significant degree of complication to the problem.
The reservoirs may be
on the same stream, different streams, or not on any stream (e.g. pumped storage).
Additional complexity may result from topographical or other
constraints which restrict flow between reservoirs and thus reduce system flexibility.  The multi-reservoir problem is discussed in Chapter 7.
1.1  THE DESIGN PROCESS
In the early analysis of a water supply development, a number of
alternative dam sites would be investigated, not only for the construction requirements but also from the hydrologic point of view.
For such studies
and for hydrologic reconnaissance or regional reviews, quick and relatively simple techniques for estimating the reservoir capacity-yield relationship are required. The methods which can be used for rapid assessment are designated as preliminary design techniques. Simplifying assumptions are often made; for example, releases may be assumed to be constant, evaporation and sedimentation losses ignored, the probability of failure may not be considered, and the seasonal characteristics of the river flows may not be taken into account.
For these preliminary methods, accuracy is reduced for ease of
application. After using preliminary design techniques to eliminate unsuitable reservoir sites from consideration, the remaining few should be evaluated using a final design technique.
These techniques are often more complicated because they take into account most, or all, of the factors which influence storage.
Thus, properties of the river inflows, variation of
releases with season, the possibility of water restrictions, the effect of evaporation, and the probability of not being able to meet the demand must be realistically treated. In the text, recommended methods are designated as being suitable as preliminary or as final design techniques.
1.2  CLASSIFICATION OF RESERVOIR CAPACITY-YIELD PROCEDURES
Reservoir capacity-yield procedures can be classified into three main
groups although the distinction between groups is not always clear-cut. The first group (critical period techniques) includes methods in which a sequence (or sequences) of flows for which demand exceeds inflows is used to determine the storage size.
Those methods related to Moran Dam Theory
or similar procedure are included in the second group, part of which is grouped under a general umbrella of probability matrix methods.
The third
group consists of those procedures which are based on generated data.
A
detailed classification along with author references is given in Fig. 1.2. The methods are discussed in detail under these groupings in Chapters 3, 4 and 5, respectively. Briefly, critical period methods are those in which the required reservoir capacity is equated to the difference between the water released from an initially full reservoir and the inflows, for periods of low flow. For the procedures designated as mass curve, minimum flow, or range, the storage is normally associated with the severest drought sequence in the historical record.
If historical data is used with these procedures, an
estimate of the risk of being unable to meet the design releases (probability of failure) cannot be made.
In contrast, other critical period
methods enable the reliability of the reservoir to meet the demand to be estimated. The second group of procedures is considered to be a development of Moran's Theory of Storage (1954, 1955, 1959).
In essence Moran derived an
integral equation relating inflow to reservoir capacity and releases such that the probable state of the reservoir contents at any time could be defined.
However, except for idealized conditions the solution was
intractable.
Subsequently Moran considered time and flow to be discontinuous variables and showed how reservoir capacity, release and inflow could be related to each other by a system of simultaneous equations, but the method has several shortcomings.
Gould (1961) modified Moran's approach
to a general procedure of direct practical use to the water engineer.
In
this context it is worth noting that a Russian, Savarenskiy, published in 1938 similar ideas to those presented later by Moran and Gould, but it is only recently that his contributions have become known in the English technical literature.

FIG. 1.2  A classification of reservoir capacity-yield procedures. (The chart groups critical period techniques, Moran related and other techniques, and procedures based on data generation, with author references.)

Although procedures for estimating reservoir capacity-yield relationships using streamflow data generated by stochastic methods were first used more than sixty years ago, it was not until the advent of high-speed digital computers in the sixties that such procedures became established in engineering hydrology.
Stochastic data generation is the basis of the third
group of storage-yield procedures. It should be noted that many of the methods shown in Fig. 1.2 are included for only their historical importance in the development of a particular technique or groups of techniques;
they are often impractical
or use unacceptable assumptions in their derivation.
1.3  PROCEDURES IN CURRENT USE
Very little published information is available on techniques currently
in use by water authorities around the world.
A questionnaire survey of
Australian water authorities by the senior author in 1974 showed that, in general, storage capacity designs were based on mass curve or simulation analyses using historical streamflows.
These two methods were used both
for preliminary and final design calculations.
In about one half of the
cases, the probability of the reservoir not being able to meet the demand was computed, although it was never used as the sole design criterion. Data generation techniques have been used by about one half of the Australian water authorities although more than that indicated their belief in the potential of the method. There is no reason to assume that the methods in current use in Australia are any different to those in current use overseas.
CHAPTER 2
DEFINITION OF TERMS
This chapter is concerned with defining and explaining several of the terms used in reservoir capacity-yield analyses.
The meaning of some of
these terms sometimes differs from one author to another;
it is therefore
important that the reader be clear as to which interpretation is used in this text. The definitions given include those for several statistical measures. These are necessary to specify the characteristics of the river inflows to the reservoir because these characteristics have a major influence on storage requirements.
Other important terms discussed in this chapter
include several storage terms, release, release rule, and definition of probability of failure.
2.1  TIME INTERVAL
The time interval required for the inflow data depends on the size of
the storage and on the degree of accuracy required.
For small storages
designed to provide water in excess of the river flow for only a month or two in the year, daily flow data are required.
For larger storages, monthly
data are usually adequate to define the variations of streamflow with season (seasonality), although annual data can often provide sufficiently accurate results for preliminary design estimates. As a general rule, monthly data are used for most studies.
With this
time interval the data processing time is not excessive, variations in streamflow and releases throughout the year are adequately accounted for, and records are readily available.
A minor drawback in dealing with monthly
flow volumes is that the calendar months are not equal in length;
the effect
of this on storage is small, however, and is usually ignored.
2.2  INFLOW DATA
It is not possible to predict the future sequence of flows of a
natural stream.
All methods therefore use historical flow data or parameters
derived from it, and thus implicitly assume that these data are representative of the true streamflow characteristics.
Hence, any value of storage
(or draft) estimated using historical data has a sampling error inherent in it (see Sec. 2.2.6).
In the analytical procedures that follow it is assumed that inflows into the reservoir occur as daily, monthly, or annual discrete events. These will be available as measured data at the site in question, or will have been estimated by either regression analysis (see, for example, Searcy, 1966, or Brown, 1961) or with a deterministic process model such as the Stanford Watershed Model (Crawford and Linsley, 1966). It is also assumed that the data have been checked for homogeneity
and consistency.
In this context homogeneity requires that identical flow
events in a time series are equally likely to occur at all times.
Consistency requires that there has not been any physical change at the stream gauging station that might affect the recorded flows.
Searcy and Hardison
(1960) discuss this aspect in detail. The inflow data can be represented as a frequency distribution of flows, such as those plotted for several rivers in Fig. 2.1.
Often these
distributions can be approximated by standard theoretical distributions such as the Normal, log-Normal, Gamma, Weibull, Extreme Value Type I, and log-Pearson Type III.
These distributions are defined by parameters of the
flows, for example, mean, standard deviation, and coefficient of skewness. Another important flow parameter is the lag-one serial correlation which describes flow persistence. If the flow volumes in successive time intervals are designated as x_1, x_2, ..., x_i, ..., x_n, the parameters are defined as follows:
2.2.1  Measures of Central Tendency
* arithmetic mean

    \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i                                   (2.1)

* median
The median is the middle value or the variate which divides the flow frequency distribution into two equal portions. The arithmetic mean is more commonly used because of its computational simplicity.
In extremely skewed distributions, however, the
median will provide a better indication of central tendency.
FIG. 2.1  Frequency distributions of annual flows of selected rivers (Diamantina, Warragamba, Mitta Mitta, Mekong, Yarra and Batang Padang Rivers; annual flow in 10^9 m3 against frequency).
2.2.2  Measures of Variability
* standard deviation

    s = \left[ \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 \right]^{1/2}                        (2.2)

      = \left[ \frac{1}{n-1} \left( \sum_{i=1}^{n} x_i^2 - n\bar{x}^2 \right) \right]^{1/2}        (2.3)

For computational convenience Eq. 2.3 is preferred.  The standard deviation is the basic measure of variability.
* variance is the square of the standard deviation.
* coefficient of variation

    C_v = s / \bar{x}                                                                              (2.4)

The coefficient of variation is a dimensionless measure of variability and is widely used in hydrology.
* index of variability

    I_v = \left[ \frac{1}{n-1} \sum_{i=1}^{n} \left( \log_{10} x_i - \overline{\log_{10} x} \right)^2 \right]^{1/2}   (2.5)

The index of variability is the standard deviation of logarithms of flows.
Measures of Skewness The lack of symmetry of a distribution is called skewness. * coefficient of skewness a
Cs where
(2.6)
53 a
n (n-l) (n-2)
n
L ex.1 - x) 3
(n-l~ (n-2) [L
x 3 _ 3x
(2.7)
L x2
+
2nx 3]
(2.8)
This dimensionless measure relates to the third moment of the data and is one measure defining the shape of the distribution.
Data with
positive skewness are skewed to the right (Fig. 2.2). Another measure of skewness used in hydrology is given by: * Pearson second coefficient of skewness, given by 3 (mean - median) standard deviation
(2.9)
10
Median I Mean:
>-
+'
(/)
s::: Q)
"'C
I
>-
:!: ..0 ~
..0
o
~
a.. Magnitude
Magnitude (b)
(a) FIG. 2.2
Skewed distributions, (a) Positively skewed; (b) Negatively skewed.
Typically, flow distributions have a positive skewness as shown in Fig. 2.1.
The degree of skewness generally decreases as the time interval
of the data increases.
Thus, the distribution for annual flows will
normally be less skewed than the distribution for monthly flow of the same river. 2.2.4
Measure of Persistenct Persistence is the non-random characteristic of a hydrologic time-
series.
For example, a month with high streamflow will tend to be followed
by another of high flow rather than by one of low flow.
This feature, which
is important in storage-yield studies, but which is not a parameter that can be included in theoretical distributions, is quantitatively characterized by the serial correlation coefficient.
It indicates how strongly one event
is affected by a previous event. * serial correlation 1
---k n-
~
_1_
n-k
where
n-k
I
x2 i
n-k 1 n-k n-k \ x.l x.l + k - (----k)2 \ x.l L\ x.l + k L nL
(2.10)
_
lag k serial correlation coefficient, and lag between flow events.
Except for some procedures using stochastic data generation, lag one serial correlation (k chapters.
= 1 in Eq. 2.10) is the only lag considered in later
11
It should be noted that in addition to Eq. 2.10 there are several other procedures for calculating serial correlation of time-series data. The characteristics of each are discussed by Wallis and O'Connell (1972). For reservoir capacity-yield analyses the differences among the procedures are of little importance and Eq. 2.10 is recommended. Serial correlation is usually significant for monthly flow data.
For
annual flow data the majority of streams do not have a serial correlation coefficient significantly different from zero;
however, there are still a
large number of streams with significant coefficients. 2.2.5
Typical Parameter Values In order to illustrate this discussion dealing with parameter values,
monthly and annual flow parameters for five Australian and two South-east Asian streams are tabulated in Table 2.1.
As well, Fig. 2.3 shows, for
156 Australian streams, frequency histograms of coefficient of variation, coefficient of skewness, and serial correlation coefficient of annual flows. Various continental values are superimposed for comparison along with estimates of continental mean annual runoff.
(The latter values are taken
from Australian Dept. of National Resources, 1976.) 2.2.6
Standard Errors of Parameters It must be emphasized that the parameter values defined in the previous
section are no more than estimates of the population values.
An
indication
of the magnitude of the error of the estimate is given by the standard error of the parameter.
These are defined as follows:
standard error of mean standard error of standard deviation
(2.11) 1
(2.12)
s/(2n)"
standard error of coefficient of variation
(2.13)
standard error of coefficient of skewness
(2.14 ) 1
standard error of serial correlation coefficient where
(n - k - 1)" n - k
s
standard deviation of flow volumes,
n
number of items of data, and
k
lag between flow events.
(2.15)
TABLE 2.1
Annual, monthly seasonal and non-seasonal parameters for selected Australian and South-east Asian streams.
River (Country) (Area km 2 ) Diamantina (Australia) (115 000) Warragamba (Aus t Tali a) (8750) Mi tta Mitta (Aus t ra1 i a) (4710) Mekong (Thailand/ Laos) (299 000) Yarra (Australia) (334) Bat.ang Padang (Malaysia) (378)
I
King (Australia) (451)
.
Parameter
x
C V C s r
x CV C s r
x
C V C s r
x C V C s T
x C CV s r
x C V C rs
x
C V Cs r
Annual
.
All Months
Jan
Feb
Mar
Apr
May
Jun
Jul
Aug
Sep
Oct
Nov
Dec
-
4.5+ 1.82 2.63 0.33
10.1 0.97 0.69 0.62
28.7 1.74 2.51 0.59
'8.1 1.62 1.67 0.91
11.9 1.62 1.86 0.18
6.0 2.78 3.83 0.97
5.4 3.21 3.97 0.71
2.5 2.36 2.56 0.66
0.3 1.84 1.47 0.01
0.6 1.96 3.06 0.40
0.9 1.98 2.76 0.27
1.0 2.07 2.10 -0.16
-
5.2 1.60 2.76 0.20
8.5 2.34 5.76 0.76
10.3 2.62 4.43 0.58
6.8 2.28 5.58 0.20
10.7 1. 81 2.83 0.39
15.3 1. 83 2.69 0.53
14.0 1. 58 2.74 0.53
9.7 1.85 4.59 0.42
5.7 1.35 2.93 0.30
6.8 1. 76 3.85 0.57
3.2 1.55 2.77 0.27
3.8 1.72 3.39 0.32
-
3.1 0.66 1.79 0.58
2.1 0.58 1.12 0.60
2.4 1.03 3.27 0.54
3.7 1. 74 4.88 0.86
4.9 1.13 4.11 0.80
8.1 1.16 3.17 0.63
12.0 0.87 1.34 0.59
15.6 0.76 1.30 0.65
15.6 0.52 0.88 0.73
16.2 0.56 0.58 0.65
10.6 0.62 0.62 0.79
5.7 0.69 1.72 0.61
-
3.2 0.19 -0.14 0.89
2.3 0.17 -0.18 0.91
2.2 0.16 -0.00 0.88
2.1 0.19 0.20 0.86
3.0 0.30 0.96 0.69
6.4 0.32 0.71 0.56
13.0 0.25 0.49 0.62
23.1 0.22 -0.03 0.49
20.7 0.22 0.53 0.52
12.7 0.28 0.47 0.50
7.0 0.30 1.26 0.85
4.4 0.22 -0.00 0.95
-
3.3 0.89 5.15 0.53
1.9 0.72 3.89 0.35
2.0 0.74 2.29 0.36
3.1 1.24 3.31 0.51
5.4 1.05 2.23 0.38
9.0 1.03 3.52 0.59
13.3 0.58 0.91 0.49
15.8 0.51 0.67 0.46
15.6 0.48 0.83 0.44
14.3 0.61 1.48 0.52
9.9 0.75 1.53 0.61
6.4 1.11 4.11 0.36
-
9.6 0.32 1.63 0.74
6.9 0.27 0.70 0.62
6.9 0.28 0.66 0.75
8.3 0.27 0.30 0.66
9.3 0.29 0.32 0.53
7.1 0.23 0.31 0.47
5.9 0.22 0.54 0.56
5.6 0.21 0.35 0.25
6.9 0.27 1.98 0.58
9.43 0.31 0.59 0.59
12.4 0.34 0.77 0.48
11.7 0.35 0.79 0.71
4.4 0.74 1.19 0.16
4.2 0.70 0.84 0.24
4.6 0.69 1. 74 -0.35
8.2 0.39 0.32 0.03
9.6 0.54 1.03 0.18
10.7 0.58 1.12 0.01
11.6 0.35 0.24 0.44
12.0 0.42 0.46 0.35
11.0 0.39 0.92 0.36
9.3 0.45 0.55 0.41
7.8 0.49 0.90 -0.05
6.6 1.11 5.11 -0.11
7 1.19 1. 85 0.11
2.80 4.71 0.53
122 1.11 2.67 0.30
2.14 4.73 0.45
270 0.57 1.50 0.06
1.06 1.91 0.71
480 0.17 -0.08 0.45
0.90 1.27 0.74
538 0.40 0.77 0.12
0.99 1. 78 0.62
1700 0.18 0.60 0.47
0.40 1.46 0.60
2340 0.19 -0.31 -0.11
0.63 1.64 0.32
-
Mean is expressed as depth of runoff in mm.
tMonth1y flows are expressed as percentage of mean.
x Cs
is mean;
C
v
is coefficient of variation;
is coefficient of skewness;
r
is serial correlation.
FIG. 2.3  Annual flow parameters for 156 Australian streams compared with other continental values: frequency histograms of the annual coefficient of variation, annual coefficient of skewness and annual serial correlation, together with mean annual runoff. (Lines indicate average values for the continents indicated.)
As a general rule, the interpretation of these errors may be likened to the interpretation of the standard deviation of a variable.
If we
assume the parameter values are normally distributed (this is a reasonable approximation in most circumstances), two-thirds of the values will lie within ± one standard error.
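As a brief illustration (the record length and standard deviation below are assumed values, not taken from the text), Eqs. 2.11, 2.12 and 2.15 give:

    # Sketch of the standard errors of Eqs. 2.11, 2.12 and 2.15.
    import math

    n = 40            # years of record (illustrative)
    s = 55.0          # sample standard deviation of annual flows (illustrative)
    k = 1             # lag used for the serial correlation coefficient

    se_mean = s / math.sqrt(n)                    # Eq. 2.11
    se_sd = s / math.sqrt(2 * n)                  # Eq. 2.12
    se_r = math.sqrt(n - k - 1) / (n - k)         # Eq. 2.15

    # Assuming approximate normality, about two-thirds of estimates fall
    # within one standard error of the population value.
    print(se_mean, se_sd, se_r)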
STORAGE TERMS
2.3.1
Active Storage The active storage of a reservoir is the water stored above the level
of the lowest offtake.
It is thus equal to the total volume of water stored
less the volume of "dead" storage (the volume below the level of the offThroughout this text the terms storage and active storage are used
take).
synonymously. 2.3.2
Within-year storage Many small reservoirs fill up and spill on the average several times
a year.
These reservoirs are constructed to provide water over a short
drawdown period of only a month or two of low flows.
The estimation of the
storage required in this case is termed a within-year storage analysis. 2.3.3
Carryover storage Where the reservoir fills up and spills only every few years on the
average, the water stored at the end of one year is carried over to the next.
This is called carryover storage.
On the other hand seasonal storage
results from the fluctuations of inflows and outflows during the year.
In
procedures that utilize only annual data, the seasonal effects are not taken into account. cedures;
In this text such procedures are known as carryover pro-
those concerned only with seasonal storage are known as within-
year procedures.
Figure 2.4 illustrates the difference between these two
components. 2.3.4
Conceptual Storages A finite storage is a conventional storage which can spill and run dry.
Not all reservoir storage-yield procedures assume finite storages.
infinite storage is one that can spill but never run dry.
A semi-
It is a
conceptual tool and the consequences of using it are discussed in Chapter 3. Another conceptual storage is the infinite storage which can empty but never spill.
15
FULL
....C (JI
....4lC
Carryover storage
0
(J
... ...>4l
'0 (JI
4l
a:
EMPTY
n+12
n+6
n
Time (months) FIG. 2.4
2.4
Illustration of carryover and within-year storages showing the increase of storage necessary to cater for seasonal fluctuations.
RELEASE
Release is the volume of controlled water released from a reservoir during a given time interval.
The term release is used synonymously in
this text with the terms yield, draft, outflow and regulation and describes regulated flow from the reservoir.
Spill is regarded as uncontrolled flow
from the reservoir and will take place only when the water stored in the reservoir is above full supply level. Release is often expressed as a percentage of mean flow having values generally around 50 - 70%, and because of net evaporation losses rarely exceeds 90%. Data from Australian water authorities indicate that the median regulation of Australian reservoirs is 65%. markedly across a continent.
Potential regulation varies
Hardison (1972) has shown that for mainland
United States potential regulation varies from 57% in Lower Colorado to 95% in Tennessee.
In Australia, potential values vary from 70% in the arid
zone to 95% in Tasmania (McMahon, 1977). To estimate the design capacity of a reservoir it is necessary first to estimate the demands which will be placed upon the storage at some time (or times) in the future.
This is a difficult and uncertain task which is
beyond the scope of this text.
A general discussion of ways of tackling
this problem is given in Linsley and Franzini (1974).
16
2.5
RELEASE RULE OR OPERATING RULE Usually the volume of water released from a reservoir is equal to the
volume of water required (or demanded) by the consumers.
However, there
may be periods when either the reservoir level is so low that the water required cannot be supplied, or that prudence dictates that only part of the water demanded is released from storage (for example, water restrictions for an urban centre).
Another factor in the decision may be the time of the
year and the expected inflows for the subsequent period.
The way in which
releases are controlled is called the release or operating rule. The simplest release rule is to supply all of the water demanded [Fig. 2.5(a)].
In this situation, the draft is independent of reservoir
content and season.
If there is insufficient water in the reservoir to meet
the required draft, the storage empties.
Release
~ 100
~ 100
"0
"0
c: ('iJ
c:
('iJ
E Q) Cl
E
Q)
0
Cl
C
0
0
FIG. 2.5
C
0
Water stored (al
Water stored (b)
Example of two operating rules: (a) Simple operating rule; (b) Operating rule with restrictions.
The more complicated release rule shown in Fig. 2.5(b) is typical of that used by a metropolitan water supply authority.
As the volume of water
stored in the headwater reservoirs decreases, restrictions are placed on users so that demand falls and releases are lowered.
It will be noted in
later chapters that few procedures can accommodate such an operating rule. In the majority of reservoir capacity-yield techniques, constant draft is assumed, that is, seasonal fluctuations in demand are not considered. 2.6
PROBABILITY OF FAILURE AND RELIABILITY A number of definitions of probability of failure of a reservoir are
given in the technical literature.
Probably the most common
one defines
probability of failure as the proportion of time units during which the
reservoir is empty to the total number of time units used in the analysis.  Hence,

    P_e = p / N                                                    (2.16)

where
    p = the number of time units during which the storage is empty, and
    N = the total number of time units in the streamflow sequence.

The corresponding definition of reliability is:

    R_e = 1 - P_e                                                  (2.17)
These definitions of probability of failure and of reliability are not very realistic for most situations.
A city water supply reservoir, for
instance, would never be permitted to empty; apply long beforehand. to Eq. 2.16 but where
restrictions on releases would
An alternative definition sometimes used is similar p
is taken as the number of months during which
restrictions are necessary, that is, months during which the reservoir cannot meet the demand under the adopted operating rule. Another definition of reliability, voZumetric reZiabiZity, is equivalent to Fiering's (1967) performance index;
it relates the volume of
water supplied to the volume of water demanded for the study period as follows:

    R_v = \frac{\text{actual supply}}{\text{demand}}               (2.18)
This definition has merit for overall reservoir performance, but can mask the severity of any restrictions imposed. The definition of probability of failure used for the remainder of this text is Eq. 2.16 unless otherwise indicated.
Although it may be
somewhat unrealistic in practice, it enables comparisons to be made between different methods.
The reader can, of course, use an alternative definition
to suit his purpose for most of the methods recommended.
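Once a month-by-month record of demand and actual supply is available from a behaviour analysis, Eqs. 2.16 to 2.18 are evaluated directly.  A short Python sketch, with purely illustrative figures, follows; months in which nothing could be supplied are taken here as months of emptiness:

    # Sketch of Eqs. 2.16-2.18 applied to a monthly record of demand and supply.
    demand   = [10.0] * 12                                     # volumes demanded each month
    supplied = [10, 10, 10, 7, 0, 0, 10, 10, 10, 10, 10, 10]   # volumes actually supplied

    N = len(demand)                                 # total number of time units
    p = sum(1 for s in supplied if s == 0)          # months taken as reservoir empty

    Pe = p / N                                      # Eq. 2.16, probability of failure
    Re = 1 - Pe                                     # Eq. 2.17, reliability
    Rv = sum(supplied) / sum(demand)                # Eq. 2.18, volumetric reliability

    print(Pe, Re, Rv)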
2.7
NOTATION
(n-l~ (n-2)
a
(xi-x) 3
l:
(Eq. 2.7)
coefficient of variation
cs
coefficient of skewness
I
index of variability (standard deviation of logarithms of flows)
v
k
lag between flow events under analysis
n
number of items of data
N
number of time units
p
number of months reservoir is empty
p
e
probability of failure (emptiness) serial correlation coefficient
r
lag k serial correlation coefficient
1 - Pe
R
reliabilitv
R v
volumetric reliability defined as water supplied divided by water demanded
s
standard deviation
e
X.
1
~
=
' ·1 th perlo . d fl ow volumes d urlng
flow volumes in successive time intervals x
mean flow
CHAPTER 3
CRITICAL PERIOD TECHNIQUES
The methods presented in this chapter typify two general approaches to the reservoir capacity-yield problem.
The first group of methods all use
the historical inflows and projected demand to simulate the volumetric behaviour of the reservoir, that is, the state of fullness versus time.
The
second group of methods has in common that only the periods of low flow (droughts) in the record are used in the analysis. Some of the methods of each group provide a reservoir size that will not fail for the historical inflow sequence;
the remaining methods allow
the user to determine the storage size for a given probability of failure. However, all methods base the estimate of required storage capacity on sequences of low flows and hence can be placed under the general heading of critical period techniques.
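For the first group, the state of fullness is traced with a simple storage continuity calculation.  The Python sketch below is illustrative only (the capacity, draft and inflow values are assumed) and is not a listing from this text:

    # Behaviour (simulation) sketch: trace reservoir contents through an inflow record
    # for a trial capacity C and constant monthly draft D, counting months of emptiness.
    def behaviour(inflows, capacity, draft, start_full=True):
        z = capacity if start_full else 0.0
        empty_months = 0
        for q in inflows:
            z = z + q - draft          # continuity over the month
            if z > capacity:           # spill
                z = capacity
            if z <= 0.0:               # failure: reservoir empty
                z = 0.0
                empty_months += 1
        return empty_months

    inflows = [12, 3, 1, 0, 2, 9, 20, 15, 6, 2, 1, 4] * 3   # illustrative monthly volumes
    failures = behaviour(inflows, capacity=25.0, draft=5.0)
    print(failures, failures / len(inflows))   # months empty, and probability of failure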
3.1  CRITICAL PERIOD
A critical period is defined as a period during which a reservoir goes from a full condition to an empty condition without spilling in the intervening period.  The start of a critical period is a full reservoir; the end of the critical period is when the reservoir first empties.  Thus, only one failure can occur during a critical period.  Figure 3.1 gives an example where there are two critical periods.  Note that the remaining failures (empty condition) of the reservoir in years 1945 and 1946 are not included in a critical period.

FIG. 3.1  Behaviour diagram showing critical periods. (Mitta Mitta River; draft = 75%)
(empty condition) of the reservoir in years 1945 and 1946 are not included in a critical period. This definition is not universally accepted.
For example, the
U.S. Army Corps of Engineers (1975) define the critical period from the full condition through emptiness to the full condition again and use the term
critical drawdown period to apply from fullness to emptiness.
METHODS WHICH INDICATE RESERVOIR FULLNESS WITH TIME
3.2.1
Mass Curve Method (Rippl Diagram) The mass curve technique (following Ripp1, 1883) would appear to be
the first known rational method for estimating the size of storage required to meet a given draft (see classification, Fig. 1.2).
15000 M
E
'"o
-3
A
5000
E
"
.;:; :J III
'"c:
0
/ /
c:
/ /
.E'"
/
"",e
~ 0
:(\ \V\ /K ~t?v~# ;1 \ V\ ) /V)~0~v
en Q) a: en cu
~O ~~
~",v ~KI"\Y/Kl>
"'.§
5~
4
"i:' '"
.t:.
!I>
'5
"'" a;'"
«>
1
'f I
183
~
120
:B'"
>-
c
2
SO
0
~
30 Cl 7
1.1
15
2
3
5
10
20
50
Recurrence interval (years)
FIG. 3.19
Annual low flow frequency curves for Brandywine Creek at Chadds Ford, Pa., USA. (U.S.G.S. 7600)
61
(v)
Flows for a given recurrence interval are read from the low flow frequency curves and replotted as inflows against duration (like the drought curves in earlier approaches) on arithmetic graph paper (Fig. 3.20).
50
M-
./
E 40
./
'"0
./
x Q)
./ ./
Required reservoir capacity
30
./
./ ./
E
./
:l
"0 > ~ 0
;;::
20
.E
10
O~~~
____
o
~
____- L____-L____
40
~
____
80 Duration
FIG. 3.20
(vi) (vii)
~
____L -_ _
120
~L-
__
~
____
160
(days)
Mass inflow-duration curve for within-year storage analysis (Data are for Brandywine Creek at Chadds Ford, Pa., U.S.G.S. 7600).
Constant draft lines are superimposed on the diagram. The largest intercept between the draft line and inflow curves is taken as the reservoir capacity required to meet the draft at the design level of reliability (or probability of failure). For this situation the probability expresses the chance that the reservoir, if operated under the design conditions, will fail (empty) at least once within any year.
Assumptions:
(i)
The reservoir is assumed to be initially full.
Unlike
carryover storage situations this assumption will probably be met in most situations, particularly as the levels of regulation are usually very low. (ii)
Failures that occur after the end of the critical period are neglected.
~
200
62
Limitations:
(i)  Variable draft conditions cannot be easily treated.
(ii)  The use of frequency curves introduces a bias in the computed storage estimates.
This bias is due in part
to cross-nesting of the mass curves as explained in Sec. 3.4.8.
Computed estimates of capacity should be
increased by 10% to take this bias into account. (iii)
This method further underestimates the required storage due to the method of establishing the low flows.
As the
ranked values are necessarily from different years, there is no allowance for two or more independent events in the one year that are more severe than the next rank event (from a different year).
This effect would be small
except for low recurrence interval events (say less than 10 years). (iv)
The frequency analysis is based on daily flows.
This
considerably increases the computational requirements where these have not already been processed. (v)
The method of analysis does not take into account net evaporation losses.
If required, an additional amount
of storage has to be added to the computed value to cover this loss (Appendix A).
Attributes: (i)
If low flow frequency curves are readily available for the site in question, the method is quick and simple.
(ii)
Notwithstanding the bias in the storage
estimate~,
the
within-year reservoir capacity determined using the annual low flow frequency procedure gives satisfactory estimates if compared with those determined using
annual mass curves (Hardison, 1965).
EXAMPLE 3.12
Using the annual low flow frequency curves for Brandywine Creek at Chadds Ford, Pa., Fig. 3.19, determine the storage required to provide a 30% draft with a 5% annual probability of failure.  (Mean flow rate for Brandywine Creek = 10.46 m3/s.)
*    *    *    *    *
The 5% annual probability of failure corresponds to a recurrence interval of 20 years.
From Fig. 3.19 the flow rates for durations of 7, 30,
60, 120 and 183 days corresponding to a 20 year recurrence interval can be
read off as follows:

    Duration (days)    Flow Rate (m3/s)    Flow Volume (x 10^6 m3)
          7                 1.62                  0.97
         30                 1.89                  4.67
         60                 2.29                 11.4
        120                 2.70                 28.0
        183                 3.25                 50.6
These points are plotted on the graph (Fig. 3.20) and the storage determined from the maximum intercept between the demand and mass inflow curves, that is, 5.1 x 10 6m3 . In practice, the storage estimate needs to be increased by 10% to account for the bias due to cross-nesting.
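The arithmetic of this example can be checked with a few lines of Python; using the tabulated points alone gives a capacity of about 4.9 x 10^6 m3, close to the graphical value of 5.1 x 10^6 m3 read from Fig. 3.20:

    # Sketch of the within-year capacity estimate of Example 3.12.
    mean_flow = 10.46                      # m3/s, mean flow rate used in the example
    draft = 0.30 * mean_flow               # 30% draft, m3/s

    # (duration in days, 20-year recurrence inflow volume in 10^6 m3) from Fig. 3.19
    inflow_volumes = [(7, 0.97), (30, 4.67), (60, 11.4), (120, 28.0), (183, 50.6)]

    seconds_per_day = 86400.0
    deficits = [draft * d * seconds_per_day / 1e6 - v for d, v in inflow_volumes]
    capacity = max(deficits)               # largest intercept, about 4.9 x 10^6 m3
    capacity_adjusted = 1.10 * capacity    # +10% for the cross-nesting bias

    print(round(capacity, 2), round(capacity_adjusted, 2))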
3.4.7
Regional Within-year Storage Estimates Hardison (1965) also provides a technique for obtaining an approximate
estimate of seasonal storage requirements by using the median annual 7-day flow as an index.
Based on 72 streams in eastern United States, he related
seasonal storage need to the median annual 7-day low flow as shown in Fig. 3.21.
Storages taken from these curves are subject to the bias inherent
in using low flow frequency curves to compute storage-draft relations and the storage estimates should be increased by 10% (see next section).
64
60
~
.2 Cii ::l
c: c:
'"
c:
'" Q)
E
'0 'EQ) ~ Q)
.3....
0'"
~ .l)
'"~
10
.2
« 0
0
0.3
0.4
0.5
Median Annual 7-day Low Flow. (ratio to mean annual flow)
FIG. 3.21
3.4.8
Areal draft-reservoir capacity relationship for 5% probability of failure as a function of median annual fow flow. (Parameter is storage capacity in percent of mean annual flow.) (Hardison, 1965.)
Bias in Mass Curve Frequency Analysis A further procedural error is associated with the use of frequency
curves to estimate reservoir capacity.
This is evident in both the inde-
pendent and overlapping series, and the within-year frequen;y curves.
This
effect, which results in under-estimation of the equivalent behaviour capacity, has been attributed principally to the cross-nesting of the mass curves of each record (Hardison, 1965) and can be illustrated as follows. Consider the example given in Table 3.4 which is similar to that used by Hardison to illustrate the bias.
If the two years of low flow data are
time-wise ordered the rank 1 and rank 2 deficiences (or storage sizes) are 3000 and 2500 m3/s day respectively. Yet if the flows are ordered by rank (which is the procedure in low flow frequency analysis) the rank 1 and 2 deficiences are 3000 and 2000 m3/s day. The difference in rank 2 estimates results from the cross-nesting effects.
65 TABLE 3.4
Example of calculations to show bias in frequency-mass curve storage procedure.

    Ordering    Year or    Flow in m3/s            Deficiency in m3/s days
                rank       5-day     10-day        for a draft of 600 m3/s
                           period    period        5-day period    10-day period
    By year     1929        200       300           2000            3000 (1)
                1930        100       500           2500 (2)        1000
    By rank     1           100       300           2500            3000 (1)
                2           200       500           2000 (2)        1000

    ((1) and (2) mark the rank 1 and rank 2 deficiencies.)
Hardison (unpublished paper, 1965) has examined this bias for the independent series and suggests that the under-estimation of storage is approximately 20% for streams with an annual coefficient of variation less than 0.4.
For within-year analyses the degree of under-estimation is
about 10%. 3.4.9
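The Table 3.4 calculation can be reproduced directly; the Python sketch below simply recomputes the deficiencies and orders them in the two ways described:

    # Sketch of the Table 3.4 bias calculation for a draft of 600 m3/s.
    draft = 600.0
    # (5-day flow, 10-day flow) in m3/s for the two years of low flow data
    years = {1929: (200.0, 300.0), 1930: (100.0, 500.0)}

    def deficiency(flow, days, draft=draft):
        return (draft - flow) * days            # m3/s days

    # Ordering by year: take the larger deficiency of each year, then rank the years.
    by_year = sorted((max(deficiency(f5, 5), deficiency(f10, 10))
                      for f5, f10 in years.values()), reverse=True)

    # Ordering by rank: rank 5-day and 10-day flows separately, then combine by rank.
    f5_ranked = sorted(f5 for f5, _ in years.values())
    f10_ranked = sorted(f10 for _, f10 in years.values())
    by_rank = [max(deficiency(f5, 5), deficiency(f10, 10))
               for f5, f10 in zip(f5_ranked, f10_ranked)]

    print(by_year)   # [3000.0, 2500.0]
    print(by_rank)   # [3000.0, 2000.0]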
Combining Carryover and Seasonal Storages - Hardison's Approach In order to correct his carryover storage estimates based on annual
data for seasonal (or within-year) need, Hardison (1965) utilized the Mnual low flow frequency mass-curve method (Sec. 3.4.6).
He proposed two solutions
of which the one described below assumes that the probability distribution of seasonal storage is independent of carryover storage.
The alternative
but quicker procedure assumes that the average seasonal storage requirement for 100% regulation is 0.4 times the mean annual flow. Hardison's steps for combining seasonal and carryover storage are as follows (Fig. 3.22): (i)
Divide the seasonal storage-probability curve (calculated using the within-year frequency mass curve analysis) into about eight segments Md compute the mean storage Md corresponding incremental probability for each segment.
(ii)
For a selected amount of total storage, the required carryover storage for each segment of the seasonal curve is computed by subtracting the seasonal amount from the total amount.
66
1.2
\
1.0
\ \
Q)
Cl tV
....
£!II
".:;....
Q)
\
:t: 0
c: ::l 0.8 ....
\
carryover
a:
•\ ,
,
iii ::l
\
c: 0.6 c:
\
tV
cr- c: tV
Q)
, \
"\
Q)
~ 0.4
"\ '.~ combined
.....
"',
0.2
95
.0.1
0
98 99
Probability of failure ('Yo)
FIG. 3.22
(iii)
Combination of seasonal and carryover storage probabilities.
For each segment, the probability of the required carryover storage taken from the carryover storageprobabili ty curve is multiplied by the incremental probability of the selected amount of total storage in (ii).
(If there are segments of the seasonal
storage curve which correspond to storage values equal to or greater than the total storage, then the probability for the corresponding carryover storage for those segments is taken to be unity.) (iv)
This is repeated for other points of total storage to give the total storage versus probability curve as shown in Fig. 3.22.
In the above analysis, the carryover storage-probability curve must be based on annual data.
If monthly data were used the computed carryover
storage would automatically have taken the seasonal need into account.
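A Python sketch of the combination procedure is given below.  The seasonal segments and the carryover storage-probability curve used here are illustrative stand-ins only; in practice they come from the analyses of Secs. 3.4.5 and 3.4.6:

    # Sketch of Hardison's combination of seasonal and carryover storage probabilities.
    # Each seasonal segment is (mean seasonal storage, incremental probability);
    # storages are expressed as ratios of mean annual flow.  All values illustrative.
    seasonal_segments = [(0.05, 0.30), (0.10, 0.25), (0.20, 0.20),
                         (0.30, 0.15), (0.45, 0.10)]

    def carryover_prob(c):
        # illustrative carryover storage-probability curve (annual-data based)
        table = [(0.0, 0.50), (0.2, 0.20), (0.4, 0.08), (0.8, 0.02), (1.5, 0.005)]
        for storage, prob in table:
            if c <= storage:
                return prob
        return table[-1][1]

    def combined_probability(total_storage):
        p = 0.0
        for seasonal, dp in seasonal_segments:
            carryover = total_storage - seasonal                # step (ii)
            p += dp * (1.0 if carryover <= 0 else carryover_prob(carryover))  # step (iii)
        return p

    for c in (0.2, 0.4, 0.8, 1.2):                              # step (iv)
        print(c, round(combined_probability(c), 3))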
67
3.5
OTHER CRITICAL PERIOD METHODS Because of distinctive seasonal variations (four wet and eight dry
months each year) Wilson (1940) was able to define clearly the starting and finishing months of critical periods and hence compute the storage size necessary to offset specific inflow conditions.
He evaluated the proba-
bility of failure associated with each initial period and summed these to give the total probability of failure.
The procedure is not of general
applicability. Law (1953, 1955) developed massed curves of rainfall expressed as a percentage of average rainfall for various durations and coefficients of variation and for given probabilities of occurrence.
Through regression
analysis, the rainfalls were converted to streamflow for the same durations. Assuming an initally full reservoir, Law used a cumulative depletion diagram to calculate the required reservoir capacity.
The generalized procedure is
appli cab Ie to the Bri tish Isles for which the empirical massed rainfall curves were derived.
Outside this region, a complete analysis would need
to be carried out before the procedure could be applied.
Other procedures
are less complex. 3.6
SUMMARY Critical period procedures for estimating reservoir capacity-yield
relationships were reviewed under three main headings - methods which indicate reservoir fullness with time, methods based on the range of flows and methods based on low flow sequences.
Another classification that could
have been used is based on whether the procedure allows storage to be related to probability of failure.
Those methods that do not consider
probability of failure - mass and residual mass curves, Hurst and sequent peak, and Waitt's minimum flow approach - are considered to be inadequate. However, the sequent peak algorithm, although reviewed as a technique to be used with only an historical record, was developed as an efficient approach to be used with generated sequences;
in that context it is possible to
calculate the storage-probability relationship. Examination of the assumptions and theoretical basis of the cri tical period procedures not included above, showed that all are deficient in one way or another.
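For reference, the sequent peak calculation itself is brief; the Python sketch below (illustrative inflows, constant draft) returns the capacity required for one sequence, and applied to many generated sequences it yields the storage-probability relationship referred to above:

    # Sketch of the sequent peak algorithm for a constant draft.
    # For a historical record the sequence is commonly passed through twice
    # to allow for end effects.
    def sequent_peak_capacity(inflows, draft):
        deficit = 0.0          # cumulative (draft - inflow), never below zero
        capacity = 0.0
        for q in inflows:
            deficit = max(0.0, deficit + draft - q)
            capacity = max(capacity, deficit)
        return capacity

    monthly_inflows = [12, 3, 1, 0, 2, 9, 20, 15, 6, 2, 1, 4] * 2   # illustrative volumes
    print(sequent_peak_capacity(monthly_inflows, draft=5.0))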
For example, all assume that the reservoir is initially full.
Alexander, Dincer and Gould Gamma procedures also assume that annual serial correlation is zero.
In addition, Dincer's procedure is based on the
68
assumption that
n
consecutive year flows are normally distributed whereas
Alexander and Gould assume that flows are Gamma distributed.
Because of
cross-nesting of mass curves (Sec. 3.4.8) carryover and within-year frequency analysis underestimates storage need.
In addition the overlapping
carryover frequency procedure is inadequate because of effects of dependence. From this review it is concluded at this stage that the Alexander and Gould Gamma approaches appear to be suitable preliminary design procedures, and that behaviour analysis of finite reservoirs is a useful technique to display clearly the behaviour of the reservoir contents.
3.7
NOTATION
A
loge x - loge x
B
draft parameter in Hurst's equations (Eqs. 3.6, 3.7)
c
variable in Eq. 3.35.
C
reservoir capacity
C1 ' C2
various reservoir capacity estimates
CCRIT
reservoir capacity for critical drawdown using the Minimum Flow approach (Sec. 3.4.1)
CDESIGN
design reservoir capacity including safety factor
CP
critical period
CP I
critical period for a
C
coefficient of variation
C n,p
storage capacity for n year inflow with p% probability of occurrence
C
reservoir capacity in gamma units
d
difference between the lower p percentile flow of Gamma distribution Gec) and a Normal distribution N(c,c)
D
draft as ratio of mean flow
D.
draft
D n
constant draft over
D
draft during tth period
f(x)
probability density function
G( c)
Gamma dis tribution with mean and variance equal to
K
Hurs t exponen t
V
y
1
t
(Eq. 3.13) i.e. log of mean - mean of logs.
=
n
1
year period
c
69
number of months of emptiness for semi-infinite storage
,1',. 1
number of months of emptiness for semi-infinite storage (Sec. 3.2.4) water losses from the reservoir other than evaporation during time t m
rank of event
m.
number of months of emptiness for a finite storage
1
number of months of emptiness for a fini te storage n
1 ength of a sub-sequence of monthly flows
n
number of items of data
N
number of flow events
N
number of years of data
N'
number of months of data
N( c, C)
normal distribution with mean and variance equal to
p
percentage ch"ance of occurrence
p
probability of failure or probability of occurrence
c
probability of failure using behaviour diagram p
.
semI
probabili ty of failure using semi-infinite depletion diagram sequent peaks n
year flow with a probability of occurrence of p%
. fl ow d ' . d In urlng t th perla range of flows range of flows standard deviation
s s
n
standard deviation of
n
year flows
time
t
T.
recurrence interval for independent partial duration series
T
recurrence interval of an n-month event
T a
recurrence interval for overlapping partial duration series
T
recurrence interval (years)
1
n
r
sequent troughs x
flow volume
70
mean flow
x x
mean of
n
n
year flows
X
value of flow
x.l
flow
z
a'
z p
standardized normal variate
Zt,Zt+l
reservoir storage contents at the beginning and the end of tth time interval
a
shape parameter in Gamma distribution
a
n
n year flow Gamma shape parameter
a
estimate of
S
scale parameter in Gamma distribution
Sn
n year flow Gamma scale parameter
B
estimate of
~Et
net evaporation loss during time
T
reservoir capacity divided by mean annual flow
Tj
reservoir capacity divided by mean annual flow for
T
reservoir capacity divided by mean annual flow in Gamma units
Y
TCa)
a
S
Gamma function
CEq. 3.13)
CEq. 3.13) t
=
1
71
CHAPTER 4
PROBABILITY MATRIX METHODS
The second group in the classification of reservoir capacity-yield procedures (Fig. 1.2) is headed "Moran Related and Other Techniques". Virtually all of the methods shown are based on the theory presented by Moran in his book Theory of Storage (1959).
The other
techniques are mainly of theoretical interest and are important only because of their r6le in the development of the procedures which use historical data. This chapter includes some of the theoretical development, but is mainly devoted to details of selected probability matrix procedures.
4.1
GENERAL CLASSIFICATION OF MORAN DERIVED METHODS In Fig. 1.2 we see that the Moran approach can be subdivided into
three groups: (i)
those in which both time and volume are considered as
continuous variables.
The most common continuous time
model is the 'random buckets in the bath' model. Moran's (1959, p. 79) description of the model is of a man pouring buckets, at random instants of time, into a bath which has no plug".
Generally, the continuous
time model is the most complex and least realistic of the various classes of techniques due to Moran [see, for example, Gani (1955), Gani and Prabhu (1958, 1959) and Gani and Pyke (1960, 1962)].
The assumed form of the inflow distribution
(often Poisson), the dam size (possibly infinite) and the 'buckets in the bath' approach all contribute to the unrealistic nature of the solution.
Thus, continuous
time solutions are of theoretical interest only; (ii)
those in which time is discontinuous but water volumes aPe
continuous.
Moran (1955) derived the following integral
equations describing a mutually exclusive (see below) situation.
72
For x \le C - D:

    g(x) = f(x) \int_0^D g(t)\,dt + \int_D^{D+x} f(x + D - t)\, g(t)\, dt                                   (4.1)

For x > C - D:

    g(x) = f(x) \int_0^D g(t)\,dt + \int_D^C f(x + D - t)\, g(t)\, dt + f(x + D - C) \int_C^\infty g(t)\, dt   (4.2)

where
    x    = inflows,
    C    = reservoir capacity,
    D    = constant release during unit period,
    f(x) = inflow probability function, and
    g(x) = probability function of storage content plus inflow during unit period.
Solutions for particular inflow distributions and release rules have been obtained by Gani and Prabhu (1957), Prabhu (1958a) and Ghosal (1959, 1960).
Of these the most
potentially useful solution is that due to Prabhu (1958a) in which the inflows were assumed Gamma distributed and releases were assumed constant.
However, evaluation of
this solution is very complex;

(iii) those in which time and water volumes are both discrete variables. This approach by Moran (given in his 1954 paper) and followed by others (for example Ghosal, 1962 and Prabhu, 1958b) is the basis of the practical applications of his work. Basically it involved sub-dividing the reservoir volume into a number of parts, thus creating a system of equations which approximate the integral equations (Eqs. 4.1 and 4.2). This approximation primarily affects the results at the storage boundaries (that is, full and empty) but is satisfactory if the sub-division of the storage volume is fine enough.

Two main assumptions can be made about the inflows and outflows, which occur at discrete time intervals.
The first, given by Moran (1954),
assumes that the inflows and outflows do not occur at the same time. In this model, termed the "mutually exclusive" model, the unit period is sub-divided into a wet season (all inflows and no outflows) followed by a dry season (all releases but no inflows). The alternative assumption, not given by Moran but which is only a simple further development, is that inflows and outflows occur simultaneously - the "simultaneous" model. Both of these models are discussed in detail in the following sections.

4.2  A SIMPLE MUTUALLY EXCLUSIVE MODEL

It is convenient to choose the inflows, draft, and storage capacity as integer multiples of some arbitrary volume unit. Consider the following example:

    Reservoir capacity:  2 units
    Constant draft:      1 unit per time period
    Inflows:             discrete and independent and distributed as in Fig. 4.1.
                         Note that the sum of the probabilities equals unity.
[FIG. 4.1  Distribution of reservoir inflows: relative frequency (probability, in fifths) against units of flow (0 to 3).]
For the mutually exclusive model we have:

    Z_{t+1} = 0                if Z_t + X_t ≤ M                   (4.3)

    Z_{t+1} = Z_t + X_t - M    if M < Z_t + X_t < K               (4.4)

    Z_{t+1} = K - M            if K ≤ Z_t + X_t                   (4.5)

where
    Z_t     = stored water at the beginning of the tth period,
    Z_{t+1} = stored water at the end of the tth period or at the beginning of the (t+1)th period,
    K       = capacity of reservoir,
    X_t     = inflow during the tth period, and
    M       = constant volume released at the end of the unit period.
Given this information about capacity, draft and inflows, the first step is to set up the "transition matrix" of the storage contents. A transition matrix shows the probability of the storage finishing in any particular state at the end of a time period for each possible initial state at the beginning of that period.
The transition matrix for the above example is a (2 x 2) matrix representing an empty condition and a half full condition as follows:

                                      Initial State Z_t
                                      Empty (0)            Half full (1)
    Finishing    Empty (0)            1/5 + 2/5            1/5                        (4.6)
    State        Half full (1)        1/5 + 1/5            2/5 + 1/5 + 1/5
    Z_{t+1}
                 Column sum           = 1                  = 1      (always check)
Each element of the transition matrix is found by applying Eqs. 4.3 to 4.5 to determine the inflows (and hence probability) of the storage beginning and ending in the state corresponding to that element.
In the
computations the boundary conditions (empty and full) must be considered and, for the mutually exclusive model, the inflows must be considered separately and prior to the outflows. Consider the element (0,0) in Eq. 4.6 which represents a reservoir starting empty and finishing empty.
This can happen if there are no
inflows for the period (probability 1/5) or if there is one unit of inflow (probability 2/5).
In the latter case the release of one unit reduces the
reservoir contents back to zero.
Hence, if the reservoir starts empty
there is a probability of 0.6 that it will still be empty at the end of the time period. Consider now the element (1,0) which represents a reservoir starting empty and finishing half full.
If there are two units of inflow
(probability 1/5) followed by one unit of release the reservoir will finish half full.
If there are three units of inflow (probability also 1/5) the
reservoir will spill because its capacity is only 2 units, then after 1 unit of release, it will again finish half full. Thus the probability of going from empty to half full is 2/5.
Note that the reservoir can never finish (and hence start) in the full condition because of the mutually exclusive assumption about inflows and outflows.
Note also that the reservoir must finish in some condition
thus the sum of the probabilities in any column must be unity.

Let us now assume that the time unit is equal to one year and that the reservoir of capacity 2 units is empty at the beginning of year one, that is, the initial probability distribution of storage contents is:

    Storage state 0 (empty):      1
    Storage state 1 (half full):  0        Σ = 1                                      (4.7)

Since the transition matrix expresses the conditional probability of final storage contents given the various values of initial contents, the probability distribution of final contents can be found by the matrix product of the transition matrix and the probability distribution of initial contents. Therefore, at the end of year one (or at the beginning of year two) the probability of storage content will be:

    [0.6  0.2] [1]   [0.6 x 1 + 0.2 x 0]   [0.6]
    [0.4  0.8] [0] = [0.4 x 1 + 0.8 x 0] = [0.4]        Σ = 1.0                       (4.8)

    transition      state of storage at        state of storage at
    matrix          beginning of year one      end of year one
The quantitative process in Eq. 4.8 may be described as follows.
The
transition matrix shows the probability of the reservoir finishing in a specific state, given an initial state.
If the initial state is known in
terms of probability, then the joint probability will indicate the likelihood of the storage ending in a specific state.
In Eq. 4.8 the transition
matrix shows the probability of going from state 0 to state 0 as 0.6, and the probability of being in state 0 at the beginning of year one is 1; thus the probability of ending in state 0 is 0.6 x 1 = 0.6. But also it is possible to arrive at state 0 from state 1, which from the transition matrix has a probability of 0.2. The likelihood of being in state 1 at the beginning of year one is 0, thus the probability of ending in state 0 but beginning in state 1 is 0.2 x 0 = 0. Hence the combined probability of ending in state 0 at the end of the first year is 0.6 + 0 = 0.6. A similar argument holds for state 1.
The process can now be repeated, using the state vector as the new starting condition.
Therefore, at the end of the second year, the probability of storage content will be:

    [0.6  0.2] [0.6]   [0.6 x 0.6 + 0.2 x 0.4]   [0.44]
    [0.4  0.8] [0.4] = [0.4 x 0.6 + 0.8 x 0.4] = [0.56]        Σ = 1.00               (4.9)

    transition      state of storage at end of      state of storage at
    matrix          year one or beginning of        end of year two
                    year two

At the end of the third year, the probability of storage content will be:

    [0.6  0.2] [0.44]   [0.6 x 0.44 + 0.2 x 0.56]   [0.38]
    [0.4  0.8] [0.56] = [0.4 x 0.44 + 0.8 x 0.56] = [0.62]     Σ = 1.00               (4.10)

At the end of the fourth year, the probability of storage content will be:

    [0.6  0.2] [0.38]   [0.6 x 0.38 + 0.2 x 0.62]   [0.35]
    [0.4  0.8] [0.62] = [0.4 x 0.38 + 0.8 x 0.62] = [0.65]     Σ = 1.00               (4.11)

At the end of the eighth year the probability of the storage content will be:

    [0.33]
    [0.67]                                                                            (4.12)

At the end of the ninth period it will be:

    [0.33]
    [0.67]                                                                            (4.13)

It will be noticed that as successive years are considered, the probability vector of storage content becomes less affected by the initial starting conditions (in this example, the reservoir was assumed empty) and approaches a constant or steady state situation, which is independent of the initial conditions.
From the steady state vector (Eq. 4.13) it is seen
that there is a 1/3 chance that the reservoir will be empty at the end of any year.
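The arithmetic of Eqs. 4.3 to 4.5 and the repeated matrix multiplications above are easily automated. The following Python sketch (ours, not part of the original workshop notes) rebuilds the transition matrix of Eq. 4.6 from the inflow distribution of Fig. 4.1 and marches the state vector forward until the steady state of Eq. 4.13 appears.

    import numpy as np

    # Illustration of the mutually exclusive model of Sec. 4.2: reservoir of K = 2 units,
    # constant draft M = 1 unit, inflow distribution of Fig. 4.1.
    K, M = 2, 1
    inflow_pmf = {0: 1/5, 1: 2/5, 2: 1/5, 3: 1/5}   # units of flow : probability

    n_states = K - M + 1                  # storage states 0 (empty) ... K-M
    T = np.zeros((n_states, n_states))    # T[finishing, starting], as in Eq. 4.6

    for z in range(n_states):             # starting state Z_t
        for x, q in inflow_pmf.items():   # apply Eqs. 4.3 to 4.5
            w = z + x                     # contents after the wet season
            if w <= M:
                z_next = 0
            elif w < K:
                z_next = w - M
            else:                         # spill, then release
                z_next = K - M
            T[z_next, z] += q

    # March the probability vector forward from an empty reservoir (Eqs. 4.7 to 4.13).
    p = np.zeros(n_states)
    p[0] = 1.0
    for year in range(1, 10):
        p = T @ p
        print(f"end of year {year}: {np.round(p, 2)}")
    # The vector settles at about [0.33, 0.67]: a 1/3 chance of being empty at year end.

Powering the transition matrix up directly, as discussed later in this chapter, gives the same steady state vector.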
4.2.1  The Discrete Equations for the Mutually Exclusive Model

General Case.
Consider a reservoir with discrete inflows, X_t, and a constant draft M during unit time period t. Z_t is the stored content at the beginning of time t. All volumes are multiples of a constant water volume. The reservoir is divided into K-M+1 discrete zones 0, 1, 2, ..., K-M where 0 is the empty zone. From continuity it follows that

    Z_{t+1} = 0                if Z_t + X_t ≤ M                   (4.14)

    Z_{t+1} = Z_t + X_t - M    if M < Z_t + X_t < K               (4.15)

    Z_{t+1} = K - M            if K ≤ Z_t + X_t                   (4.16)
70%) with the limited accuracy (round-off errors) associated with mini-computers in solving for steady state using (20 x 20) or larger matrices.
In these cases, as a check to the solution of simultaneous equations, it is recommended that the transition matrix be powered up and a check made to ensure that the sum of each column equals unity.

(iii)  Probability matrix solutions are affected by zone size and hence the number of zones.
Joy (1970, Appendix V) examined this question using Moran's mutually exclusive model and found that for streams with C_v < 0.5, 20 zones were required to adequately define the storage size; for 0.5 < C_v < 0.85, 30 zones were required; and for C_v ≥ 0.85, 40-50 zones were required. Teoh (personal communication, 1977) analysed ten streams with C_v varying between 0.19 and 1.79 using Gould's procedure. From his results it is concluded that as a general rule:

    for C_v < 0.5          use 10 zones,
    for 0.5 ≤ C_v < 1.0    use 20 zones,
    for 1.0 ≤ C_v < 1.5    use 30 zones, and
    for C_v ≥ 1.5          use 40 zones.

The differences between these and Joy's results (which were based on Moran's rather than Gould's model) are consistent with Doran's divided interval approach (Doran, 1975) and Klemes' (1977) recent analysis. If an insufficient number of zones is used, sometimes a hunting effect in the storage-probability relation becomes evident.
An example is shown in Fig. 4.5 for the Nogoa River in Queensland. Results are plotted for 10, 20, 30 and 40 zones.

[FIG. 4.5  Effect of number of zones on reservoir capacity - probability of failure relation (Nogoa River, Australian gauging station no. 13020J). Axes: reservoir capacity (x 10^6 m3) against probability of failure (%); curves for 10, 20, 30 and 40 zones.]
(iv)  At this stage little guidance can be given regarding the effect of beginning a Gould analysis in different months because insufficient research information is available. It appears that, for at least some rivers, the derived storage using a Gould analysis does depend on the starting month if the draft is high. It is, therefore, recommended that before a final design capacity is chosen, four separate Gould analyses be carried out, each beginning three months apart, to check the significance of the starting month.
EXAMPLE 4.1

For the Mitta Mitta River (Appendix E) use Gould's probability matrix procedure to determine the storage required to meet a draft of 75% of the mean flow with a 5% probability of failure.
*     *     *     *     *
The Gould procedure requires a computer for efficient solution; for each estimate of storage capacity the method requires each year of flow to be routed through the storage for each possible starting condition.
In Sec. 4.6.2 it is shown that the number of zones required depends on the coefficient of variation of annual flows, C_v. For the Mitta Mitta River, C_v is equal to 0.57; 15 zones should therefore suffice.
The procedure is an iterative one, the probability of failure being calculated for the input draft and the storage capacity estimate. For the Mitta Mitta at 75% draft and a storage capacity of 910 x 10^6 m3 the following (15 x 15) transition matrix is obtained (terms are expressed as probability; columns are the starting zone Z_t, rows the finishing zone Z_{t+1}):

Zone    0    1    2    3    4    5    6    7    8    9   10   11   12   13   14
  0   .147 .147 .147 .147 .147 .118 .118 .088 .088 .000 .000 .000 .000 .000 .000
  1   .118 .118 .118 .118 .118 .088 .029 .059 .000 .088 .000 .000 .000 .000 .000
  2   .029 .029 .029 .029 .029 .088 .088 .000 .059 .000 .088 .000 .000 .000 .000
  3   .029 .029 .029 .029 .029 .029 .059 .088 .000 .059 .000 .088 .000 .000 .000
  4   .029 .029 .029 .029 .029 .000 .029 .059 .088 .000 .059 .000 .088 .000 .000
  5   .118 .118 .118 .088 .088 .059 .000 .029 .059 .088 .000 .059 .000 .088 .029
  6   .059 .059 .059 .088 .059 .059 .059 .000 .029 .059 .088 .000 .059 .000 .059
  7   .029 .029 .029 .029 .059 .059 .059 .059 .000 .029 .059 .088 .000 .059 .029
  8   .029 .029 .029 .029 .029 .088 .059 .059 .059 .000 .029 .059 .088 .000 .029
  9   .029 .029 .029 .029 .029 .029 .088 .059 .059 .059 .000 .029 .059 .088 .029
 10   .000 .000 .000 .000 .000 .000 .029 .088 .059 .059 .059 .000 .029 .059 .088
 11   .029 .029 .029 .029 .029 .000 .000 .029 .088 .059 .059 .059 .000 .029 .029
 12   .059 .059 .059 .059 .029 .029 .000 .000 .029 .088 .059 .059 .059 .000 .029
 13   .147 .147 .147 .147 .176 .118 .147 .147 .147 .176 .265 .265 .294 .353 .353
 14   .147 .147 .147 .147 .147 .235 .235 .235 .235 .235 .235 .294 .324 .324 .324
To compute the steady state (or long term) probabilities of the reservoir being in any particular zone the transition matrix can be powered up or solved as a system of simultaneous equations (Sec. 4.4).
The latter
option involves less computation and the result is tabulated below with the respective probability of failure for each starting zone:
Zone    Steady State Probability    Probability of Failure     Contribution to Overall
        of being in Zone            from Starting in Zone      Probability of Failure
(1)     (2)                         (3)                        (2) x (3)

 0      0.034                       0.502                      0.0171
 1      0.026                       0.480                      0.0125
 2      0.018                       0.360                      0.0065
 3      0.017                       0.260                      0.0044
 4      0.015                       0.157                      0.0024
 5      0.057                       0.083                      0.0047
 6      0.039                       0.044                      0.0017
 7      0.044                       0.017                      0.0007
 8      0.029                       0.007                      0.0002
 9      0.051                       0                          0
10      0.055                       0                          0
11      0.031                       0                          0
12      0.028                       0                          0
13      0.276                       0                          0
14      0.279                       0                          0
                                                           Σ = 0.0502
that is, the probability of failure of a 910 x 10^6 m3 storage is 5.02%. This estimate needs adjustment for the effects of the annual serial correlation of 0.06. From Fig. B.1 the adjustment factor is 1.06. Therefore, the final answer by the Gould method is 910 x 1.06 = 960 (x 10^6 m3).
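As a check on the arithmetic, the weighting of the per-zone failure probabilities by the steady state probabilities can be scripted. The following Python sketch simply reproduces the last two columns of the table above and the serial-correlation adjustment quoted in this example; the numbers are copied from the example, not computed from first principles.

    import numpy as np

    # steady-state probability of the reservoir being in each zone at the start of a year
    steady_state = np.array([0.034, 0.026, 0.018, 0.017, 0.015, 0.057, 0.039, 0.044,
                             0.029, 0.051, 0.055, 0.031, 0.028, 0.276, 0.279])
    # probability of failing during the year, given the starting zone (from routing the
    # historical years through the storage for every starting condition)
    fail_given_zone = np.array([0.502, 0.480, 0.360, 0.260, 0.157, 0.083, 0.044, 0.017,
                                0.007, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

    overall_failure = float(steady_state @ fail_given_zone)
    print(f"overall probability of failure = {overall_failure:.4f}")   # about 0.050, i.e. 5%

    # serial-correlation adjustment quoted in the example (factor 1.06 from Fig. B.1)
    capacity = 910 * 1.06
    print(f"adjusted capacity = {capacity:.0f} x 10^6 m3")             # about 960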
4.7  RELATED PROBABILITY MATRIX METHODS

4.7.1  McMahon's Empirical Equations

For 156 Australian streams, McMahon (1976) used Gould's modified
procedure to estimate the theoretical storage capacities for four draft conditions (90%, 70%, 50% and 30%) and three probability of failure values (2½%, 5% and 10%).
These capacities were related by least squares analysis
to the appropriate coefficient of variation of annual flows by the following simple relationship:
    τ = C/x̄ = a C_v^b                                                                 (4.37)

where
    C    = storage capacity in volume units,
    x̄    = mean annual flow in volume units,
    τ    = reservoir capacity divided by mean annual flow,
    C_v  = coefficient of variation of annual flows, and
    a, b = empirically derived constants tabulated in Table 4.3.
In addition to a and b in Table 4.3, standard errors of estimate in percent and coefficients of determination are shown. The constants were not based on regional analysis and are considered to apply to the whole of Australia.
TABLE 4.3  Reservoir capacity-yield equation coefficients, standard errors and coefficients of determination for McMahon's empirical method. (e = standard error of estimate, r² = coefficient of determination)

                            Probability of failure (%)
Draft (%)   Parameter       2.5          5            10
90          a               7.50         5.07         3.08
            b               1.86         1.81         1.82
            e               +12, -11     +18, -15     +21, -17
            r²              98           97           96
70          a               2.51         1.81         1.21
            b               1.91         1.79         1.74
            e               +25, -20     +29, -23     +43, -30
            r²              94           92           87
50          a               0.98         0.75         0.51
            b               1.83         1.93         1.83
            e               +58, -36     +63, -39     +61, -38
            r²              81           79           79
30          a               0.28         0.22         0.15
            b               1.53         1.49         1.79
            e               +44, -31     +61, -38     +64, -39
            r²              82           72           77
Assumptions, Limitations and Attributes: As the storage values used in the regression analysis are Gould estimates, a major assumption relates to the neglect of annual serial correlations.
Corrections given in Appendix B should be made.
Another assumption is that capacity for given conditions is related only to the coefficient of variation. The proportion of variance accounted for shown in the table suggests that this is a reasonable assumption. Because of the errors noted above and limitations due to constant draft, this procedure is regarded only as a preliminary procedure. However, as a preliminary procedure it is based on monthly flows and on a large number of well-distributed Australian streams and therefore should provide reasonable estimates of storage at least within the Australian environment.
EXAMPLE 4.2

Compute the storage required on the Mitta Mitta River (Appendix E) to meet a draft of 75% with a 5% probability of failure using McMahon's Empirical equations.

*     *     *     *     *

From Eq. 4.37, storage

    C = (a C_v^b) x̄

From Appendix E for the Mitta Mitta, C_v = 0.57.

Table 4.3 gives values of a and b for various drafts and probabilities of failure. Since a draft of 75% is not mentioned specifically, it is necessary to interpolate on a log-linear plot of draft versus storage as follows:
    Draft (%)      a         b         a C_v^b      C (10^6 m3)
    90             5.07      1.81      1.83         2331
    70             1.81      1.79      0.66          841
    50             0.75      1.93      0.25          319
    30             0.22      1.49      0.10          127

Interpolation for 75% draft on the log-linear plot of storage versus draft (Fig. 4.6) gives a storage estimate of 1090 (x 10^6 m3).
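A hedged sketch of how Eq. 4.37 and the graphical interpolation of Fig. 4.6 might be scripted is given below. The mean annual flow of 1274 x 10^6 m3 and C_v of 0.57 are the Mitta Mitta values used in this example; interpolating the logarithm of storage linearly against draft is our assumption about how the log-linear plot would be automated.

    import numpy as np

    mean_annual_flow = 1274.0          # x-bar for the Mitta Mitta, 10^6 m3
    cv = 0.57                          # coefficient of variation of annual flows
    # Table 4.3 coefficients for 5% probability of failure:  draft % : (a, b)
    coeffs = {90: (5.07, 1.81), 70: (1.81, 1.79), 50: (0.75, 1.93), 30: (0.22, 1.49)}

    drafts = np.array(sorted(coeffs))                                   # 30, 50, 70, 90
    storages = np.array([coeffs[d][0] * cv ** coeffs[d][1] * mean_annual_flow
                         for d in drafts])                              # Eq. 4.37 for each draft

    # interpolate log(storage) linearly against draft, mirroring Fig. 4.6
    target_draft = 75.0
    estimate = np.exp(np.interp(target_draft, drafts, np.log(storages)))
    print(f"storage at {target_draft:.0f}% draft = {estimate:.0f} x 10^6 m3")    # about 1090
    print(f"after serial-correlation adjustment = {estimate * 1.06:.0f} x 10^6 m3")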
Adjust for annual serial correlation. From Fig. B.1, for 75% draft and an annual serial correlation of 0.06, the correction factor is 1.06 approximately. Thus the estimate of storage requirement by McMahon's procedure is 1090 x 1.06 ≈ 1160 (x 10^6 m3).

[FIG. 4.6  Interpolation on log-linear plot for McMahon's Empirical procedure (Example 4.2). Axes: storage (x 10^6 m3, log scale, 100 to 2000) against draft (%).]

4.7.2  Probability Routing

Langbein's Probability Routing method (Langbein, 1958) is very similar
to Moran's (1954) probability matrix procedure except that Langbein modified his technique to deal with correlated annual inflows.
Both the stream-
flow regime and reservoir storage were divided into low, medium and high sub-regimes.
By classifying each flow into the same streamflow regime as
its predecessor, three separate streamflow histograms were obtained.
Thus
in setting up his system of equations describing the cumulative probability
of reservoir contents, Langbein used the inflow distribution appropriate to the state of the reservoir.
In this way he was able to take annual serial
correlation into account in an approximate way.

4.7.3  Hardison's Generalized Method

Hardison (1965) generalized Langbein's probability routing procedure
using theoretical distributions of annual inflow and assuming serial correlation to be zero.
This is equivalent to Moran's (1954) model except
that Hardison used a simultaneous model rather than the mutually exclusive type adopted by Moran.
The annual storage estimates are shown graphically
in Figs. 4.7, 4.8 and 4.9 for log-normal, Normal and Weibull t distributions of annual flows.
The percentage chance of deficiency shown in the figures
is defined by Hardison as the percentage of years that the indicated storage capacity would be insufficient to supply the design draft. In addition to the carryover storage based on annual data, Hardison presented a procedure for determining the combined carryover plus seasonal storage requirement.
The latter procedure is discussed in Sec. 3.4.9.
Procedure:

(i)  Compute the mean, standard deviation and coefficient of skewness of both the annual flows and their common logarithms.

(ii) The appropriate distribution depends on the parameters computed in (i) as follows:

     (a) Adopt a log-normal distribution if the coefficient of skewness of the logarithms of flows is algebraically greater than -0.2.

     (b) Adopt a Normal distribution if the coefficient of skewness of the absolute flows is algebraically less than +0.2 or if the coefficient of variation of the flows is less than 0.25.

† The probability density function of the two parameter Weibull distribution is:

    f(x) = (η/θ) (x/θ)^(η-1) exp[-(x/θ)^η]

where
    η = shape parameter, and
    θ = characteristic drought, when Prob (x ≥ θ) = 1/e.
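The distribution choice of step (ii) can also be expressed directly in code. The Python sketch below is an illustration only; in particular, the fall-through to the Weibull distribution is an assumption on our part, since the remaining selection rules are not reproduced in this extract.

    def hardison_distribution(skew_flows: float, skew_log_flows: float, cv_flows: float) -> str:
        """Select the annual-flow distribution following steps (i)-(ii) above (a sketch)."""
        if skew_log_flows > -0.2:
            return "log-normal"        # rule (a)
        if skew_flows < 0.2 or cv_flows < 0.25:
            return "Normal"            # rule (b)
        return "Weibull"               # assumed fall-through; not stated in this extract

    # e.g. Mitta Mitta-like statistics would point to the log-normal curves (Fig. 4.7)
    print(hardison_distribution(skew_flows=1.5, skew_log_flows=-0.08, cv_flows=0.57))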
[FIGS. 4.7-4.9  Hardison's generalized annual storage curves, with separate panels for 10, 5, 2 and smaller percent chances of deficiency.]
The physical process of dam fluctuations can be
likened to a random walk with impenetrable barriers at full supply and empty conditions.
Phatarfod used Wald's identity, an approximate technique,
to solve a problem with absorbing barriers and a relation connecting the two kinds of random walks. Steps in his method, which assumes the draft is the unit of measurement, are:

(i)   Assume a constant draft D as a ratio of mean annual flow.

(ii)  Calculate

          h  = 4/γ²                                               (4.38)

          a  = σγ/2                                               (4.39)

          θ₀ = μ - 2σ/γ                                           (4.40)

      where
          μ   = mean flow in draft units = 1/D,
          σ   = standard deviation of annual flow in draft units = μC_v,
          C_v = coefficient of variation of annual flow, and
          γ   = coefficient of skewness of annual flow.

(iii) Decide on P and ℓ, where P is the probability of the reservoir contents being less than ℓC and C is the total capacity.

(iv)  Solve for y, the unique positive solution (other than unity) of

          P y^(v-1) + P y^(v-2) + ... + P y - (1-P) = 0            (4.41)

      where v = 1/ℓ. For example, if ℓ = 1/3,

          y = ½ {-1 + [1 + 4(1-P)/P]^½}                            (4.42)

(v)   Use the Newton-Raphson iteration to solve for θ, which is the unique positive solution of:

          θ(1-θ₀) = h log_e [ {(1+r) + θa(1-r) + [{(1+r) + θa(1-r)}² - 4r]^½} / 2 ]    (4.43)

      where r = annual serial correlation coefficient.

(vi)  Calculate the required capacity of the reservoir as

          C = (v log_e y / θ) D x̄                                  (4.44)

      where
          C = capacity in volume units, and
          x̄ = mean annual flow in volume units/year.

This model assumes that annual flows are Gamma distributed and is based on a fixed draft. It is considered to be a preliminary design procedure, although the solution of Eqs. 4.41 and 4.43 can be quite time-consuming. The procedure is a useful preliminary way to determine the likelihood of the reservoir falling below some level and the possibility of restrictions in releases. Because of the approximation made, the procedure is limited to v being less than or equal to about 5 (Phatarfod, personal communication, 1977).
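The steps above lend themselves to a short script. The Python sketch below is an illustration only, written under the symbol readings used in this section (h, a, θ₀, θ); it is restricted to ℓ = 1/3 and replaces the Appendix D Newton-Raphson details with a simple numerical-derivative iteration.

    import math

    def phatarfod_capacity(mean_annual_flow, cv, skew, draft, r, P, ell=1/3):
        mu = 1.0 / draft                        # mean annual flow in draft units
        sigma = mu * cv                         # standard deviation in draft units
        h = 4.0 / skew ** 2                     # Eq. 4.38
        a = sigma * skew / 2.0                  # Eq. 4.39
        theta0 = mu - 2.0 * sigma / skew        # Eq. 4.40

        v = round(1.0 / ell)
        if v != 3:
            raise NotImplementedError("only ell = 1/3 (Eq. 4.42) is sketched here")
        y = 0.5 * (-1.0 + math.sqrt(1.0 + 4.0 * (1.0 - P) / P))    # Eq. 4.42

        def f(theta):                           # Eq. 4.43 rearranged to f(theta) = 0
            B = (1.0 + r) + theta * a * (1.0 - r)
            return theta * (1.0 - theta0) - h * math.log((B + math.sqrt(B * B - 4.0 * r)) / 2.0)

        theta, step = 2.0, 1e-6                 # start away from the trivial root theta = 0
        for _ in range(50):                     # Newton-Raphson with a numerical derivative
            slope = (f(theta + step) - f(theta - step)) / (2.0 * step)
            theta -= f(theta) / slope

        return (v * math.log(y) / theta) * draft * mean_annual_flow   # Eq. 4.44

    # Example 4.4 figures: prints roughly 2130 (x 10^6 m3)
    print(round(phatarfod_capacity(1274.0, cv=0.57, skew=1.50, draft=0.75, r=0.06, P=0.05)))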
EXAMPLE 4.4

Find the storage required on the Mitta Mitta River (Appendix E) to provide for 75% draft with a probability of the reservoir being less than one third full using Phatarfod's procedure.

*     *     *     *     *

For the Mitta Mitta River (Appendix E) the annual flow parameters are:

    coefficient of variation  C_v = 0.57
    coefficient of skewness   γ   = 1.50

Draft D = 0.75. In draft units, mean flow μ = 1/D = 1.33, and standard deviation σ = μ C_v = (1.33)(0.57) = 0.76.

From Eq. 4.38,   h  = 4/γ²    = 4/(1.50)²            = 1.78
From Eq. 4.39,   a  = σγ/2    = (0.76)(1.50)/2       = 0.57
From Eq. 4.40,   θ₀ = μ - 2σ/γ = 1.33 - 2(0.76)/1.50 = 0.32

Choose ℓ = 1/3 so that the probability P of 0.05 gives the chance of the reservoir being less than or equal to 1/3 full.

From Eq. 4.42,

    y = ½ {-1 + [1 + 4(1-P)/P]^½}
      = ½ {-1 + [1 + 4(1 - 0.05)/0.05]^½}
      = 3.89

Solve Eq. 4.43 for θ:

    θ(1-θ₀) = h log_e [ {(1+r) + θa(1-r) + [{(1+r) + θa(1-r)}² - 4r]^½} / 2 ]

where r = annual serial correlation = 0.06, that is

    θ(1-0.32) = 1.78 log_e [ {1.06 + θ(0.57)(0.94) + [{1.06 + θ(0.57)(0.94)}² - 0.24]^½} / 2 ]

Using the Newton-Raphson iteration procedure (Appendix D), the required value of θ is found to be 1.824.

From Eq. 4.44:

    Storage = (v log_e y / θ) D x̄
            = (3 log_e 3.89 / 1.824) D x̄
            = 2.23 (0.75)(1274)
            = 2130 (x 10^6 m3)

[Note that this is the reservoir size for which there is a probability of 5% of being only one third (or less) full. Thus the figure of 2130 x 10^6 m3 is not directly comparable with reservoir sizes based on 5% probability of failure. However, one can run the Gould procedure and compute the probability of being in the lower one-third of the storage from the steady state matrix. For 2130 x 10^6 m3 storage the answer is 3.8%.]
4.9  SUMMARY

From a theoretical point of view the Gould procedure as described in
Sec. 4.6 stands out as the most acceptable reservoir capacity-yield technique. Essentially the procedure involves only one major assumption and overcomes most of the disadvantages of other probability matrix procedures and critical period approaches.
Based on these reasons and the satisfactory
results of several extensive testing programs
using Australian streamflow
data (reported in Chapter 6), the Gould procedure, modified as outlined in Sec. 4.6.1 and corrected for annual serial correlation, is recommended as a final design tool for establishing the single reservoir capacity-yield-probability of failure relationship.
Both are based on results generalized from
applying probability matrix methods, but the empirical constants in the latter procedure are based only on Australian streamflow data.

4.10  NOTATION

a          variable in Phatarfod's procedure (Sec. 4.8.3)
a          variable in McMahon's Empirical procedure (Sec. 4.7.1)
b          variable in McMahon's Empirical procedure (Sec. 4.7.1)
C          reservoir capacity
C1, C2     various reservoir capacity estimates
C_v        coefficient of variation
D          draft as ratio of mean flow
D_t        draft during tth period
e          variable in Phatarfod's procedure (Sec. 4.8.3)
f(x)       probability density function
g(x)       probability function of storage content plus inflow during unit period
h          variable in Phatarfod's procedure (Eq. 4.38)
K          reservoir capacity as a multiple of constant water volume in Moran analysis (Secs. 4.1-4.3)
K          number of zones in Gould analysis (Sec. 4.6)
ℓC         proportion of reservoir capacity, ℓ being 1/2, 1/3, 1/4 or 1/5 (Eq. 4.41)
m          positive integer exponent large enough so that resulting matrix is equivalent to steady state (Secs. 4.4 and 4.6)
M          release as a multiple of constant water volume in Moran analysis (Secs. 4.1-4.3)
N          number of years of data
P          probability of reservoir contents being less than some amount (Sec. 4.8.3)
P_i        probability that the content of the reservoir is in zone i at the beginning of the period
P'_i       probability that the content of the reservoir is in zone i at the end of the period
[P_i]      probability vector
q_i        probability of receiving an inflow of i units
r          annual serial correlation coefficient
t          time
[T]        transition matrix of reservoir contents
v          1/ℓ in Phatarfod's procedure (Sec. 4.8.3)
W          zone volume in Gould's probability matrix procedure (Sec. 4.6.1)
x          flow volume
x̄          mean flow
X_t        inflow during tth period
y          variable in Phatarfod's procedure (Sec. 4.8.3)
Z_t, Z_t+1 reservoir storage contents at the beginning and the end of tth time interval
γ          coefficient of skewness of annual flows
ΔE_t       net evaporation loss during time t
η          shape parameter in Weibull distribution
θ          variable in Phatarfod's procedure (Sec. 4.8.3)
θ          characteristic drought in Weibull distribution
μ          mean annual flow in draft units
σ          standard deviation of annual flow in draft units
τ          reservoir capacity divided by mean annual flow
CHAPTER 5

USE OF STOCHASTICALLY GENERATED DATA
The third grouping of storage estimation methods is based on the use of generated or synthetic data. In essence, however, the methods are the same as described previously; the difference is that the input streamflows are changed.
The technique involves using a stochastic generation model to
produce "streamflow" sequences with the same statistical properties as the historical record.
It is then possible to determine the storage capacity
(using some standard method) corresponding to each sequence, thus providing a designer with a distribution of values.
This in turn gives him an idea of
the confidence which can be placed on the adopted design value.
"Synthetic
flows (or stochastic data) do not improve poor records but merely improve the quality of designs made with whatever records are available."
(Fiering
and Jackson, 1971, p.24.) In this chapter we restrict our examination of data generation processes to operational aspects of Markovian models that are used for generating annual and monthly streamflows.
Readers requiring more detail
are referred to the many excellent texts, reports and papers devoted to stochastic processes.
A selection of these is included in the references.
It should also be noted that data generation procedures will be dealt with
here not in terms of the physical mechanics underlying the streamflow process but rather from an operational point of view.
In this regard we
commend readers to Klemes' (1974) paper for some thought-provoking comments on the relationships between physically based and operational models. This chapter is divided into several distinct parts.
The first
examines the time-series components making up the streamflow process.
This
is followed by a review of historical developments in data generation procedures up to 1960.
Next, the methodology and performance of Markovian
data generation procedures are discussed in detail.
Simulation and the use
of generated data are then considered and finally several procedures based on generated data for making preliminary estimates of reservoir capacity are reviewed.
5.1  TIME-SERIES COMPONENTS

From a stochastic point of view, streamflow data can be regarded as
consisting of four components (Kottegoda, 1970): trend (T_t), periodic or seasonal (S_t), correlation (K_t), and random (E_t) components which can be combined simply as:

    x_t = T_t + S_t + K_t + E_t                                    (5.1)

These components are represented pictorially in Fig. 5.1.

[FIG. 5.1  Time-series components of the streamflow process (each component plotted against time).]
To obtain representative stochastic data, it is necessary to identify and measure the strength of each component.
A fifth component not included
in Eq. 5.1 relates to catastrophic events.
This aspect is beyond the scope
of this
text
and relates to the so-called 'Noah and Joseph' effects and
the 'Hurst' phenomenon.
Data generation models accounting for these effects
are still at the research stage (Sec. 5.7). A sequence of values arranged in order of their occurrence is called a time-series.
A time-series is considered to be stationary if the
statistical properties characterising i t are time invariant.
In this
discussion it is assumed that the data are stationary or can be made so by a simple transformation.
For example, to partially eliminate the non-
stationary effect of seasonality, monthly data can be standardised by the following equation:
    x'_t = (x_t - x̄_j) / s_j                                       (5.2)

where
    x_t  = monthly flows,
    x'_t = standardised monthly flows,
    x̄_j  = mean monthly flow for the jth month, and
    s_j  = standard deviation of monthly flows for the jth month.
One characteristic of a time-series is persistence which relates to the sequencing of the data.
In Chapters 3 and 4 it was noted that this
property is very important in storage-yield analysis.
In streamflow, per-
sistence arises from natural catchment storage effects which tend to delay the runoff;
over a short time period high flows in one interval will
tend to be followed by high flows in the following interval.
The longer the
time period the lesser the effect and for many streams it is negligible for annual flows. The usual quantitative measure of persistence is serial correlation .. Serial correlation coefficients may be calculated for the correlation between the flow in any given time period (for example, month or year) and the flow in lag.
k
time periods earlier where
In many studies only the lag
k (= 1, 2, ... ) is called the
one serial correlation is considered,
that is, the persistence between an event and the immediately preceding event.
Lag one models have been shown to be operationally satisfactory in
several studies (for example, Kottegoda, 1970;
Philips, 1972;
Wright,1975).
The algorithm to compute serial correlation is given as Eq. 2.10. For a sample of finite size, computed values of serial correlation (r , where
is the lag) may differ from zero because of sampling errors.
k
k
Thus it is necessary to test the values to determine if they are significantly different from zero. purpose.
Yevjevich (1972b) outlines a test for this
The confidence limits (CL) for a computed value of r_k are given by:

    CL = (-1 ± z_α √(N-k-1)) / (N-k)                               (5.3)

where
    z_α = the standardized normal deviate corresponding to the α level of significance, and
    N   = number of flow events.

If r_k falls outside the confidence limits, r_k is considered to be significantly different from zero at the α level of significance.
k
is small relative to
N.
r
k
for
110 5.2
HISTORICAL DEVELOPMENTS TO 1960 Hazen (1914) is considered to be the first to recognise the
desirability of extending hydrolgic data.
He combined standardised annual
flows for fourteen streams in the northwest of U.S.A. to produce a synthesized record of 300 years.
His procedure has a number of limitations.
The streams were geographically close, and more than half the records were based on the period 1900-1910, so that records tended to be repeated.
The
technique of combining the flows forces the residual massed curve to pass through zero at least fifteen times thus restricting the range of the combined data. requirement.
This would result in an underestimation of the storage But in Hazen's procedure this effect was compensated for
because his storage was determined on the basis of a semi-infinite reservoir. Hazen's curves are still used today to assess preliminary storage sizes in the eastern United States. Sudler (1927) utilized historical and representative annual flows which were entered on fifty cards.
These cards were shuffled and dealt
without replacement to produce a sequence of 50 years. twenty times, producing in all 1000 years of data.
This was repeated
The procedure is
limited by the process of non-replacement so that each 50 years is specified by the same parameter set.
The method assumes serial correlation to be zero.
Nevertheless, this was probably the first truly stochastic streamflow generation model. By dealing the cards without replacement, Sudler's mass curve passed through zero at the end of every 50 years. the range is curtailed.
Thus, as for Hazen's mass curve,
Because he dealt with finite storages, Sudler does
not have a compensating error as a result of his type of storage. Consequently, Sudler's technique would tend to underestimate the required, storage capacity. In estimating the capacity of the Upper Yarra Dam in Victoria (Australia), Barnes (1954) found that the annual flows were normally distributed and independent, and used a Monte Carlo approach to generate 1000 years of data.
Historically, this approach contrasts with the earlier
procedures in that they were distribution free methods.
Barnes adopted a
design criterion of probability of failure of 1 in 40 but used a semiinfinite storage approach as an added safety factor.
III
From 1936 to the 1960's, Hurst studied the river Nile, and developed various card sampling techniques to generate annual flows which were used in simulated operational studies of the Aswan High Dam.
Details can be
found in Hurst's text (1965), pp. 41-42. 5.3
ANNUAL MARKOV MODEL The Russian mathematician Markov (1856-1922) introduced the concept of
a process in which the probabi I i ty distribution of the outcome of any trial depends only on the outcome of the directly preceding trial and is independent of the previous his tory of the process.
In this case the "trial"
is the passage of one year and its "outcome" is the streamflow for that year.
If the probability distribution of annual streamflow is either
independent of previous streamflows or correlated only with the previous years flow, we have a "simple" or "lag one" Markov process.
The concept
has been extended to include cases of lag greater than one.
The Markov
process was the basis of the developments at the Universities of Colorado and Harvard in stochastic streamflow generation procedures during the early 1960's (Julian, 1961;
Yevdjevich, 1961;
Brittan, 1961;
Maass et aL., 1962).
Brittan (1961) proposed the following Markov model to represent actual streamflows:

    x_{i+1} = x̄ + r_1 (x_i - x̄) + t_i s (1 - r_1²)^½               (5.4)

where x_{i+1}, x_i =
annual runoffs for (i+l)th and ith years, mean annual historical flow, standard deviation of annual flows, annual lag one serial correlation coefficient, and normal random variate with a mean of zero and a variance of unity.
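A minimal sketch of Eq. 5.4 in code is given below (an illustration, not part of the original text). The parameter values are those quoted later for Table 5.1, and the handling of negative flows follows the rule given in Sec. 5.3.1(v): a negative value is carried forward to compute the next flow but is reported as zero.

    import numpy as np

    def generate_annual_flows(n, mean, sd, r1, seed=0, warm_up=10):
        """Generate n annual flows with the lag-one Markov model of Eq. 5.4."""
        rng = np.random.default_rng(seed)
        flows = []
        x = mean                                    # initialise the recursion at the mean flow
        for _ in range(n + warm_up):
            t = rng.standard_normal()               # t_i ~ N(0, 1)
            x = mean + r1 * (x - mean) + t * sd * np.sqrt(1.0 - r1 ** 2)
            flows.append(max(x, 0.0))               # report negatives as zero, keep x for the next step
        return np.array(flows[warm_up:])            # discard the initialisation period

    synthetic = generate_annual_flows(1000, mean=180.0, sd=72.0, r1=0.12)
    print(synthetic.mean(), synthetic.std(),
          np.corrcoef(synthetic[:-1], synthetic[1:])[0, 1])   # should be close to 180, 72 and 0.12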
This equation was adopted in order that the expected vaZues of the mean, standard deviation and serial correlation of the computed xi+l's would be equal to the respective values of those parameters derived from the historical record and used in the right-hand side of the equation. Moreover, if the that the
xi+l
xi
values are normally distributed, then it follows
values will also be normally distributed.
(Appendix C
shows theoretically that this algorithm does preserve the mean, standard deviation and serial correlation of the flows.)
112
5.3.1
Practical Considerations (i)
The model CEq. 5.4) consists of two components: a deterministic or correlation component and a random component
[x
+
rl(x
- x)]
i
[tis (1 - rf)11.
If r = 0, the model is purely random. This sometimes l occurs with annual data, for example, as found by Barnes (1954) in generating 1000 years of inflows for the Upper Yarra Darn in Victoria.
For the model as proposed,
r 1 cannot exceed unity and for annual data is generally less than 0.4. (ii)
To use the model to generate annual flows, we need to compute the mean, standard deviation and serial correlation of the historical annual flows, and to assume that the flows are normally distributed.
(iii)
The normal random variate, ti' is generated by an appropriate routine which is available for all computers.
One method is
to generate pseudo-random numbers which are usually uniformly distributed with a mean of 1 and variance of 1/12.
If we
add 12 of these numbers together and subtract 6 the resulting variate may be regarded as a normal random variate with a mean of zero and a variance of unity - designated as N(O,I). (iv)
To initialize the model operation, Xl is set equal to
X.
Consequently the first ten or so generated flows should be discarded as they will be dependent on this initialisation procedure.
A similar initialisation procedure is used for
other variations of the Markov model. (v)
This and some other models can generate negative flows. When this occurs the negative value is to calculate the next flO\oJ, after which it is set to zero.
Such a procedure
is acceptable so long as the proportion of negative flows is not too high (say no more than 5%).
In addition, one
should check the difference in mean flow of the generated sequence with the negative values included and with them set to zero.
If the difference is greater than say one
percent, the model is probably unsatisfactory for that stream. (vi)
Sample calculations of annual generated flows for a stream with the following historical parameters are given in
113 Table 5.1 for the Yarra River at Doctor's Creek (229103) in Victoria, using: N
= 77 years, x = 180 10 m , s = 72 10 m , r l = 0.12
and a sequence of random numbers, ti' which are normally distributed with a mean of zero and a variance of one.
TABLE 5.1
i
1
x.
1
Sample calculation of annual Markov streamflow model.
x
X.1
r l (xi-
x)
x+rl (xi- x) deterministic component
t.
-0.52
180
0
0
180
1
\ s (l-rir± random component
xi + 1
-37
143
2
143
-37
-4
176
0.61
44
220
3
220
40
5
185
-0.36
-26
159
4
159
-21
-3
177
-0.39
-28
149
5
149
-31
-4
176
0.08
6
182
6
182
2
0
180
-0.93
-66
114
7
114
-66
-8
172
-0.03
-2
170
8
170
-10
-1
179
0.80
57
236
9
236
56
7
187
1.67
119
306
It illustrates very clearly the relative importance in the model of the deterministic and random components.
Even
though the serial correlation is about average, the fluctuation in the deterministic component is small relative to the fluctuations caused by the random component.
Even
if the serial correlation were 0.5 (an approximate upper limit for annual flows) the random component would still contribute 75% of the variance in the generated flows. (vii)
In the above example using annual data, ti is defined as a random normal variate, N(O,l). this assumption is not acceptable.
However, for many streams (For Australia, it is
valid for only about 20% of streams.)
In order to provide
for this non-Gaussian situation, the model can be modified in several ways which are outlined in Sec. 5.5.
114
In the annual Markov model as outlined above, only two of the four components assumed to make up the streamflow process, as defined in Eq. 5.1, are accounted for explicitly.
Trend and periodicity are not considered.
How then do we treat trend?
Unless there is an a priori reason for
knowing the type of trend, a non-parametric test such as Kendall's rank correlation procedure (Kendall and Stuart, 1968) should be used to measure its strength.
Trends can be modelled by fitting either polynomials or
moving averages although there are difficulties with both approaches (Tintner, 1968). The most common form of periodicity relates to seasonality, particularly with respect to monthly data generation.
Here the most
appropriate practical model is the one proposed by Thomas and Fiering (1962). 5.4
THOMAS AND FIERING SEASONAL MODEL The algorithm for the Thomas and Fiering seasonal model is as follows: x '+ 1 + b. (x. - x.) + t. s. 1 (l-r~) ~ J 1 J 1 J+ J J
(5.5)
generated flows during the (i+l)th, ith seasons reckoned from the start of the synthesized sequences, mean flows during (j+l)th, jth seasons within
b. J
a repetitive annual cycle
of seasons (if months
are being used, then 1
~
~
12),
least squares regression coefficient for estimating (j+1)th flow from the jth flow b. ]
t.
1
(5.6)
normal random variate with mean of zero and variance of unity, standard deviations of flows during the . l)th , J.th seasons, an d ( J+
r.
J
correlation coefficient between flows in ]. th an d (.J+l ) th seasons.
To use the model to generate monthly flows at a site, 36 parameters monthly means, standard deviations and lag one serial correlations - are required.
These are obtained from analysis of monthly historical flows.
115
To run the model, set xl where
ti
' and compute successively x , x , ... JAN 2 3 is the only unknown and for each step it is calculated as a
pseudo-random normal variate.
x
Thus,
xJAN ) xFEB )
2
xFEB
+
bFEB/JAN(x l -
x3
x MAR
+
bMAR/FEB(x2-
= xJAN
+
bJAN/OEC(X12-XOEC)
X
x
13
+
tl sFEB (1 -
+
t2 sMAR (1
-
+
t
-
s (l 12 JAN
r2FEB/JAN)~ r2MAR/FEB)~
(S.7a) (5.7b)
r2JAN/DEC)~
(S.7c)
As defined above this model is restricted to normally distributed flows, that is,
ti
is considered to be a Normal random variate.
In order
to cater for non-normal streams the model can be modified as shown in the next Section. MODIFICATIONS FOR NON-NORMAL STREAMFLOWS
5.5
To cater for non-normal annual and monthly streamflows, three alternatives are available: (i)
(ii)
modify
t.
1
by an appropriate transformation;
modify the streamflow parameters and the model algorithms such that the final generated data are distributed like the historical flows upon which they are based;
(iii)
and
generate normally distributed flows and apply inverse normalizing equations.
5.5.1
Modifying tj In dealing with the problem of skewed data, Thomas and Burden (1963)
transformed the Normal variate, t., to a skewed variate, t , with an Y
l
approximate Gamma distribution (designated as 'like Gamma' in the following text), using the Wilson and Hilferty (1931) transformation thus: t
2 Y t,j
Y
where Yt,j
[
1
+
Y
t,~
t
i
y2.~
t,J - 36
(5.8)
coefficient of skewness of the like Gamma variate,
ti
N (0,1)
t
like G(O,l'Y t .), and ,J
Y
3
repetitive annual cycles of seasons usually 1
~
j
~
12.
116
In order to maintain the historical skewness in the generated flows, the historical skewness is increased to account for the effect of serial correlation.
Using expectation theory Thomas and Burden derived the
following algorithm to do this. (5.9)
seasonal coefficient of skewness for (j+1) th and jth seasons. To apply this method, called the like Gamma transformation, Eq. 5.5 is replaced by
t
Y
from Eq. 5.8,
y
.
t,]
t.
1
in
being calculated with
Eq. 5.9. The Wilson and Hilferty transformation is an approximation to the Gamma distribution but it breaks down for large skews and serial calculations (McMahon and Miller, 1971;
Phatarfod, 1976);
the limits are given
in Fig. 5.2. In general these limits do not affect annual models but, for monthly models restrict the use of this method to flow sequences with low to medium variability. 0.6
////
Wilson and Hilferty/ transformation / unacceptable 0.4
0.2
0+-----r-----r----.,----1~~~ 3 2 C s
·0.2
FIG 5.2
Limits of applicability of the Wilson and Hilferty approximation.
Kirby (1972) provided an alternative transformation to Eq. 5.8 which remains theoretically satisfactory over the whole range of hydrologic interest. (5.10)
where A, Band G are a function of skewness and given in Kirby's paper, and
117
EXAMPLE 5.1 Over the period of record the January flows for the Mitta Mitta River (Appendix E) exhibit a skewness of 1.8;
for February, it is 1.1.
The serial correlation coefficient of monthly flows between January and February is 0.58.
[N CO, 111
Show how a random number from a Normal distribution
can be transformed to a like Gamma skewed variable [G(O,l,1
*
*
*
*
.)].
t, J
*
A random number taken from the Normal distribution [N(O,l)J is - 0.4305.
- r~ I. J ]
From Eq. 5.9:
1.1 - (0.58)3(1.8) (1 - 0.58 2 )3/2
= From Eq. 5.8:
t
1. 385
-2 -
Y
Yt,j 1.
=-
~05
[1 [
+
1 +
'J
It,j t.1
It ,]. ~
6
(1. 385) (- 0.4305)
6
2
Yt, j _ (1.385)2] 3 36
2 1.385
0.5655
That is, the corresponding number to the normally distributed - 0.4305 is skewed gamma distributed - 0.5655.
Random numbers transformed in this way
can be used directly in the Thomas and Fiering seasonal model (Eq. 5.7).
5.5.2
Moment Transformation Equations Matalas (1967) presented moment transformation equations which
theoretically preserve the moments and the lag one serial correlation coefficients.
This method assumes that the Zogarithms of the flows are
normally distributed.
Thus the procedure is first to calculate a series of
logarithms using a Normal model, and then obtain absolute flows by exponentiation.
The generating algorithms and the parameter estimation
equations for the three parameter log-normal model are as follows:
118
Generating Algorithm: R2.)! - j+l + Bj (X i - X-].) + t i Sj+l (1 - ] X
(5.12)
(5.13)
where
N(O,l),
t. l
Xi+l
generated flow logarithms, and
xi+l
generated flow in absolute units.
Other symbols are defined below.
Parameter Estimation: x.
Aj
s~
exp
J
J
+
sj
exp (0.5 [2(S~
+
Xj )
+ it.)] - exp
J
J
(5.14) (S~
+ 2X.)
J
(5.15)
J
exp [3S~] - 3 exp [S~] + 2 J
g. J
J
(5.16 )
{exp [S2] _ l} 3/2 J
exp [So S. 1 R.] ]
r.
J+
J
(5.17)
J
where Historical data mean
x.
standard deviation
S.
coefficient of skewness
gj r.
J
lag one serial correlation and
Log-transformed value
J
J
R. Sj+lJ S J [ j
B.
J
(5.18)
To solve for A., X., S., R. begin with Eq. 5.16 and solve for S .. J
J
J
J
J
This is not explicit in S.and an iterative solution is required. J
One fast
converging technique is the Newton-Raphson method providing a reasonable initial guess is used.
The procedure is given in Appendix D.
been determined, then use Eqs. 5.14, 5.15 and 5.17 to obtain
Once S. has
xJ.,
J
A. and R.. J
J
119 EXAMPLE 5.2 Use moment transformation equations to transform the monthly parameters for the Mitta Mitta River (Appendix E) so as to preserve these characteristics in a generation model using normally distributed random numbers.
*
*
*
*
*
For January the historical parameters are (from Appendix E): 3 39.9 (x 106)m 3 26.4 (x 106)m
mean, x standard deviation, s serial correlation, r
0.58
skewness, g
1.8
From Eq. 5.16 : exp 3S~
-
J
gj
{exp
3 exp S2 + 2 J
S~
J
- 1}3/2
This cannot be solved explicitly for Sj; required.
a trial and error procedure is
If a reasonable first trial value is obtainable, the Newton-
Raphson procedure (Appendix 0) gives rapid convergence to the solution. Using a starting value of 0.5 for Sl; f(Sp
[exp (3Sf)
that is,
3 exp (Sf) + 2] - gl (exp Sf
-
si 1)
= 0.25 3/2
exp (0.75) - 3 exp (0.25) + 2 - 1.8 [exp (0.25) - 1]
(Eq. 05) 3/2
0.0075. 3 exp 3Sf - 3 exp Sf - 1.5 gl(exp Sf - 1)0.5 exp S1
(Eq. D6)
3 exp (0.75) - 3 exp (0.25) - (1.5)(1.8)[exp (0.25)-1]0.5exp (0.25) 0.6513. Therefore, second estimate
0.25 -
- 0.0075 0.6513
0.2615. Similarly, after repeating the above, the third estimate is found to be 0.260752 and the fourth, 0.260747, (that is, convergence). Thus Sf
0.26075 and
Sl
0.5106.
120 Equation 5.15 can be rearranged to give:
0.5 log [26.4 2 /(exp (2 x 0.2607) - exp (0.2607)] 3.748. Equation 5.14 can be rearranged to give:
39.9 - exp (0.5 x 0.2607 + 3.748) 8.469. Rl cannot be obtained from Eq. 5.17 until S2 is found from Eq. 5.16 (S2
0.3419) . Equation 5.17 can be rearranged thus:
log [0.58,j{exp (0.5106)2 - l}{exp (0.3419 2 )- 1}+ 1]/(0.5106)(0.3419) 0.6054. Similarly the computations can be done for the other 11 months; results are given in Table 5.2. TABLE 5.2
Log-transformed parameters for the Mitta Mitta River. Log-Transformed Parameters
Month
1 2 3 4 5 6 7 8 9 10 11
12
Mean
Standard Deviation
3.748 3.720 3.323 3.856 3.879 4.657 5.697 5.833 5.842 6.390 5.990 4.445
0.5106 0.3419 0.7546 0.9124 0.8424 0.7420 0.3945 0.3944 0.2829 0.1893 0.2018 0.4891
A. ]
- 8.469 - 16.83 5.774 - 24.88 - 6.860 - 36.25 -169.3 -170.2 -159.7 -399.9 -273.0 - 23.39
Serial Correlation 0.605 0.658 0.626 0.892 0.854 0.684 0.609 0.665 0.737 0.654 0.805 0.639
the
121
If a two parameter log-normal model is used, gj will be assumed zero
= 0 and Eq. 5.13 becomes
in which case A.
J
(5.19) and the model parameters for input into Eq. 5.12 can be determined explicitly from Eqs. 5.14, 5.15 and 5.17.
The model modified in this way
is based on a two parameter log-normal distribution rather than the three parameter one. For annual data generation, Eqs. 5.12 to 5.17, which are set down above with monthly subscripts, are modified appropriately. An important limitation arises in applying two parameter models to streams with high coefficients of variation.
For this distribution, the
coefficient of skewness (C ) is related to the coefficient of variation (C ) s v in the following manner (Chow, 1964, p. 8-17):
= 3C v
C
+
v
(5.20)
3
In practice, the implied values of skewness are modified among other things by the effect of serial correlation and so the generated value is always less than the value given by Eq. 5.20. 5.5.3
Normalizing Flows This procedure was proposed by Beard and has been adopted by the
United States Army Corps of Engineers (Beard,1972).
The following equations
are for annual flows: (i)
Compute logarithms of all flows after a small increment has been added to each in order to eliminate zero values.
(ii)
Compute mean, standard deviation and coefficient of skewness of log flows.
(iii)
Standardize the log values
L-=-.Z
v
where
s
s
v
standardized log flow,
y
loge (x + E)
y
mean of loge (x + E) flows,
y
(5.21)
y
standard deviation of loge (x + E) flows,
X
historical flows, and
E
small increment.
122
(iv)
Normalize the standardized values,
v,
to eliminate
skewness using the inverse Wilson and Hilferty transformation thus: v =
where v
(vi)
g
+
2
1)1/3_ I} + ~
(5.22)
6
normalized values, and coefficient of skewness of the log values.
g
(v)
~ {(~ v
Compute serial correlation of normalized values. Generate standardized variates by the Normal Markov process (5.23)
generated standardized variates N(O,l), lag one serial correlation of normalized variates, and ti (vii)
(viii)
=
Normal random deviate N(O,l).
Apply inverse transformations as follows: v
=
x
= exp (y
{[! (v
~) + 1]3 - l} -2
(5.24)
vs ).
(5.25)
g
6
+
Y
Subtract the small increment
(E)
added in step (i) .
If negative
flows result, set them to zero. Because of the initial decrease in skewness as a result of taking logarithms, the procedure does not suffer from the Wilson-Hilferty limitation (Fig. 5.2) but it has been found that the serial correlations of the absolute flows are poorly :,reserved.
These and other aspects are covered
more fully in Sec. 5.8 which deals with the performance of these models. For monthly flows, the mean, standard deviation and toefficient of skewness need to be modified.
Details are given in Beard's (1972) report.
EXAMPLE 5.3 Apply the normalizing flow procedure to the annual flows for the Mi tta Mi tta River (Appendix E) to demonstrate this method of flow generation.
*
*
*
*
*
123 Following the steps listed in Sec. 5.5.3 and referring to Table 5.3: (i)
= 0.01) to all of the annual flows
Add 0.01 (that is, s
in column (2) of the table, and enter the natural logarithm of each in column (3). (ii)
Standardize the flows using Eq. 5.21. mean of Column (3)
7.00121
standard deviation of Column (3)
0.559156
Column (3) - 7.0012 0.5592
then Column (4) (iii)
Use Eq. 5.22 to normalize the flows.
then Column (5)
First, calculate
= - 0.0825
skewness of Column (3)
= _ 0.0825 6 [(-0.0 825 (Column (4)) 2 +
(iv)
First, calculate:
+
1) 1/3 -
1]
- 0.0825 6
Use Eq. 5.23 to generate standardized variates.
=-
calculate serial correlation of Column (5) Next, assume an initial value of vi;
First, 0.008.
zero is appropriate.
Hence, for a random normal variate (say
tl
1.0752)
=
0.008(0) + 1.0752 (1 - 0.008 2)t 1.0752;
for t
z=
-
0.0064, say, 0.008 (1.075Z) - 0.0064 (1 -
0.0082)~
0.0064. Note: (v)
The very low serial correlation means that the vi+1
Apply the inverse transformation Eq. 5.Z4. V
z
= {[-
0.~825
(v
2
- -
0.~825)
+
1r -
- 1.1053. From Eq. 5.25, x
exp
(y
+ v
2
s) y
exp (7.00121 - 1.1053 x 0.5592) 591.76
1} _
0.~825
t .. l
124
Hence the first generated flow in the sequence is 591.76 - 0.01 = 591.75 ~ 592. Subsequent flows are calculated in the same way.
(vi)
In normal application the first few generated flows would be discarded to remove the effect of the starting condition applied in (iv). TABLE 5.3
(1)
Flow x 10 6m3 (2)
Year
5.6
Derivation of normalized annual flows for the Mitta Mitta River. Loge(Flow + 0.01) (3)
Standardized Flows (4)
Normalized Flows (5)
1936
1553
7.3480
0.6202
0.6120
1937
650
6.4770
- 0.9374
- 0.9392
1969
1010
6.9177
- 0.1493
- 0.1639
TWO TIER MODEL Monthly models do not necessarily preserve the annual flow charac-
teristics.
To overcome this deficiency, Harms and Campbell (1967) extended
the Thomas and Fiering model to constrain the annual and monthly flows separately, and also to preserve the annual serial correlation. flows were generated by an annual Normal Markov model.
Annual
A Thomas and
Fiering log-normal model was used for monthly generation, but the values were adjusted to sum to the appropriate annual values by the following algori thm:
x ..
1J
x! .
-12-I x ..
1]
j =1
where
Qi
1J
x'.
adjusted monthly generated flow volumes,
X
unadjusted monthly generated flow volumes,
Qi
annual generated flow volumes,
i
year, and
1J
ij
month.
(5.26)
125
Results presented in the Harms and Campbell paper suggest that the model works well.
In cases where the annual flows are not normally dis-
tributed, a skewed distribution could be used in place of the normality assumption.
One minor drawback with this approach is that the method of
adjusting monthly data does not allow the monthly serial correlation coefficient from the end of one year to the beginning of the next to be preserved. 5.7
OTHER CONSIDERATIONS Many other considerations are involved in stochastic generation of
streamflow other than those aspects treated in this chapter, for example, correlograms, partial correlation functions, spectral analysis, and daily models.
Nevertheless, the annual and monthly models outlined above do
provide a basis for the practical application of data generation techniques to single site situations.
In this text it would be out of place to
consider in detail multi-site models but some background material is given in Chapter 7. This discussion would be incomplete without a brief comment on the use of so-called long memory models.
Three data generation models, the
ARIMA, Broken Line and Fractional Gaussian Noise models fall into this category.
They have been proposed as replacements for Markovian schemes in
order that the Hurst phenomenon (see Sec. 3.3.1) is preserved in the streamflow sequence.
In Markovian models the exponent
K in Eq. 3.4 tends to
0.5, yet in reality its observed mean value is about 0.72. But are these three models of importance to the practising water engineer?
High
K values in Eq. 3.4 imply long-term persistence and result
in longer and more extreme events in the flow sequence.
From Eq. 3.4, high
K values result in larger ranges than would otherwise occur, and consequently relatively larger storage sizes. important are these effects.
What is not clear is how
For example, Wallis and Matalas (1972)
observed that differences in storage estimates between the Markovian and fast Fractional Gaussian Noise models occurred only for drafts greater than 80%.
Yet Kottegoda (1970) found for British rivers that the Fractional
Gaussian Noise model gave unrealistically high estimates of storage if compared with those estimated from historical or Markovian sequences.
While
there are these and other differences in detail, the consensus of opinion in the literature suggests that high reliabilities.
K should be preserved for high drafts and
126
5.S
MODEL VERIFICATION AND PERFORMANCE Before a data generation model is used in storage-yield analysis it
is necessary to check not only that it satisfactorily reproduces the main statistical characteristics defining the streamflow process, but also that critical periods are being satisfactorily generated.
A validation procedure
for an annual or a monthly model might include the following tests (a sketch of test (i) follows the list):
(i)    comparison of the mean and variability of various statistics (annual and monthly means, standard deviations, coefficients of skewness and serial correlations) computed from many sets of generated data (each of length equal to that of the historical record) with the actual values of those statistics computed from the historical record;
(ii)   comparison of flow duration and frequency curves based on the generated data with the corresponding curves based on the historical record;
(iii)  comparison of correlograms based on monthly generated data with that derived from the historical record; and
(iv)   comparison of the mean and variability of reservoir storage estimates based on replicated generated data with estimates using historical data.
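The following fragment is a minimal sketch of test (i), assuming a hypothetical generate_replicate() function that returns one synthetic sequence of the same length as the historical record; it reports the mean and spread of the basic statistics over the replicates for comparison with the single historical value (cf. Table 5.4):

    import numpy as np

    def compare_statistics(historical, generate_replicate, n_replicates=25):
        # Compute mean, standard deviation, skew and lag-one serial correlation
        # for the historical record and for each generated replicate.
        def stats(x):
            x = np.asarray(x, dtype=float)
            skew = np.mean((x - x.mean())**3) / x.std()**3
            lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
            return np.array([x.mean(), x.std(), skew, lag1])

        hist_stats = stats(historical)
        gen_stats = np.array([stats(generate_replicate())
                              for _ in range(n_replicates)])
        # Return the historical values together with the mean and standard
        # deviation of each statistic over the replicates.
        return hist_stats, gen_stats.mean(axis=0), gen_stats.std(axis=0)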
The number of replicates of generated data that are required will vary with respect to streamflow variability.
Twenty-five are generally sufficient.
In the above tests, it should be noted that the historical values from (i) and the flow distribution (used in the flow duration comparison) in (ii) are part of the model structure, and, therefore, are more likely to be satisfactorily modelled than the other factors listed.
To illustrate the level of performance achieved with the Thomas and Fiering model, published results (McMahon et al., 1972a, 1972b, 1973) have been included here as Tables 5.4, 5.5 and 5.6.  In addition, results from a recently completed evaluation of a number of annual Markov models are available (R. Srikanthan, personal communication, 1977).
In Tables 5.4 to 5.6 historical monthly flows and storage-yield results are compared with results from generated data using the Thomas and Fiering monthly model incorporating three of the distributions discussed
TABLE 5.4
Comparison of historical and generated monthly and annual streamflow parameters.
[For each of four Australian streams - O'Shannassy 229103 (59 years), Gordon 308007 (36 years), Torrens 504501 (72 years) and Warragamba 212240 (72 years) - the table lists the monthly and annual mean, standard deviation (Mm3), coefficient of skewness and serial correlation of the historical record together with the corresponding values generated by the LGLT, LN-2 and LN-3 models.]
Note:  Generated parameters are means of 20 replicates of length shown.  Values in parentheses are standard deviations of generated parameters.
TABLE 5.5
Comparison of historical and generated monthly parameters for O'Shannassy River in Victoria

Statistic                  Model        J     F     M     A     M     J     J     A     S     O     N     D
Mean (Mm3)                 Historical   5.4   3.8   3.7   4.2   6.5   8.8  12.1  14.9  14.3  13.0  10.6   8.3
                           LGLT         5.4   3.8   3.7   4.2   6.7   8.8  12.1  14.9  14.3  13.0  10.5   8.1
                           LN-2         5.4   3.9   3.8   4.2   6.4   8.5  12.5  15.3  14.7  13.1  10.6   8.0
                           LN-3         5.7   4.4   3.8   4.2   6.4   8.5  13.3  16.9  14.3  13.3  10.7   7.5
Standard Deviation (Mm3)   Historical   2.0   1.1   1.2   1.9   3.7   5.1   5.2   5.7   5.2   4.9   4.7   4.8
                           LGLT         2.0   1.0   1.2   1.9   4.3   5.3   5.6   6.0   5.3   4.8   4.6   4.2
                           LN-2         2.3   1.5   1.4   1.7   3.3   4.6   6.3   6.8   6.2   5.1   4.3   3.9
                           LN-3         2.7   3.3   1.7   1.6   3.6   4.6   5.3  13.4   5.3   6.0   4.6   2.7
Skew                       Historical   1.2   0.5   0.8   1.6   1.7   2.0   0.9   0.4   1.0   0.7   0.9   3.4
                           LGLT         1.3   0.5   0.8   1.4   2.4   1.9   1.3   1.0   1.1   0.5   1.0   1.9
                           LN-2         1.2   1.0   0.8   1.1   1.3   1.4   1.4   1.2   1.2   0.9   1.0   1.3
                           LN-3         1.3   1.2   0.9   1.0   1.3   1.4   1.3   1.1   0.9   0.7   0.8   1.4
Serial Correlation         Historical   0.86  0.64  0.56  0.44  0.64  0.82  0.68  0.73  0.61  0.76  0.66  0.70
                           LGLT         0.86  0.69  0.59  0.39  0.77  0.87  0.67  0.72  0.77  0.77  0.73  0.90
                           LN-2         0.92  0.68  0.51  0.33  0.62  0.87  0.79  0.80  0.66  0.75  0.50  0.80
                           LN-3         0.98  0.81  0.42  0.40  0.61  0.94  0.94  0.73  0.75  0.74  0.33  0.85

Note:  Generated parameters are mean values of fifty replicates.
TABLE 5.6
Reservoir capacity estimates based on generated flows compared with historical values.
[For the O'Shannassy (229103), Gordon (308007, 36 years), Yarra (229103, 50 years), Torrens (504501, 72 years) and Warragamba (212240, 72 years) rivers, arranged by annual coefficient of variation (0.23 to 1.07), the table lists reservoir capacity estimates based on flows generated by the LGLT, LN-2 and LN-3 models.]
a  Median value based on ten replicates.
Note:  Storage estimates for conditions of 1% probability of failure and 50% draft are expressed as percentages of the long-term historical Gould values.  Generated storage estimates are means of 20 replicates of length shown.  Values in parentheses are standard deviations expressed as percentages of the appropriate mean value.
earlier - two and three parameter log-normal (denoted by LN-2 and LN-3) and the like Gamma distribution (LGLT).
In using the latter distribution a
logarithmic transformation was initially applied to the data.
These tables highlight a number of points:
(i)
In Tables 5.4 and 5.5, except for LGLT for Warragamba and the LN-3 August standard deviation, the annual and monthly means and standard deviations compare well with the historical estimates.
(ii)
Coefficients of skewness are generally too high but are considered satisfactory except for the LGLT monthly estimate for Warragamba.
Seasonal historical skews are erratic and
the variations are poorly modelled (Table 5.5).
Some workers
recommend smoothing seasonal coefficients of skewness (and other moments) prior to analysis (Beard, 1965).
(iii)
Monthly serial correlations are well modelled (Tables 5.4 and 5.5) yet high annual serial correlations are poorly simulated, for example, Warragamba in Table 5.4.
This
inadequacy is typical of monthly Markov models and can be overcome by using the two tier approach outlined in Sec. 5.6.
(iv)
In Table 5.6 reservoir capacity estimates using Gould's procedure (Sec. 4.6) and based on generated flows are compared with historical values.  The results show large discrepancies.  Overall, the storage values from the LGLT model compare most favourably with the historical estimates.  At least two of the storage estimates for the LN-3 model deviate considerably from their historical values.  The LN-2 model shows even greater discrepancies and is considered satisfactory for only one river.
From this analysis, it can be seen that, using the historical values as a basis of comparison, the LGLT model shows least variation yet exhibits some unsatisfactory parameter estimates.
On the other hand, the LN-2 and
LN-3 models reproduce the parameter values, but deviate markedly from the historical storage estimates.
Thus no model is wholly satisfactory.
Tables 5.7, 5.8 and 5.9 deal with a more detailed evaluation of Markov models than that discussed above, although the evaluation was restricted to the annual lag one type (R. Srikanthan, personal communication, 1977).
In all, 16 rivers* which represent the range of streamflow variability encountered across the Australian continent were examined.
Up to
seven variations of model distributions were considered and for each case 5000 years of data were generated.
In addition to the parameters and characteristics listed at the beginning of this section, results were examined for the range, Hurst's H and K exponents, run lengths, extreme events, spectral values and distribution types.
* In Table 5.7, 5.8 and 5.9, the names and national stream gauging numbers refer to the following Australian rivers as follows:
(1) King (309001),
(2) Wilmot (315003), (3) South Johnstone (112101), (4) Yarra (229103), (5) Murray (401201), (6) South Esk (318001), (7) Wungong (615071), (8) Serpentine (615074), (9) Loddon (407203), (10) Torrens (504501), (11) Ord, (809302), (12) Peel (419004), (13) Warragamba (212240), (14) Burnett (136001), (15) Wide Bay Creek (138002) and (16) Goulburn (21006) .
TABLE 5.7
Comparison of historical parameters with generated parameters based on annual flows generated using an annual Markov model.
[For each of the 16 rivers (reference nos. 1 to 16), the historical mean annual flow, coefficient of variation, coefficient of skewness and serial correlation are listed together with the values obtained from each model distribution; generated values falling outside the adopted tolerance of the historical value are circled in the original table.]
where   N = Normal distribution         K = Kirby's transformation          B = Beard's method
        W = Weibull                     LN-2 = Two parameter log normal
        G = like Gamma distribution     LN-3 = Three parameter log normal
TABLE 5.8
Percentage of negative flows in generated sequences from annual Markov model and their effect on mean flow estimate.
[For the streams with an annual coefficient of variation greater than 0.7, the table lists, for each model distribution, the percentage of negative flows generated and the percentage increase in the mean flow when those flows are set to zero.]
* For definition of symbols, see Table 5.7.
The characteristics compared in Tables 5.7 to 5.9 include the mean flow (x̄), the annual coefficients of variation (Cv) and skewness (Cs), the annual serial correlation coefficient (r), storage capacities for 50% and 90% draft rates (S50 and S90) using the Sequent Peak method, the rank one two-year and ten-year consecutive low flows (Σ2 and Σ10), the percentage of generated negative flows and the percentage increase in mean in setting them to zero.
In all, seven distributions or variations are examined - Normal (N), two and three parameter log-normal (LN-2 and LN-3), like Gamma (G), Kirby's modification (K), Beard's normalizing procedure (B) and a Weibull distribution (W).  The values given in Tables 5.7 to 5.9 are averages based on replicates equal in length to the historical records and equivalent to 5000 years of flow.  To assist in evaluating the performance of the various distributions, generated values outside ± 5%, ± 10% or ± 25% of the appropriate historical values are circled in the three tables.  These limits indicate an arbitrary level of performance.
TABLE 5.9
Comparison of historical low flow and storage characteristics with those based on annual flows generated using an annual Markov model (in units of mean annual flow).
[For each of the 16 rivers, the historical values of S50, S90, Σ2 and Σ10 are listed together with the values based on the generated flows for each model distribution; generated values falling outside the adopted tolerance of the historical value are circled in the original table.]
For definition of symbols, see Table 5.7.
6.3
Overall the models preserved the mean and coefficient of variation satisfactorily, but the coefficient of skewness was poorly modelled for the two parameter log-normal distribution and Beard's normalizing procedure.  The overestimation by LN-2 is expected because the implied skewness given in Eq. 5.20 is always considerably greater than the observed skewness for Cv greater than unity (McMahon, 1975, p. 384).  The fourth input parameter to the models - serial correlation - is well preserved except for Beard's model, which underestimates all but one historical value.
A further relevant
factor relates to the proportion of negative generated flows and their effect on the mean.
This is shown in Table 5.8 for the eight streams with
a coefficient of variation greater than 0.7.
Those with a coefficient of
variation less than 0.7 generate less than one percent negative flows, and were not tabulated.
The table shows that except for the two parameter log-
normal distribution and Beard's model, all others generate too many negative flows and are regarded as being unsatisfactory.
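A Table 5.8-style check is simple to state in code.  The sketch below (illustrative only; the function name is hypothetical) reports the percentage of negative values in a generated sequence and the percentage increase in the mean that results from setting them to zero:

    import numpy as np

    def negative_flow_summary(generated):
        # Percentage of negative generated flows, and the percentage increase
        # in the mean caused by setting those flows to zero (cf. Table 5.8).
        x = np.asarray(generated, dtype=float)
        pct_negative = 100.0 * np.mean(x < 0.0)
        pct_mean_increase = 100.0 * (np.maximum(x, 0.0).mean() - x.mean()) / x.mean()
        return pct_negative, pct_mean_increase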
From Tables 5.7 and 5.8 we conclude that, for annual data generation, the most suitable model for streams with an annual coefficient of skewness of less than 0.3 is the like Gamma; for other streams with an annual coefficient of variation of less than 0.7, the three parameter log-normal or like Gamma models are suitable; for streams with larger variability, the two parameter log-normal model is recommended.
Unlike the generated parameter values, which should approach the historical values input into the model, the correct generated values for other characteristics such as low flow values and storage capacities are unknown.
Nevertheless, if one takes several sets of flows drawn from a
range of geographic, climatic and hydrologic regions as in Table 5.9, estimates of generated characteristics based on historical input parameters should be a reasonable approximation overall of the historically observed characteristics.
Accepting this approach and examining the two storage values (S50 and S90) and the 2 year and 10 year low flow sums (Σ2 and Σ10) in Table 5.9, it is concluded that Beard's procedure performed the most satisfactorily, followed by the two and three parameter log-normal distributions.
Generally all the models overestimated the historical storage
values but Beard's estimates are on the average no more than 10% different to the historical estimates.
This is considered to be very satisfactory.
For streams with low skewness (say Cs < 0.3) the Gamma model is recommended.
This detailed review confirms our observations regarding the results in Tables 5.4, 5.5 and 5.6, namely that no model is wholly satisfactory for all purposes.  Consequently, if one is using a data generation model in practice, one needs to understand clearly the objectives of the study, which should influence model choice.
5.8.1  Unrepresentative Streamflow Data
Input parameters to generating models are based on historical data.
If the parameters are not representative of the flow population, generating more data will not improve the relevant information.
For this reason it is important to use the "best" estimates of the parameters.  Modellers should be wary of bias in historical data.  If bias is suspected, it may be necessary to employ regional techniques to estimate the parameters (Benson and Matalas, 1967).  A similar approach is recommended for ungauged catchments.
5.9  SIMULATION
Simulation analysis is defined as "... a process which duplicates the essence of a system or activity without actually attaining reality itself" (Hufschmidt and Fiering, 1967).
This involves developing an algebraic model
of all the inherent characteristics and probable responses of the system to an operating rule.
Simulation analysis has been described as a "brute force" fitting technique (Fiering, 1961); however, planners are often forced to use the method to deal effectively with large and complex systems that become intractable with analytical techniques.
Digital simulation does have several limitations which need to be recognised:
(i)   Where a large number of variables has to be optimised, it may become infeasible to examine all possible combinations (Dorfman, 1965).
(ii)  It is a trial and error approach that does not necessarily lead to an optimal solution (Maass et al., 1962).
Notwithstanding these limitations, simulation analysis using stochastically generated data as input is a very powerful tool, particularly for a large complex system; aspects relating to multi-reservoir systems are dealt with in Chapter 7.
5.9.1  When and How to use Generated Data
In reservoir capacity-yield analysis, data generation procedures should
be used to provide alternative yet equally likely flow sequences to the historical one.
In a large number of generated sequences, some will contain
less severe droughts than the historical record and some more severe.
When
the sequences are used in simulation or behaviour studies with a range of assumed storages and demand values a quantitative picture of the probability of failure-storage-yield relation can be built up. The point is best illustrated by an example.
If a behaviour analysis
(see Sec. 3.2.3) is carried out on the Mitta Mitta River (Appendix E) for various combinations of draft and probability of failure a series of curves will be obtained (Fig. 5.3).
From such a diagram the storage required for
any given draft and probability of failure can be obtained;
for example,
for 75% draft and 5% probability of failure, the storage required is 760 x 10^6 m3.
[Fig. 5.3  Behaviour analysis for the Mitta Mitta River: storage plotted against draft, with curves for 2%, 5% and 10% probabilities of failure.]
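As a rough illustration of how such curves are built up (the routine below is a simplified behaviour analysis at monthly steps with a constant draft, not the full procedure of Sec. 3.2.3; the function names are hypothetical), each generated sequence is routed through a trial capacity and the percentage of failure periods is recorded:

    def probability_of_failure(flows, capacity, draft, start_full=True):
        # Behaviour (simulation) analysis with a constant draft per period:
        # the reservoir is drawn down when inflow is less than draft, spills
        # when full, and a failure is counted whenever the full draft cannot
        # be supplied.
        storage = capacity if start_full else 0.0
        failures = 0
        for inflow in flows:
            storage = min(storage + inflow - draft, capacity)
            if storage < 0.0:
                failures += 1
                storage = 0.0
        return 100.0 * failures / len(flows)

    # With many generated sequences (generate_sequence is assumed), the spread
    # of failure probabilities for a trial capacity and draft can be examined:
    # results = [probability_of_failure(generate_sequence(), capacity=760e6,
    #                                   draft=0.75 * 1274e6 / 12.0)
    #            for _ in range(100)]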
0.85, which can occur at low
This unrealistic result can occur for other
As will be noted in Sec. 6.3, the procedure tends to
underestimate capacity for the smaller reservoirs.
EXAMPLE 5.4
Use Gould's Synthetic Data procedure to estimate the storage required to meet 75% draft with a 5% probability of failure for the Mitta Mitta River (Appendix E).
*    *    *    *    *
Following the steps given in Sec. 5.10.1:
(i)    From Table E3 for annual flows:
           Mean                 =  1274 (x 10^6 m3)
           Standard deviation   =   731 (x 10^6 m3)
           Coeff. of skewness   =   1.5
           Serial correlation   =   0.06
(ii)   Modify probability.  From Eq. 5.29:
           P = (P' - 12r)/(1.7r + 1)
             = (5 - 12 x 0.06)/(1.7 x 0.06 + 1)
             = 4.28/1.1
             = 3.9%
(iii)  From Eq. 5.27:
           k1 = (mean flow - draft)/standard deviation
              = (1274 - 0.75 x 1274)/731
              = 0.44
(iv)   From Fig. 5.6:   k2 (P = 3.9%, skew = 1.0) = 2.0
       From Fig. 5.7:   k2 (P = 3.9%, skew = 2.0) = 1.3
       Interpolating:   k2 (P = 3.9%, skew = 1.5) = (2.0 + 1.3)/2 = 1.65
(v)    From Eq. 5.28:
           Reservoir capacity = standard deviation x k2
                              = 731 x 10^6 x 1.65
                              = 1206 x 10^6 m3
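The arithmetic of Example 5.4 can be collected into a small routine.  In the sketch below the chart reading of step (iv) is represented by a user-supplied k2_lookup function, since Figs. 5.6 and 5.7 cannot be evaluated in code; everything else follows Eqs. 5.27 to 5.29 as used above (the function and argument names are illustrative only):

    def gould_synthetic_storage(mean_flow, std_dev, skew, serial_r,
                                draft_ratio, design_prob, k2_lookup):
        # (ii) modify the design probability of failure for serial correlation (Eq. 5.29)
        p = (design_prob - 12.0 * serial_r) / (1.7 * serial_r + 1.0)
        # (iii) draft ratio k1 (Eq. 5.27)
        k1 = (mean_flow - draft_ratio * mean_flow) / std_dev
        # (iv) storage ratio k2, read from the published curves for this p, k1 and skew
        k2 = k2_lookup(p, k1, skew)
        # (v) reservoir capacity (Eq. 5.28)
        return std_dev * k2

    # Example 5.4 values for the Mitta Mitta River; the lookup simply returns
    # the value interpolated by hand from Figs. 5.6 and 5.7:
    capacity = gould_synthetic_storage(mean_flow=1274e6, std_dev=731e6, skew=1.5,
                                       serial_r=0.06, draft_ratio=0.75,
                                       design_prob=5.0,
                                       k2_lookup=lambda p, k1, skew: 1.65)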
5.10.2  Guglij's and Svanidze's Synthetic Data Procedures
Guglij and Svanidze used Monte Carlo models and a modification of the Pearson Type III distribution known as the Kritskii-Menkel curve (Kartvelishvili, 1969, p. 35) to generate long sequences of annual flows (1000 years or more) for various combinations of coefficients of variation,
skewness and serial correlation.
These data sequences were used to deter-
mine finite storage capacities (as ratios of mean annual flow) for given values of draft and reliability.
Guglij results are summarized in 66 graphs
(provided by S. Selvalingam, personal communication, 1976).
Svanidze
includes more details in his 120 graphs (Kartvelishvili, 1969, Appendix VII). But both sets of results are of use only for streams with low variability less than 0.4 or 0.8 for Svanidze's curves and 1.4 for Guglij's relations. Moreover, their usefulness for high drafts is limited because the computed storage sizes are limited to values less than 2.8.
These relationships
include annual serial correlation as a parameter, which is not available in Gould's Synthetic Data curves.  It will be noted in Sec. 6.3 that reservoir capacity estimates from
these procedures are not as satisfactory as other estimates;
consequently
other procedures are recommended before these.
5.11  NOTATION

a_j          number of days in month
A            variable in Kirby's transformation (Eq. 5.10)
A_j          location parameter in three parameter log-normal distribution
B            variable in Kirby's transformation (Eq. 5.10)
B            Beard's procedure in Tables 5.7, 5.8 and 5.9
b_j          regression coefficient for estimating (j+1)th flow
B_j          log transformed regression coefficient
C_v          coefficient of variation
C_s          coefficient of skewness
CL           confidence limits
g            coefficient of skewness of standardized log flows (Sec. 5.5.3)
g_j          coefficient of skewness of flows during jth season
G            like-Gamma distribution in Tables 5.7, 5.8 and 5.9
G            variable in Kirby's transformation (Eq. 5.10)
G(0,1,γ)     Gamma distribution with zero mean, unit variance and skew = γ
H            variable in Kirby's transformation (Eq. 5.11)
k            lag between flow events under analysis
k1           draft ratio defined as mean less draft, divided by standard deviation of flow
k2           storage ratio defined as reservoir capacity divided by standard deviation of flow
K            Kirby's procedure in Tables 5.7, 5.8 and 5.9
K_t          correlation component of the streamflow process
LGLT         like-Gamma Logarithmic Transformation model (Sec. 5.5.1)
LN-2         two parameter log-normal model
LN-3         three parameter log-normal model
N            Normal distribution in Tables 5.7, 5.8 and 5.9
N            number of flow events
N(0,1)       normal distribution with mean zero and variance equal to one
P            probability of failure or probability of occurrence
P'           design probability of failure in Gould's synthetic data procedure (Sec. 5.10.1)
Q_i          annual generated flow volumes
r            annual serial correlation coefficient
             lag one serial correlation of normalized variates (Sec. 5.5.3)
r_j          serial correlation between (j+1)th and jth months
r_k          lag k serial correlation coefficient
r_1          lag one serial correlation coefficient
R_j          log transformed serial correlation coefficient
s            standard deviation
s_j          standard deviation of monthly flows for jth month
s_y          standard deviation of log_e(x + ε) flows
S_j          log transformed standard deviation of flows for jth season
S_t          seasonal component of the streamflow process
S50, S90     storage capacities for 50 and 90% draft rate using the sequent peak procedure
t            time
t_i          normal random variate
G(0,1,γ_t,j) like-Gamma variate
             Kirby's modified like-Gamma variate (Eq. 5.10)
T_t          trend component of the streamflow process
             standardized log flow values (Sec. 5.5.3)
W            Weibull distribution in Tables 5.7, 5.8 and 5.9
x_ij         unadjusted monthly generated flow volumes (Eq. 5.26)
x_i          flow volumes during ith period
x_t          monthly flow
x'_ij        adjusted monthly generated flow volumes (Eq. 5.26)
x'_t         standardized monthly flow
x̄            mean flow
X_j, X_j+1   mean monthly flow during jth and (j+1)th season
X_t          flow in tth period
Y_i+1, Y_i   generated flow logarithms for (i+1)th and ith seasons
Y_j          log transformed mean flow for jth season
Ȳ            mean of log_e(x + ε) flows
y            log_e(x + ε)
z_a, z_p     standardized normal variates
γ_j, γ_j+1   seasonal coefficients of skewness for jth and (j+1)th seasons
γ_t,j        coefficient of skewness of the like-Gamma variate
ε            small increment
E_t          random component in streamflow process
v            normalized flow values
Σ2, Σ10      sum of the rank one 2 and 10 year consecutive low flows respectively
CHAPTER 6
QUANTITATIVE ASSESSMENT OF CAPACITY-YIELD TECHNIQUES FOR SINGLE RESERVOIRS
Over the past several years, the authors have been involved in four comparative studies using Australian streams in which reservoir capacity-yield techniques have been assessed.  Results of three of the studies have been published (Joy, 1970; Joy and McMahon, 1972; Codner and McMahon, 1973; McMahon and Codner, 1973; McMahon, Codner and Joy, 1973); the fourth will be published shortly (C.H. Teoh, personal communication, 1977).
In all of the above studies the storage estimates provided by the behaviour analysis method (with the reservoir initially full) were taken as the benchmark to which other methods could be compared.
It was selected
for this purpose because of its current widespread use by water authorities as a final design technique, and because it suffers from only one major drawback - that of dependence of storage estimates on the initial condition chosen.
6.1  CRITICAL PERIOD AND PROBABILITY MATRIX METHODS
The first study was concerned with illustrating quantitatively the
inadequacies in some of the techniques discussed in Chapters 3 and 4 by applying them to six Australian rivers on which major dams already existed or were proposed.
The rivers and their flow characteristics are given in
Table 6.1. In Figs. 6.1 and 6.2 storage estimates at 50% and 90% drafts respectively for the six rivers are compared among the critical period techniques - minimum flow (Waitt), Alexander's method corrected for the assumption of zero serial correlation, overlapping series mass curve frequency method (Thompson), independent series mass curve frequency method (Stall) and behaviour estimates for 5% probability of failure.
(Note that
to obtain a suitable scale, storages have been standardised by dividing by the standard deviation of monthly flows.
Reference numbers refer to the
rivers listed in Table 6.1.)
6.1.1  Mass Curves and Minimum Flow (Waitt)
From Figs. 6.1 and 6.2 it is seen that the Waitt technique (excluding the additional one year's supply) which is equivalent to a mass curve
TABLE 6.1
Rivers investigated in evaluation of critical period and probability matrix methods.

                                                                    Annual                              Monthly
No.  River (Australian Streamgauging   Area      Period of   Mean   Coeff.   Coeff.   Serial   Coeff.   Coeff.   Serial
     Station Reference Number)         (sq. km)  Record      (mm)   of Var.  of Skew  Correl.  of Var.  of Skew  Correl.
1    Yarra (229103)                        334   1892-1968    540   0.40     0.77     0.12     1.00     1.77     0.62*
2    Murrumbidgee (410008)               13000   1927-1960    128   0.85     1.99     0.08     1.39     3.66     0.60*
3    Lachlan (412010)                     8290   1931-1967    110   1.09     1.89     0.14     1.79     3.42     0.61*
4    Warragamba (212240)                  8750   1881-1959    122   1.11     2.67     0.30*    2.14     4.71     0.45*
5    Namoi (419007)                       5700   1924-1960     76   1.06     1.96     0.23     1.97     4.22     0.42*
6    Burdekin (120090)                  114000   1915-1950     57   1.00     1.72     0.07     2.86     5.05     0.41*

* Values significantly different from zero at the 5% level.
FIG. 6.1  Comparison of critical period storage estimates (ordinate) with historical behaviour estimates (abscissa) at 50% draft and 5% probability of failure.
the additional one year's supply), which is equivalent to a mass curve analysis, overestimates the behaviour storage values computed using all other critical period techniques.  This is to be expected as the approach allows only one failure, whereas in the other techniques we have adopted a 5% probability of failure criterion.
6.1.2  Alexander's Method
It was suggested earlier (Sec. 3.4.2) that Alexander tends to under-
estimate the behaviour storage for short critical periods (neglect of seasonality) and to approximate it for long critical periods.
From Fig. 6.1
it is seen that the Alexander method underestimates the behaviour storage
for rivers 1, 3, 4, 5 and 6, but overestimates it for river 2, possibly because the flows for 2 are not adequately described by a two-parameter Gamma distribution.
For long critical periods, as expressed in Table 6.2,
the behaviour storages are closely approximated except for river 4.
FIG. 6.2  Comparison of critical period storage estimates (ordinate) with historical behaviour estimates (abscissa) at 90% draft and 5% probability of failure.
6.1.3  Overlapping Series Frequency Mass Curve Method (Thompson)
Earlier it was shown (Sec. 3.4.5) that the relationship of the
Thompson storage estimate to the behaviour estimate depends on the clustering effect of sequences of the same duration, which leads to overestimation of the behaviour storage results.  This is significant for long critical periods.
At short critical periods, due to the neglect of repeated
failures, it can be shown theoretically that Thompson's method tends to underestimate the required behaviour storage. These views are confirmed in practice by Fig. 6.1 which shows that the Thompson procedure underestimates the behaviour storage for rivers 1, 2, 5 and 6 and overestimates it for rivers 3 and 4.
From Table 6.2 it is
seen that rivers 1, 2, 5 and 6 have short critical periods whereas rivers 3 and 4 have long ones. For 90% draft, the duration of Thompson's critical period approaches that of Waitt for rivers 2, 3, 4, 5 and 6.
In this investigation, the
TABLE 6.2
Duration of critical periods for storages designed for 5% probability of failure (years).
[For each of the six rivers of Table 6.1, the duration of the critical period is listed at 50% and 90% draft for the mass curve and Waitt, minimum flow with probability (Alexander), overlapping series (Thompson) and independent series (Stall) methods.]
* As listed in Table 6.1. maximum duration of critical periods examined for Thompson's procedure was 12 years. ri vers .
This was insufficient to define Thompson's storage for these rivers.  However, for river 1 repeated failure compensates the clustering
effect and so the Thompson method approximates the behaviour storage. In view of these inadequacies, we recommend that this procedure should not be used. 6.1.4
Independent Series Frequency Mass Curve Method (Stall)
It was shown earlier that Stall's method tends to overestimate the
required behaviour storage for short critical periods and to underestimate it for long critical periods.  This is confirmed by the empirical results of Figs. 6.1 and 6.2.  Table 6.2 contains the durations for Stall's critical periods.
From Fig. 6.1 it is seen that Stall's procedure overestimates the required storage for rivers 1 and 5, and underestimates it for rivers 2, 3, 4 and 6.
From Table 6.1, rivers 1 and 5 have short critical periods while
rivers 2, 3, 4 and 6 have longer critical periods.
152
At 90% draft, Stall's procedure underestimates all the behaviour storages.
From Table 6.1 we see that all rivers have long critical periods,
except for river I in which Stall and behaviour estimates are very similar. Because of these limitations we recommend that Stall's procedure should not be used. 6.l.5
Gould's Probability
~latrix
Method
This is the only method in the second group of storage-yield techniques for which there are comparative results.
The limitations of Moran's
discrete probability matrix models were discussed in detail in Chapter 4. In Figs. 6.3 and 6.4 results from Gould's probability matrix method (as proposed by Gould and also as modified to cater for monthly failures rather than annual failures) are compared with behaviour estimates for conditions of 50% and 90% drafts and 5% probability of failure. In both Figs. 6.3 and 6.4 the original Gould method overestimates the behaviour storage.
On the other hand, the modified Gould method fits the
behaviour relationship satisfactorily. 8 6
Gould
•
Modified Gould (4)
6.
6
• (6)
(5)
6. 6.
4 (2)
•
•
•
6.
(ll
./ • 6.
2
O~---------.----------.----------'r---------~
o
FIG. 6.3  Comparison of Gould's probability matrix storage estimates (ordinate) with historical behaviour estimates (abscissa) at 50% draft and 5% probability of failure.
(4) to. (3)
to.
60
(5) (2)
•
to. to.
•
•
40
(6)
•
to.
s/ () 20
(ll
•
6
•
6.
Gould
•
Modified Gould
O~-----------'~-----------r------------~----~
o
FIG. 6.4  Comparison of Gould's probability matrix storage estimates (ordinate) with historical behaviour estimates (abscissa) at 90% draft and 5% probability of failure.
6.1.6
Further Comparison of Gould and Behaviour Methods
A second study, which was aimed at establishing empirical equations
relating storage to other flow parameters, involved 156 Australian streams (Sec. 4.7.1, McMahon, 1977).
In the analysis for each stream, storage
sizes required to meet conditions of 5% probability of failure and 50% and 90% drafts were computed using Gould's modified approach (20 zones) and two
behaviour approaches, one assuming the reservoir to be initially full and the other initially empty.
Storage estimates expressed as ratios of the
mean annual flow are plotted in Figs. 6.S and 6.6 which represent respectively 50% and 90% drafts. Overall, the fit at 50% draft is satisfactory with the behaviour empty case tending to overestimate the steady state storage requirement for the more variable streams.
In other words, the initial emptiness state is
significant for the conditions examined.
At 90% draft, there is a tendency
for steady state storage need to be underestimated for the behaviour full case as a result of the initially full assumption.
0.005
0.01
0.025
0.05
Gould Estimate
FIG. 6.5  Comparison of behaviour results (initially full and empty) with modified Gould estimates for 156 Australian streams at 50% draft and 5% probability of failure.
It is believed that in both cases the effect of ignoring annual serial correlation in Gould's procedure is masked by the various inadequacies in the behaviour approaches.
The results confirm the modified Gould pro-
cedure as being analytically superior to the behaviour approaches. 6.1.7
Summary From these two studies we conclude that the modified Gould technique
is a suitable analytical storage-yield procedure for final design.
In
addition, it is noted that the behaviour analysis, which was used as a basis for comparison, gave results consistent with theoretically acceptable procedures so long as the effect of initial conditions is recognized.
/
/
O.l~____~~I__~~I~__~IL-____~~l
0.1
0.25
0.5
1.0
Gould Estimate
FIG. 6.6  Comparison of behaviour results (initially full and empty) with modified Gould estimates for 156 Australian streams at 90% draft and 5% probability of failure.
6.2  CAPACITIES BASED ON STOCHASTIC DATA GENERATION
By generating several streamflow sequences, all with the same
statistical characteristics and all assumed to be equally likely to occur, the designer is able to determine something about the precision of his storage estimate.
This result contrasts strongly with methods like the
)ehaviour analysis using historical data in which only one value is )btained;
nothing can be deduced about the distribution of this single
,stimate. A study of three rivers (numbers 1,4 and 5 in Table 6.1) involved :omparing historical behaviour and modified Gould estimates with those
25
f-'
Uo
1.0
5.1
1.14
~
80% bond 0.8
/ /
Range~
",
\
\
/ /
M
\
/
\
/
\
\
M
\
E
.... ....
"",,,,,,,""
I
E 3
I
.... ....
0.6
.... ....
01
'-
•
.0
~~..