DATA HANDLING IN SCIENCE AND TECHNOLOGY - VOLUME 4
Advanced scientific computing in BASIC with applications in chemistry, biology and pharmacology
DATA HANDLING IN SCIENCE AND TECHNOLOGY
Advisory Editors: B.G.M. Vandeginste, O.M. Kvalheim and L. Kaufman

Volumes in this series:
Volume 1  Microprocessor Programming and Applications for Scientists and Engineers by R.R. Smardzewski
Volume 2  Chemometrics: A Textbook by D.L. Massart, B.G.M. Vandeginste, S.N. Deming, Y. Michotte and L. Kaufman
Volume 3  Experimental Design: A Chemometric Approach by S.N. Deming and S.N. Morgan
Volume 4  Advanced Scientific Computing in BASIC with Applications in Chemistry, Biology and Pharmacology by P. Valkó and S. Vajda
DATA HANDLING IN SCIENCE AND TECHNOLOGY - VOLUME 4
Advisory Editors: B.G.M. Vandeginste, O.M. Kvalheim and L. Kaufman
Advanced scientific computing in BASIC with applications in chemistry, biology and pharmacology
P. VALKÓ, Eötvös Loránd University, Budapest, Hungary
S. VAJDA, Mount Sinai School of Medicine, New York, NY, U.S.A.
ELSEVIER Amsterdam - Oxford - New York - Tokyo
1989
ELSEVIER SCIENCE PUBLISHERS B.V., Sara Burgerhartstraat 25, P.O. Box 211, 1000 AE Amsterdam, The Netherlands

Distributors for the United States and Canada:
ELSEVIER SCIENCE PUBLISHING COMPANY INC., 655 Avenue of the Americas, New York, NY 10010, U.S.A.

ISBN 0-444-87270-1 (Vol. 4) (software supplement ISBN 0-444-87217-X)
ISBN 0-444-42408-3 (Series)

© Elsevier Science Publishers B.V., 1989
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher, Elsevier Science Publishers B.V. / Physical Sciences & Engineering Division, P.O. Box 330, 1000 AH Amsterdam, The Netherlands.

Special regulations for readers in the USA - This publication has been registered with the Copyright Clearance Center Inc. (CCC), Salem, Massachusetts. Information can be obtained from the CCC about conditions under which photocopies of parts of this publication may be made in the USA. All other copyright questions, including photocopying outside of the USA, should be referred to the publisher.

No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Although all advertising material is expected to conform to ethical (medical) standards, inclusion in this publication does not constitute a guarantee or endorsement of the quality or value of such product or of the claims made of it by its manufacturer.

Printed in The Netherlands
CONTENTS

1 COMPUTATIONAL LINEAR ALGEBRA
1.1 Basic concepts and methods
1.1.1 Linear vector spaces
1.1.2 Vector coordinates in a new basis
1.1.3 Solution of matrix equations by Gauss-Jordan elimination
1.1.4 Matrix inversion by Gauss-Jordan elimination
1.2 Linear programming
1.2.1 Simplex method for normal form
1.2.2 Reducing general problems to normal form. The two-phase simplex method
1.3 LU decomposition
1.3.1 Gaussian elimination
1.3.2 Performing the LU decomposition
1.3.3 Solution of matrix equations
1.3.4 Matrix inversion
1.4 Inversion of symmetric, positive definite matrices
1.5 Tridiagonal systems of equations
1.6 Eigenvalues and eigenvectors of a symmetric matrix
1.7 Accuracy in algebraic computations. Ill-conditioned problems
1.8 Applications and further problems
1.8.1 Stoichiometry of chemically reacting species
1.8.2 Fitting a line by the method of least absolute deviations
1.8.3 Fitting a line by minimax method
1.8.4 Analysis of spectroscopic data for mixtures with unknown background absorption
1.8.5 Canonical form of a quadratic response function
1.8.6 Euclidean norm and condition number of a square matrix
1.8.7 Linear dependence in data
1.8.8 Principal component and factor analysis
References

2 NONLINEAR EQUATIONS AND EXTREMUM PROBLEMS
2.1 Nonlinear equations in one variable
2.1.1 Cardano method for cubic equations
2.1.2 Bisection
2.1.3 False position method
2.1.4 Secant method
2.1.5 Newton-Raphson method
2.1.6 Successive approximation
2.2 Minimum of functions in one dimension
2.2.1 Golden section search
2.2.2 Parabolic interpolation
2.3 Systems of nonlinear equations
2.3.1 Wegstein method
2.3.2 Newton-Raphson method in multidimensions
2.3.3 Broyden method
2.4 Minimization in multidimensions
2.4.1 Simplex method of Nelder and Mead
2.4.2 Davidon-Fletcher-Powell method
2.5 Applications and further problems
2.5.1 Analytic solution of the Michaelis-Menten kinetic equation
2.5.2 Solution equilibria
2.5.3 Liquid-liquid equilibrium calculation
2.5.4 Minimization subject to linear equality constraints: chemical equilibrium composition in gas mixtures
References

3 PARAMETER ESTIMATION
3.1 Fitting a straight line by weighted linear regression
3.2 Multivariable linear regression
3.3 Nonlinear least squares
3.4 Linearization, weighting and reparameterization
3.5 Ill-conditioned estimation problems
3.5.1 Ridge regression
3.5.2 Overparametrized nonlinear models
3.6 Multiresponse estimation
3.7 Equilibrating balance equations
3.8 Fitting error-in-variables models
3.9 Fitting orthogonal polynomials
3.10 Applications and further problems
3.10.1 On different criteria for fitting a straight line
3.10.2 Design of experiments for parameter estimation
3.10.3 Selecting the order in a family of homologous models
3.10.4 Error-in-variables estimation of van Laar parameters from vapor-liquid equilibrium data
References

4 SIGNAL PROCESSING
4.1 Classical methods
4.1.1 Interpolation
4.1.2 Smoothing
4.1.3 Differentiation
4.1.4 Integration
4.2 Spline functions in signal processing
4.2.1 Interpolating splines
4.2.2 Smoothing splines
4.3 Fourier transform spectral methods
4.3.1 Continuous Fourier transformation
4.3.2 Discrete Fourier transformation
4.3.3 Application of Fourier transform techniques
4.4 Applications and further problems
4.4.1 Heuristic methods of local interpolation
4.4.2 Processing of spectroscopic data
References

5 DYNAMICAL MODELS
5.1 Numerical solution of ordinary differential equations
5.1.1 Runge-Kutta methods
5.1.2 Multistep methods
5.1.3 Adaptive step size control
5.2 Stiff differential equations
5.3 Sensitivity analysis
5.4 Quasi steady state approximation
5.5 Estimation of parameters in differential equations
5.6 Identification of linear systems
5.7 Determining the input of a linear system by numerical deconvolution
5.8 Applications and further problems
5.8.1 Principal component analysis of kinetic models
5.8.2 Identification of a linear compartmental model
References

SUBJECT INDEX
INTRODUCTION

This book is a practical introduction to scientific computing and offers BASIC subroutines, suitable for use on a personal computer, for solving a number of important problems in the areas of chemistry, biology and pharmacology. Although our text is advanced in its category, we assume only that you have the normal mathematical preparation associated with an undergraduate degree in science, and that you have some familiarity with the BASIC programming language. We obviously do not encourage you to perform quantum chemistry or molecular dynamics calculations on a PC; these topics are not even considered here. There are, however, important information handling tasks that can be performed very effectively. A PC can be used to model many experiments and provide information on what should be expected as a result. In the observation and analysis stages of an experiment it can acquire raw data and, by exploring various assumptions, aid the detailed analysis that turns raw data into timely information. The information gained from the data can be easily manipulated, correlated and stored for further use. Thus the PC has the potential to be the major tool used to design and perform experiments, capture results, analyse data and organize information.

Why do we use BASIC? Although we disagree with strong proponents of one or another programming language who challenge the use of anything else on either technical or purely emotional grounds, most BASIC dialects certainly have limitations. First, owing to the lack of local variables it is not easy to write multilevel, highly segmented programs. For example, in FORTRAN you can use subroutines as "black boxes" that perform some operations in a largely unknown way, whereas programming in BASIC requires you to open these black boxes up to a certain degree. We do not think, however, that this is a disadvantage for the purpose of a book supposed to teach you numerical methods. Second, BASIC is an interpretive language, not very efficient for programs that do a large amount of "number crunching" or programs that are to be run many times. But the loss of execution speed is compensated by the interpreter's ability to let you interactively enter a program, immediately execute it and see the results without stopping to compile and link the program. There exists no more convenient language for understanding how a numerical method works. BASIC is also superb for writing relatively small, quickly needed programs of less than 1000 program lines with a minimum of programming effort. Errors can be found and corrected in seconds rather than in hours, and the machine can be immediately quizzed for a further explanation of questionable answers or for exploring further aspects of the problem. In addition, once the program runs properly, you can use a BASIC compiler to make it run faster. It is also important that on most PCs BASIC is usually very powerful for using all resources, including graphics, color, sound and communication devices, although such aspects will not be discussed in this book.

Why do we claim that our text is advanced? We believe that the methods and programs presented here can handle a number of realistic problems with the power and sophistication needed by professionals and with simple, step-by-step introductions for students and beginners. In spite of their broad range of applicability, the subroutines are simple enough to be completely understood and controlled, thereby giving more confidence in the results than software packages with unknown source code.

Why do we call our subject scientific computing? First, we assume that you, the reader, have particular problems to solve, and we do not want to teach you either chemistry or biology. The basic task we consider is extracting useful information from measurements via modelling, simulation and data evaluation, and the methods you need are very similar whatever your particular application is. More specific examples are included only in the last sections of each chapter to show the power of some methods in special situations and to promote a critical approach leading to further investigation. Second, this book is not a course in numerical analysis, and we disregard a number of traditional topics such as function approximation, special functions and numerical integration of known functions. These are discussed in many excellent books, frequently with BASIC subroutines included. You will find here, however, efficient and robust numerical methods that are well established in important scientific applications. For each class of problems we give an introduction to the relevant theory and techniques that should enable you to recognize and use the appropriate methods. Simple test examples are chosen for illustration. Although these examples naturally have a numerical bias, the dominant theme in this book is that numerical methods are no substitute for poor analysis. Therefore, we give due consideration to problem formulation and exploit every opportunity to emphasize that this step not only facilitates your calculations, but may help you to avoid questionable results. There is nothing more alien to scientific computing than the use of highly sophisticated numerical techniques for solving very difficult problems that have been made so difficult only by the lack of insight when casting the original problem into mathematical form.

What is in this book? It consists of five chapters. The purpose of the preparatory Chapter 1 is twofold. First, it gives a practical introduction to basic concepts of linear algebra, enabling you to understand the beauty of a linear world. A few pages will lead to comprehending the details of the two-phase simplex method of linear programming. Second, you will learn efficient numerical procedures for solving simultaneous linear equations, inversion of matrices and eigenanalysis. The corresponding subroutines are extensively used
in further chapters and play an indispensable auxiliary role. Among the direct applications we discuss stoichiometry of chemically reacting systems, robust parameter estimation methods based on linear programming, as well as elements of principal component analysis.

Chapter 2 gives an overview of iterative methods for solving nonlinear equations and optimization problems of one or several variables. Though the one variable case is treated in many similar books, we include the corresponding simple subroutines since working with them may help you to fully understand the use of user supplied subroutines. For the solution of simultaneous nonlinear equations and multivariable optimization problems some well established methods have been selected that also amplify the theory. Relative merits of the different methods are briefly discussed. As applications we deal with equilibrium problems and include a general program for computing chemical equilibria of gaseous mixtures.

Chapter 3 plays a central role. It concerns estimation of parameters in complex models from relatively small samples as frequently encountered in scientific applications. To demonstrate the principles and the interpretation of estimates we begin with two linear statistical methods (namely, fitting a line to a set of points and a subroutine for multivariable linear regression), but the real emphasis is placed on nonlinear problems. After presenting a robust and efficient general purpose nonlinear least squares estimation procedure we proceed to more involved methods, such as the multiresponse estimation of Box and Draper, equilibrating balance equations and fitting error-in-variables models. Though the importance of these techniques is emphasized in the statistical literature, no easy-to-use programs are available. The chapter is concluded by presenting a subroutine for fitting orthogonal polynomials and a brief summary of experiment design approaches relevant to parameter estimation. The text has a numerical bias with a brief discussion of the statistical background, enabling you to select a method and interpret results. Some practical aspects of parameter estimation such as near-singularity, linearization, weighting, reparametrization and selecting a model from a homologous family are discussed in more detail.

Chapter 4 is devoted to signal processing. Though in most experiments we record some quantity as a function of an independent variable (e.g., time, frequency), the form of this relationship is frequently unknown and the methods of the previous chapter do not apply. This chapter gives a summary of classical techniques for interpolating, smoothing, differentiating and integrating such data sequences. The same problems are also solved using spline functions and discrete Fourier transformation methods. Applications in potentiometric titration and spectroscopy are discussed.

The first two sections of Chapter 5 give a practical introduction to dynamic models and their numerical solution. In addition to some classical methods, an efficient procedure is presented for solving systems of stiff differential equations frequently encountered in chemistry and biology. Sensitivity analysis of dynamic models and their reduction based on the quasi-steady-state approximation are discussed. The second central problem of this chapter is estimating parameters in ordinary differential equations. An efficient short-cut method designed specifically for PCs is presented and applied to parameter estimation, numerical deconvolution and input determination. Application examples concern enzyme kinetics and pharmacokinetic compartmental modelling.
Program modules and sample programs

For each method discussed in the book you will find a BASIC subroutine and an example consisting of a test problem and the sample program we use to solve it. Our main assets are the subroutines, which we call program modules in order to distinguish them from the problem dependent user supplied subroutines. These modules will serve you as building blocks when developing a program of your own and are designed to be applicable in a wide range of problem areas. To this end concise information for their use is provided in remark lines. The selection of variable names and program line numbers allows you to load the modules in virtually any combination. Several program modules call other modules. Since all module variable names consist of at most two characters, introducing longer names in your own user supplied subroutines avoids any conflicts. These user supplied subroutines start at lines 600, 700, 800 and 900, depending on the needs of the particular module. Results are stored for further use and not printed within the program module. Exceptions are the modules corresponding to parameter estimation, where we wanted to save you from the additional work of printing large amounts of intermediate and final results. You will not find dimension statements in the modules; they are placed in the calling sample programs. A short skeleton illustrating these conventions follows, and Table 1 below lists our program modules.
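The skeleton below is a hypothetical layout of our own, not one of the book's listings; the data, the placeholder module at line 1000 and the summing loop are invented purely to show where the pieces go:

100 REM SKELETON OF A CALLING PROGRAM (HYPOTHETICAL LAYOUT)
110 DIM X(10) :REM DIMENSION STATEMENTS BELONG HERE, NOT IN THE MODULE
120 READ N :FOR I=1 TO N :READ X(I) :NEXT I :REM PROBLEM DATA FROM DATA LINES
130 GOSUB 1000 :REM CALL THE MERGED PROGRAM MODULE
140 PRINT "RESULT:";S
150 END
160 DATA 3, 1.5, 2.5, 3.5
900 REM USER-SUPPLIED SUBROUTINE (RESERVED LINES 600, 700, 800 OR 900)
910 RETURN
1000 REM A MERGED MODULE WOULD START HERE; THIS PLACEHOLDER SUMS THE DATA
1010 S=0 :FOR I=1 TO N :S=S+X(I) :NEXT I
1020 RETURN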
Table 1 Program modules

Module  Title
M10     Vector coordinates in a new basis
M11     Linear programming - two phase simplex method
M14     LU decomposition of a square matrix
M15     Solution of simultaneous linear equations - backward substitution using LU factors
M16     Inversion of a positive definite symmetric matrix
M17     Linear equations with tridiagonal matrix
M18     Eigenvalues and eigenvectors of a symmetric matrix - Jacobi method
M20     Solution of a cubic equation - Cardano method
M21     Solution of a nonlinear equation - bisection method
M22     Solution of a nonlinear equation - regula falsi method
M23     Solution of a nonlinear equation - secant method
M24     Solution of a nonlinear equation - Newton-Raphson method
M25     Minimum of a function of one variable - method of golden sections
M26     Minimum of a function of one variable - parabolic interpolation, Brent's method
M30     Solution of simultaneous equations X=G(X) - Wegstein method
M31     Solution of simultaneous equations F(X)=0 - Newton-Raphson method
M32     Solution of simultaneous equations F(X)=0 - Broyden method
M34     Minimization of a function of several variables - Nelder-Mead method
M36     Minimization of a function of several variables - Davidon-Fletcher-Powell method
M40     Fitting a straight line by linear regression
M41     Critical t-value at 95 % confidence level
M42     Multivariable linear regression - weighted least squares
M45     Weighted least squares estimation of parameters in multivariable nonlinear models - Gauss-Newton-Marquardt method
M50     Equilibrating linear balance equations by least squares method and outlier analysis
M52     Fitting an error-in-variables model of the form F(Z,P)=0 - modified Patino-Leal - Reilly method
M55     Polynomial regression using Forsythe orthogonal polynomials
M60     Newton interpolation - computation of polynomial coefficients and interpolated values
M61     Local cubic interpolation
M62     5-point cubic smoothing by Savitzky and Golay
M63     Determination of interpolating cubic spline
M64     Function value, derivatives and definite integral of a cubic spline at a given point
M65     Determination of smoothing cubic spline - method of C.H. Reinsch
M67     Fast Fourier transform - Radix-2 algorithm of Cooley and Tukey
M70     Solution of ordinary differential equations - fourth order Runge-Kutta method
M71     Solution of ordinary differential equations - predictor-corrector method of Milne
M72     Solution of stiff differential equations - semi-implicit Runge-Kutta method with backsteps (Rosenbrock-Gottwald-Wanner)
M75     Estimation of parameters in differential equations by direct integral method - extension of the Himmelblau-Jones-Bischoff method
While the program modules are for general application, each sample program is mainly for demonstrating the use of a particular module. To this end the programs are kept as concise as possible by specifying the input data for the actual problem in DATA statements. Thus the test examples can be checked simply by loading the corresponding sample program, carefully merging the required modules and running the obtained program. To solve your own problems you should replace the DATA lines and the user supplied subroutines (if needed). In more advanced applications the READ and DATA statements may be replaced by interactive input, as in the hypothetical fragment below. Table 2 lists the sample programs.
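The fragment below is our own invented illustration of this replacement, not one of the book's listings:

100 REM READ/DATA REPLACED BY INTERACTIVE INPUT (HYPOTHETICAL FRAGMENT)
110 DIM X(100),Y(100)
120 REM ORIGINAL FORM: READ N, THEN THE X,Y PAIRS, WITH VALUES IN DATA LINES
130 INPUT "NUMBER OF POINTS";N
140 FOR I=1 TO N :PRINT "X,Y OF POINT";I; :INPUT X(I),Y(I) :NEXT I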
THE PROGRAMS IN THIS BOOK ARE AVAILABLE ON DISKETTE, SUITABLE FOR MS-DOS COMPUTERS. THE DISKETTE CAN BE ORDERED SEPARATELY. PLEASE SEE THE ORDER CARD IN THE FRONT OF THIS BOOK.
Table 2 Sample programs

Identifier  Example  Title                                                        Modules called
EX112       1.1.2    Vector coordinates in a new basis                            M10
EX114       1.1.4    Inversion of a matrix by Gauss-Jordan elimination            see EX112
EX12        1.2      Linear programming by two phase simplex method               M10,M11
EX132       1.3.2    Determinant by LU decomposition                              M14
EX133       1.3.3    Solution of linear equations by LU decomposition             M14,M15
EX134       1.3.4    Inversion of a matrix by LU decomposition                    M14,M15
EX14        1.4      Inversion of a positive definite symmetric matrix            M16
EX15        1.5      Solution of linear equations with tridiagonal matrix         M17
EX16        1.6      Eigenvalue-eigenvector decomposition of a sym. matrix        M18
EX182       1.8.2    Fitting a line - least absolute deviations                   see EX12
EX183       1.8.3    Fitting a line - minimax method                              see EX12
EX184       1.8.4    Analysis of spectroscopic data with background               see EX12
EX211       2.1.1    Molar volume by Cardano method                               M20
EX212       2.1.2    Molar volume by bisection                                    M21
EX221       2.2.1    Optimum dosing by golden section method                      M25
EX231       2.3.1    Reaction equilibrium by Wegstein method                      M30
EX232       2.3.2    Reaction equilibrium by Newton-Raphson method                M14,M15,M31
EX241       2.4.1    Rosenbrock problem by Nelder-Mead method                     M34
EX242       2.4.2    Rosenbrock problem by Davidon-Fletcher-Powell method         M36
EX253       2.5.3    Liquid-liquid equilibrium by Broyden method                  M32
EX254       2.5.4    Chemical equilibrium of gaseous mixtures                     M14,M15
EX31        3.1      Fitting a regression line                                    M40,M41
EX32        3.2      Multivariable linear regression - acid catalysis             M16,M18,M41,M42
EX33        3.3      Nonlinear LSQ parameter estimation - Bard example            M16,M18,M41,M45
EX37        3.7      Equilibrating linear balances                                M16,M50
EX38        3.8      Error-in-variables parameter estimation - calibration       M16,M18,M41,M45,M52
EX39        3.9      Polynomial regression using Forsythe orthogonal polynomials  M55
EX3104      3.10.4   Van Laar parameter estimation (error-in-variables method)    M16,M18,M41,M45,M52
EX411       4.1.1    Newton interpolation                                         M60
EX413       4.1.3    Smoothed derivatives by Savitzky and Golay                   M62
EX421       4.2.1    Spline interpolation                                         M63,M64
EX422       4.2.2    Smoothing by spline                                          M65
EX433       4.3.3    Application of FFT techniques                                M67
EX511       5.1.1    Fermentation kinetics by Runge-Kutta method                  M70
EX52        5.2      Solution of the Oregonator model by semi-implicit method     M14,M15,M72
EX53        5.3      Sensitivity analysis of a microbial growth process           M14,M15,M72
EX55        5.5      Direct integral parameter estimation                         M14,M15,M16,M18,M41,M63,M72,M75
EX56        5.6      Direct integral identification of a linear system            M16,M18,M41,M42,M63
EX57        5.7      Input function determination to a given response             see EX56
Program portability

We have attempted to make the programs in this book as generally useful as possible, not just in terms of the subjects concerned, but also in terms of their degree of portability among different PCs. This is not easy in BASIC, since the recent interpreters and compilers are usually much more generous in terms of options than the original version of BASIC developed by John Kemeny and Thomas Kurtz. Standardization did not keep up with the various improvements made to the language. Restricting consideration to the common subset of the different BASIC dialects would mean giving up some very comfortable enhancements introduced during the last decade, a price too high for complete compatibility. Therefore, we chose the popular Microsoft BASIC that comes installed on the IBM PC family of computers and clones under the name (disk) BASIC, BASICA or GWBASIC. A disk of MS-DOS (i.e., PC-DOS) format, containing all programs listed in Tables 1 and 2, is available for purchase. If you plan to use more than a few of the programs in this book and you work with an IBM PC or compatible, you may find it useful to obtain a copy of the disk in order to save the time required for typing and debugging. If you have the sample programs and the program modules on disk, it is very easy to run a test example. For instance, to reproduce Example 4.2.2 you should start your BASIC, load the file "EX422.BAS", then merge the file "M65.BAS" and run the program. In order to ease merging, the programs are saved in ASCII format on the disk. You will need a printer, since the programs are written with LPRINT statements. If you prefer printing to the screen, you may change all LPRINT statements to PRINT statements, using the editing facility of the BASIC interpreter or the more user friendly change option of any editor program.

Using our programs in other BASIC dialects you may experience some difficulties. For example, several dialects do not allow zero indices of an array, restrict the feasible names of variables, give +1 instead of -1 for a logical expression if it is true, do not allow the structure IF ... THEN ... ELSE, have a different syntax for formatting a PRINT statement, etc. According to our experience, the most dangerous effects are connected with the different treatment of FOR ... NEXT loops. In some versions of the language the statements inside a loop are carried out once even if the loop condition does not allow it. If running the following program

10 FOR I=2 TO 1
20 PRINT "IF YOU SEE THIS, THEN YOU SHOULD BE CAREFUL WITH YOUR BASIC"
30 NEXT I

results in no output, then you have no reason to worry. Otherwise you will
find it necessary to insert a test before each loop that may be empty, as in the hypothetical fragment shown below. In the module M15, for example, such a loop is empty whenever its starting index exceeds its maximum value.
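The guard below is our own illustration (the loop body, the bound N and the index K are invented); it skips the loop entirely when the range K+1 to N is empty:

100 REM GUARDING A POSSIBLY EMPTY FOR/NEXT LOOP (HYPOTHETICAL FRAGMENT)
110 N=3 :K=3 :S=0 :DIM A(10)
120 IF K+1>N THEN 160 :REM SKIP THE LOOP WHEN IT WOULD BE EMPTY
130 FOR I=K+1 TO N
140 S=S+A(I)
150 NEXT I
160 PRINT "SUM:";S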
...

cTx → max ,  Ax = b ,  x ≥ 0 ,    (1.32)

where we have n constraints, m+n variables and A denotes the (extended) coefficient matrix of dimensions n×(m+n). (Here we assume that the right-hand side is nonnegative, a further assumption to be relaxed later on.) The key to solving the original problem is the relationship between the basic solutions of the matrix equation Ax = b and the vertices of the feasible polyhedron. An obvious, but far from efficient procedure is calculating all basic solutions of the matrix equation and comparing the values of the objective function at the feasible ones. The simplex algorithm (refs. 7-8) is a way of organizing the above procedure much more efficiently. Starting with a feasible basic solution, the procedure will move into another basic solution which is feasible, and the objective function will not decrease in any step. These advantages are due to the clever choice of the pivots. A starting feasible basic solution is easy to find if the original constraints are of the form
(1.6) with a nonnegative right-hand side. The extended coefficient matrix A in (1.32) includes the identity matrix (i.e., the columns of A corresponding to the slack variables xm+1, ..., xm+n). Consider the canonical basis and set xi = 0 for i = 1, ..., m, and xm+i = bi for i = 1, ..., n. This is clearly a basic solution of Ax = b, and it is feasible by the assumption bi ≥ 0. Since the starting basis is canonical, we know the coordinates of all the vectors in this basis. As in Section 1.1.3, we consider the right-hand side b as the last vector aM = b, where M = m+n+1. To describe one step of the simplex algorithm assume that the vectors present in the current basis are aB1, ..., aBn. We need this indirect notation because the indices B1, B2, ..., Bn are changing during the steps of the algorithm. They can take values from 1 to m+n. Similarly, we use the notation cBi for the objective function coefficient corresponding to the i-th basis variable. Assume that the current basic solution is feasible, i.e., the coordinates of aM are nonnegative in the current basis. We first list the operations to perform:
(i) Compute the indicator variables zj - cj for all j = 1, ..., m+n, where zj is defined by

zj = Σ (i=1 to n) aij cBi .    (1.33)

The expression (1.33) can be computed also for j = M. In this case it gives the current value of the objective function, since the "free" variables vanish and aiM is the value of the i-th basis variable.

(ii) Select the column index q such that zq - cq ≤ zj - cj for all j = 1, ..., m+n, i.e., the column with the least indicator variable value. If zq - cq ≥ 0, then we have attained the optimal solution, otherwise proceed to step (iii).
(iii) If aiq ≤ 0 for each i = 1, ..., n (i.e., there is no positive entry in the selected column), then the problem has no bounded optimal solution. Otherwise proceed to step (iv).

(iv) Locate a pivot in the q-th column, i.e., select the row index p such that apq > 0 and apM/apq ≤ aiM/aiq for all i = 1, ..., n with aiq > 0.

(v) Replace the p-th vector in the current basis by aq and calculate the new coordinates by (1.18).
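To make the recipe concrete, the following fragment performs a single iteration of steps (i)-(v) on a small invented problem (maximize 3x1 + 2x2 subject to x1 + x2 ≤ 4 and x1 + 3x2 ≤ 6, with slack variables x3, x4). It is a minimal sketch in the book's BASIC dialect, not the program module M11; all names and data here are ours:

100 REM HYPOTHETICAL SKETCH OF STEPS (I)-(V), ONE ITERATION ONLY
110 N=2 :M=2 :MN=M+N :REM N CONSTRAINTS, M+N VARIABLES, COLUMN MN+1 HOLDS B
120 DIM A(N,MN+1),C(MN),BS(N)
130 FOR I=1 TO N :FOR J=1 TO MN+1 :READ A(I,J) :NEXT J :NEXT I
140 DATA 1,1,1,0,4
150 DATA 1,3,0,1,6
160 FOR J=1 TO MN :READ C(J) :NEXT J
170 DATA 3,2,0,0
180 BS(1)=3 :BS(2)=4 :REM SLACK VARIABLES GIVE THE STARTING CANONICAL BASIS
190 REM ----- STEPS (I)-(II): INDICATORS Z(J)-C(J), PICK THE LEAST ONE
200 Q=0 :ZQ=0
210 FOR J=1 TO MN
220 Z=0 :FOR I=1 TO N :Z=Z+A(I,J)*C(BS(I)) :NEXT I
230 IF Z-C(J)<ZQ THEN ZQ=Z-C(J) :Q=J
240 NEXT J
250 IF Q=0 THEN PRINT "CURRENT BASIS IS OPTIMAL" :END
260 REM ----- STEPS (III)-(IV): RATIO TEST LOCATES THE PIVOT ROW P
270 P=0
280 FOR I=1 TO N
290 IF A(I,Q)<=0 THEN 310
300 IF P=0 THEN P=I ELSE IF A(I,MN+1)/A(I,Q)<A(P,MN+1)/A(P,Q) THEN P=I
310 NEXT I
320 IF P=0 THEN PRINT "NO BOUNDED OPTIMAL SOLUTION" :END
330 REM ----- STEP (V): PIVOT TRANSFORMATION OF THE COORDINATES, CF. (1.18)
340 PV=A(P,Q) :FOR J=1 TO MN+1 :A(P,J)=A(P,J)/PV :NEXT J
350 FOR I=1 TO N
360 IF I<>P THEN T=A(I,Q) :FOR J=1 TO MN+1 :A(I,J)=A(I,J)-T*A(P,J) :NEXT J
370 NEXT I
380 BS(P)=Q :PRINT "COLUMN";Q;"ENTERS THE BASIS IN ROW";P

Running the fragment brings x1 into the basis in row 1, exactly the pivot prescribed by the ratio test of step (iv).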
To understand why the algorithm works it is convenient to consider the indicator variable zj - cj as loss minus profit. Indeed, increasing a "free" variable xj from zero to one results in the profit cj. On the other hand, the values of the current basis variables xBi must be reduced by aij for i = 1, ..., n in order to satisfy the constraints. The loss thereby occurring is zj. Thus step (ii) of the algorithm will help us to move to a new basic solution with a nondecreasing value of the objective function. Step (iv) will shift a feasible basic solution to another feasible basic solution. By (1.18) the basis variables (i.e., the current coordinates of the right-hand side vector aM) in the new basis are given by the transformation (1.34). It remains to show that zq - cq ≥ 0 really indicates the optimal solution. This requires a somewhat deeper analysis. Let B denote the n×n matrix formed by the column vectors aB1, aB2, ..., aBn. We have to show that for every feasible solution y the objective function does not increase, i.e.,

cBT B-1 b ≥ cT y .    (1.37)
We will exploit the fact that all indicator variables are nonnegative:

zj ≥ cj ,  j = 1, 2, ..., m+n .    (1.38)

By virtue of the definition (1.33),

zj = cBT B-1 aj ,  j = 1, 2, ..., m+n .    (1.39)

Using this expression in (1.38) and multiplying the j-th inequality by the nonnegative yj gives m+n inequalities whose sum is

Σ (j=1 to m+n) cBT B-1 aj yj  ≥  Σ (j=1 to m+n) cj yj .    (1.40)

Since y is the solution of the matrix equation, Σ (j=1 to m+n) aj yj = b. Introducing this equality into (1.40) gives the inequality (1.37) that we wanted to prove. Similarly to the derivation of (1.35) and (1.36), one can easily show the analogous transformation rule (1.41). Thus the coordinate transformations (1.18) apply also to the indicator variables and to the objective function. On the basis of this observation it is convenient to perform all calculations on a matrix extended by the zj - cj values and the objective function value as its last row. This extended matrix is the so-called simplex tableau.
If the j-th column is in the basis then zj - cj = 0 follows, but an entry of the last row of the simplex tableau may vanish also for a column that is not in the basis. If this situation occurs in the optimal simplex tableau then the linear programming problem has several optimal basic solutions. In our preliminary example this may happen when contour lines of the objective function are parallel to a segment of the boundary of the feasible region. The simplex algorithm will reach the optimal solution in a finite number of steps if the objective function is increased in each of them. In special situations, however, the objective function value may be the same in several consecutive steps and we may return to the same basis, repeating the cycle again. The analysis of cycling is a nice theoretical problem of linear programming and the algorithms can be made safe against it. It is very unlikely, however, that you will ever encounter cycling when solving real-life problems.

1.2.2 Reducing general problems to normal form. The two-phase simplex method

In this section we state a much more general linear programming problem, introducing notations which will be used also in our linear programming module. Let NV be the number of variables, denoted by x1, x2, ..., xNV. Each of the NC constraints consists of a linear combination of the variables, one of the relation signs "≤", "=" or "≥", and a right-hand side constant, with the notation for the relation signs introduced in (1.44). This generalized problem can easily be translated to the normal form by the following tricks. If the right-hand side is negative, multiply the constraint by (-1).
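As a hypothetical illustration of this sign trick (the numbers are ours): the constraint 2x1 - x2 ≥ -3 has a negative right-hand side; multiplying by (-1) gives -2x1 + x2 ≤ 3, and a slack variable x3 ≥ 0 then turns it into the equality -2x1 + x2 + x3 = 3 required by the normal form.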
As discussed, a constraint with ...

... does not alter the location of the minimum by more than the desired tolerance EP = 0.1 mg. In Fig. 2.10 we have already shown the concentration of the drug following the dose Dopt, taken at the beginning of each period of length T = 24 h. According to this solution, one tablet a day does not enable us to keep the drug concentration c(t) within the therapeutic range at all times. We could decrease the period, i.e., T = 20 h would be a suitable choice, but it is not practical advice to take a tablet every 20 hours. Taking two tablets a day (i.e., with T = 12 h), there exists an interval [DL, DU] such that f(D) = 0 for all D in this interval. From a physiological point of view the best choice is DL, i.e., the least dose that gives the desired drug concentration in the blood. The golden section search module as presented here will result in this lower limit (DL = 138.2 mg), because in line 2536 we used the relation sign ">" and not ">=" .
Table 2.6 Steps in the golden section search (columns: step; XL, mg; XU, mg; relation of f1 to f2) - starting interval [0, 1000]; selected intermediate bounds 418.034, 472.136, 335.275 and 335.382; final estimate 335.555 after 19 steps.
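The bracketing mechanics summarized in Table 2.6 can be reproduced with a few lines of BASIC. The fragment below is a hypothetical sketch, not the book's module M25; the test function at line 900 is invented, with its minimum placed near the final estimate in the table:

100 REM HYPOTHETICAL GOLDEN SECTION SKETCH (THE BOOK'S MODULE IS M25)
110 RG=(SQR(5)-1)/2 :REM GOLDEN RATIO 0.618...
120 XL=0 :XU=1000 :EP=.1
130 X1=XU-RG*(XU-XL) :X2=XL+RG*(XU-XL)
140 X=X1 :GOSUB 900 :F1=F :X=X2 :GOSUB 900 :F2=F
150 IF XU-XL<2*EP THEN 230
160 IF F1>F2 THEN 200
170 XU=X2 :X2=X1 :F2=F1 :X1=XU-RG*(XU-XL)
180 X=X1 :GOSUB 900 :F1=F
190 GOTO 150
200 XL=X1 :X1=X2 :F1=F2 :X2=XL+RG*(XU-XL)
210 X=X2 :GOSUB 900 :F2=F
220 GOTO 150
230 PRINT "MINIMUM NEAR";(XL+XU)/2
240 END
900 REM USER FUNCTION: X ---> F (INVENTED TEST FUNCTION)
910 F=(X-335.5)^2
920 RETURN

With EP = 0.1 the bracket [0, 1000] shrinks by the factor 0.618 per step, so roughly eighteen to nineteen steps are needed, in line with the table.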
The method fails if the three points are on a straight line, since then the denominator is zero (i.e., the parabola has no minimum). In addition, equation (2.20) will locate the maximum rather than the minimum if the coefficient of the second order term in the interpolating parabola is negative. To avoid these problems Brent (ref. 11) suggested a combination of the parabolic fit and the golden section bracketing technique. The main idea is to apply equation (2.20) only if (i) the next estimate falls within the most recent bracketing interval; and (ii) the movement from the last estimate is less than half the step taken in the iteration before the last. Otherwise a golden section step is taken. The following module, based on (ref. 12), also tries to avoid function evaluation near a previously evaluated point.

Program module M26

2600 REM ***************************************************
2602 REM *      MINIMUM OF A FUNCTION OF ONE VARIABLE      *
2604 REM *    PARABOLIC INTERPOLATION - BRENT'S METHOD     *
2606 REM ***************************************************
2608 REM INPUT:
2610 REM    XL      LOWER BOUND
2612 REM    XU      UPPER BOUND
2614 REM    EP      ERROR TOLERANCE ON MINIMUM POINT
2616 REM    IM      MAXIMUM NUMBER OF ITERATIONS
2618 REM OUTPUT:
2620 REM    X       ESTIMATE OF THE MINIMUM POINT
2622 REM    F       MINIMUM FUNCTION VALUE F(X)
2624 REM    ER      STATUS FLAG
2626 REM              0 SUCCESSFUL SOLUTION
2628 REM              1 TOLERANCE NOT ATTAINED IN IM ITERATIONS
2630 REM USER-SUPPLIED SUBROUTINE FROM LINE 900:
2632 REM    X ---> F ( FUNCTION EVALUATION )
2634 ER=0 :RG=(SQR(5)-1)/2 :DX=(XU-XL)/2 :X=(XU+XL)/2 :W=X :V=X :E=0 :GOSUB 900 :FX=F :FV=F :FW=F
2636 REM ----- LOOP
2640 FOR IT=1 TO IM
2642 XM=(XL+XU)/2 :IF ABS(X-XM)<=2*EP-(XU-XL)/2 THEN 2696
2644 IF ABS(E)<EP THEN 2664
2646 REM ----- AUXILIARY QUANTITIES TO A PARABOLIC STEP
2648 R=(X-W)*(FX-FV) :Q=(X-V)*(FX-FW) :P=(X-V)*Q-(X-W)*R
2650 Q=2*(Q-R) :IF Q>=0 THEN P=-P ELSE Q=-Q
2652 EL=E :E=DX
2654 IF ABS(P)>=ABS(Q*EL/2) OR P<=Q*(XL-X) OR P>=Q*(XU-X) THEN 2664
2656 REM ----- PARABOLIC STEP
2658 DX=P/Q :U=X+DX
2660 IF (U-XL)<2*EP OR (XU-U)<2*EP THEN IF XM>X THEN DX=EP ELSE DX=-EP