MATRIX-ANALYTIC METHODS THEORY AND APPLICATIONS
PROCEEDINGS OF THE FOURTH INTERNATIONAL CONFERENCE
MATRIX-ANALYTIC METHODS THEORY AND APPLICATIONS Adelaide, Australia
14-16 July 2002
edited by
Guy Latouche Universite Libre de Bruxelles, Belgium
Peter Taylor The University of Melbourne, Australia
World Scientific
New Jersey • London • Singapore • Hong Kong
Published by World Scientific Publishing Co. Pte. Ltd. P O Box 128, Farrer Road, Singapore 912805 USA office: Suite 1B, 1060 Main Street, River Edge, NJ 07661 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
MATRIX-ANALYTIC METHODS: THEORY AND APPLICATIONS Copyright © 2002 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
Printed in Singapore by World Scientific Printers (S) Pte Ltd
Preface Matrix-analytic methods are fundamental to the analysis of a family of Markov processes rich in structure and of wide applicability. They are extensively used in the modelling and performance analysis of computer systems, telecommunication networks, network protocols and many other stochastic systems of current commercial and engineering interest. Following the success of three previous conferences held in Flint (Michigan), Winnipeg (Manitoba) and Leuven (Belgium), the Fourth International Conference on Matrix-Analytic Methods in Stochastic Models was held in Adelaide (Australia) in July 2002. The conference brought together the top researchers in the field who presented papers dealing with new theoretical developments and applications. This volume contains a selection of papers presented at the conference. The papers were subject to a rigorous refereeing process comparable to that which would be used by an international journal in the field. The papers fall into a number of different categories. Approximately a third deals with various aspects of the theory of block-structured Markov chains. They demonstrate how the specific structure of transition matrices can be exploited. Another third of the papers deals with the analysis of complex queueing models. The final third deals with parameter estimation and specific applications to such areas as cellular mobile systems, FS-ALOHA, the Internet and production systems. Three leading researchers in the field were invited to present a lecture to the conference: Masakiyo Miyazawa presented a paper on Markov additive processes in the context of matrix-analytic methods, V. Ramaswami discussed a number of applications and G.W. Stewart shared his vast experience on numerical methods for block Hessenberg matrices. A paper based on Masakiyo Miyazawa's talk is included in this volume. 
In order to encourage young researchers to attend the conference, the organisers implemented a streamlined procedure, accepting submissions from students long after the general deadline. As a consequence, some late submissions by students are not included in these proceedings. They were, nevertheless, part of the official conference programme. We would like to thank Kathryn Kennedy, David Green, Angela Hoffmann and Michael Green for their help in preparing the manuscripts for final publication. We acknowledge with gratitude financial assistance from several sources, specifically, the Australian Mathematical Society, the Teletraffic Research Centre, the Department of Applied Mathematics at the University of
Adelaide and the University of Adelaide itself. Finally, it is a pleasure to acknowledge that the workshop could not have been held had it not been for the active involvement of the reviewers and the authors, who were all very good at respecting deadlines.

Guy Latouche
Peter Taylor
Contents

Preface v
Author Index xi
Organisers xii
Reviewers xiii
Sponsors xiv

A New Algorithm for Computing the Rate Matrix of GI/M/1 Type Markov Chains
Attahiru Sule Alfa, Bhaskar Sengupta, Tetsuya Takine and Jungong Xue 1

Decay Rates of Discrete Phase-Type Distributions with Infinitely-Many Phases
Nigel Bean and Bo Friis Nielsen 17

Distributions of Reward Functions on Continuous-Time Markov Chains
Mogens Bladt, Beatrice Meini, Marcel F. Neuts and Bruno Sericola 39

A Batch Markovian Queue with a Variable Number of Servers and Group Services
Srinivas R. Chakravarthy and Alexander N. Dudin 63

Further Results on the Similarity Between Fluid Queues and QBDs
Ana da Silva Soares and Guy Latouche 89

Penalised Maximum Likelihood Estimation of the Parameters in a Coxian Phase-Type Distribution
Malcolm Faddy 107

MAP/PH/1 Queues with Level-Dependent Feedback and Their Departure Processes
David Green 115

A Matrix Analytic Model for Machine Maintenance
David Green, Andrew V. Metcalfe and David C. Swailes 133

A Linear Program Approach to Ergodicity of M/G/1 Type Markov Chains with a Tree Structure
Qi-Ming He and Hui Li 147

Matrix Geometric Solution of Fluid Stochastic Petri Nets
András Horváth and Marco Gribaudo 163

A Markovian Point Process Exhibiting Multifractal Behavior and Its Application to Traffic Modeling
András Horváth and Miklós Telek 183

Convergence of the Ratio "Variance Over Mean" in the IPhP3
Guy Latouche and Marie-Ange Remiche 209

Application of the Factorization Property to the Analysis of Production Systems with a Non-Renewal Input, Bilevel Threshold Control, Setup Time and Maintenance
Ho Woo Lee, No Ik Park and Jongwoo Jeon 219

A Constructive Method for Finding β-Invariant Measures for Transition Matrices of M/G/1 Type
Quan-Lin Li and Yiqiang Zhao 237

A Paradigm of Markov Additive Processes for Queues and Their Networks
Masakiyo Miyazawa 265

Spectral Methods for a Tree Structure MAP
Shoichi Nishimura 291

Sojourn and Passage Times in Markov Chains
Claudia Nunes and Antonio Pacheco 311

Matrix-Analytic Analysis of a MAP/PH/1 Queue Fitted to Web Server Data
Alma Riska, Mark S. Squillante, Shun-Zheng Yu, Zhen Liu and Li Zhang 333

Analysis of Parallel-Server Queues under Spacesharing and Timesharing Disciplines
Jay Sethuraman and Mark S. Squillante 357

Robustness of FS-ALOHA
Benny van Houdt and Chris Blondia 381

Accurate Estimate of Spectral Radii of Rate Matrices of GI/M/1 Type Markov Chains
Qiang Ye 403
Author Index

Alfa, A.S. 1
Bean, N.G. 17
Bladt, M. 39
Blondia, C. 381
Chakravarthy, S.R. 63
da Silva Soares, A. 89
Dudin, A.N. 63
Faddy, M.J. 107
Green, D. 115, 133
Gribaudo, M. 163
He, Q.-M. 147
Horváth, A. 163, 183
Jeon, J. 219
Lee, H.W. 219
Li, H. 147
Li, Q.-L. 237
Liu, Z. 333
Latouche, G. 89, 209
Meini, B. 39
Metcalfe, A.V. 133
Miyazawa, M. 265
Neuts, M.F. 39
Nielsen, B.F. 17
Nishimura, S. 291
Nunes, C. 311
Pacheco, A. 311
Park, N.I. 219
Remiche, M.-A. 209
Riska, A. 333
Sengupta, B. 1
Sericola, B. 39
Sethuraman, J. 357
Squillante, M.S. 333, 357
Swailes, D.C. 133
Takine, T. 1
Telek, M. 183
van Houdt, B. 381
Xue, J. 1
Ye, Q. 403
Yu, S.-Z. 333
Zhang, L. 333
Zhao, Y.Q. 237
Organisers

Conference chair
David Green, University of Adelaide, Australia

Programme co-chairs
Guy Latouche, Universite Libre de Bruxelles, Belgium
Peter Taylor, University of Adelaide, Australia

Organising committee
Nigel Bean, University of Adelaide, Australia
Mark Fackrell, University of Adelaide, Australia
Barbara Gare, University of Adelaide, Australia
Angela Hoffmann, University of Adelaide, Australia
Kathryn Kennedy, University of Adelaide, Australia

Scientific advisory committee
Attahiru Alfa, University of Windsor, Canada
Dieter Baum, University of Trier, Germany
Nigel Bean, University of Adelaide, Australia
Dario Bini, University of Pisa, Italy
Lothar Breuer, University of Trier, Germany
Srinivas Chakravarthy, Kettering University, United States of America
David Green, University of Adelaide, Australia
Qi-Ming He, Dalhousie University, Canada
Dirk Kroese, University of Queensland, Australia
Herlinde Leemans, Catholic University of Leuven, Belgium
Yuanlie Lin, Tsinghua University, China
Naoki Makimoto, The University of Tsukuba, Japan
Beatrice Meini, University of Pisa, Italy
Marcel F. Neuts, The University of Arizona, United States of America
Bo Friis Nielsen, Technical University of Denmark, Denmark
Shoichi Nishimura, Science University of Tokyo, Japan
Phil Pollett, University of Queensland, Australia
V. Ramaswami, AT&T Labs, United States of America
Marie-Ange Remiche, Universite Libre de Bruxelles, Belgium
Werner Scheinhardt, University of Twente, The Netherlands
Mark Squillante, IBM T.J. Watson Research Centre, United States of America
Yukio Takahashi, Tokyo Institute of Technology, Japan
Miklos Telek, Technical University of Budapest, Hungary
Erik van Doorn, University of Twente, The Netherlands
Qiang Ye, University of Kentucky, United States of America
Reviewers

Attahiru Alfa
Dieter Baum
Nigel Bean
Dario Bini
Lothar Breuer
Srinivas Chakravarthy
Mark Fackrell
David Green
Boudewijn Haverkort
Qi-Ming He
Dirk Kroese
Guy Latouche
Yuanlie Lin
Naoki Makimoto
Beatrice Meini
Marcel Neuts
Bo Friis Nielsen
Shoichi Nishimura
Phil Pollett
V. Ramaswami
Marie-Ange Remiche
Mark Squillante
Yukio Takahashi
Peter Taylor
Miklos Telek
Erik van Doorn
Qiang Ye
Sponsors

Australian Mathematical Society
Teletraffic Research Centre
Department of Applied Mathematics, University of Adelaide
University of Adelaide
A NEW ALGORITHM FOR COMPUTING THE RATE MATRIX OF GI/M/1 TYPE MARKOV CHAINS
ATTAHIRU SULE ALFA
Department of Industrial and Manufacturing Systems Engineering, University of Windsor, Windsor, Ontario, Canada, N9B 3P4
E-mail: [email protected]

BHASKAR SENGUPTA
C&C Research Labs., NEC USA Inc., 4 Independence Way, Princeton, NJ 08540, U.S.A.
E-mail: [email protected]

TETSUYA TAKINE
Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
E-mail: takine@amp.i.kyoto-u.ac.jp

JUNGONG XUE
Department of Industrial and Manufacturing Systems Engineering, University of Windsor, Windsor, Ontario, Canada, N9B 3P4
E-mail: [email protected]

In this paper, we present a new method for finding the R matrix, which plays a crucial role in determining the steady-state distribution of Markov chains of the GI/M/1 type. We formulate the problem as a non-linear programming problem. We first solve this problem with a steepest-descent-like algorithm and point out the limitations of this algorithm. Next, we carry out a perturbation analysis and develop a new algorithm which circumvents the limitations of the earlier algorithm. We perform numerical experiments and show that our algorithm performs better than what we call the "standard method" of solution.
1 Introduction
Consider a Markov chain $\{(X_\nu, N_\nu);\ \nu = 0, 1, \ldots\}$ in which $X_\nu$ takes a countable number of values $0, 1, 2, \ldots$ and $N_\nu$ takes a finite number of values $1, \ldots, m$. The transition probability matrix in block-partitioned form is given by

$$
P = \begin{pmatrix}
B_0 & A_0 & & & \\
B_1 & A_1 & A_0 & & \\
B_2 & A_2 & A_1 & A_0 & \\
B_3 & A_3 & A_2 & A_1 & A_0 \\
\vdots & & & & \ddots
\end{pmatrix} \qquad (1)
$$

where $A_i$ and $B_i$ for $i = 0, 1, \ldots$ are all $m \times m$ matrices. This is the type of chain referred to as a Markov chain of the GI/M/1 type (see Neuts 15). If it is stable, the steady-state distribution of this Markov chain is known to have the matrix-geometric form. Let $\pi_k$ be a $1 \times m$ vector whose elements $\pi_{kj}$ represent the steady-state probability that $X_\nu = k$ and $N_\nu = j$ for $k = 0, 1, \ldots$ and $j = 1, \ldots, m$. Then the solution is given by $\pi_k = \pi_0 R^k$, where $R$ is the minimal nonnegative solution to the non-linear matrix equation

$$
R = \sum_{k=0}^{\infty} R^k A_k \qquad (2)
$$

and $\pi_0$ is the left invariant eigenvector (corresponding to the eigenvalue 1) of $\sum_{k=0}^{\infty} R^k B_k$ when normalized by the equation $\pi_0 (I - R)^{-1} e = 1$. Throughout the paper, $e$ is an $m \times 1$ vector of ones. The computation of $R$ plays a crucial role in queueing analysis and has attracted considerable attention from many researchers (see Neuts 15, Grassmann and Heyman 7, Gun 8, Kao 9, Latouche 10,11, Lucantoni and Ramaswami 14, Sengupta 19, Akar and Sohraby 1). Numerous algorithms have been designed to compute the $R$ matrix. In 15, Neuts suggests these two iteration schemes:

$$
X_0 = 0, \qquad X_{k+1} = \sum_{v=0}^{\infty} X_k^v A_v, \qquad k \ge 0, \qquad (3)
$$

and

$$
X_0 = 0, \qquad X_{k+1} = \Big( \sum_{v=0,\,v \ne 1}^{\infty} X_k^v A_v \Big)(I - A_1)^{-1}, \qquad k \ge 0, \qquad (4)
$$

which are shown to be such that $0 \le X_k \uparrow R$ as $k \to \infty$. It is pointed out that the iteration Eq. (4) converges faster than Eq. (3). However, these schemes all suffer from slow convergence when $\eta$, the Perron eigenvalue of $R$, is close to 1. To speed up convergence in this case, one can use the Newton method, which can be described as

$$
X_{k+1} = X_k + Y_k, \qquad (5)
$$
where $Y_k$ is the unique solution to the linear system

$$
Y_k = \Big( \sum_{v=0}^{\infty} X_k^v A_v - X_k \Big) + \sum_{v=1}^{\infty} \sum_{j=0}^{v-1} X_k^j Y_k X_k^{v-1-j} A_v, \qquad (6)
$$
see 18. Although the Newton method converges in far fewer steps, it can actually be more time-consuming than even the direct method Eq. (3), because of the need to solve the large linear system Eq. (6) at each iteration. To this end, some modifications of the Newton method have been suggested, in which $Y_k$ is approximated. Different approximation strategies lead to different iterative methods. Usually, more accurate approximations take more time to compute but result in fewer iteration steps, and it is not easy to resolve the trade-off between the two. We refer to 18 for a comprehensive survey. Several breakthroughs have been achieved in recent years for some special cases of GI/M/1 type Markov chains, among them the logarithmic reduction algorithm by Latouche and Ramaswami 12 for QBDs and the invariant subspace method by Akar and Sohraby 1 for chains with a rational generating function. Even though some efficient quadratically convergent algorithms, see 5,6,13, have been designed for computing the G matrix of general M/G/1 type Markov chains, the same is not true for the computation of the R matrix for general GI/M/1 type chains. In an earlier paper, Alfa, Sengupta and Takine 2 developed a non-linear programming method for finding the R and G matrices in the GI/M/1 and M/G/1 type Markov chains, respectively. In that paper the Karush-Kuhn-Tucker (KKT) conditions were obtained for these two non-linear programming problems. While the non-linear matrix equations resulting from the KKT conditions may be solved using Newton iterates, the resulting algorithm is not efficient. The paper later focuses on the M/G/1 type chains and develops an efficient algorithm for the G matrix using a simpler formulation. In the current paper, we focus on the GI/M/1 type Markov chain and develop a simple and efficient algorithm for the R matrix.
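As a concrete illustration of iteration scheme Eq. (4), the following sketch computes $R$ for a chain with finitely many blocks $A_0, A_1, \ldots$; the matrices used in the test below are illustrative examples chosen for a stable chain, not data from the paper.

```python
import numpy as np

def compute_R(A_blocks, tol=1e-12, max_iter=100000):
    """Iterate Eq. (4): X_{k+1} = (sum over v != 1 of X_k^v A_v)(I - A_1)^{-1},
    starting from X_0 = 0, so that 0 <= X_k increases monotonically to R.
    A_blocks = [A_0, A_1, ..., A_V] is a finite truncation of the block family."""
    m = A_blocks[0].shape[0]
    inv_I_A1 = np.linalg.inv(np.eye(m) - A_blocks[1])
    X = np.zeros((m, m))
    for _ in range(max_iter):
        S = A_blocks[0].copy()   # v = 0 term: X^0 A_0 = A_0
        Xpow = np.eye(m)
        for v in range(1, len(A_blocks)):
            Xpow = Xpow @ X      # Xpow = X^v
            if v != 1:           # the v = 1 term is absorbed into (I - A_1)^{-1}
                S = S + Xpow @ A_blocks[v]
        X_new = S @ inv_I_A1
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    return X
```

At convergence, the returned matrix satisfies the fixed-point equation $R = \sum_v R^v A_v$ up to the tolerance, which can be checked directly as a residual.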
We formulate the problem of finding the $R$ matrix as a non-linear programming problem, then design a steepest-descent-like method to solve it. At each iteration, a line search problem must be solved. Instead of a time-consuming process to find an optimal solution for this line search problem, we compute a nearly optimal solution with very little effort. Throughout the paper we assume that the Markov chain is stable. We also assume that the following two conditions hold, which is true in most applications of interest:

1. Every row of the matrix $A_0$ has at least one positive element.
2. $A = \sum_{v=0}^{\infty} A_v$ is stochastic and $\sum_{v=1}^{\infty} A_v$ is irreducible.

These two conditions guarantee that the rate matrix $R$ is irreducible, and thus the eigenvector of $R$ corresponding to the Perron eigenvalue has entries with the same sign. We will exploit this fact to prove that $R$ is the unique solution to the non-linear programming problem. Throughout this paper, we denote by $\|\cdot\|_1$ and $\|\cdot\|_\infty$ the 1-norm and $\infty$-norm, respectively. We let $B^T$ denote the transpose of matrix $B$. This paper is organized as follows. In Section 2, we formulate the problem of finding the $R$ matrix as a non-linear programming problem and present a steepest-descent-like method to solve it. In Section 3, we carry out a perturbation analysis to overcome the limitations of the steepest-descent-like algorithm and develop a new algorithm. In Section 4, we report the numerical results.
2 The Non-linear Programming Problem
In this section, we formulate a non-linear programming problem which leads to the solution of the $R$ matrix for the GI/M/1 paradigm. Let $A(z) = \sum_{k=0}^{\infty} A_k z^k$, $|z| \le 1$, and let $\chi(z)$ be the eigenvalue with maximal real part associated with $A(z)$. It is well known from Neuts 15 that $\eta$, the Perron eigenvalue of the matrix $R$, is the smallest positive solution to the equation $z = \chi(z)$, and that $u$, the left eigenvector (of dimension $1 \times m$) of $R$ associated with $\eta$, is also the eigenvector of $A(\eta)$ associated with $\eta$. Since $R$ is irreducible, $u$ can be chosen to be positive. There exist simple methods for computing $\eta$ and $u$ (Neuts 15). In what follows, we assume that $ue = 1$. For broad classes of GI/M/1 Markov chains, this kind of computation takes very little time, see 17. Let $X$ be any $m \times m$ matrix and let $f(X) = \sum_{k=0}^{\infty} X^k A_k$. For two matrices $Y$ and $Z$, let $Y \circ Z$ denote their elementwise product. We define the function $H(X)$ as

$$
H(X) = \sum_{i=1}^{m} \sum_{j=1}^{m} \big( [f(X)]_{ij} - X_{ij} \big)^2 = e^T \big( (f(X) - X) \circ (f(X) - X) \big) e.
$$
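For a finite truncation of the block family, the map $f(X)$ and the objective $H(X)$ can be evaluated directly; a minimal sketch, with illustrative matrices in the test rather than data from the paper:

```python
import numpy as np

def f(X, A_blocks):
    """f(X) = sum over k of X^k A_k, truncated to a finite block family."""
    S = np.zeros_like(X)
    Xpow = np.eye(X.shape[0])
    for A in A_blocks:
        S = S + Xpow @ A
        Xpow = Xpow @ X
    return S

def H(X, A_blocks):
    """H(X) = e^T ((f(X) - X) o (f(X) - X)) e, the sum of squared
    entries of the residual f(X) - X; zero exactly at solutions of f(X) = X."""
    D = f(X, A_blocks) - X
    return float(np.sum(D * D))
```

Since $R$ satisfies $f(R) = R$, the objective vanishes at $R$ and is positive at nearby non-solutions, which is the property the programming formulation exploits.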
Theorem 1 If the transition matrix of the GI/M/1 system is positive recurrent, then the $R$ matrix is the unique optimal solution to the following non-linear programming problem:

minimize $H(X)$ (7)
subject to $uX = \eta u$ (8)
$X \ge 0$. (9)
Proof: First, we observe that $R$ satisfies the constraints and has an objective function value of zero. Therefore, it is an optimal solution to Eq. (7-9). Now we prove it is the unique one. Suppose there exists another optimal solution $Z$. From $H(Z) = 0$, we have $f(Z) = Z$. Since $R$ is the minimal nonnegative solution to the equation $f(X) = X$, we have $Z \ge R$. Thus $uZ \ge uR = \eta u$. Because $u$ is positive and $Z \ne R$, $uZ \ne \eta u$, which contradicts constraint Eq. (8). ∎

Now let us discuss how to solve this non-linear programming problem. Suppose $X$ is a nonnegative approximation for $R$ satisfying $uX = \eta u$. We can come up with a "better" approximation (i.e., one with a lower value of the objective function) by adding to $X$ a correction in the direction $d = f(X) - X$. This leads to the following line search problem:
H(X + 6d)
Subject to u(X + Od) = 7711 X + Od > 0.
(10)
Since $uf(X) = uA(\eta) = \eta u$, we have $ud = 0$ and thus $u(X + \theta d) = \eta u$ for any $\theta$. We denote the $(i,j)$th elements of the matrices $X$ and $d$ by $x_{ij}$ and $d_{ij}$ respectively. To make $X + \theta d$ nonnegative, $\theta$ is required to be in the interval $[\theta_{\min}, \theta_{\max}]$, where

$$
\theta_{\max} = \min_{ij} \left\{ -\frac{x_{ij}}{d_{ij}} : d_{ij} < 0 \right\}
$$

and

$$
\theta_{\min} = -\min_{ij} \left\{ \frac{x_{ij}}{d_{ij}} : d_{ij} > 0 \right\}.
$$
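The feasible step interval for the line search can be computed entrywise; a minimal sketch, with made-up numbers in the test, assuming $X \ge 0$ and a given direction $d$:

```python
import numpy as np

def theta_bounds(X, d):
    """Largest interval [theta_min, theta_max] keeping X + theta*d >= 0 entrywise:
    theta_max = min{ -x_ij/d_ij : d_ij < 0 }  (infinite if d has no negative entry),
    theta_min = -min{ x_ij/d_ij : d_ij > 0 }  (-infinite if d has no positive entry)."""
    neg, pos = d < 0, d > 0
    theta_max = float(np.min(-X[neg] / d[neg])) if np.any(neg) else np.inf
    theta_min = float(-np.min(X[pos] / d[pos])) if np.any(pos) else -np.inf
    return theta_min, theta_max
```

The bound with the negative entries of $d$ caps how far one may move forward before some entry of $X + \theta d$ hits zero, and symmetrically for backward steps.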
Then problem Eq. (10) is equivalent to the following problem:

minimize $H(X + \theta d)$ subject to $\theta_{\min} \le \theta \le \theta_{\max}$.

$(1 - \eta) w \ge \alpha (1 - \eta) e$, which leads to
When $d_k = f(X_k) - X_k$ is sufficiently small, $E_k = X_k - R$ is tiny, so that

$$
T(E_k) = f(R + E_k) - f(R) - \sum_{v=1}^{\infty} \sum_{j=0}^{v-1} R^j E_k R^{v-1-j} A_v,
$$

the truncation error of the first-order expansion of $f(R + E_k)$ at $R$, is of order $O(\|E_k\|^2)$ and is small compared to $E_k$. With an assumption on the bound of $T(E_k)$, we present the perturbation result.

Theorem 2 Let $E_k = X_k - R$ and $uR = \eta u$. Let $\beta = \max_j u_j$ and $\gamma = \min_j u_j$. If $\|$

For $\delta > 1$, we interpret the factor $\delta$ as a multiplicative reward, earned at every step of the Markov process. Probabilistic arguments can now be recovered, as described in the text following equation (4.3). The expected reward matrices, $G_k(\delta)$ and $U_k(\delta)$, have the following probabilistic interpretation. Let the $(i,j)$th element of the matrix $G_k^{(n)}$ be the probability that the process first visits level $k-1$ at time point $n$, and does so in phase $j$, given that it starts in phase $i$ of level $k$ at time point 0. Then, $G_k(\delta)$ is $\sum_{n=1}^{\infty} G_k^{(n)} \delta^n$.
Similarly, let the (i, j)th element
of the matrix $U_k^{(n)}$ be the probability that the process first returns to level $k$ at time point $n$, and does so in phase $j$, given that it starts in phase $i$ of level $k$ at time point 0. Then, $U_k(\delta)$ is $\sum_{n=1}^{\infty} U_k^{(n)} \delta^n$. For more details on these ideas see the papers by Bean, Pollett, Taylor and co-authors 13,14,9 and the paper by Ramaswami 15. It then follows from the physical interpretations that
$$
N_{kk}(\delta) = \sum_{n=0}^{\infty} [U_k(\delta)]^n, \qquad k \ge 1, \qquad (4.7)
$$

$$
N_{mk}(\delta) = \left( \prod_{l=k+1}^{m} G_l(\delta) \right) N_{kk}(\delta), \qquad m > k \ge 1, \qquad (4.8)
$$
where throughout we assume that an empty product of matrices is the identity matrix of appropriate order and that $\prod_{j=k}^{m} G_j(\delta)$ is interpreted as $G_m(\delta) G_{m-1}(\delta) \cdots G_{k+1}(\delta) G_k(\delta)$. It then follows 9 that the convergence radius of $T$ is

$$
\beta = \sup\{\delta \ge 1 : \chi(U_1(\delta)) < 1\}. \qquad (4.9)
$$

The supremum of the set of values $\delta$ for which $N_{11}(\delta)$ exists is also $\beta$. Note that the supremum of the set of values $\delta$ for which a nonnegative solution for the family of matrices $\{G_k(\delta), k \ge 2\}$, or equivalently the family $\{U_k(\delta), k \ge 1\}$, exists is at least as large as $\beta$ and in some circumstances is strictly greater than $\beta$.

4.3 Determination of $\eta$
Let $\alpha \circ G(\delta)$ denote the series

$$
\alpha \circ G(\delta) = \sum_{k=1}^{\infty} \alpha_k \left( \prod_{j=2}^{k} G_j(\delta) \right),
$$

which is infinite for all $\delta > \gamma$ and finite for all $\delta < \gamma$, and so $1/\eta = \gamma$. ∎

Therefore, the decay rate of the phase-type distribution $(T, \alpha)$ is given by the maximum of

1. the reciprocal of $\beta$, the convergence radius of the transition matrix $T$, and

2. the reciprocal of $\gamma$, where $\gamma$ is the convergence radius of the matrix-series $\alpha \circ G(\delta)$.

Here, both the convergence properties of the initial probability vector $\alpha$ and the properties of the transition matrix $T$ combine to determine the decay rate of the phase-type distribution $(T, \alpha)$.

4.4 Single Unifying Condition
These two conditions to describe the decay rate can be summarised neatly in a single condition. Bean, Pollett and Taylor 9 state that

$$
G_1(\delta) = \delta \sum_{n=0}^{\infty} [U_1(\delta)]^n A_2 = \delta N_{11}(\delta) A_2,
$$

and so we can rewrite equation (4.11) as

$$
P(\delta) = \sum_{k=1}^{\infty} \alpha_k \left( \prod_{j=1}^{k} G_j(\delta) \right) e + \alpha_0. \qquad (4.13)
$$
The decay rate, $\eta$, of the phase-type distribution $(T, \alpha)$ is therefore given by the reciprocal of the convergence radius of this function. The probabilistic interpretation of this expression is, of course, exactly that of $P(\delta)$: it represents the expected reward for visiting the absorbing state 0, given that the process starts according to the initial probability distribution $\alpha$ at time point 0, where a visit to the absorbing state at time $n$ earns reward $\delta^n$. We then require the convergence radius for this expected reward, and $\eta$ is its reciprocal. The reason that we do not work with this expression directly in the above derivation is that $N_{11}(\beta)$ can be finite or infinite. Thus, detecting (numerically
or analytically) the convergence radius is a hard task. Instead it is much easier to explicitly identify $\beta$ and then work with a series in which all the terms themselves are guaranteed to be finite, in other words to work with $\alpha \circ G(\delta)$. To identify $\beta$ it is easier to use an expression that is strictly less than one if and only if $N_{11}(\delta)$ is finite. This is exactly what we have done above by defining $\beta$ in terms of $\chi(U_1(\delta))$ as in equation (4.9).
5 Processes with Block Upper-Triangular Matrices
Having developed the results in the previous section, we can also apply them to the situation where the transition matrix $T$ has block upper-triangular form. In other words, the transition matrix represents a process of M/G/1 type. The only difference in the mathematical statements is that the matrix family $\{G_k(\delta), k \ge 1\}$ is now defined as the minimal nonnegative solutions to the family of equations

$$
G_k(\delta) = \delta \sum_{n=0}^{\infty} A_n^{(k)} \left( \prod_{l=k}^{n+k-1} G_l(\delta) \right), \qquad k \ge 1,
$$
and the matrix family $\{U_k(\delta), k \ge 1\}$ is now given by

$$
U_k(\delta) = \delta \sum_{n=1}^{\infty} A_n^{(k)} \left( \prod_{l=k+1}^{n+k-1} G_l(\delta) \right), \qquad k \ge 1.
$$
6 Transition Matrices that represent Level Independent QBDs
In this section we consider the special case of Section 4 where the behaviour of the process is independent of the level in which the process currently resides. Thus $A_0^{(k)} = A_0$, $A_1^{(k)} = A_1$ and $A_2^{(k)} = A_2$, for all $k \ge 1$. In this situation the analysis simplifies quite considerably, as we simply require the one matrix $G(\delta)$ that is the minimal nonnegative solution to the matrix-quadratic equation

$$
G = \delta \left[ A_2 + A_1 G + A_0 G^2 \right], \qquad (6.1)
$$

and the one matrix

$$
U(\delta) = \delta \left[ A_1 + A_0 G \right]. \qquad (6.2)
$$

Some consequences of this include:
• The composite function $\alpha \circ G(\delta)$ and the generating function $P(\delta)$ now denote the simpler matrix-series

$$
\alpha \circ G(\delta) = \sum_{k=1}^{\infty} \alpha_k G(\delta)^{k-1}, \qquad (6.3)
$$

and

$$
P(\delta) = \sum_{k=1}^{\infty} \alpha_k G(\delta)^k e + \alpha_0. \qquad (6.4)
$$
The remaining arguments remain as in Section 4.
7 Transition Matrices that represent Level Independent Birth-and-Death Processes
In this section we consider the special case of Section 6 where the phase space at each level is a singleton, and so the matrices $A_0$, $A_1$, $A_2$ are replaced by the scalars $a_0$, $a_1$ and $a_2$. In this special case, other methods of progress are possible 16,17, but we find it more convenient to continue by specialising the results in the previous sections. In this situation the analysis simplifies quite considerably as we no longer need to deal with matrices. Consequently, the scalar $g(\delta)$ that is the minimal nonnegative solution to the quadratic equation

$$
g = \delta \left[ a_2 + a_1 g + a_0 g^2 \right], \qquad (7.1)
$$

can be deduced analytically to be

$$
g(\delta) = \frac{(1 - \delta a_1) - \sqrt{(1 - \delta a_1)^2 - 4 \delta^2 a_0 a_2}}{2 \delta a_0}.
$$