Proceedings of the 5th Ritsumeikan International Symposium
STOCHASTIC PROCESSES AND APPLICATIONS TO MATHEMATICAL FINANCE
Editors: Jiro Akahori • Shigeyoshi Ogawa • Shinzo Watanabe
Proceedings of the 5th Ritsumeikan International Symposium
STOCHASTIC PROCESSES AND APPLICATIONS TO MATHEMATICAL FINANCE
Ritsumeikan University, Japan, 3-6 March 2005
Editors: Jiro Akahori, Shigeyoshi Ogawa, Shinzo Watanabe (Ritsumeikan University, Japan)
World Scientific: New Jersey • London • Singapore • Beijing • Shanghai • Hong Kong • Taipei • Chennai
Published by World Scientific Publishing Co. Pte. Ltd. 5 Toh Tuck Link, Singapore 596224 USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
STOCHASTIC PROCESSES AND APPLICATIONS TO MATHEMATICAL FINANCE Proceedings of the 5th Ritsumeikan International Symposium Copyright © 2006 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 981-256-519-1
Printed in Singapore by World Scientific Printers (S) Pte Ltd
PREFACE

The international colloquium on Stochastic Processes and Applications to Mathematical Finance was held at the Biwako-Kusatsu Campus (BKC) of Ritsumeikan University, March 3-6, 2005. Counting from the first symposium under the same title, held in 2001, this colloquium was the fifth in the series. The colloquium was organized under the joint auspices of the Research Center for Finance and the Department of Mathematical Sciences of Ritsumeikan University, and financially supported by MEXT (Ministry of Education, Culture, Sports, Science and Technology) of Japan, the Research Organization of Social Sciences (BKC), Ritsumeikan University, and the Department of Mathematical Sciences, Ritsumeikan University. The aim of this research project has been to hold assemblies of those interested in the applications of the theory of stochastic processes and stochastic analysis to financial problems, to which several eminent specialists as well as active young researchers have been jointly invited to give lectures. In organizing this colloquium, the committee chaired by Shigeyoshi Ogawa aimed to run it as a winter school, especially for those younger researchers who intend to join in, or have just begun, research on the relevant subjects. For this reason we asked some of the invited speakers to give introductory talks composed of two or three unified lectures on the same theme (cf. the program cited below). As a whole we had about eighty participants and nine invited lecturers. The present volume is the proceedings of this colloquium, based on those invited lectures. We, the members of the editorial committee listed below, express our deep gratitude to those who contributed their work to this proceedings and to those who kindly helped us in refereeing it.
We express our cordial thanks to Professors Toshio Yamada and Keisuke Hara of the Department of Mathematical Sciences, Ritsumeikan University, for their kind assistance in editing this volume. We also thank Mr. Satoshi Kanai for his work in editing the TeX files, and Ms. Chelsea Chin of World Scientific Publishing Co. for her kind and generous assistance in publishing this proceedings.

December 2005, Ritsumeikan University (BKC)
Jiro Akahori
Shigeyoshi Ogawa
Shinzo Watanabe
PROGRAM

March 3 (Thursday)
9:50-10:00 Opening speech by Shigeyoshi Ogawa (Ritsumeikan University)
10:00-10:50 Shinzo Watanabe (Ritsumeikan University, Kusatsu): Martingale representation and chaos expansion I
11:10-11:50 Monique Jeanblanc (Université d'Évry Val d'Essonne): Hedging defaultable claims I (joint work with T. Bielecki and M. Rutkowski)
12:00-13:30 Lunch
13:30-14:20 Paul Malliavin (Académie des Sciences, Paris): Stochastic calculus of variations in mathematical finance I
14:30-15:20 Yoshio Miyahara (Nagoya City University): Geometric Lévy process models in finance I
16:00-16:50 Arturo Kohatsu-Higa (Universitat Pompeu Fabra, Barcelona): Insider modelling in financial market I
17:30- Welcome party

March 4 (Friday)
10:00-10:50 Paul Malliavin (Académie des Sciences, Paris): Stochastic calculus of variations in mathematical finance II
11:10-11:50 Shinzo Watanabe (Ritsumeikan University, Kusatsu): Martingale representation and chaos expansion II
12:00-13:30 Lunch
13:30-14:20 Monique Jeanblanc (Université d'Évry Val d'Essonne): Hedging defaultable claims II (joint work with T. Bielecki and M. Rutkowski)
14:30-15:20 Arturo Kohatsu-Higa (Universitat Pompeu Fabra, Barcelona): Insider modelling in financial market II
16:00-16:50 Hideo Nagai (Osaka University): A family of stopping problems of certain multiplicative functionals and utility maximization with transaction costs
March 5 (Saturday)
10:00-10:50 Monique Jeanblanc (Université d'Évry Val d'Essonne): Hedging defaultable claims III (joint work with T. Bielecki and M. Rutkowski)
11:10-11:50 Paul Malliavin (Académie des Sciences, Paris): Stochastic calculus of variations in mathematical finance III
12:00-13:30 Lunch
13:30-14:20 Shinzo Watanabe (Ritsumeikan University, Kusatsu): Martingale representation and chaos expansion III
14:30-15:20 Makoto Yamazato (University of the Ryukyus): Lévy processes in mathematical finance
15:30-16:00 Break
16:00-16:50 Yoshio Miyahara (Nagoya City University): Geometric Lévy process models in finance II
18:30- Reception (at Kusatsu Estopia Hotel)

March 6 (Sunday)
10:00-10:50 Toshio Yamada (Ritsumeikan University, Kusatsu): On stochastic differential equations driven by symmetric stable processes (joint work with H. Hashimoto and T. Tsuchiya)
11:00-12:20 Short communications:
1. Romuald Elie (Centre de Recherche en Économie et Statistique, France): Optimal Greek weights by kernel estimation
2. Kiyoshi Kawazu (Yamaguchi University): The recurrence of product stochastic processes in random environment
CONTENTS

Preface v

Program vi

Harmonic Analysis Methods for Nonparametric Estimation of Volatility: Theory and Applications
E. Barucci, P. Malliavin and M. E. Mancino 1

Hedging of Credit Derivatives in Models with Totally Unexpected Default
T. R. Bielecki, M. Jeanblanc and M. Rutkowski 35

A Large Trader-Insider Model
A. Kohatsu-Higa and A. Sulem 101

[GLP & MEMM] Pricing Models and Related Problems
Y. Miyahara 125

Topics Related to Gamma Processes
M. Yamazato 157

On Stochastic Differential Equations Driven by Symmetric Stable Processes of Index α
H. Hashimoto, T. Tsuchiya and T. Yamada 183

Martingale Representation Theorem and Chaos Expansion
S. Watanabe 195
Harmonic Analysis Methods for Nonparametric Estimation of Volatility: Theory and Applications

Emilio Barucci¹, Paul Malliavin² and Maria Elvira Mancino³

¹ Dipartimento di Matematica, Politecnico di Milano, Italy, email: [email protected]
² 10 rue Saint Louis en l'Isle, 75004 Paris, France, email: [email protected]
³ DIMAD, Università di Firenze, Italy, email: [email protected]

Key words: Volatility, Fourier analysis, time series, hedging.
1. Introduction

We proposed in [41] a method to compute the volatility of a semimartingale based on Fourier series (the Fourier method). The method allows one to compute both the instantaneous volatility and the volatility over a time interval (integrated volatility). It is well suited to high frequency data and therefore to computing the volatility of financial time series. Since the method was proposed, it has been extended and applied in several directions; this paper reviews these contributions. The benchmark for computing the volatility of a financial time series over a time interval with high frequency data (e.g. daily volatility) is provided by the cumulative squared intraday returns (realized volatility), see [2, 11]. In the limit, as the time interval between two consecutive observations converges to zero, the realized volatility converges to the quadratic variation of the process, and its derivative provides the instantaneous volatility of the process, i.e. the coefficient in front of the Brownian motion in the case of a semimartingale. The major novelty of the Fourier method is that it reconstructs the instantaneous volatility as a series expansion, with coefficients gathered from the Fourier coefficients of the price variation, by using an integration procedure instead of a differentiation one. The key point of the Fourier approach is the realization of the volatility as a function of time; this makes it possible to iterate the volatility functor and, for instance, to compute the volatility of the volatility function. As we will show below, this feature is useful in several applications: a double application of
the volatility functor leads to an effective computation of the leverage effect, which is a first order effect; by iterating the volatility functor three times we obtain the feedback volatility rate effect. The contribution is organized as follows. Section 2 presents the Fourier methodology and the main theoretical results: we present the method to compute the volatility both in a univariate and in a multivariate setting, and we show consistency of the estimator and a central limit theorem. In Section 3 we show how the method has been implemented to compute the volatility of financial time series, and we compare its performance to the realized volatility. In Section 4 some applications of the Fourier methodology are presented in order to illustrate the potential of the method. In Section 5 we obtain an estimator of the instantaneous volatility using the Laplace transform. In Section 6 we generalize the Fourier methodology to obtain a non-parametric estimation of the Heath-Jarrow-Morton generator for the interest rate curve and of the Lie brackets of the diffusion driving vectors. A key fact is that all the infinitesimal generators of risk-free measures have a drift which is completely determined by their second order terms: it vanishes for assets in a Black-Scholes type model, and for the interest rate curve the drift is fully determined by the HJM model. The second order terms can be computed in real time, and model-free, by a volatility analysis made on a single trajectory of the market. Consequently, pathwise volatility analysis gives access to a pathwise, model-free computation of the infinitesimal generator of the risk-free measure; by iterating this procedure the Greek Delta can be computed pathwise and model-free; in the same spirit, the hypoelliptic structure underlying the HJM infinitesimal generator can be computed pathwise and model-free.

2. Fourier Methods for Volatility Computation

Let p be the asset price process. We make the following assumption: p(t) is a continuous Brownian semimartingale satisfying the stochastic differential equation

(H)  dp(t) = σ(t) dW(t) + b(t) dt,

where W is a Brownian motion on a filtered probability space (Ω, (F_t)_{t∈[0,T]}, P), and σ and b are stochastic processes such that

E[ ∫₀^T σ⁴(t) dt ] < ∞,  E[ ∫₀^T b²(t) dt ] < ∞.

We suppose that σ is adapted and that b is not necessarily adapted.
The semimartingale satisfying hypothesis (H) is the most familiar semimartingale in econometrics and in finance; note that this class includes stochastic volatility models. The Fourier method for estimating the instantaneous (and integrated) volatility is based on an exact mathematical formula relating the Fourier transform of the price process p(t) to the Fourier transform of the volatility process σ²(t). This identity is obtained in [43] and is stated in Theorem 2.1. Different methods addressing the problem of estimating the instantaneous volatility have been proposed in [25, 20, 46]; they are based on the quadratic variation formula and use a double asymptotic in order to perform both the numerical derivative and the approximating procedure. Before presenting the result, we recall some definitions from harmonic analysis (see for instance [38]). By changing the origin of time and rescaling the unit of time we can always reduce to the case where the time window [0, T] becomes [0, 2π]. Given a function φ on the circle S¹, its Fourier transform is defined on the group of integers ℤ by the formula

F(φ)(k) := (1/2π) ∫₀^{2π} φ(t) e^{−ikt} dt.

In terms of the Fourier coefficients a_s(dp^i), b_s(dp^i) of the price increments, the coefficients of the cross-volatility Σ^{ij} are obtained as

(23)  a_k(Σ^{ij}) = lim_{N→∞} π/(N + 1 − n₀) Σ_{s=n₀}^{N} ( a_s(dp^i) a_{s+k}(dp^j) + b_s(dp^i) b_{s+k}(dp^j) ),

      b_k(Σ^{ij}) = lim_{N→∞} π/(N + 1 − n₀) Σ_{s=n₀}^{N} ( a_s(dp^i) b_{s+k}(dp^j) + a_s(dp^j) b_{s+k}(dp^i) ).

Finally, using the Fourier-Fejér inversion formula we can reconstruct Σ^{ij}(t) from its Fourier coefficients:

(24)  Σ^{ij}(t) = lim_{N→∞} Σ_N^{ij}(t), where for any t ∈ (0, 2π)

      Σ_N^{ij}(t) = Σ_{k=0}^{N} (1 − k/N) ( a_k(Σ^{ij}) cos(kt) + b_k(Σ^{ij}) sin(kt) ).
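For concreteness, the reconstruction (23)-(24) can be sketched numerically in the univariate case (i = j). The following is an illustrative implementation, not the authors' code; the even spacing of the observations, the number of modes and the constant-volatility test path are assumptions made here:

```python
import numpy as np

def fourier_fejer_vol(t, p, N):
    """Instantaneous variance on [0, 2*pi) from log-prices p observed at
    times t, via the Fourier coefficients of dp, the convolution formula
    (23) with n0 = 1 and i = j, and the Fejer inversion (24)."""
    dp = np.diff(p)
    tm = t[:-1]
    ks = np.arange(1, 2 * N + 1)[:, None]
    # a_k(dp) = (1/pi) int cos(kt) dp as Riemann-Stieltjes sums, k = 1..2N
    a = (np.cos(ks * tm) @ dp) / np.pi
    b = (np.sin(ks * tm) @ dp) / np.pi
    s = np.arange(1, N + 1)
    # coefficients of sigma^2 from products of coefficients of dp
    a0 = np.pi / N * np.sum(a[:N] ** 2 + b[:N] ** 2)
    ak = np.array([np.pi / N * np.sum(a[s - 1] * a[s + k - 1] + b[s - 1] * b[s + k - 1])
                   for k in range(1, N + 1)])
    bk = np.array([np.pi / N * np.sum(a[s - 1] * b[s + k - 1] - b[s - 1] * a[s + k - 1])
                   for k in range(1, N + 1)])
    # Fourier-Fejer inversion on a uniform grid of the rescaled time window
    grid = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
    w = 1.0 - np.arange(1, N + 1) / N          # Fejer weights (1 - k/N)
    kv = np.arange(1, N + 1)[:, None]
    sig2 = a0 / 2 + (w * ak) @ np.cos(kv * grid) + (w * bk) @ np.sin(kv * grid)
    return grid, sig2

# sanity check on a constant-volatility Brownian path, sigma = 0.5
rng = np.random.default_rng(0)
n = 20_000
t = np.linspace(0.0, 2 * np.pi, n + 1)
dt = t[1] - t[0]
p = np.concatenate([[0.0], np.cumsum(0.5 * np.sqrt(dt) * rng.standard_normal(n))])
grid, sig2 = fourier_fejer_vol(t, p, N=100)
iv_hat = sig2.mean() * 2 * np.pi   # estimate of the integrated variance on [0, 2*pi]
```

Averaging the reconstructed σ²(t) over the window recovers the integrated variance, here 0.25 × 2π up to Monte Carlo error.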
3. Time Series Analysis

This section has two main goals: to show how the Fourier method can be implemented to compute both volatility and cross-volatility, and to compare it with other methods proposed in the literature to compute the integrated volatility.
3.1 Volatility computation

The efficiency of the Fourier method in computing the integrated volatility of a stochastic process representing the asset price has been analyzed in several papers, see [15, 16, 33, 50, 48]. To implement the method, [15, 16] proceed as follows. Given a time series of N observations (t_i, p(t_i)), i = 1, …, N, the data are compressed into the interval [0, 2π] and the integrals are computed through integration by parts:

(25)  a_k(dp) = (1/π)( p(2π) − p(0) ) + (k/π) ∫₀^{2π} sin(kt) p(t) dt.

To compute the integrals, we need an assumption on how the data are connected. As a matter of fact, high frequency data are not equispaced, so there is no constant time length between two consecutive observations. To handle this, a time grid with a fixed time interval is chosen (e.g., 30 seconds, 1 minute, 5 minutes): t̃_k, k = 1, 2, …. We may not have an observation at a point of the grid. To cope with this, two methods have been proposed in the high frequency data literature: interpolation and imputation of data. In the first case p(t_i) and p(t_{i+1}) are connected through a straight line: if t̃_k ∈ [t_i, t_{i+1}), then p(t̃_k) = p(t_i) + (t̃_k − t_i)( p(t_{i+1}) − p(t_i) )/( t_{i+1} − t_i ). According to the imputation method, instead, p(t̃_k) = p(t_i) (piecewise constant). In [15, 16] the imputation method has been employed; the integral in (25) over the interval [t_i, t_{i+1}] then becomes

(26)  (k/π) ∫_{t_i}^{t_{i+1}} sin(kt) p(t) dt = p(t_i) (k/π) ∫_{t_i}^{t_{i+1}} sin(kt) dt = (1/π) p(t_i) ( cos(k t_i) − cos(k t_{i+1}) ),

thus avoiding the multiplication by k, which amplifies cancellation errors when k becomes large. The methodology has been applied to compute volatility in a standard GARCH setting. Let p(t) = log S(t), where S(t) is a generic asset price, and let r_t = p(t) − p(t − 1) be the logarithmic return. It is assumed that the asset price follows the continuous-time GARCH(1,1) model proposed in [47]:
(27)  dp(t) = σ(t) dW₁(t),
      dσ²(t) = θ( ω − σ²(t) ) dt + √(2λθ) σ²(t) dW₂(t),

where W₁ and W₂ are independent Brownian motions. This model is closed under temporal aggregation in a weak sense, see [22], and its discrete-time analogue is given by

(28)  r_t = σ_t ε_t,  σ²_{t+1} = ψ + α r_t² + β σ_t²,
where the ε_t are i.i.d. standard Normal random variables. The exact relation between (ψ, α, β) and (θ, ω, λ) is derived in [22]. The task is to assess the capability of the Fourier method to reproduce the theoretical volatility of the GARCH(1,1) model. The analysis is based on Monte Carlo simulations. High frequency, unevenly sampled observations have been generated as follows: one day of trading [0, 1] has been simulated by discretizing (27) with a time step of one second, for a total of 86,400 observations a day. Observation times have then been extracted in such a way that the time differences are drawn from an exponential distribution with mean τ = 45 seconds, which corresponds to the average value observed for many financial time series. As a result, we have a dataset {(t_k, p(t_k)), k = 1, …, N} with the t_k unevenly sampled. The most common way to compute the integrated volatility is to exploit the quadratic variation, and therefore to compute ∫₀¹ σ²(τ) dτ as the sum of squared intra-day returns (realized volatility); see [36] for the relation with the Fourier method. Given a grid with m points (1/m, 2/m, …, 1), the volatility in a day is computed as

(29)  σ̂² = Σ_{j=1}^{m} ( p(j/m) − p((j−1)/m) )².

Theoretically, thanks to the Wiener theorem, by increasing the frequency of observations an arbitrary precision in the estimate of the integrated volatility can be reached. Note that the Fourier method uses all the observations, whereas the sum of squared intraday returns uses only a fraction of them, i.e., for low m some observations are lost. In most papers estimating volatility with high frequency data, e.g. [1], (29) is computed with m = 288, corresponding to five-minute returns. In the simulation setting, m = 144 (ten-minute returns) and m = 720 (two-minute returns) are also considered, and the interpolation technique is employed, as in the literature. The performance of the Fourier method is compared to that of (29) with m = 144, 288, 720 through the relative error

( ∫₀¹ σ²(s) ds − σ̂² ) / ∫₀¹ σ²(s) ds

and its root mean square; the parameters are normalized so that ψ/(1 − α − β) = 1. All the values are computed with 10,000 "daily" replications.
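The Monte Carlo design just described can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the number of Fourier modes K, the truncation of negative Euler values of the variance, and the tolerances are choices made here:

```python
import numpy as np

# Euler discretization of (27) at one-second resolution over one day [0, 1],
# exponential sampling of observation times (mean 45 s), realized volatility
# (29), and the Fourier estimate of the integrated variance pi * a0(sigma^2),
# with the coefficients of dp computed via the cancellation-free form (26).
rng = np.random.default_rng(1)
theta, omega, lam = 0.035, 0.636, 0.296   # illustrative (theta, omega, lambda)
n = 86_400
dt = 1.0 / n
sig2 = np.empty(n + 1); sig2[0] = omega
p = np.empty(n + 1); p[0] = 0.0
z = rng.standard_normal((2, n))
for i in range(n):
    p[i + 1] = p[i] + np.sqrt(sig2[i] * dt) * z[0, i]
    step = theta * (omega - sig2[i]) * dt \
        + np.sqrt(2 * lam * theta) * sig2[i] * np.sqrt(dt) * z[1, i]
    sig2[i + 1] = max(sig2[i] + step, 1e-12)   # Euler can go negative; truncate
iv_true = np.sum(sig2[:-1]) * dt               # integrated variance of the day

# unevenly sampled observations, data compressed into [0, 2*pi]
sec = np.cumsum(rng.exponential(45.0, 4000))
idx = np.unique(np.concatenate([[0], sec[sec < n].astype(int), [n]]))
t_obs, p_obs = 2 * np.pi * idx / n, p[idx]

# realized volatility (29) on the five-minute grid, m = 288
rv = np.sum(np.diff(p[np.linspace(0, n, 289).astype(int)]) ** 2)

# Fourier coefficients a_k(dp), b_k(dp) with K modes, as in (25)-(26)
K = 150
ks = np.arange(1, K + 1)[:, None]
a = ((p_obs[-1] - p_obs[0])
     + (p_obs[:-1] * (np.cos(ks * t_obs[:-1]) - np.cos(ks * t_obs[1:]))).sum(axis=1)) / np.pi
b = (p_obs[:-1] * (np.sin(ks * t_obs[:-1]) - np.sin(ks * t_obs[1:]))).sum(axis=1) / np.pi
iv_fourier = np.pi * (np.pi / K) * np.sum(a ** 2 + b ** 2)
```

Both estimates should approximate the simulated integrated variance; the Fourier estimate uses every uneven observation, while (29) uses only the grid points.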
[Table 1: standard deviation and R² of the estimates for the realized volatility with two-minute (2′) and five-minute (5′) returns and for the Fourier estimator (F), for λ ∈ {0.05, 0.1, 0.15, 0.2, 0.25} and α ∈ {0.5, 0.6, 0.7, 0.8, 0.9}.]
Table 2. R² for the two time series.

Estimator            DM-$    Yen-$
Realized volatility  0.400   0.128
Fourier              0.470   0.143
observations (from October 1st, 1992 to September 30th, 1993), i.e., the dataset analyzed in [1]. The dataset consists of 1,466,944 quotes for the Deutsche Mark-US dollar and 567,758 quotes for the Yen-US dollar exchange rate. The performance of the GARCH(1,1) model has been evaluated when the integrated volatility is computed according to the Fourier method. The parameters of the model are those estimated in [1]. Table 2 provides the corresponding R². We observe that the GARCH model forecasts well when the Fourier method is employed to compute the integrated volatility; its performance is better than that associated with the sum of squared intraday returns as an integrated volatility measure. On the performance of volatility
models and volatility computation see also [31]. As the Fourier method provides an accurate estimate of the volatility, we can handle the volatility as an observable process, whereas in the literature time-varying volatility has been handled as a latent process, e.g. via the GARCH process. In [13], this idea has been applied to compute the volatility of the overnight interest rate in the Italian money market and to test for the martingale hypothesis, correcting for heteroskedasticity with a good proxy of the volatility and thus avoiding the estimation problems connected with the use of a GARCH model. In [17] an autoregressive process for the volatility, estimated according to the Fourier method, is used to forecast one-day return volatility and to define the value at risk threshold; the method forecasts well, better than the GARCH(1,1) model and the exponential smoothing proposed by RiskMetrics. In recent years other volatility models have been proposed to cope with empirical regularities observed for the volatility of financial time series. Among them are models with long memory in the volatility process, which consider a fractional Brownian motion W₂ of order d, independent of W₁ in (27), i.e., W₂(t) = ∫₀^t ( (t − s)^d / Γ(1 + d) ) dW(s). To capture sharp increases in volatility, jumps in the volatility process have been introduced by adding a Poisson process in (27). In these more general models, neither the sum of squared intraday returns nor the Fourier method provides a consistent estimator of the volatility. In [48] the bias and the root mean square error of the Fourier method, of the cumulative squared intraday returns and of a wavelet estimator are compared through Monte Carlo simulations; the Fourier method provides the lowest bias and root mean square error.
They also compare the three methods when a bid-ask bounce effect is inserted, i.e., as random buy/sell orders arrive in the market there is a liquidity effect on the price, creating spurious serial correlation in returns and volatility. In this case neither the realized volatility nor the Fourier method is consistent for the integrated volatility; as far as bias and root mean square error are concerned, the Fourier method performs better than the wavelet and the cumulative squared intraday returns methods. On the comparison between the Fourier method and the wavelet method see also [33].

3.2 Computation of cross volatility

In this section we analyze the performance of the method in the bivariate case, using Monte Carlo simulations of high frequency asset prices as studied in [44]. As in [51], we simulate two correlated asset price diffusions with the
[Figure 1. Distribution of ( ∫₀¹ σ²(t) dt − σ̂² ) / ∫₀¹ σ²(t) dt, where σ̂² are different estimators of the integrated volatility: (a) estimator (29) with m = 144; (b) estimator (29) with m = 288 (mean 0.10, std 0.078); (c) estimator (29) with m = 720 (mean 0.239, std 0.045); (d) Fourier estimator. The distribution is computed with 10,000 "daily" replications.]
bivariate continuous-time GARCH(1,1) model:

(31)  dp₁(t) = σ₁(t) dW₁(t),  dp₂(t) = σ₂(t) dW₂(t),
      dσ₁²(t) = θ₁[ ω₁ − σ₁²(t) ] dt + √(2λ₁θ₁) σ₁²(t) dW₃(t),
      dσ₂²(t) = θ₂[ ω₂ − σ₂²(t) ] dt + √(2λ₂θ₂) σ₂²(t) dW₄(t),
      corr(dW₁, dW₂) = ρ,

with all the other correlations between the Brownian motions set to zero. The choice of this particular model comes from the fact that it is the continuous-time limit of the very popular GARCH(1,1) model, and it has been studied extensively in the literature, e.g. [28]. We use the parameter values estimated by [1] on foreign exchange rates, i.e. θ₁ = 0.035, ω₁ = 0.636, λ₁ = 0.296, θ₂ = 0.054, ω₂ = 0.476, λ₂ = 0.480. We analyze two mirror cases for the correlation coefficient: ρ = 0.35 and ρ = −0.35. To get a representation of high-frequency tick-by-tick data, after discretizing (31) by a first-order Euler scheme with a time step
Table 3. Average correlation measurement on 10,000 Monte Carlo replications of the model (31), for the two generated values of the correlation, ρ = 0.35 and ρ = −0.35. The variance-covariance matrix is computed via the Fourier estimator and via the realized volatility estimator (32). L.I. means linear interpolation, P.T. previous-tick interpolation. Standard deviations of the in-sample measurements are reported in the columns named Std.

                      ρ = 0.35            ρ = −0.35
Estimator            Measured   Std      Measured   Std
Fourier               0.350    0.039     −0.349    0.039
Realized 5′, L.I.     0.204    0.058     −0.203    0.055
Realized 5′, P.T.     0.181    0.060     −0.180    0.058
Realized 15′, L.I.    0.338    0.090     −0.337    0.090
Realized 15′, P.T.    0.329    0.091     −0.328    0.092
Realized 30′, L.I.    0.345    0.127     −0.344    0.126
Realized 30′, P.T.    0.342    0.127     −0.341    0.126
of one second, we extract observation times by drawing the durations from exponential distributions with means 30 seconds and 60 seconds respectively. Observation times are drawn independently for the two time series. After simulating the process (31), we compute on it the daily (86,400 seconds, corresponding to 24 hours of trading, as for currencies) variance-covariance matrix according to the Fourier theory and according to the realized volatility measure of [3], given by

(32)  Σ̂₁₂ = Σ_{j=1}^{m} ( p₁(j/m) − p₁((j−1)/m) ) ( p₂(j/m) − p₂((j−1)/m) ).
The choice of m in (32) comes from a tradeoff between increasing precision and cutting out microstructure distortions. A typical choice is m = 288, corresponding to five-minute returns. As pointed out above, to obtain an equally spaced time series we can rely upon linear interpolation (L.I.) or previous-tick interpolation (P.T.). Both methods have been applied for m = 288, 96, 48 (corresponding respectively to 5-, 15- and 30-minute returns) when measuring correlations in the Monte Carlo experiments. Table 3 shows the results. First of all, we notice that the Fourier estimator performs considerably better than the realized volatility, which is biased toward zero. The bias in the correlation measurement of realized volatility becomes more severe as the sampling frequency is increased. For the five-minute estimator with the previous-tick interpolation, we get a mean value
of 0.181 (−0.180), which is quite far from the true value of 0.35 (−0.35); this bias is entirely due to the non-synchronicity of quotes. Realized volatility with linearly interpolated returns is closer to the right value, but this is because of the downward bias in the volatility measurement due to the linear interpolation, documented in [15, 16]: the spurious positive serial correlation induced by the linear interpolation technique lowers the volatility estimates. Since variances are spuriously measured to be lower, correlations turn out to be spuriously higher, thus partly compensating the bias due to non-synchronicity. This is also true, but to a much lower extent, for the 15-minute and 30-minute realized volatility estimators. The precision of the Fourier estimator, as measured by the standard deviation of measurement errors across Monte Carlo replications, is always better than that of the realized volatility estimator. We implemented the Fourier estimator with N = 500 coefficients for the first time series (30 seconds average spacing), N = 160 coefficients for the second (60 seconds average spacing), and N = 160 coefficients for the computation of the covariance. Increasing the number of coefficients would increase the precision of the variance measurement but not of the covariance measurement, because of the Epps effect, see [51]. On the other hand, the gain in precision of the realized covariance measurement obtained when increasing the sampling frequency is cancelled out by the bias.
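The synchronization step behind this comparison can be illustrated with a stripped-down simulation (constant volatilities instead of model (31), so purely illustrative; the function names, seeds and tolerances are ours):

```python
import numpy as np

# Two correlated Brownian log-price paths at one-second resolution, observed
# at asynchronous times and aligned by previous-tick interpolation on a
# five-minute grid; realized covariance cumulated as in (32).
rng = np.random.default_rng(2)
n = 86_400; dt = 1.0 / n; rho = 0.35
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
t = np.linspace(0.0, 1.0, n + 1)
p1 = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * z1)])
p2 = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * z2)])

def previous_tick(t_obs, p_obs, grid):
    """Value at each grid point = last observation at or before it."""
    return p_obs[np.searchsorted(t_obs, grid, side="right") - 1]

def realized_cov(pa, pb):
    return np.sum(np.diff(pa) * np.diff(pb))

def corr(pa, pb):
    return realized_cov(pa, pb) / np.sqrt(realized_cov(pa, pa) * realized_cov(pb, pb))

grid = np.linspace(0.0, 1.0, 289)   # m = 288, five-minute returns

# synchronous benchmark: both assets observed at every grid time
rho_sync = corr(previous_tick(t, p1, grid), previous_tick(t, p2, grid))

# asynchronous observation times (exponential durations, means 30 s and 60 s)
t1 = np.concatenate([[0.0], np.cumsum(rng.exponential(30 / 86_400, 6000))])
t2 = np.concatenate([[0.0], np.cumsum(rng.exponential(60 / 86_400, 3000))])
t1, t2 = t1[t1 <= 1.0], t2[t2 <= 1.0]
pa = previous_tick(t1, previous_tick(t, p1, t1), grid)
pb = previous_tick(t2, previous_tick(t, p2, t2), grid)
rho_async = corr(pa, pb)            # attenuated toward zero (Epps effect)
```

With synchronous sampling the five-minute realized correlation is centered at ρ; with asynchronous quotes the same estimator is pulled toward zero, which is the bias documented in Table 3.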
4. Applications

The potential of the method lies in the fact that it reconstructs instantaneous volatility and cross-volatility as functions of time. This feature of the Fourier method is essential whenever a stochastic derivation of volatility along the time evolution is performed, as in contingent claim pricing and hedging. In this section we present some developments of the multivariate Fourier estimation procedure which illustrate this point.

4.1 Leverage effect

There is a vast empirical literature showing that the asset price and its volatility are related. Among other phenomena, the so-called leverage effect has been observed: a negative return sequence is associated with a volatility increase. In some recent papers, see [12, 10], the integrated volatility estimation methodology based on the so-called realized volatility has been extended to allow for a leverage effect. From the mathematical point of view, the no-leverage hypothesis means that σ(t) is independent of the Brownian motion W. The no-leverage hypothesis simplifies the study of the properties of the volatility estimator, but it is not realistic for the analysis of equity returns. The asset price model (H) is very general: it includes stochastic volatility as well as level-dependent volatility models (i.e., volatility a time-independent function of the asset price), and it allows for feedback effects of the asset price on the volatility, in particular the leverage effect. In this section it is shown that these effects can be non-parametrically estimated using the Fourier methodology, without knowledge of the exact form of the evolution equation for the volatility process. A first order indicator of the stochastic dependence between the asset price process and the volatility process is obtained, which we call the leverage effect; we also consider a second order indicator, called the feedback effect rate. Consider the asset price model satisfying hypothesis (H), and assume moreover that the process σ(t) satisfies

(H′)  dσ(t) = α(t) dW(t) + β(t) dW₁(t) + γ(t) dt,

where α, β and γ are predictable functions (for simplicity, suppose they are almost surely bounded) and W₁ is another Brownian motion independent of W. The leverage effect is then defined as B(t) := dp(t) ∗ dσ²(t), where ∗ denotes the Itô stochastic contraction divided by dt; therefore B(t) = 2α(t)σ²(t). Nevertheless, we have no knowledge of the random function α(t), so it is interesting to find a formula for estimating B(t) starting from the price observations. By Itô calculus the following result holds:

Theorem 4.1. Let p and σ satisfy (H) and (H′). Then

B(t) = ¼ ( Vol(p + σ²)(t) − Vol(p − σ²)(t) ),

where Vol(p ± σ²) denotes the volatility function of the stochastic process p ± σ².

In order to estimate B(t) from the asset prices, note that the Fourier transform of p can be computed from price observations, and the Fourier transform of σ² is computed by Theorem 2.1; therefore the Fourier transforms of p + σ² and p − σ² are known. A second application of Theorem 2.1 gives the Fourier transform of the volatility Vol(p ± σ²), and finally the usual inversion formula produces the computation of B(t).
4.2 Delta hedging (estimation of the gradient)

Suppose that the price process p(t), in logarithmic scale, satisfies

(33)  dp(t) = σ(p(t)) dW(t) − ½ σ²(p(t)) dt,

where W is a Brownian motion and σ is an unknown C¹ function; no hypothesis is made on the shape of this function. In the following it is shown that the pathwise gradient can be estimated using the Fourier transform methodology, and therefore the Greek Delta can also be estimated non-parametrically. In financial applications the Greek Delta is the sensitivity which allows one to perform so-called Delta hedging, that is, to make a portfolio neutral with respect to small modifications of the initial value of the price. In model (33), the Greek Delta is defined as

Δ(t) = (∂/∂ε) p_ε(t) |_{ε=0},

where p_ε(t) is the solution of (33) with p_ε(t₀) = p(t₀) + ε. Then

(34)  dΔ(t) = Δ(t) [ σ′(p(t)) dW(t) − σ(p(t)) σ′(p(t)) dt ].

It follows that the computation of Δ(t) depends on the estimation of the volatility σ(p(t)) and of the derivative of the volatility σ′(p(t)). Denote by B(t) the function obtained in Theorem 4.1; then

(35)  σ′(p(t)) = B(t) / ( 2 σ³(p(t)) ).

This first result expresses the derivative of the volatility function as a ratio of terms which can be estimated from price data. Substituting (35) into (34) and using (33) yields an evolution equation for Δ(t) in which all terms are given or estimated by the Fourier method from market asset prices. A more general Delta hedging result has been obtained in [14] by using the Fourier method.

4.3 Feedback volatility rate

In [14] we constructed a second order indicator of the feedback effects of the asset price on the volatility, and we used the Fourier estimation method to compute this effect from asset price data. Working under this general assumption, we produced a time-dependent price-volatility feedback effect rate function and, in the multivariate case, a time-dependent matrix, called the elasticity matrix, implementable in real time from asset price observations.
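The contraction identity behind (35), namely B(t) = dp ∗ dσ² = 2σ³(p(t))σ′(p(t)) for a level-dependent volatility, can be checked numerically; the following Monte Carlo sketch uses our own illustrative choice of σ(p), not a specification from the paper:

```python
import numpy as np

# Simulate (36) for a known level-dependent volatility and verify that the
# pathwise covariation of p against sigma^2(p) matches the time integral of
# 2 sigma^3(p) sigma'(p), i.e. the integral of B(t).
rng = np.random.default_rng(3)
sigma = lambda x: 0.2 + 0.1 * np.sin(x)    # illustrative C^2 volatility function
dsigma = lambda x: 0.1 * np.cos(x)         # its exact derivative
n = 100_000
dt = 1.0 / n
p = np.empty(n + 1); p[0] = 0.0
z = rng.standard_normal(n)
for i in range(n):
    s = sigma(p[i])
    p[i + 1] = p[i] + s * np.sqrt(dt) * z[i] - 0.5 * s ** 2 * dt
cov = np.sum(np.diff(p) * np.diff(sigma(p) ** 2))            # <p, sigma^2> on [0, 1]
target = np.sum(2 * sigma(p[:-1]) ** 3 * dsigma(p[:-1])) * dt
```

The two quantities agree up to discretization and Monte Carlo error, which is what makes σ′ recoverable from B and σ alone, without knowing the functional form of σ.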
The mathematical theory suggests that eigenfunctions associated with positive eigenvalues of the elasticity matrix correspond to instability directions of the market, while eigenfunctions associated with negative eigenvalues correspond to stability directions of the market. The key mathematical tool in constructing the elasticity matrix is a new methodology of transferring price perturbations through time, using the inertial frame transport; this transport is designed so that through time the variation is governed by a first order linear ordinary differential equation. The rate of variation through time of the initial perturbation is given by the elasticity matrix. The computation of the elasticity matrix is a three-step procedure; at each step it is necessary to compute volatilities of quantities that are either observed or computed in the previous step. Therefore the Fourier methodology applies. We can write explicit expressions in terms of Fourier coefficients of asset prices, leading to a real-time determination of feedback rates. For simplicity we present the univariate case, but the method can be developed for any finite number of assets (see [14]). Let p(t) be the risky asset price at time t. Suppose it satisfies the stochastic differential equation in logarithmic scale:
(36) dp(t) = σ(p(t)) dW(t) − ½σ²(p(t)) dt,
where W is a Brownian motion and σ is a fixed but unknown C²(R) function. Let ζ(t) be the variation process, which is the solution of the linearized stochastic differential equation
dζ(t) = ζ(t)[ σ′(p(t)) dW(t) − σ′(p(t))σ(p(t)) dt ].
We associate to ζ(t) the rescaled variation defined as
z(t) = ζ(t) / σ(p(t)).
In Section 4.2 we obtained the SDE driving the Delta propagation. It is a remarkable fact that this SDE can be reduced to an ODE at the price of a renormalization of the Greek Delta, as is shown in the following result.
Theorem 4.2. The rescaled variation is a differentiable function with respect to t; denote by λ(t) its logarithmic derivative; then
where the A_k are "driving vector fields" defined on C. An appropriate notion of "smoothness" of the vector fields is a necessary hypothesis in order to prove existence and uniqueness of solutions for (42); see [24]. In [40] a non-parametric estimation of the HJM generator is obtained, and a method similar to the methodology of the price-volatility feedback rate is developed for the interest rate curve. The question of the efficiency of the mathematical theory proposed in this section to decipher the state of the market has not yet been confirmed by numerical computations. The first point is the possibility of measuring, in real time from high frequency market data, the full historical volatility matrix. We remark that, while correlations between stock prices at high frequency have no clear economic meaning, the high frequency cross-correlations are clearly significant in the HJM model. Assume that the HJM hypothesis for the infinitesimal generator is satisfied; no assumption on the explicit expression of the vector fields A_k in (42) is made. It is proved in [40] that an estimation of the maps t ↦ A_k(r_t) is achieved by the observation of a single evolution of the market.
Fix N different maturities (N can be extremely large; in particular one can have n ≪ N); then define the time-dependent N × N covariance matrix
(43) [σ²(t)]_{ξ,η} := dr_t(ξ) ∗ dr_t(η),
where ξ, η vary over all maturities and ∗ denotes the Itô contraction divided by dt. For simplicity of notation consider two maturities ξ, η and the bidimensional process t ↦ (r_t(ξ), r_t(η)). Then it holds:
Theorem 6.1. Given two different maturities ξ, η, consider the 2-dimensional process t ↦ (r_t(ξ), r_t(η)); then
(44) [σ²(t)]_{ξ,η} = ¼[ Vol(r_t(ξ) + r_t(η)) − Vol(r_t(ξ) − r_t(η)) ].
The vector A_j(r_t) is defined as the j-th column of the matrix √(σ²(t)). Note that the r.h.s. of (44) can be computed using the Fourier methodology for the estimation of volatility.
6.1 Non-parametric estimation of the Hörmander bracket
Definition 6.2. Given two vector fields A₁, A₂, denote by A_k^ξ the ξ-th component of A_k. The Lie bracket of A₁ and A₂ is the vector field having the components
(45) [A₁, A₂]^ξ := Σ_η ( A₁^η ∂A₂^ξ/∂r(η) − A₂^η ∂A₁^ξ/∂r(η) ).
The Lie algebra 𝒜 generated by n vector fields A₁, ..., A_n is defined as the vector space of all fields obtained as linear combinations with constant coefficients of
(46) A_k, [A_k, A_l], [[A_k, A_l], A_s], [[[A_k, A_l], A_s], A_u], ...
Given r ∈ R^N, let 𝒜(r) := { ζ ∈ R^N | ζ = Z(r) for some Z ∈ 𝒜 }.
Definition 6.3. The vector fields A₁, ..., A_n satisfy the Hörmander criterion for hypoellipticity if A₁, ..., A_n are infinitely often differentiable and if 𝒜(r) = R^N for every r ∈ R^N.
Due to the non-ellipticity of the market, the multivariate price-volatility feedback rate constructed in [14] cannot be applied, so [40] constructs a pathwise econometric computation of the bracket of the driving vector fields of the diffusion.
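Definition 6.2 and the Hörmander criterion can be checked numerically for explicitly given fields. The sketch below is ours (the vector fields are illustrative, not estimated from market data): it computes the bracket (45) through finite-difference Jacobians and tests whether A₁, A₂ and [A₁, A₂] span R³ at a point.

```python
import numpy as np

def jacobian(A, r, h=1e-6):
    """Central finite-difference Jacobian J[i, j] = dA^i/dr(j) at r."""
    n = r.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (A(r + e) - A(r - e)) / (2.0 * h)
    return J

def bracket(A1, A2, r, h=1e-6):
    """Lie bracket (45): [A1, A2]^xi = sum_eta (A1^eta dA2^xi/dr(eta)
    - A2^eta dA1^xi/dr(eta)), i.e. J_{A2} A1 - J_{A1} A2."""
    return jacobian(A2, r, h) @ A1(r) - jacobian(A1, r, h) @ A2(r)

# Illustrative fields on R^3 (not from the text): a Heisenberg-type pair
# whose bracket restores the third, missing direction.
A1 = lambda r: np.array([1.0, 0.0, -r[1]])
A2 = lambda r: np.array([0.0, 1.0,  r[0]])
```

At any point r the fields A₁(r) and A₂(r) span only a 2-dimensional subspace, while [A₁, A₂](r) = (0, 0, 2); together the three vectors span R³, so the Hörmander criterion of Definition 6.3 holds for this hypothetical pair.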
Without any hypothesis on the explicit form of the vector fields A_k(r_t), we estimate the bracket [A₁, A₂](r_t) from a single trajectory. For simplicity we omit writing explicitly the dependence of A_k on r_t. In order to get this result, by the definition (45), the main point is the estimation of
∂A_k^ξ / ∂r(η),   k = 1, 2.
This estimation is obtained from the relation
(47) dA_k^ξ(r_t) ∗ dr_t(η) = Σ_λ ( ∂A_k^ξ/∂r(λ) ) [σ²(t)]_{λ,η},
together with the fact that the l.h.s. of (47) can be computed by the Fourier methodology. Therefore we can state the result:
Theorem 6.4. The brackets of the driving vector fields of the HJM diffusion can be numerically computed from a single time series of market data, without any assumption on the model.
6.2 Global and pathwise compartmentage by maturities
Let C be the space of continuous functions on [0, ∞[. Define the logarithmic derivative of a measure μ_λ depending on a parameter λ as the function satisfying
∂_λ ∫ f(r) μ_λ(dr) = ∫ f(r) (∂_λ log μ_λ)(r) μ_λ(dr)
for every test function f. Denote by π_t(r₀, dr) the fundamental solution of the heat equation associated to the HJM model; define a Hilbert norm on tangent vectors at r₀ by
‖z‖_t² = ∫ [ ∂/∂ε log π_t(r₀ + εz, ·)(r) |_{ε=0} ]² π_t(r₀, dr).
Definition 6.5. We say that the global compartmentage holds if the following condition holds: the norms ‖z‖_s and ‖z‖_{s,η} ...
(note that H is a bounded G-submartingale). We say that τ admits a G-intensity if there exists a G-adapted, nonnegative process λ such that the process
(1) M_t = H_t − ∫₀^{t∧τ} λ_u du = H_t − ∫₀^t λ_u du
is a G-martingale (the second equality in (1) follows from the fact that the process H is stopped at τ). Then M is called the compensated G-martingale of the default process H. In order for a G-stopping time τ to admit a G-intensity λ, it has to be totally inaccessible with respect to G, so that P(τ = θ) = 0 for any G-predictable stopping time θ. The simplest example is the moment of the first jump of a Poisson process. Note that the intensity λ necessarily vanishes after default.
Remark 1.1. Some authors define the intensity as the process λ such that H_t − ∫₀^{t∧τ} λ_u du is a G-martingale. In that case, the process λ is not uniquely defined after time τ.
1.1.2 F-intensity of a random time
We change the perspective, and we no longer assume that the filtration G is given a priori. We assume instead that τ is a positive random variable on some probability space (Ω, 𝒢, P). Let H = (ℋ_t, t ≥ 0) be the natural filtration generated by the default process (H_t, t ≥ 0), and let F = (ℱ_t, t ≥ 0) be some reference filtration in (Ω, 𝒢, P). We assume throughout that the information available to an investor is modeled by the filtration G = F ∨ H. Consequently, we can reduce our
study to the case where the default intensity (if it exists) is G-adapted, meaning that the process M given by (1) is a G-martingale for some G-adapted process λ. In this setting, there exists a process λ̃ = (λ̃_t, t ≥ 0), called the F-intensity of τ, which is F-adapted and equal to λ before default, so that λ_t 1_{t≤τ} = λ̃_t 1_{t≤τ}. Let T > 0 and take 𝒢_t = ℱ_T for every t ∈ [0, T]. Then any F-martingale L satisfies E_P(L_t | 𝒢_s) = L_t for s < t, and thus L is not a G-martingale, in general. It is even possible, but more difficult, to produce an example of an F-martingale which is not a semi-martingale with respect to G. For other counter-examples, in particular those involving progressive enlargement of filtrations, we refer the interested reader to Protter [35], or Mansuy and Yor [32]. The original formulations of the hypothesis (H) refer to martingales (or even square-integrable martingales), rather than local martingales. We
shall show that in our set-up the definition given above is equivalent to the original definition. In fact, the hypothesis (H) postulates a certain form of conditional independence of the σ-fields associated with F and G, rather than a specific property of F-(local) martingales. In particular, the following well known result is valid.
Lemma 1.2. Assume that G = F ∨ H, where F is an arbitrary filtration and H is generated by the process H_t = 1_{t≥τ}. By virtue of part (i) in Lemma 1.5, the hypothesis (H) is valid under Q as well. The last claim in the statement of the lemma can be deduced from
the fact that the hypothesis (H) holds under Q and, by Girsanov's theorem, the process
M̂_t = M_t − ∫₀^{t∧τ} ζ_u γ_u du
is a G-martingale under Q. Indeed,
∫₀^∞ (1 + ζ_u) γ_u exp(−∫₀^u ζ_v γ_v dv) exp(−∫₀^u γ_v dv) du + exp(−∫₀^∞ ζ_v γ_v dv) exp(−∫₀^∞ γ_v dv)
= 1 − exp(−∫₀^∞ (1 + ζ_v) γ_v dv) + exp(−∫₀^∞ ζ_v γ_v dv) exp(−∫₀^∞ γ_v dv) = 1,
where the second last equality follows by an application of the chain rule.
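The chain-rule step asserts that (1 + ζ_u)γ_u e^{−∫₀^u (1+ζ_v)γ_v dv} integrates over [0, ∞) to 1 − e^{−∫₀^∞ (1+ζ_v)γ_v dv}, so the two exponential terms recombine to 1. For deterministic sample functions γ ≥ 0 and ζ > −1 (our own arbitrary choices below; any such pair works) this can be verified by simple quadrature.

```python
import numpy as np

# Hypothetical deterministic intensity gamma >= 0 and kernel zeta > -1,
# chosen only to illustrate the identity above.
gamma = lambda u: 0.3 + 0.1 * np.sin(u)
zeta  = lambda u: 0.5 * np.cos(u)

u = np.linspace(0.0, 40.0, 40001)
rate = (1.0 + zeta(u)) * gamma(u)                  # (1 + zeta_u) * gamma_u
# cumulative integral of the rate (trapezoidal rule)
cum = np.concatenate([[0.0],
                      np.cumsum(0.5 * (rate[1:] + rate[:-1]) * np.diff(u))])
# int_0^U (1+zeta) gamma e^{-int_0^u (1+zeta) gamma dv} du + e^{-int_0^U ...}
integrand = rate * np.exp(-cum)
total = (np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))
         + np.exp(-cum[-1]))
```

The truncation at a finite horizon U is harmless because the exact identity already holds on [0, U]: the survival weight e^{−∫₀^U (1+ζ)γ} accounts for the remaining mass.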
1.3.3 Extension to orthogonal martingales
Equality (17) suggests that Proposition 1.1 can be extended to the case of arbitrary orthogonal local martingales. Such a generalization is convenient if we wish to cover the situation considered in Kusuoka's counter-example. Let N be a local martingale under P with respect to the filtration F. It is also a G-local martingale, since we maintain the assumption that the hypothesis (H) holds under P. Let Q be an arbitrary probability measure locally equivalent to P on (Ω, 𝒢). We assume that the Radon-Nikodym density process η of Q with respect to P equals
(18) dη_t = η_{t−}( θ_t dN_t + ζ_t dM_t )
for some G-predictable processes θ and ζ > −1 (the properties of the process θ depend, of course, on the choice of the local martingale N). The next result covers the case where N and M are orthogonal G-local martingales under P, so that the product MN follows a G-local martingale.
Proposition 1.2. Assume that the following conditions hold:
(a) N and M are orthogonal G-local martingales under P,
(b) N has the predictable representation property under P with respect to F, in the sense that any F-local martingale L under P can be written as
L_t = L₀ + ∫₀^t ξ_u dN_u,   ∀ t ∈ R₊,
for some F-predictable process ξ,
(c) P̃ is a probability measure on (Ω, 𝒢) such that (16) holds.
Then we have:
(i) the hypothesis (H) is valid under P̃,
(ii) if the process θ is F-adapted then the hypothesis (H) is valid under Q.
The proof of the proposition hinges on the following simple lemma.
Lemma 1.6. Under the assumptions of Proposition 1.2, we have:
(i) N is a G-local martingale under P̃,
(ii) N has the predictable representation property for F-local martingales under P̃.
Proof. In view of (c), we have dP̃|_{𝒢_t} = η_t^{(2)} dP|_{𝒢_t}, where the density process η^{(2)} is given by (14), so that dη_t^{(2)} = η_{t−}^{(2)} ζ_t dM_t. From the assumed orthogonality of N and M, it follows that N and η^{(2)} are orthogonal G-local martingales under P, and thus Nη^{(2)} is a G-local martingale under P as well. This means that N is a G-local martingale under P̃, so that (i) holds.
To establish part (ii) in the lemma, we first define the auxiliary process η̂ by setting η̂_t = E_P(η_t^{(2)} | ℱ_t). Then manifestly dP̃|_{ℱ_t} = η̂_t dP|_{ℱ_t}, and thus
in order to show that any F-local martingale under P̃ is an F-local martingale under P, it suffices to check that η̂_t = 1 for every t ∈ R₊, so that P̃ = P on F. To this end, we note that
η̂_t = E_P( η_t^{(2)} | ℱ_t ) = E_P( ℰ_t( ∫₀^· ζ_u dM_u ) | ℱ_∞ ) = 1,   ∀ t ∈ R₊,
where the first equality follows from part (v) in Lemma 1.2, and the second one can be established similarly to the second equality in (17).
We are in a position to prove (ii). Let L be an F-local martingale under P̃. Then it follows also an F-local martingale under P and thus, by virtue of (b), it admits an integral representation with respect to N under P and P̃. This shows that N has the predictable representation property with respect to F under P̃. □
We now proceed to the proof of Proposition 1.2.
Proof of Proposition 1.2. We shall argue along similar lines as in the proof of Proposition 1.1. To prove (i), note that by part (ii) in Lemma 1.6 we know that any F-local martingale under P̃ admits an integral representation with respect to N. But, by part (i) in Lemma 1.6, N is a G-local martingale under P̃. We conclude that L is a G-local martingale under P̃, and thus the hypothesis (H) is valid under P̃. Assertion (ii) now follows from part (i) in Lemma 1.5. □
Remark 1.3. It should be stressed that Proposition 1.2 is not directly employed in what follows. We decided to present it here, since it sheds some light on specific technical problems arising in the context of modeling dependent default times through an equivalent change of a probability measure (see Kusuoka [29]).
Example 1.1. Kusuoka [29] presents a counter-example based on two independent random times τ₁ and τ₂ given on some probability space (Ω, 𝒢, P). We write M_t^i = H_t^i − ∫₀^{t∧τ_i} γ_i(u) du, where H_t^i = 1_{t≥τ_i} and γ_i is the deterministic intensity function of τ_i under P. Let us set dQ|_{𝒢_t} = η_t dP|_{𝒢_t}, where η_t = η_t^{(1)} η_t^{(2)} and, for i = 1, 2 and every t ∈ R₊,
η_t^{(i)} = 1 + ∫₀^t η_{u−}^{(i)} ζ_u^{(i)} dM_u^i = ℰ_t( ∫₀^· ζ_u^{(i)} dM_u^i ),   ∀ t ∈ R₊.
However, the hypothesis (H) is not necessarily valid under Q if the process ζ^{(1)} fails to be F-adapted. In Kusuoka's counter-example, the process ζ^{(1)} was chosen to be explicitly dependent on both random times, and it was shown that the hypothesis (H) does not hold under Q. For an alternative approach to Kusuoka's example, through an absolutely continuous change of a probability measure, the interested reader may consult Collin-Dufresne et al. [13].
2.
Semimartingale Model with a Common Default
In what follows, we fix a finite horizon date T > 0. For the purpose of this work, it is enough to formally define a generic defaultable claim through the following definition.
Definition 2.1. A defaultable claim with maturity date T is represented by a triplet (X, Z, τ), where:
(i) the default time τ specifies the random time of default, and thus also the default events {τ ≤ t} for every t ∈ [0, T],
(ii) the promised payoff X ∈ ℱ_T represents the random payoff received by the owner of the claim at time T, provided that there was no default prior to or at time T; the actual payoff at time T associated with X thus equals X 1_{τ>T}.
When dealing with replicating strategies, in the sense of Definition 2.2, we will always assume, without loss of generality, that the components of the trading strategy are F-predictable processes.
2.1 Dynamics of asset prices
We assume that we are given a probability space (Ω, 𝒢, P) endowed with a (possibly multi-dimensional) standard Brownian motion W and a random time τ admitting an F-intensity γ under P, where F is the filtration generated by W. In addition, we assume that τ satisfies (4), so that the hypothesis (H) is valid under P for the filtrations F and G = F ∨ H. Since the default time admits an F-intensity, it is not an F-stopping time. Indeed, any stopping time with respect to a Brownian filtration is known to be predictable. We interpret τ as the common default time for all defaultable assets in our model. For simplicity, we assume that only three primary assets are traded in the market, and the dynamics under the historical probability P of their prices are, for i = 1, 2, 3 and t ∈ [0, T],
(19) dY_t^i = Y_{t−}^i ( μ_{i,t} dt + σ_{i,t} dW_t + κ_{i,t} dM_t ),
or equivalently,
(20) dY_t^i = Y_{t−}^i ( (μ_{i,t} − κ_{i,t} γ_t 1_{t≤τ}) dt + σ_{i,t} dW_t + κ_{i,t} dH_t ).
The processes (μ_{i,t}, σ_{i,t}, κ_{i,t}; t ≥ 0), i = 1, 2, 3, are assumed to be G-adapted, where G = F ∨ H. In addition, we assume that κ_i ≥ −1 for any i = 1, 2, 3, so that the Y^i are nonnegative processes, and they are strictly positive prior to τ. Note that, according to Definition 2.2, replication refers to the behavior of the wealth process V(φ) on the random interval [0, T ∧ τ] only. Hence, for the purpose of replication of defaultable claims of the form (X, Z, τ), it is sufficient to consider prices of primary assets stopped at T ∧ τ. This implies that instead of dealing with G-adapted coefficients in (19), it suffices to focus on F-adapted coefficients of the stopped price processes. However, for the sake of completeness, we shall also deal with T-maturity claims of the form Y = G(Y_T^1, Y_T^2, Y_T^3, H_T) (see Section 5 below).
2.1.1 Pre-default values
As will become clear in what follows, when dealing with defaultable claims of the form (X, Z, τ), we will be mainly concerned with the so-called pre-default prices. The pre-default price Ỹ^i of the i-th asset is an F-adapted, continuous process, given by the equation, for i = 1, 2, 3 and t ∈ [0, T],
(21) dỸ_t^i = Ỹ_t^i ( (μ_{i,t} − κ_{i,t} γ_t) dt + σ_{i,t} dW_t )
with Ỹ_0^i = Y_0^i. Put another way, Ỹ^i is the unique F-predictable process such that (see Lemma 1.1) Y_t^i 1_{t<τ} = Ỹ_t^i 1_{t<τ} for every t ∈ [0, T]. In this general set-up, the most natural assumption is that the dimension of the driving Brownian motion W equals
the number of tradable assets. However, for the sake of simplicity of presentation, we shall frequently assume that W is one-dimensional. One of our goals will be to derive closed-form solutions for replicating strategies for derivative securities in terms of market observables only (whenever replication of a given claim is actually feasible). To achieve this goal, we shall combine the general theory of hedging defaultable claims within a continuous semimartingale set-up with a judicious specification of particular models with deterministic volatilities and correlations.
2.1.3 Recovery schemes
It is clear that the sample paths of the price processes Y^i are continuous, except for a possible discontinuity at time τ. Specifically, we have that
ΔY_τ^i := Y_τ^i − Y_{τ−}^i = κ_{i,τ} Y_{τ−}^i,
so that Y_τ^i = Y_{τ−}^i (1 + κ_{i,τ}). A primary asset Y^i is termed a default-free asset (defaultable asset, respectively) if κ_i = 0 (κ_i ≠ 0, respectively). In the special case when κ_i = −1, we say that a defaultable asset Y^i is subject to a total default, since its price drops to zero at time τ and stays there forever. Such an asset ceases to exist after default, in the sense that it is no longer traded after default. This feature makes the case of a total default quite different from other cases, as we shall see in our study below.
In market practice, it is common for a credit derivative to deliver a positive recovery (for instance, a protection payment) in case of default. Formally, the value of this recovery at default is determined as the value of some underlying process, that is, it is equal to the value at time τ of some F-adapted recovery process Z. For example, the process Z can be equal to δ, where δ is a constant, or to g(t, δY_t), where g is a deterministic function and (Y_t, t ≥ 0) is the price process of some default-free asset. Typically, the recovery is paid at default time, but it may also happen that it is postponed to the maturity date.
Let us observe that the case where a defaultable asset Y^i pays a predetermined recovery at default is covered by our set-up defined in (19). For instance, the case of a constant recovery payoff δ_i > 0 at default time τ corresponds to the process κ_{i,t} = δ_i (Y_{t−}^i)^{−1} − 1. Under this convention, the price Y^i is governed under P by the SDE
(23) dY_t^i = Y_{t−}^i ( μ_{i,t} dt + σ_{i,t} dW_t + (δ_i (Y_{t−}^i)^{−1} − 1) dM_t ).
If the recovery is proportional to the pre-default value Y_{τ−}^i, and is paid at default time τ (this scheme is known as the fractional recovery of market value), we have κ_{i,t} = δ_i − 1 and
(24) dY_t^i = Y_{t−}^i ( μ_{i,t} dt + σ_{i,t} dW_t + (δ_i − 1) dM_t ).
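A small simulation sketch of the fractional-recovery dynamics (24) can make the role of the compensated jump explicit. Everything below is illustrative (constant coefficients, constant intensity, and the function name are our own choices, not the authors' construction): before default the drift carries the compensator term −(δ − 1)γ, and at default the price is scaled by δ.

```python
import numpy as np

def simulate_frmv(y0, mu, sig, delta, gam, T, n, seed=0):
    """Euler scheme for (24): dY = Y_-(mu dt + sig dW + (delta - 1) dM),
    with dM = dH - gam 1_{t <= tau} dt and a constant intensity gam.
    All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / n
    tau = rng.exponential(1.0 / gam)      # default time with intensity gam
    y = y0
    for k in range(n):
        t0, t1 = k * dt, (k + 1) * dt
        # compensator is active only while the asset is still alive
        drift = mu - (delta - 1.0) * gam if t1 <= tau else mu
        y *= 1.0 + drift * dt + sig * rng.normal(0.0, np.sqrt(dt))
        if t0 < tau <= t1:                # default in this step: jump by delta
            y *= delta
    return y, tau
```

With mu = 0 (and zero interest rate) the process Y is a martingale, so a Monte Carlo average of Y_T stays near Y_0; with delta = 1 the jump disappears and (24) reduces to a default-free diffusion.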
2.2 Risk-neutral valuation
To provide a partial justification for the postulated dynamics of the price of a defaultable asset delivering a recovery, let us study a toy example with two assets: a savings account with constant interest rate r and a defaultable asset Y represented by a defaultable claim (X, Z, τ). In this toy model, the only source of noise is the default time; hence, the only relevant filtration is H (in other words, the reference filtration F is trivial). We assume that, by choosing today's prices of a large family of liquidly traded defaultable assets, the market implicitly specifies a martingale measure Q such that the random variable h(τ) is
Q-integrable. Then
M_t^h = M_0^h + ∫₀^t (h(u) − g(u)) dM_u = M_0^h + ∫₀^t (h(u) − M_{u−}^h) dM_u,
where we write g(t) = e^{rt} E_Q(1_{...}). The unique martingale measure Q¹ is then given by the formula (45), where η solves (46), so that
η_t = ℰ_t( ∫₀^· θ_u dW_u ) ℰ_t( ∫₀^· ζ_u dM_u ).
We are in a position to formulate the following result.
Proposition 4.1. Assume that the process ζ given by (44) satisfies (43), and
(48) ζ_t = −(1/γ_t) ( μ_{3,t} − μ_{1,t} − (μ_{2,t} − μ_{1,t})/(σ_{2,t} − σ_{1,t}) (σ_{3,t} − σ_{1,t}) ) > −1.
Then the model M = (Y¹, Y², Y³; Φ) is arbitrage-free and complete. The dynamics of the relative prices under the unique martingale measure Q¹ are
dY_t^{2,1} = Y_t^{2,1} ( σ_{2,t} − σ_{1,t} ) dŴ_t,
dY_t^{3,1} = Y_{t−}^{3,1} ( (σ_{3,t} − σ_{1,t}) dŴ_t − dM̂_t ).
Since the coefficients μ_{i,t}, σ_{i,t}, i = 1, 2, are F-adapted, the process Ŵ is an F-martingale (hence, a Brownian motion) under Q¹. Hence, by virtue of Proposition 1.1, the hypothesis (H) holds under Q¹, and the F-intensity of default under Q¹ equals
γ̂_t = γ_t (1 + ζ_t) = γ_t − ( μ_{3,t} − μ_{1,t} − (μ_{2,t} − μ_{1,t})/(σ_{2,t} − σ_{1,t}) (σ_{3,t} − σ_{1,t}) ).
Example 4.1. We present an example where the condition (48) does not hold, and thus arbitrage opportunities arise. Assume that the coefficients are constant and satisfy μ₁ = μ₂ = σ₁ = 0 and μ₃ < −γ for a constant default intensity γ > 0. Hence, the arbitrage strategy would be to sell the asset Y³ and to follow the strategy φ; we shall call this process the pre-default wealth of φ. Consequently, the process Ṽ_t^1(φ) := V_t(φ)(Y_t^1)^{−1} is termed the relative pre-default wealth. Using Proposition 3.1, with suitably modified notation, we find that the F-adapted process Ṽ^1(φ) satisfies, for every t ∈ [0, T],
Γ(t, T) = e^{Γ_t} L_t, where we set L_t = E_{Q_T}( e^{−Γ_T} | ℱ_t ). Hence Γ(t, T) is equal to the product of a strictly positive, increasing, right-continuous, F-adapted process e^{Γ_t} and a strictly positive, continuous F-martingale L. Furthermore, there exists an F-predictable process β̃(t, T) such that L satisfies
dL_t = L_t β̃(t, T) dW_t^T
with the initial condition L₀ = E_{Q_T}( e^{−Γ_T} ). Formula (56) now follows by an application of Itô's formula, by setting β(t, T) = β̃(t, T). To complete the proof, it suffices to recall that a continuous martingale is never of finite variation, unless it is a constant process. □
Remark 4.5. It can be checked that β(t, T) is also the volatility of the process
Γ(t, T) = E_{Q_T}( e^{Γ_t − Γ_T} | ℱ_t ).
Assume that Γ_t = ∫₀^t γ_u du for some F-predictable, nonnegative process γ. Then we have the following auxiliary result, which gives, in particular, the volatility of the defaultable ZC-bond.
Corollary 4.2. The dynamics under Q_T of the pre-default price D̃(t, T) are
dD̃(t, T) = D̃(t, T)( (b²(t, T) + b(t, T)β(t, T) + γ_t) dt + d(t, T) dW_t^T ).
Equivalently, the price D(t, T) of the defaultable ZC-bond satisfies under Q_T
dD(t, T) = D(t, T)( (b²(t, T) + b(t, T)β(t, T)) dt + d(t, T) dW_t^T − dM̂_t ),
where we set d(t, T) = b(t, T) + β(t, T). Note that the process β(t, T) can be expressed in terms of market observables, since it is simply the difference of the volatilities d(t, T) and b(t, T) of pre-default prices of tradeable assets.
4.1.6 Credit-risk-adjusted forward price
Assume that the price Y² satisfies under the statistical probability P
(57)
dY_t² = Y_t²( μ_t dt + σ_t dW_t )
with F-predictable coefficients μ and σ. Let F_{Y²}(t, T) = Y_t² (B(t, T))^{−1} be the forward price of Y². For an appropriate choice (see (50)), we shall have that
dF_{Y²}(t, T) = F_{Y²}(t, T)( σ_t − b(t, T) ) dW_t^T.
Therefore, the dynamics of the pre-default synthetic asset Ỹ_t³ under Q_T are
dỸ_t³ = Ỹ_t³( σ_t − b(t, T) )( dW_t^T − β(t, T) dt ),
and the process Y_t* = Ỹ_t³ e^{Γ_t} satisfies
dY_t* = Y_t*( σ_t − b(t, T) )( dW_t^T − β(t, T) dt ).
Let Q̂ be an equivalent probability measure on (Ω, 𝒢_T) such that Y* (or, equivalently, Ỹ³) is a Q̂-martingale. By virtue of Girsanov's theorem, the process Ŵ given by the formula
Ŵ_t = W_t^T − ∫₀^t β(u, T) du,   ∀ t ∈ [0, T],
is a Brownian motion under Q̂. The price of the claim in the spot market satisfies, for every t ∈ [0, T], on the set {t < τ}, ... there exists a function v: [0, T] × R³ × {0, 1} → R such that π_t(Y) = v(t, Y_t¹, Y_t², Y_t³; H_t). We find it convenient to introduce the pre-default pricing function v(·; 0) = v(t, y₁, y₂, y₃; 0) and the post-default pricing function v(·; 1) = v(t, y₁, y₂, y₃; 1). In fact, since Y_t³ = 0 if H_t = 1, it suffices to study the post-default function v(t, y₁, y₂; 1) = v(t, y₁, y₂, 0; 1). Also, we write
a_i = μ_i − σ_i (μ₁ − μ₂)/(σ₁ − σ₂),   b = (μ₃ − μ₁)(σ₁ − ...),
and let γ > 0 be the constant default intensity under P, and let ζ > −1 be given by formula (48).
Proposition 5.1. Assume that the functions v(·; 0) and v(·; 1) belong to the class C^{1,2}([0, T] × R³, R). Then the pre-default pricing function v(t, y₁, y₂, y₃; 0) satisfies the PDE
∂_t v(·; 0) + Σ_{i=1}^{2} a_i y_i ∂_i v(·; 0) + (a₃ + ζγ) y₃ ∂₃ v(·; 0) + ½ Σ_{i,j=1}^{3} σ_i σ_j y_i y_j ∂_i ∂_j v(·; 0) ...