Communicated by Christopher Bishop
Flat Minima

Sepp Hochreiter
Fakultät für Informatik, Technische Universität München, 80290 München, Germany

Jürgen Schmidhuber
IDSIA, Corso Elvezia 36, 6900 Lugano, Switzerland
We present a new algorithm for finding low-complexity neural networks with high generalization capability. The algorithm searches for a "flat" minimum of the error function. A flat minimum is a large connected region in weight space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to "simple" networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require gaussian assumptions and does not depend on a "good" weight prior. Instead we have a prior over input-output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second-order derivatives, it has backpropagation's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms conventional backprop, weight decay, and "optimal brain surgeon/optimal brain damage."

1 Basic Ideas and Outline

Our algorithm tries to find a large region in weight space with the property that each weight vector from that region leads to similar small error. Such a region is called a flat minimum (Hochreiter and Schmidhuber 1995). To get an intuitive feeling for why a flat minimum is interesting, consider this: a sharp minimum (see Fig. 2) corresponds to weights that have to be specified with high precision. A flat minimum (see Fig. 1) corresponds to weights, many of which can be given with low precision. In the terminology of the theory of minimum description (message) length (MML, Wallace and Boulton 1968; MDL, Rissanen 1978), fewer bits of information are required to describe a flat minimum (corresponding to a "simple" or low-complexity network). The MDL principle suggests that low network complexity corresponds to high generalization performance. Similarly, the standard Bayesian view favors "fat" maxima of the posterior weight distribution (maxima with a lot of probability mass; see, e.g., Buntine and Weigend 1991). We will see that flat minima are fat maxima.

Figure 1: Example of a flat minimum.

Figure 2: Example of a sharp minimum.
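To make the precision argument concrete, consider a one-dimensional illustration (ours; the formal multidimensional treatment follows in Sections 2 through 4):

```latex
% Near a minimum w*, a second-order expansion bounds the tolerable perturbation:
\[
E(w^{\ast}+\delta w)-E(w^{\ast}) \;\approx\; \tfrac{1}{2}\,E''(w^{\ast})\,(\delta w)^{2} \;\le\; \epsilon
\quad\Longrightarrow\quad
|\delta w| \;\le\; \sqrt{2\epsilon / E''(w^{\ast})} .
\]
% Describing the weight to precision |delta w| costs about
\[
-\log_{2}|\delta w| \;=\; \tfrac{1}{2}\log_{2} E''(w^{\ast}) + \mathrm{const}
\]
% bits.
```

Thus the flatter the minimum (the smaller E''(w*)), the larger the tolerable perturbation and the fewer bits are needed to specify the weight.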
Unlike, e.g., Hinton and van Camp's method (1993), our algorithm does not depend on the choice of a "good" weight prior. It finds a flat minimum by searching for weights that minimize both training error and weight precision. This requires the computation of the Hessian. However, by using an efficient second-order method (Pearlmutter 1994; Møller 1993), we obtain conventional backpropagation's order of computational complexity. Automatically, the method effectively reduces numbers of units, weights, and input lines, as well as output sensitivity with respect to remaining weights and units. Unlike simple weight decay, our method automatically treats and prunes units and weights in different layers in different reasonable ways.

The article is organized as follows:

• Section 2 formally introduces basic concepts, such as error measures and flat minima.
• Section 3 describes the novel algorithm, flat minimum search (FMS).
• Section 4 formally derives the algorithm.
• Section 5 reports experimental generalization results with feedforward and recurrent networks. For instance, in an application to stock market prediction, flat minimum search outperforms the following widely used competitors: conventional backpropagation, weight decay, and "optimal brain surgeon/optimal brain damage."
• Section 6 mentions relations to previous work.
• Section 7 mentions limitations of the algorithm and outlines future work.
• The Appendix presents a detailed theoretical justification of our approach. Using a variant of the Gibbs algorithm, Section A.1 defines generalization, underfitting, and overfitting error in a novel way. By defining an appropriate prior over input-output functions, we postulate that the most probable network is a flat one. Section A.2 formally justifies the error function minimized by our algorithm. Section A.3 shows how to compute derivatives required by the algorithm.

2 Task, Architecture, and Boxes

2.1 Generalization Task. The task is to approximate an unknown function f ⊂ X × Y mapping a finite set of possible inputs X ⊂ R^N to a finite set of possible outputs Y ⊂ R^K. A data set D is obtained from f (see Section A.1). All training information is given by a finite set D0 ⊂ D. D0 is called the training set. The pth element of D0 is denoted by an input-target pair (xp, yp).

2.2 Architecture and Net Functions. For simplicity, we will focus on a standard feedforward net (but in the experiments, we will use recurrent nets as well). The net has N input units, K output units, L weights, and differentiable activation functions. It maps input vectors x ∈ R^N to output vectors o(w, x) ∈ R^K, where w is the L-dimensional weight vector, and the weight on the connection from unit j to unit i is denoted wij.
The net function induced by w is denoted net(w): for x ∈ R^N, net(w)(x) = o(w, x) = (o^1(w, x), o^2(w, x), . . . , o^K(w, x)), where o^i(w, x) denotes the ith component of o(w, x), corresponding to output unit i.

2.3 Training Error. We use squared error E(net(w), D0) := Σ_{(xp,yp)∈D0} ‖yp − o(w, xp)‖², where ‖·‖ denotes the Euclidean norm.

2.4 Tolerable Error. To define a region in weight space with the property that each weight vector from that region leads to small error and similar output, we introduce the tolerable error Etol, a positive constant (see Section A.1 for a formal definition of Etol). "Small" error is defined as being smaller than Etol. E(net(w), D0) > Etol implies underfitting.

2.5 Boxes. Each weight w satisfying E(net(w), D0) ≤ Etol defines an acceptable minimum (compare M(D0) in Section A.1). We are interested in a large region of connected acceptable minima, where each weight w within this region leads to almost identical net functions net(w). Such a region is called a flat minimum. We will see that flat minima correspond to low expected generalization error. To simplify the algorithm for finding a large connected region (see below), we do not consider maximal connected regions but focus on so-called boxes within regions: for each acceptable minimum w, its box Mw in weight space is an L-dimensional hypercuboid with center w. For simplicity, each edge of the box is taken to be parallel to one weight axis. Half the length of the box edge in the direction of the axis corresponding to weight wij is denoted by Δwij(X). The Δwij(X) are the maximal (positive) values such that for all L-dimensional vectors κ whose components κij are restricted by |κij| ≤ Δwij(X), we have E(net(w), net(w + κ), X) ≤ ε, where E(net(w), net(w + κ), X) = Σ_{x∈X} ‖o(w, x) − o(w + κ, x)‖², and ε is a small positive constant defining tolerable output changes (see also equation 3.1). Note that Δwij(X) depends on ε. Since our algorithm does not use ε, however, it is notationally suppressed. Δwij(X) gives the precision of wij. Mw's box volume is defined by V(Δw(X)) := 2^L Π_{i,j} Δwij(X), where Δw(X) denotes the vector with components Δwij(X). Our goal is to find large boxes within flat minima.

3 The Algorithm

Let X0 = {xp | (xp, yp) ∈ D0} denote the inputs of the training set. We approximate Δw(X) by Δw(X0), where Δw(X0) is defined like Δw(X) in Section 2.5 (replacing X by X0). For simplicity, we will abbreviate Δw(X0) as Δw. Starting with a random initial weight vector, flat minimum search (FMS) tries to find a w that not only has low E(net(w), D0) but also defines a box Mw with maximal box volume V(Δw) and, consequently, minimal B̃(w, X0) := −log((1/2^L) V(Δw)) = Σ_{i,j} −log Δwij.
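The box property of Section 2.5 can be checked numerically. The following sketch (an illustration, not part of FMS itself; all names are ours) tests by Monte Carlo sampling whether a candidate box with half-edge lengths Δwij keeps the output change below ε:

```python
import numpy as np

def output_change(net, w, kappa, X):
    # E(net(w), net(w + kappa), X) = sum_x ||o(w, x) - o(w + kappa, x)||^2
    return sum(np.sum((net(w, x) - net(w + kappa, x)) ** 2) for x in X)

def in_box(net, w, delta, X, eps, trials=1000, seed=0):
    """Samples perturbations kappa with |kappa_ij| <= delta_ij and checks
    that the output change never exceeds eps (a necessary condition for
    the box property of Section 2.5)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        kappa = rng.uniform(-delta, delta)
        if output_change(net, w, kappa, X) > eps:
            return False
    return True

# Toy net: a single linear output unit with L = 2 weights.
net = lambda w, x: np.array([w @ x])
w = np.array([1.0, -2.0])
X = [np.array([0.5, 1.0]), np.array([-1.0, 0.3])]
print(in_box(net, w, delta=np.array([0.01, 0.01]), X=X, eps=1e-3))  # True
```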
Note the relationship to MDL: B̃ is the number of bits required to describe the weights, whereas the number of bits needed to describe the yp, given w (with (xp, yp) ∈ D0), can be bounded by fixing Etol (see Section A.1). In the next section we derive the following algorithm. We use gradient descent to minimize E(w, D0) = E(net(w), D0) + λB(w, X0), where B(w, X0) = Σ_{xp∈X0} B(w, xp), and
à !2 X X ∂ok (w, xp ) 1 B(w, xp ) = −L log ² + log 2 ∂wij i,j k
(3.1)
2 ¯ k ¯ ¯ ∂o (w,xp ) ¯ ¯ ∂wij ¯ X X . + L log r ³ ´ 2 k P i,j
k
k
∂o (w,xp ) ∂wij
Here ok (w, xp ) is the activation of the kth output unit (given weight vector w and input xp ), ² is a constant, and λ is the regularization constant (or hyperparameter) that controls the trade-off between regularization and training error (see Section A.1). To minimize B(w, X0 ), for each xp ∈ X0 we have to compute ∂B(w, xp ) X ∂B(w, xp ) ∂ 2 ok (w, xp ) ³ k ´ = for all u, v. ∂o (w,xp ) ∂wuv ∂wij ∂wuv k,i,j ∂
(3.2)
∂wij
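For concreteness, a minimal sketch follows (an illustration with a tiny tanh network; all names are ours, and eps only shifts B by an additive constant, so it drops out of the gradient). It evaluates B(w, xp) of equation 3.1 from the output Jacobian J with entries J[k, l] = ∂o^k/∂w_l, where l runs over all L weights:

```python
import numpy as np

def net_and_jacobian(V, W, x):
    """Tiny tanh net: o = W tanh(V x). Returns o(w, x) and the (K, L)
    Jacobian of the outputs with respect to all weights (W first, then V)."""
    h = np.tanh(V @ x)
    o = W @ h
    K, n_hid = W.shape
    dW = np.zeros((K, W.size))
    for k in range(K):                       # d o^k / d W_kb = h_b
        dW[k, k * n_hid:(k + 1) * n_hid] = h
    # d o^k / d V_ab = W_ka (1 - h_a^2) x_b
    dV = np.einsum('ka,a,b->kab', W, 1.0 - h ** 2, x).reshape(K, -1)
    return o, np.hstack([dW, dV])

def B(J, eps=1.0):
    """Equation 3.1, with J[k, l] = d o^k / d w_l."""
    L = J.shape[1]
    s = np.maximum(np.sum(J ** 2, axis=0), 1e-20)  # sum_k (do^k/dw_l)^2
    q = np.sum(np.abs(J) / np.sqrt(s), axis=1)     # inner sums over weights
    return (-L * np.log(np.sqrt(eps))
            + 0.5 * np.sum(np.log(s))
            + L * np.log(np.sqrt(np.sum(q ** 2))))

rng = np.random.default_rng(0)
V, W, x = rng.normal(size=(4, 3)), rng.normal(size=(2, 4)), rng.normal(size=3)
o, J = net_and_jacobian(V, W, x)
print(B(J))
```

In practice the full Jacobian need not be stored; the derivatives required for B's gradient can be obtained implicitly, as discussed next.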
It can be shown that by using Pearlmutter's and Møller's efficient second-order method, the gradient of B(w, xp) can be computed in O(L) time. Therefore, our algorithm has the same order of computational complexity as standard backpropagation.

4 Derivation of the Algorithm

4.1 Outline. We are interested in weights representing nets with tolerable error but flat outputs (see Sections 2 and A.1). To find nets with flat outputs, two conditions will be defined to specify B(w, xp) for xp ∈ X0 and, as a consequence, B(w, X0) (see Section 3). The first condition ensures flatness. The second condition enforces equal flatness in all weight space directions, to obtain low variance of the net functions induced by weights within a box. The second condition will be justified using an MDL-based argument. In both cases, linear approximations will be made (to be justified in Section A.2).

4.2 Formal Details. We are interested in weights causing tolerable error (see "acceptable minima" in Section 2) that can be perturbed without
causing significant output changes, thus indicating the presence of many neighboring weights leading to the same net function. By searching for the boxes from Section 2, we are actually searching for low-error weights whose perturbation does not significantly change the net function. In what follows we treat the input xp as fixed. For convenience, we suppress xp, abbreviating o^k(w, xp) by o^k(w). Perturbing the weights w by δw (with components δwij), we obtain ED(w, δw) := Σ_k (o^k(w + δw) − o^k(w))², where o^k(w) expresses o^k's dependence on w (in what follows, however, w often will be suppressed for convenience; we abbreviate o^k(w) by o^k). Linear approximation (justified in Section A.2) gives us flatness condition 1:

\[
ED(w, \delta w) \approx ED_{l}(\delta w)
:= \sum_k \left( \sum_{i,j} \frac{\partial o^k}{\partial w_{ij}}\, \delta w_{ij} \right)^2
\le ED_{l,\max}(\delta w)
:= \sum_k \left( \sum_{i,j} \left| \frac{\partial o^k}{\partial w_{ij}} \right| |\delta w_{ij}| \right)^2
\le \epsilon ,
\tag{4.1}
\]

where ε > 0 defines tolerable output changes within a box and is small enough to allow for linear approximation (it does not appear in B(w, xp)'s and B(w, D0)'s gradients; see Section 3). ED_l is ED's linear approximation, and ED_{l,max} is max{ED_l(w, δv) | ∀ i, j: δvij = ±δwij}. Flatness condition 1 is a "robustness condition" (or "fault tolerance condition" or "perturbation tolerance condition"; see, e.g., Minai and Williams 1994; Murray and Edwards 1993; Neti et al. 1992; Matsuoka 1992; Bishop 1993; Kerlirzin and Vallet 1993; Carter et al. 1990). Many boxes Mw satisfy flatness condition 1. To select a particular, very flat Mw, the following flatness condition 2 uses up degrees of freedom left by inequality 4.1:

\[
\forall\, i, j, u, v: \quad
(\delta w_{ij})^2 \sum_k \left( \frac{\partial o^k}{\partial w_{ij}} \right)^2
= (\delta w_{uv})^2 \sum_k \left( \frac{\partial o^k}{\partial w_{uv}} \right)^2 .
\tag{4.2}
\]
Flatness condition 2 enforces equal "directed errors"

\[
ED_{ij}(w, \delta w_{ij}) = \sum_k \left( o^k(w_{ij} + \delta w_{ij}) - o^k(w_{ij}) \right)^2
\approx \sum_k \left( \frac{\partial o^k}{\partial w_{ij}}\, \delta w_{ij} \right)^2 ,
\]
where o^k(wij) has the obvious meaning, and δwij is the i, jth component of δw. Linear approximation is justified by the choice of ε in inequality 4.1. As will be seen in the MDL justification to be presented below, flatness condition 2 favors the box that minimizes the mean perturbation error within the box. This corresponds to minimizing the variance of the net functions induced by weights within the box (recall that ED(w, δw) is quadratic).
4.3 Deriving the Algorithm from Flatness Conditions 1 and 2. We first solve equation 4.2 for |δwij|:

\[
|\delta w_{ij}| = |\delta w_{uv}|
\sqrt{ \frac{ \sum_k \left( \frac{\partial o^k}{\partial w_{uv}} \right)^2 }
            { \sum_k \left( \frac{\partial o^k}{\partial w_{ij}} \right)^2 } }
\quad \text{(fixing } u, v \text{ for all } i, j\text{)} .
\]

Then we insert the |δwij| (with fixed u, v) into inequality 4.1 (replacing the second "≤" in 4.1 with "=", because we search for the box with maximal volume). This gives us an equation for the |δwuv| (which depend on w, but this is notationally suppressed):

\[
|\delta w_{uv}| = \frac{ \sqrt{\epsilon} }
{ \sqrt{ \sum_k \left( \frac{\partial o^k}{\partial w_{uv}} \right)^2 }
  \sqrt{ \sum_k \left( \sum_{i,j}
    \frac{ \left| \frac{\partial o^k}{\partial w_{ij}} \right| }
         { \sqrt{ \sum_k \left( \frac{\partial o^k}{\partial w_{ij}} \right)^2 } }
  \right)^2 } } .
\tag{4.3}
\]
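A quick numerical check (illustration only) confirms that the δw of equation 4.3 satisfies flatness condition 1 with equality and flatness condition 2 exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
K, L, eps = 3, 10, 0.01
J = rng.normal(size=(K, L))                  # J[k, l] = d o^k / d w_l

s = np.sum(J ** 2, axis=0)                   # sum_k (d o^k / d w_l)^2
q = np.sum(np.abs(J) / np.sqrt(s), axis=1)   # inner sums of equation 4.3
dw = np.sqrt(eps) / (np.sqrt(s) * np.sqrt(np.sum(q ** 2)))  # equation 4.3

# Condition 1, worst case over sign flips: sum_k (sum_l |J_kl| dw_l)^2 = eps
print(np.isclose(np.sum((np.abs(J) @ dw) ** 2), eps))        # True
# Condition 2: (dw_l)^2 * sum_k (d o^k / d w_l)^2 is constant over l
print(np.allclose(dw ** 2 * s, dw[0] ** 2 * s[0]))           # True
```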
The |δwij| (u, v is replaced by i, j) approximate the Δwij from Section 2. The box Mw is approximated by AMw, the box with center w and edge lengths 2δwij. Mw's volume V(Δw) is approximated by AMw's box volume V(δw) := 2^L Π_{i,j} |δwij|. Thus, B̃(w, xp) (see Section 3) can be approximated by B(w, xp) := −log((1/2^L) V(δw)) = Σ_{i,j} −log |δwij|. This immediately leads to the algorithm given by equation 3.1.

4.4 How Can the Above Approximations Be Justified? The learning process itself enforces their validity (see Section A.2). Initially, the conditions above are valid only in a very small environment of an "initial" acceptable minimum. But during the search for new acceptable minima with more associated box volume, the corresponding environments are enlarged. Section A.2 will prove this for feedforward nets (experiments indicate that this appears to be true for recurrent nets as well).

4.5 Comments. Flatness condition 2 influences the algorithm as follows: (1) the algorithm prefers to increase the δwij's of weights whose current contributions are not important to compute the target output; (2) the algorithm enforces equal sensitivity of all output units with respect to weights of connections to hidden units. Hence, output units tend to share hidden units; that is, different hidden units tend to contribute equally to the computation of the target. The contributions of a particular hidden unit to different output unit activations tend to be equal too.

Flatness condition 2 is essential. Flatness condition 1 by itself corresponds to nothing more than first-order derivative reduction (ordinary sensitivity
reduction). However, what we really want is to minimize the variance of the net functions induced by weights near the actual weight vector. Automatically, the algorithm treats units and weights in different layers differently, and takes the nature of the activation functions into account.

4.6 MDL Justification of Flatness Condition 2. Let us assume a sender wants to send a description of the function induced by w to a receiver who knows the inputs xp but not the targets yp, where (xp, yp) ∈ D0. The MDL principle suggests that the sender wants to minimize the expected description length of the net function. Let EDmean(w, X0) denote the mean value of ED on the box. The expected description length is approximated by µEDmean(w, X0) + B̃(w, X0) + c, where c, µ are positive constants. One way of seeing this is to apply Hinton and van Camp's "bits back" argument to a uniform weight prior (EDmean corresponds to the output variance). However, we prefer to use a different argument: we encode each weight wij of the box center w by a bitstring according to the following procedure (Δwij is given):

(0) Define a variable interval Iij ⊂ R.
(1) Make Iij equal to the interval constraining possible weight values.
(2) While Iij ⊄ [wij − Δwij, wij + Δwij]: divide Iij into two equally sized disjoint intervals I1 and I2. If wij ∈ I1, then Iij ← I1; write '1'. If wij ∈ I2, then Iij ← I2; write '0'.

The final set {Iij} corresponds to a bit box within our box. This bit box contains Mw's center w and is described by a bitstring of length B̃(w, X0) + c, where the constant c is independent of the box Mw.
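A minimal sketch of this bisection encoding (illustration; the weight range and the convention that the upper half is I1 are assumptions):

```python
def encode_weight(w, delta, lo=-30.0, hi=30.0):
    """Emit one bit per halving of [lo, hi] until the current interval lies
    inside [w - delta, w + delta]; (lo, hi) is the interval constraining
    possible weight values (here the weight range of Section 5.6)."""
    bits = []
    while not (lo >= w - delta and hi <= w + delta):
        mid = 0.5 * (lo + hi)
        if w >= mid:              # take the upper half, write '1'
            bits.append('1')
            lo = mid
        else:                     # take the lower half, write '0'
            bits.append('0')
            hi = mid
    return ''.join(bits)

print(len(encode_weight(0.37, delta=0.05)))    # few bits: imprecise weight
print(len(encode_weight(0.37, delta=1e-4)))    # many bits: precise weight
```

The bitstring length grows like −log Δwij, which is exactly the per-weight term in B̃.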
From ED(w, wb − w) (wb is the center of the bit box) and the bitstring describing the bit box, the receiver can compute w by selecting an initialization weight vector within the bit box and using gradient descent to decrease B(wa, X0) until ED(wa, wb − wa) = ED(w, wb − w), where wa in the bit box denotes the receiver's current approximation of w (wa is constantly updated by the receiver). This is like "FMS without targets." Recall that the receiver knows the inputs xp. Since w corresponds to the weight vector with the highest degree of local flatness within the bit box, the receiver will find the correct w. ED(w, wb − w) is described by a gaussian distribution with mean zero. Hence, the description length of ED(w, wb − w) is µED(w, wb − w) (Shannon 1948). wb, the center of the bit box, cannot be known before training. However, we do know the expected description length of the net function, which is µEDmean + B̃(w, X0) + c (c is a constant independent of w). Let us approximate EDmean:

\[
ED_{l,\mathrm{mean}}(w, \delta w)
:= \frac{1}{V(\delta w)} \int_{AM_w} ED_{l}(w, \delta v)\, d\delta v
= \frac{1}{V(\delta w)}\, 2^L\, \frac{1}{3} \sum_{i,j} (\delta w_{ij})^3
\sum_k \left( \frac{\partial o^k}{\partial w_{ij}} \right)^2
\prod_{u,v:\, u,v \neq i,j} \delta w_{uv}
= \frac{1}{3} \sum_{i,j} (\delta w_{ij})^2
\sum_k \left( \frac{\partial o^k}{\partial w_{ij}} \right)^2 .
\]

Among those w that lead to equal B(w, X0) (the negative logarithm of the box volume plus L log 2), we want to find those with minimal description length of the function induced by w. Using Lagrange multipliers (viewing the δwij as variables), it can be shown that ED_{l,mean} is minimal under the condition B(w, X0) = constant iff flatness condition 2 holds. To conclude: with given box volume, we need flatness condition 2 to minimize the expected description length of the function induced by w.

5 Experimental Results

5.1 Experiment 1: Noisy Classification.

5.1.1 Task. The first task is taken from Pearlmutter and Rosenfeld (1991). The task is to decide whether the x-coordinate of a point in two-dimensional space exceeds zero (class 1) or does not (class 2). Noisy training and test examples are generated as follows: data points are obtained from a gaussian with zero mean and standard deviation 1.0, bounded in the interval [−3.0, 3.0]. The data points are misclassified with probability 0.05. Final input data are obtained by adding a zero-mean gaussian with stdev 0.15 to the data points. In a test with 2 million data points, it was found that the procedure above leads to 9.27% misclassified data. No method will misclassify less than 9.27%, due to the inherent noise in the data (including the test data). The training set is based on 200 fixed data points (see Fig. 3). The test set is based on 120,000 data points.

Figure 3: The 200 input examples of the training set. Crosses represent data points from class 1. Squares represent data points from class 2.
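A sketch of this data generator, reconstructed from the description above (illustration; not the code used in the experiments):

```python
import numpy as np

def make_data(n, seed=0):
    """Noisy two-class data of Section 5.1.1: class = sign of the
    x-coordinate, 5% label noise, gaussian input noise with stdev 0.15."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(0.0, 1.0, size=(4 * n, 2))           # stdev-1.0 gaussian
    pts = pts[(np.abs(pts) <= 3.0).all(axis=1)][:n]       # bounded in [-3, 3]
    labels = (pts[:, 0] > 0).astype(int)                  # class 1 iff x > 0
    flip = rng.random(n) < 0.05                           # 5% misclassified
    labels[flip] = 1 - labels[flip]
    inputs = pts + rng.normal(0.0, 0.15, size=pts.shape)  # input noise
    return inputs, labels

X_train, y_train = make_data(200)
X_test, y_test = make_data(120_000, seed=1)
```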
5.1.2 Results. Ten conventional backprop (BP) nets were tested against ten equally initialized networks trained by flat minimum search (FMS). After 1000 epochs, the weights of our nets essentially stopped changing (automatic "early stopping"), while BP kept changing weights to learn the outliers in the data set and overfit. In the end, our approach left a single hidden unit h with a maximal weight of 30.0 or −30.0 from the x-axis input. Unlike with BP, the other hidden units were effectively pruned away (outputs near zero). So was the y-axis input (zero weight to h). It can be shown that this corresponds to an "optimal" net with minimal numbers of units and weights. Table 1 illustrates the superior performance of our approach.

Table 1: Ten Comparisons of Conventional Backpropagation (BP) and Flat Minimum Search (FMS).

Trial | BP MSE | BP dto | FMS MSE | FMS dto
------|--------|--------|---------|--------
    1 |  0.220 |   1.35 |   0.193 |    0.00
    2 |  0.223 |   1.16 |   0.189 |    0.09
    3 |  0.222 |   1.37 |   0.186 |    0.13
    4 |  0.213 |   1.18 |   0.181 |    0.01
    5 |  0.222 |   1.24 |   0.195 |    0.25
    6 |  0.219 |   1.24 |   0.187 |    0.04
    7 |  0.215 |   1.14 |   0.187 |    0.07
    8 |  0.214 |   1.10 |   0.185 |    0.01
    9 |  0.218 |   1.21 |   0.190 |    0.09
   10 |  0.214 |   1.21 |   0.188 |    0.07

Note: The MSE columns show mean squared error on the test set. The dto columns show the difference between the percentage of misclassifications and the optimal percentage (9.27). FMS clearly outperforms backpropagation.

Parameters: Learning rate: 0.1. Architecture: (2-20-1). Number of training epochs: 400,000. With FMS: Etol = 0.0001. See Section 5.6 for parameters common to all experiments.

5.2 Experiment 2: Recurrent Nets.

5.2.1 Time-Varying Inputs. The method works for continually running fully recurrent nets as well. At every time step, a recurrent net with sigmoid activations in [0, 1] sees an input vector from a stream of randomly chosen input vectors from the set {(0, 0), (0, 1), (1, 0), (1, 1)}. The task is to switch on the first output unit whenever an input (1, 0) had occurred two time steps ago and to switch on the second output unit without delay in response to any input (0, 1). The task can be solved by a single hidden unit.

5.2.2 Non-Weight-Decay-Like Results. With conventional recurrent net algorithms, after training, both hidden units were used to store the input vector. This was not so with our new approach. We trained 20 networks. All of them learned perfect solutions. As with weight decay, most weights to the output decayed to zero. But unlike with weight decay, strong inhibitory connections (−30.0) switched off one of the hidden units, effectively pruning it away.

Parameters: Learning rate: 0.1. Architecture: (2-2-2). Number of training examples: 1,500. Etol = 0.0001. See Section 5.6 for parameters common to all experiments.
5.3 Experiment 3: Stock Market Prediction 1.

5.3.1 Task. We predict the DAX¹ (the German stock market index) using fundamental indicators. Following Rehkugler and Poddig (1990), the net sees the following indicators: (1) German interest rate (Umlaufsrendite), (2) industrial production divided by money supply, and (3) business sentiments (IFO Geschäftsklimaindex). The input (scaled in the interval [−3.4, 3.4]) is the difference between data from the current quarter and last year's corresponding quarter. The goal is to predict the sign of next year's corresponding DAX difference.

5.3.2 Details. The training set consists of 24 data vectors from 1966 to 1972. Positive DAX tendency is mapped to target 0.8; otherwise the target is −0.8. The test set consists of 68 data vectors from 1973 to 1990. FMS is compared against (1) conventional backpropagation with 8 hidden units (BP8), (2) BP with 4 hidden units (BP4) (4 hidden units are chosen because pruning methods favor 4 hidden units, but 3 is not enough), (3) optimal brain surgeon (OBS; Hassibi and Stork 1993), with a few improvements (see Section 5.6), and (4) weight decay (WD) according to Weigend et al. (1991) (WD and OBS were chosen because they are well known and widely used).

5.3.3 Performance Measure. Since wrong predictions lead to loss of money, performance is measured as follows. The sum of incorrectly predicted DAX changes is subtracted from the sum of correctly predicted DAX changes. The result is divided by the sum of absolute DAX changes.

¹ Raw DAX version according to Statistisches Bundesamt (Federal Office of Statistics). Other data are from the same source (except for business sentiment). Collected by Christian Puritscher, for a diploma thesis in industrial management at LMU, Munich.
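In code, the measure reads as follows (an illustrative formalization; expressing the result as a percentage, as the performance tables suggest, is our assumption):

```python
import numpy as np

def performance(predicted_sign, actual_change):
    """Correctly predicted DAX changes count positively, incorrectly
    predicted ones negatively, normalized by the total absolute change."""
    correct = np.sign(predicted_sign) == np.sign(actual_change)
    gain = np.where(correct, np.abs(actual_change), -np.abs(actual_change))
    return 100.0 * gain.sum() / np.abs(actual_change).sum()

print(performance(np.array([1, -1, 1, 1]),
                  np.array([0.5, 0.2, -0.3, 0.1])))   # ~9.09
```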
5.3.4 Results. Table 2 shows the results. Our method outperforms the other methods. Note that MSE is not a reasonable performance measure for this task. For instance, although FMS typically makes more correct classifications than WD, FMS's MSE often exceeds WD's. This is because WD's wrong classifications tend to be close to 0, while FMS often prefers large weights yielding strong output activations. FMS's few false classifications tend to contribute a lot to MSE.

Table 2: Comparisons of Conventional Backpropagation (BP4, BP8), Optimal Brain Surgeon (OBS), Weight Decay (WD), and Flat Minimum Search (FMS).

Method | Train MSE | Test MSE | Removed w | Removed u | Perf. Max | Perf. Min | Perf. Mean
-------|-----------|----------|-----------|-----------|-----------|-----------|-----------
BP8    |     0.003 |    0.945 |         - |         - |     47.33 |     25.74 |      37.76
BP4    |     0.043 |    1.066 |         - |         - |     42.02 |     42.02 |      42.02
OBS    |     0.089 |    1.088 |        14 |         3 |     48.89 |     27.17 |      41.73
WD     |     0.096 |    1.102 |        22 |         4 |     44.47 |     36.47 |      43.49
FMS    |     0.040 |    1.162 |        24 |         4 |     47.74 |     39.70 |      43.62

Note: All nets except BP4 start out with eight hidden units. Each value is a mean of seven trials. The MSE columns show mean squared error. Column w shows the number of pruned weights; column u shows the number of pruned units. The final three columns list maximal, minimal, and mean performance (see text) over seven trials. Note that test MSE is insignificant for performance evaluations (this is due to targets 0.8/−0.8, as opposed to the "real" DAX targets). Our method outperforms all other methods.

Parameters: Learning rate: 0.01. Architecture: (3-8-1), except BP4 with (3-4-1). Number of training examples: 20 million.
Method-specific parameters: FMS: Etol = 0.13; Δλ = 0.001. WD: like with FMS, but w0 = 0.2. OBS: Etol = 0.015 (the same result was obtained with higher Etol values, e.g., 0.13). See Section 5.6 for parameters common to all experiments.

5.4 Experiment 4: Stock Market Prediction 2.

5.4.1 Task. We predict the DAX again, using the basic setup of the experiment in Section 5.3. However, the following modifications are introduced:

• There are two additional inputs: dividend rate and foreign orders in manufacturing industry.
• Monthly predictions are made. The net input is the difference between the current month's data and last month's data. The goal is to predict the sign of next month's corresponding DAX difference.
• There are 228 training examples and 100 test examples.
• The target is the percentage of DAX change scaled in the interval [−1, 1] (outliers are ignored).
• Performance of WD and FMS is also tested on networks "spoiled" by conventional BP ("WDR" and "FMSR"; the "R" stands for retraining).
5.4.2 Results. Table 3 shows the results. The average performance of our method exceeds those of weight decay, OBS, and conventional BP. Table 3 also shows the superior performance of our approach when it comes to retraining "spoiled" networks (note that OBS is a retraining method by nature). FMS led to the best improvements in generalization performance.

Table 3: Comparisons of Conventional Backpropagation (BP), Optimal Brain Surgeon (OBS), Weight Decay after Spoiling the Net with BP (WDR), Flat Minimum Search after Spoiling the Net with BP (FMSR), Weight Decay (WD), and Flat Minimum Search (FMS).

Method | Train MSE | Test MSE | Removed w | Removed u | Perf. Max | Perf. Min | Perf. Mean
-------|-----------|----------|-----------|-----------|-----------|-----------|-----------
BP     |     0.181 |    0.535 |         - |         - |     57.33 |     20.69 |      41.61
OBS    |     0.219 |    0.502 |        15 |         1 |     50.78 |     32.20 |      40.43
WDR    |     0.180 |    0.538 |         0 |         0 |     62.54 |     13.64 |      41.17
FMSR   |     0.180 |    0.542 |         0 |         0 |     64.07 |     24.58 |      41.57
WD     |     0.235 |    0.452 |        17 |         3 |     54.04 |     32.03 |      40.75
FMS    |     0.240 |    0.472 |        19 |         3 |     54.11 |     31.12 |      44.40

Note: All nets start out with eight hidden units. Each value is a mean of ten trials. The MSE columns show mean squared error. Column w shows the number of pruned weights; column u shows the number of pruned units. The final three columns list maximal, minimal, and mean performance (see text) over ten trials (note again that MSE is an irrelevant performance measure for this task). Flat minimum search outperforms all other methods.

Parameters: Learning rate: 0.01. Architecture: (5-8-1). Number of training examples: 20,000,000.

Method-specific parameters: FMS: Etol = 0.235; Δλ = 0.0001; if Eaverage < Etol, then Δλ is set to 0.001. WD: like with FMS, but w0 = 0.2. FMSR: like with FMS, but Etol = 0.15; number of retraining examples: 5,000,000. WDR: like with FMSR, but w0 = 0.2. OBS: Etol = 0.235. See Section 5.6 for parameters common to all experiments.
5.5 Experiment 5: Stock Market Prediction 3.

5.5.1 Task. This time, we predict the DAX using weekly technical (as opposed to fundamental) indicators. The data (DAX values and 35 technical indicators) were provided by Bayerische Vereinsbank.

5.5.2 Data Analysis. To analyze the data, we computed (1) the pairwise correlation coefficients of the 35 technical indicators and (2) the maximal pairwise correlation coefficients of all indicators and all linear combinations of two indicators. This analysis revealed that only four indicators are not highly correlated. For such reasons, our nets see only the eight most recent DAX changes and the following technical indicators: (a) the DAX value, (b) change of the 24-week relative strength index (RSI), the relation of increasing tendency to decreasing tendency, (c) the 5-week statistic, and (d) the MACD (smoothed difference of exponentially weighted 6-week and 24-week DAX).

5.5.3 Input Data. The final network input is obtained by scaling the values (a-d) and the eight most recent DAX changes in [−2, 2]. The training set consists of 320 data points (July 1985-August 1991). The targets are the actual DAX changes scaled in [−1, 1].

5.5.4 Comparison. The following methods are applied to the training set: (1) conventional BP, (2) optimal brain surgeon/optimal brain damage (OBS/OBD), (3) weight decay (WD) according to Weigend et al. (1991), and (4) flat minimum search (FMS). The resulting nets are evaluated on a test set consisting of 100 data points (August 1991-July 1993). Performance is measured as in Section 5.3.

5.5.5 Results. Table 4 shows the results. Again, our method outperforms the other methods.

Parameters: Learning rate: 0.01. Architecture: (12-9-1). Training time: 10 million examples.

Method-specific parameters: OBS/OBD: Etol = 0.34. FMS: Etol = 0.34; Δλ = 0.003; if Eaverage < Etol, then Δλ is set to 0.03. WD: like with FMS, but w0 = 0.2. See Section 5.6 for parameters common to all experiments.
Table 4: Comparisons of Conventional Backpropagation (BP), Optimal Brain Surgeon (OBS), Weight Decay (WD), and Flat Minimum Search (FMS).

Method | Train MSE | Test MSE | Removed w | Removed u | Perf. Max | Perf. Min | Perf. Mean
-------|-----------|----------|-----------|-----------|-----------|-----------|-----------
BP     |      0.13 |     1.08 |         - |         - |     28.45 |     -16.7 |       8.08
OBS    |      0.38 |    0.912 |        55 |         1 |     27.37 |     -6.08 |      10.70
WD     |      0.51 |    0.334 |       110 |         8 |     26.84 |     -6.88 |      12.97
FMS    |      0.46 |    0.348 |       103 |         7 |     29.72 |     18.09 |      21.26

Note: All nets start out with nine hidden units. Each value is a mean of ten trials. The MSE columns show mean squared error. Column w shows the number of pruned weights; column u shows the number of pruned units. The final three columns list maximal, minimal, and mean performance (see text) over ten trials (note again that MSE is an irrelevant performance measure for this task). Flat minimum search outperforms all other methods.
5.6 Details and Parameters. With the exception of the experiment in Section 5.2, all units are sigmoid in the range [−1.0, 1.0]. Weights are constrained to [−30, 30] and initialized in [−0.1, 0.1]. The latter ensures high first-order derivatives at the beginning of the learning phase. WD is set up to barely punish weights below w0 = 0.2. Eaverage is the average error on the training set, approximated using exponential decay: Eaverage ← γEaverage + (1 − γ)E(net(w), D0), where γ = 0.85.

5.6.1 FMS Details. To control B(w, D0)'s influence during learning, its gradient is normalized and multiplied by the length of E(net(w), D0)'s gradient (same for weight decay; see below). λ is computed as in Weigend et al. (1991) and initialized with 0. Absolute values of first-order derivatives are replaced by 10^−20 if below this value. We ought to judge a weight wij as being pruned if δwij (see equation 4.3) exceeds the length of the weight range. However, the unknown scaling factor ε (see inequality 4.1 and equation 4.3) is required to compute δwij. Therefore, we judge a weight wij as being pruned if, with arbitrary ε, δwij is much bigger than the corresponding δ's of the other weights (typically, there are clearly separable classes of weights with high and low δ's, which differ from each other by a factor ranging from 10^2 to 10^5). If all weights to and from a particular unit are very close to zero, the unit is lost: due to tiny derivatives, the weights will never again increase significantly. Sometimes it is necessary to bring lost units back into the game. For this purpose, every ninit time steps (typically, ninit = 500,000), all weights wij with 0 ≤ wij < 0.01 are randomly reinitialized in [0.005, 0.01]; all weights
wij with 0 ≥ wij > −0.01 are randomly reinitialized in [−0.01, −0.005], and λ is set to 0.
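The following schematic summarizes these update rules (a sketch under stated assumptions: the exact λ schedule follows Weigend et al. 1991 and is only caricatured here; all names are ours):

```python
import numpy as np

def fms_step(w, grad_E, grad_B, lam, lr=0.01):
    # B's gradient is rescaled to the length of E's gradient before mixing.
    g_B = grad_B * (np.linalg.norm(grad_E) / max(np.linalg.norm(grad_B), 1e-20))
    return w - lr * (grad_E + lam * g_B)

def update_running_error(E_avg, E_now, gamma=0.85):
    # Exponentially decayed average training error (Section 5.6).
    return gamma * E_avg + (1.0 - gamma) * E_now

def update_lambda(lam, E_avg, E_tol, delta_lambda=0.001):
    # Caricature of the Weigend et al. heuristic: strengthen the regularizer
    # while the training error is tolerable, weaken it otherwise.
    return max(0.0, lam + delta_lambda if E_avg < E_tol else lam - delta_lambda)
```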
5.6.2 Weight Decay Details. We used Weigend et al.'s weight decay term

\[
D(w) = \sum_{i,j} \frac{ w_{ij}^2 / w_0^2 }{ 1 + w_{ij}^2 / w_0^2 } .
\]

As with FMS, D(w, w0)'s gradient was normalized and multiplied by the length of E(net(w), D0)'s gradient. λ was adjusted as with FMS. Lost units were brought back as with FMS.

5.6.3 Modifications of OBS. Typically most weights exceed 1.0 after training. Therefore, higher-order terms of δw in the Taylor expansion of the error function do not vanish. Hence, OBS is not fully theoretically justified. Still, we used OBS to delete high weights, assuming that higher-order derivatives are small if second-order derivatives are. To obtain reasonable performance, we modified the original OBS procedure (notation following Hassibi and Stork 1993):

• To detect the weight that deserves deletion, we use both L_q = w_q² / (2 [H^−1]_{qq}) (the original value used by Hassibi and Stork) and T_q := (∂E/∂w_q) w_q + (1/2) (∂²E/∂w_q²) w_q². Here H denotes the Hessian and H^−1 its approximate inverse. We delete the weight causing minimal training set error (after tentative deletion).
• As with OBD (LeCun et al. 1990), to prevent numerical errors due to small eigenvalues of H, we do the following: if L_q < 0.00001 or T_q < 0.00001 or ‖I − H^−1 H‖ > 10.0 (bad approximation of H^−1), we delete only the weight detected in the previous step. The other weights remain the same. Here ‖·‖ denotes the sum of the absolute values of all components of a matrix.
• If OBS's adjustment of the remaining weights leads to at least one absolute weight change exceeding 5.0, then δw is scaled such that the maximal absolute weight change is 5.0. This leads to better performance (also due to small eigenvalues).
• If Eaverage > Etol after weight deletion, then the net is retrained until either Eaverage < Etol or the number of training examples exceeds 800,000. Practical experience indicates that the choice of Etol barely influences the result.
• OBS is stopped if Eaverage > Etol after retraining. The most recent weight deletion is countermanded.
6 Relation to Previous Work

Most previous algorithms for finding low-complexity networks with high generalization capability are based on different prior assumptions. They can be broadly classified into two categories (see Schmidhuber 1994a for an exception):

1. Assumptions about the prior weight distribution. Hinton and van Camp (1993) and Williams (1994) assume that pushing the posterior weight distribution close to the weight prior leads to "good" generalization (see more details below). Weight decay (e.g., Hanson and Pratt 1989; Krogh and Hertz 1992) can be derived, for example, from gaussian or Laplace weight priors. Nowlan and Hinton (1992) assume that a distribution of networks with many similar weights generated by gaussian mixtures is "better" a priori. MacKay's weight priors (1992b) are implicit in additional penalty terms, which embody the assumptions made. The problem with these approaches is that there may not be a "good" weight prior for all possible architectures and training sets. With FMS, however, we do not have to select a "good" weight prior but instead choose a prior over input-output functions. This automatically takes the net architecture and the training set into account.

2. Prior assumptions about how theoretical results on early stopping and network complexity carry over to practical applications. Such assumptions are implicit in methods based on validation sets (Mosteller and Tukey 1968; Stone 1974; Eubank 1988; Hastie and Tibshirani 1990), for example, generalized cross-validation (Craven and Wahba 1979; Golub et al. 1979), final prediction error (Akaike 1970), and generalized prediction error (Moody and Utans 1992; Moody 1992). See also Holden (1994), Wang et al. (1994), Amari and Murata (1993), and Vapnik's structural risk minimization (Guyon et al. 1992; Vapnik 1992).

6.1 Constructive Algorithms and Pruning Algorithms. Other architecture selection methods are less flexible in the sense that they can be used only before or after weight adjustments. Examples are sequential network construction (Fahlman and Lebiere 1990; Ash 1989; Moody 1989), input pruning (Moody 1992; Refenes et al. 1994), unit pruning (White 1989; Mozer and Smolensky 1989; Levin et al. 1994), and weight pruning, for example, optimal brain damage (LeCun et al. 1990) and optimal brain surgeon (Hassibi and Stork 1993).

6.2 Hinton and van Camp (1993). They minimize the sum of two terms: the first is conventional error plus variance; the other is the distance ∫ p(w | D0) log [p(w | D0)/p(w)] dw between posterior p(w | D0) and weight prior p(w).
They have to choose a "good" weight prior. But perhaps there is no "good" weight prior for all possible architectures and training sets. With FMS, however, we do not depend on a "good" weight prior; instead we have a prior over input-output functions, thus taking into account net architecture and training set. Furthermore, Hinton and van Camp have to compute variances of weights and unit activations, which (in general) cannot be done using linear approximation. Intuitively, their weight variances are related to our Δwij. Our approach, however, does justify linear approximation, as seen in Section A.2.

6.3 Wolpert (1994a). His (purely theoretical) analysis suggests an interesting, different additional error term (taking into account local flatness in all directions): the logarithm of the Jacobi determinant of the functional from weight space to the space of possible nets. This term is small if the net output (based on the current weight vector) is locally flat in weight space (if many neighboring weights lead to the same net function in the space of possible net functions). It is not clear, however, how to derive a practical algorithm (e.g., a pruning algorithm) from this.

6.4 Murray and Edwards (1993). They obtain additional error terms consisting of weight squares and second-order derivatives. Unlike our approach, theirs explicitly prefers weights near zero. In addition, their approach appears to require much more computation time (due to second-order derivatives in the error term).

7 Limitations, Final Remarks, and Future Research

7.1 How to Adjust λ. Given recent trends in neural computing (see, e.g., MacKay 1992a, 1992b), it may seem like a step backward that λ is adapted using an ad hoc heuristic from Weigend et al. (1991). However, for determining λ in MacKay's style, one would have to compute the Hessian of the cost function. Since our term B(w, X0) includes first-order derivatives, adjusting λ would require the computation of third-order derivatives. This is impracticable. Also, to optimize the regularizing parameter λ (see MacKay 1992b), we need to compute the integral ∫ d^L w exp(−λB(w, X0)), but it is not obvious how; the "quick and dirty version" (MacKay 1992a) cannot deal with the unknown constant ε in B(w, X0). Future work will investigate how to adjust λ without too much computational effort. In fact, as will be seen in Section A.1, the choices of λ and Etol are correlated. The optimal choice of Etol may indeed correspond to the optimal choice of λ.

7.2 Generalized Boxes. The boxes found by the current version of FMS are axis aligned. This may cause an underestimate of flat minimum volume. Although our experiments indicate that box search works very well, it
will be interesting to compare alternative approximations of flat minimum volumes.

7.3 Multiple Initializations. First, consider this FMS alternative: run conventional BP starting with several random initial guesses and pick the flattest minimum with the largest volume. This does not work: conventional BP changes the weights according to steepest descent; it runs away from flat ranges in weight space. Using an "FMS committee" (multiple runs with different initializations), however, would lead to a better approximation of the posterior. This is left for future work.

7.4 Notes on Generalization Error. If the prior distribution of targets p(f) (see Section A.1) is uniform (or if the distribution of prior distributions is uniform), no algorithm can obtain a lower expected generalization error than training-error-reducing algorithms (see, e.g., Wolpert 1994b). Typical target distributions in the real world are not uniform, however; the real world appears to favor problem solutions with low algorithmic complexity (see, e.g., Schmidhuber 1994a). MacKay (1992a) suggests searching for alternative priors if the generalization error indicates a "poor regularizer." He also points out that with a "good" approximation of the nonuniform prior, more probable posterior hypotheses do not necessarily have a lower generalization error. For instance, there may be noise on the test set, or two hypotheses representing the same function may have different posterior values, and the expected generalization error ought to be computed over the whole posterior, not for a single solution. Schmidhuber (1994b) proposes a general "self-improving" system whose entire life is viewed as a single training sequence and which continually attempts to modify its priors incrementally based on experience with previous problems (see also Schmidhuber 1996).

7.5 Ongoing Work on Low-Complexity Coding. FMS can also be useful for unsupervised learning. In recent work, we postulate that a generally useful code of given input data fulfills three MDL-inspired criteria: (1) it conveys information about the input data, (2) it can be computed from the data by a low-complexity mapping, and (3) the data can be computed from the code by a low-complexity mapping. To obtain such codes, we simply train an auto-associator with FMS (after training, codes are represented across the hidden units). In initial experiments, depending on data and architecture, this always led to well-known kinds of codes considered useful in previous work by numerous researchers: we sometimes obtained factorial codes, sometimes local codes, and sometimes sparse codes. In most cases, the codes were of the low-redundancy, binary kind. Initial experiments with a speech data benchmark problem (vowel recognition) already showed the true usefulness of codes obtained by FMS: feeding the codes into standard,
supervised, overfitting backpropagation classifiers, we obtained much better generalization performance than competing approaches.

Appendix – Theoretical Justification

An alternative version of this Appendix (but with some minor errors) can be found in Hochreiter and Schmidhuber (1994). An expanded version of this Appendix (with a detailed description of the algorithm) is available on the World Wide Web (see our home pages).

A.1. Flat Nets: The Most Probable Hypotheses (Outline). We introduce a novel kind of generalization error that can be split into an overfitting error and an underfitting error. To find hypotheses causing low generalization error, we first select a subset of hypotheses causing low underfitting error. We are interested in those of its elements causing low overfitting error.

After listing relevant definitions, we will introduce a somewhat unconventional variant of the Gibbs algorithm, designed to take into account that FMS uses only the training data D0 to determine G(· | D0), a distribution over the set of hypotheses expressing our prior belief in hypotheses (here we do not care where the data came from; this will be treated later). This variant of the Gibbs algorithm will help us to introduce the concept of expected extended generalization error, which can be split into an overfitting error (relevant for measuring whether the learning algorithm focuses too much on the training set) and an underfitting error (relevant for measuring whether the algorithm sufficiently approximates the training set). To obtain these errors, we measure the Kullback-Leibler distance between the posterior p(· | D0) after training on the training set and the posterior pD0(· | D) after (hypothetical) training on all data (here the subscript D0 indicates that for learning D, G(· | D0) is used as prior belief in hypotheses too). The overfitting error measures the information conveyed by p(· | D0) but not by pD0(· | D). The underfitting error measures the information conveyed by pD0(· | D) but not by p(· | D0).

We then introduce the tolerable error level and the set of acceptable minima. The latter contains hypotheses with low underfitting error, assuming that D0 indeed conveys information about the test set (every training-set-error-reducing algorithm makes this assumption). In the remainder of the Appendix, we will focus only on hypotheses within the set of acceptable minima. We introduce the relative overfitting error, which is the relative contribution of a hypothesis to the mean overfitting error on the set of acceptable minima. The relative overfitting error measures the overfitting error of hypotheses with low underfitting error. The goal is to find a hypothesis with low overfitting error and, consequently, low generalization error. The relative overfitting error is approximated based on the trade-off between low training set error and large values of G(· | D0). The distribution
G(· | D0) is restricted to the set of acceptable minima, to obtain the distribution GM(D0)(· | D0). We then assume the data are obtained from a target chosen according to a given prior distribution. Using previously introduced distributions, we derive the expected test set error and the expected relative overfitting error. We want to reduce the latter by choosing a certain GM(D0)(· | D0) and G(· | D0). The special case of noise-free data is considered. To be able to minimize the expected relative overfitting error, we need to adopt a certain prior belief p(f). The only unknown distributions required to determine GM(D0)(· | D0) are p(D0 | f) and p(D | f). They describe how (noisy) data are obtained from the target. We have to make the following assumptions: the choice of prior belief is "appropriate," the noise on data drawn from the target has mean 0, and small noise is more probable than large noise (the noise assumptions ensure that reducing the training error, by choosing some h from M(D0), reduces the expected underfitting error). We do not need gaussian assumptions, though. We show that FMS approximates our special variant of the Gibbs algorithm. The prior is approximated locally in weight space, and flat net(w) are approximated by flat net(w0) with w0 near w in weight space.

A.1.1. Definitions. Let A = {(x, y) | x ∈ X, y ∈ Y} be the set of all possible input-output pairs (pairs of vectors). Let NET be the set of functions that can be implemented by the network. For every net function g ∈ NET we have g ⊂ A. Elements of NET are parameterized with a parameter vector w from the set of possible parameters W. net(w) is a function that maps a parameter vector w onto a net function g (net is surjective). Let T be the set of target functions f, where T ⊂ NET. Let H be the set of hypothesis functions h, where H ⊂ T. For simplicity, take all sets to be finite, and let all functions map each x ∈ X to some y ∈ Y. Values of functions with argument x are denoted by g(x), net(w)(x), f(x), h(x). We have (x, g(x)) ∈ g; (x, net(w)(x)) ∈ net(w); (x, f(x)) ∈ f; (x, h(x)) ∈ h. Let D = {(xp, yp) | 1 ≤ p ≤ m} be the data, where D ⊂ A. D is divided into a training set D0 = {(xp, yp) | 1 ≤ p ≤ n} and a test set D \ D0 = {(xp, yp) | n < p ≤ m}. For the moment, we are not interested in how D was obtained. We use squared error E(D, h) := Σ_{p=1}^{m} ‖yp − h(xp)‖², where ‖·‖ is the Euclidean norm. E(D0, h) := Σ_{p=1}^{n} ‖yp − h(xp)‖². E(D \ D0, h) := Σ_{p=n+1}^{m} ‖yp − h(xp)‖². E(D, h) = E(D0, h) + E(D \ D0, h) holds.

A.1.2. Learning. We use a variant of the Gibbs formalism (see Opper and Haussler 1991, or Levin et al. 1990). Consider a stochastic learning algorithm (random weight initialization, random learning rate). The learning algorithm attempts to reduce training set error by randomly selecting a hypothesis with low E(D0, h), according to some conditional distribution
G(· | D0) over H. G(· | D0) is chosen in advance, but in contrast to traditional Gibbs (which deals with unconditional distributions on H), we may take a look at the training set before selecting G. For instance, one training set may suggest linear functions as being more probable than others, another one splines, and so forth. The unconventional Gibbs variant is appropriate because FMS uses only X0 (the set of first components of D0's elements; see Section 3) to compute the flatness of net(w0). The trade-off between the desire for low E(D0, h) and the a priori belief in a hypothesis according to G(· | D0) is governed by a positive constant β (interpretable as the inverse temperature from statistical mechanics, or the amount of stochasticity in the training algorithm). We obtain p(h | D0), the learning algorithm applied to data D0:

\[
p(h \mid D_0) = \frac{G(h \mid D_0)\, \exp(-\beta E(D_0, h))}{Z(D_0, \beta)} ,
\tag{A.1}
\]
where
\[
Z(D_0, \beta) = \sum_{h \in H} G(h \mid D_0)\, \exp(-\beta E(D_0, h)) .
\tag{A.2}
\]
Z(D0, β) is the error momentum generating function or the weighted accessible volume in configuration space or the partition function (from statistical mechanics). For theoretical purposes, assume we know D and may use it for learning. To learn, we use the same distribution G(h | D0) as above (prior belief in some hypotheses h is based exclusively on the training set). There is a reason that we do not use G(h | D) instead: G(h | D) does not allow for making a distinction between a better prior belief in hypotheses and a better approximation of the test set data. However, we are interested in how G(h | D0) performs on the test set data D \ D0. We obtain

\[
p_{D_0}(h \mid D) = \frac{G(h \mid D_0)\, \exp(-\beta E(D, h))}{Z_{D_0}(D, \beta)} ,
\tag{A.3}
\]
where
\[
Z_{D_0}(D, \beta) = \sum_{h \in H} G(h \mid D_0)\, \exp(-\beta E(D, h)) .
\tag{A.4}
\]
The subscript D0 indicates that the prior belief is chosen based on D0 only.

A.1.3. Expected Extended Generalization Error. We define the expected extended generalization error EG(D, D0) on the unseen test exemplars D \ D0:

\[
E_G(D, D_0) := \sum_{h \in H} p(h \mid D_0)\, E(D \setminus D_0, h)
- \sum_{h \in H} p_{D_0}(h \mid D)\, E(D \setminus D_0, h) .
\tag{A.5}
\]
Here EG(D, D0) is the mean error on D \ D0 after learning with D0, minus the mean error on D \ D0 after learning with D. The second (negative) term is a lower bound (due to nonzero temperature) for the error on D \ D0 after learning the training set D0. For the zero temperature limit β → ∞ we get (summation convention explained at the end of this paragraph)

\[
E_G(D, D_0) = \sum_{h \in H,\, D_0 \subset h} \frac{G(h \mid D_0)}{Z(D_0)}\, E(D \setminus D_0, h),
\quad \text{where} \quad
Z(D_0) = \sum_{h \in H,\, D_0 \subset h} G(h \mid D_0) .
\]
In this case, the generalization error depends on G(h | D0), restricted to those hypotheses h compatible with D0 (D0 ⊂ h). For β → 0 (full stochasticity), we get EG(D, D0) = 0.

Summation convention: In general, Σ_{h∈H, D0⊂h} denotes summation over those h satisfying h ∈ H and D0 ⊂ h. In what follows, we will keep an analogous convention: the first symbol is the running index, for which additional expressions specify conditions.

A.1.4. Overfitting and Underfitting Error. Let us separate the generalization error into an overfitting error Eo and an underfitting error Eu (in analogy to Wang et al. 1994 and Guyon et al. 1992). We will see that overfitting and underfitting error correspond to the two different error terms in our algorithm: decreasing one term is equivalent to decreasing Eo, and decreasing the other is equivalent to decreasing Eu. Using the Kullback-Leibler distance (Kullback 1959), we measure the information conveyed by p(· | D0), but not by pD0(· | D) (see Fig. 4). We may view this as information about G(· | D0): since there are more h that are compatible with D0 than there are h compatible with D, G(· | D0)'s influence on p(h | D0) is stronger than its influence on pD0(h | D). To get the nonstochastic bias (see the definition of EG), we divide this information by β and obtain the overfitting error:

\[
E_o(D, D_0) := \frac{1}{\beta} \sum_{h \in H} p(h \mid D_0) \ln \frac{p(h \mid D_0)}{p_{D_0}(h \mid D)}
= \sum_{h \in H} p(h \mid D_0)\, E(D \setminus D_0, h)
+ \frac{1}{\beta} \ln \frac{Z_{D_0}(D, \beta)}{Z(D_0, \beta)} .
\tag{A.6}
\]

The second equality uses ln[p(h | D0)/pD0(h | D)] = βE(D \ D0, h) + ln[ZD0(D, β)/Z(D0, β)], which follows directly from equations A.1 and A.3.
Figure 4: Positive contributions to the overfitting error Eo(D, D0), after learning the training set with a large β.

Analogously, we measure the information conveyed by pD0(· | D), but not by p(· | D0) (see Fig. 5). This information is about D \ D0. To get the nonstochastic bias (see the definition of EG), we divide this information by β and obtain the underfitting error:

\[
E_u(D, D_0) := \frac{1}{\beta} \sum_{h \in H} p_{D_0}(h \mid D) \ln \frac{p_{D_0}(h \mid D)}{p(h \mid D_0)}
= - \sum_{h \in H} p_{D_0}(h \mid D)\, E(D \setminus D_0, h)
+ \frac{1}{\beta} \ln \frac{Z(D_0, \beta)}{Z_{D_0}(D, \beta)} .
\tag{A.7}
\]
Figure 5: Positive contributions to the underfitting error Eu(D0, D), after learning the training set with a small β. Again, we use the D-posterior from Figure 4, assuming it is almost fully determined by E(D, h) (even if β is smaller than in Figure 4).

Peaks in G(· | D0) that do not match peaks of pD0(· | D) produced by D \ D0 lead to overfitting error. Peaks of pD0(· | D) produced by D \ D0 that do not match peaks of G(· | D0) lead to underfitting error. Overfitting and underfitting error tell us something about the shape of G(· | D0) with respect to D \ D0, that is, to what degree the prior belief in h is compatible with D \ D0.

A.1.5. Why Are They Called "Overfitting" and "Underfitting" Error? Positive contributions to the overfitting error are obtained where peaks of p(· | D0) do not match (or are higher than) peaks of pD0(· | D): there some h will have large probability after training on D0 but lower probability after training on all data D. This is either because D0 has been approximated too closely or because of sharp peaks in G(· | D0); the learning algorithm specializes either on D0 or on G(· | D0) ("overfitting"). The specialization on D0 will become even worse if D0 is corrupted by noise (the case of noisy D0 will be treated later). Positive contributions to the underfitting error are obtained where peaks of pD0(· | D) do not match (or are higher than) peaks of p(· | D0): there some h will have large probability after training on all data
D but will have lower probability after training on D0. This is due to either a poor D0 approximation (note that p(· | D0) is almost fully determined by G(· | D0)), or to insufficient information about D conveyed by D0 ("underfitting"). Either the algorithm did not learn "enough" of D0, or D0 does not tell us anything about D. In the latter case, there is nothing we can do; we have to focus on the case where we did not learn enough about D0.

A.1.6. Analysis of Overfitting and Underfitting Error. EG(D, D0) = Eo(D, D0) + Eu(D, D0) holds. For the zero temperature limit β → ∞ we obtain ZD0(D) = Σ_{h∈H, D⊂h} G(h | D0) and Z(D0) = Σ_{h∈H, D0⊂h} G(h | D0); furthermore, Eo(D, D0) = Σ_{h∈H, D0⊂h} [G(h | D0)/Z(D0)] E(D \ D0, h) = EG(D, D0) and Eu(D, D0) = 0; that is, there is no underfitting error. For β → 0 (full stochasticity) we get Eu(D, D0) = 0 and Eo(D, D0) = 0 (recall that EG is not the conventional but the extended expected generalization error). Since D0 ⊂ D, ZD0(D, β) < Z(D0, β) holds. In what follows, averages after learning on D0 are denoted by ⟨·⟩_{D0}, and averages after learning on D are denoted by ⟨·⟩_D. Since

\[
Z_{D_0}(D, \beta) = \sum_{h \in H} G(h \mid D_0)\, \exp(-\beta E(D_0, h))\, \exp(-\beta E(D \setminus D_0, h)),
\]

we have

\[
\frac{Z_{D_0}(D, \beta)}{Z(D_0, \beta)} = \sum_{h \in H} p(h \mid D_0)\, \exp(-\beta E(D \setminus D_0, h))
= \left\langle \exp(-\beta E(D \setminus D_0, \cdot)) \right\rangle_{D_0} .
\]

Analogously, we have Z(D0, β)/ZD0(D, β) = ⟨exp(βE(D \ D0, ·))⟩_D. Thus, Eo(D, D0) = ⟨E(D \ D0, ·)⟩_{D0} + (1/β) ln⟨exp(−βE(D \ D0, ·))⟩_{D0}, and Eu(D, D0) = −⟨E(D \ D0, ·)⟩_D + (1/β) ln⟨exp(βE(D \ D0, ·))⟩_D.²

With large β, after learning on D0, Eo measures the difference between the average test set error and a minimal test set error. With large β, after learning on D, Eu measures the difference between the average test set error and a maximal test set error. Assume we do have a large β (large enough to exceed the minimum of (1/β) ln[ZD0(D, β)/Z(D0, β)]). We have to assume that D0 indeed conveys information about the test set. Preferring hypotheses h with small E(D0, h) by using a larger β leads to smaller test set error (without this assumption, no error-decreasing algorithm would make sense). Eu can be decreased by enforcing less stochasticity (by further increasing β), but this will increase Eo. Similarly, decreasing β (enforcing more stochasticity) will decrease Eo but increase Eu. Increasing β decreases the maximal test set error after learning D more than it decreases the average test set error, thus decreasing Eu, and vice versa. Decreasing β increases the minimal test set error after learning D0 more than it increases the average test set error, thus decreasing Eo, and vice versa. This is the trade-off between stochasticity and fitting the training set, governed by β.
A.1.7. Tolerable Error Level and Set of Acceptable Minima. Let us implicitly define a tolerable error level Etol (α, β) that, with confidence 1 − α, is the upper bound of the training set error after learning: X
p(E(D0 , h) ≤ Etol (α, β)) =
p(h | D0 ) = 1 − α .
(A.8)
h∈H,E(D0 ,h)≤Etol (α,β)
With (1−α)-confidence, we have E(D0 , h) ≤ Etol (α, β) after learning. Etol (α, β) decreases with increasing β, α. Now we define M(D0 ) := {h ∈ H | E(D0 , h) ≤ Etol (α, β)}, which is the set of acceptable minima (see Section 2). The set of acceptable minima is a set of hypotheses with low underfitting error. 2
We have −
∂ 2 ln Z(D0 ,β) ∂β 2
∂ ln Z(D0 ,β) ∂β
∂ ln ZD0 (D,β) ∂β ∂ 2 ln ZD0 (D,β)
= hD0 E(D0 , ·)i and − , ·)i)2 i
= hD E(D, ·)i. Furthermore,
= hD0 (E(D0 , ·) − hD0 E(D0 and = hD (E(D, ·) − hD E(D, ·)i)2 i. ∂β 2 See also Levin et al. (1990). Using these expressions, it can be shown: by increasing β (starting from β = 0), we will find a β that minimizes further makes this expression go to 0.
1 β
ln
ZD0 (D,β) Z(D0 ,β)
< 0. Increasing β
28
Sepp Hochreiter and Jurgen ¨ Schmidhuber
With probability 1 − α, the learning algorithm selects a hypothesis from M(D0 ) ⊂ H. Note that for the zero temperature limit β → ∞, we have Etol (α) = 0 and M(D0 ) = {h ∈ H | D0 ⊂ h}. By fixing a small Etol (or a large β), Eu will be forced to be low. We would like to have an algorithm decreasing (1) training set error (this corresponds to decreasing underfitting error) and (2) an additional error term, which should be designed to ensure low overfitting error, given a fixed small Etol . The remainder of Section A.1 will lead to an answer as to how to design this additional error term. Since low underfitting is obtained by selecting a hypothesis from M(D0 ), in what follows we will focus on M(D0 ) only. Using an appropriate choice of prior belief, at the end of this section, we will finally see that the overfitting error can be reduced by an error term expressing preference for flat nets. A.1.8. Relative Overfitting Error. Let us formally define the relative overfitting error Ero , which is the relative contribution of some h ∈ M(D0 ) to the mean overfitting error of hypotheses set M(D0 ): E (D, D0 , M(D0 ), h) = pM(D0 ) (h | D0 )E(D \ D0 , h), where ro p(h | D0 ) h∈M(D0 ) p(h | D0 )
pM(D0 ) (h | D0 ) := P
(A.9) (A.10)
for h ∈ M(D0 ), and zero otherwise. For h ∈ M(D0 ), we approximate p(h | D0 ) as follows. We assume that G(h | D0 ) is large where E(D0 , h) is large (trade-off between low E(D0 , h) and G(h | D0 )). Then p(h | D0 ) has large values (due to large G(h | D0 )) where E(D0 , h) ≈ Etol (α, β) (assuming Etol (α, β) is small). We get p(h | D0 ) ≈
G(h | D0 ) exp(−βEtol (α, β)) Z(D0 , β)
The relative overfitting error can now be approximated by Ero (D, D0 , M(D0 ), h) ≈ P
G(h | D0 ) E(D \ D0 , h) . h∈M(D0 ) G(h | D0 )
(A.11)
To obtain a distribution over M(D0 ), we introduce GM(D0 ) (· | D0 ), the normalized distribution G(· | D0 ) restricted to M(D0 ). For approximation A.11 we have Ero (D, D0 , M(D0 ), h) ≈ GM(D0 ) (h | D0 )E(D \ D0 , h) .
(A.12)
A.1.9. Prior Belief in f and D. Assume D was obtained from a target function f . Let p( f ) be the prior on targets and p(D | f ) the probability of
Flat Minima
29
obtaining D with a given f . We have p( f | D0 ) =
p(D0 | f )p( f ) , p(D0 )
(A.13)
P where p(D0 ) = f ∈T p(D0 | f )p( f ) . The data are drawn from a target function with added noise (the noisefree case is treated below). We do not make any assumptions about the nature of the noise; it does not have to be gaussian (as in MacKay’s work, 1992b). We want to select a G(· | D0 ) that makes Ero small; that is, those h ∈ M(D0 ) with small E(D \ D0 , h) should have high probabilities G(h | D0 ). We do not know D \ D0 during learning. D is assumed to be drawn from a target f . We compute the expectation of Ero , given D0 . The probability of the test set D \ D0 , given D0 , is p(D \ D0 | D0 ) =
X
p(D \ D0 | f )p( f | D0 ),
(A.14)
f ∈T
where we assume p(D \ D0 | f, D0 ) = p(D \ D0 | f ) (we do not remember which exemplars were already drawn). The expected test set error E(·, h) for some h, given D0 , is X
p(D \ D0 | D0 )E(D \ D0 , h)
D\D0
=
X f ∈T
p( f | D0 )
X
(A.15)
p(D \ D0 | f )E(D \ D0 , h) .
D\D0
The expected relative overfitting error Ero (·, D0 , M(D0 ), h) is obtained by inserting equation A.15 into equation A.12: Ero (·, D0 , M(D0 ), h) X X ≈ GM(D0 ) (h | D0 ) p( f | D0 ) p(D \ D0 | f )E(D \ D0 , h). f ∈T
(A.16)
D\D0
A.1.10. Minimizing Expected Relative Overfitting Error. We define a GM(D0 ) (· | D0 ) such that GM(D0 ) (· | D0 ) has its largest value near small expected test set error E(·, ·) (see equations A.12 and A.15). This definition leads to a low expectation of Ero (·, D0 , M(D0 ), ·) (see equation A.16). Define GM(D0 ) (h | D0 ) := δ(argminh0 ∈M(D0 ) (E(·, h0 )) − h),
(A.17)
where δ is the Dirac delta function, which we will use with loose formalism; the context will make clear how the delta function is used.
30
Sepp Hochreiter and Jurgen ¨ Schmidhuber
Using equation A.15 we get GM(D0 ) (h | D0 )
X
= δ argminh0 ∈M(D0 )
X
p( f | D0 )
f ∈T
(A.18) p(D \ D0 | f )E(D \ D0 , h0 ) − h .
D\D0
GM(D0 ) (· | D0 ) determines the hypothesis h from M(D0 ) that leads to the lowest expected test set error. Consequently, we achieve the lowest expected relative overfitting error. GM(D0 ) helps us define G: G(h | D0 ) := P
ζ + GM(D0 ) (h | D0 ) , h∈H (ζ + GM(D0 ) (h | D0 ))
(A.19)
/ M(D0 ), and where ζ is a small constant where GM(D0 ) (h | D0 ) = 0 for h ∈ ensuring positive probability G(h | D0 ) for all hypotheses h. To appreciate the importance of the prior p( f ) in the definition of GM(D0 ) (see also equation A.24), in what follows, we will focus on the noise-free case. Let p(D0 | f ) be equal to
A.1.11. The Special Case of Noise-Free Data. δ(D0 ⊂ f ) (up to a normalizing constant): δ(D0 ⊂ f )p( f ) . p( f | D0 ) = P f ∈T,D0 ⊂ f p( f )
(A.20)
δ(D\D0 ⊂ f ) Assume p(D \ D0 | f ) = P
. Let F be the number of elements P δ(D\D0 ⊂ f ) in X. p(D \ D0 | f ) = . We expand D\D0 p(D \ D0 | f )E(D \ D0 , h) 2F−n from equation A.15: D\D0
1 2F−n
X
δ(D\D0 ⊂ f )
E(D \ D0 , h) =
D\D0 ⊂ f
= =
1 2F−n 1 2F−n
X
X
E((x, y), h)
(A.21)
D\D0 ⊂ f (x,y)∈D\D0
X
E((x, y), h)
(x,y)∈ f \D0
¶ F−n µ X F−n−1 i=1
i−1
1 E( f \ D0 , h) . 2
P Here E((x, y), h) = ky − h(x)k2 , E( f \ D0 , h) = (x,y)∈ f \D0 ky − h(x)k2 , and PF−n ¡F−n−1¢ = 2F−n−1 . The factor 1/2 results from considering the mean i=1 i−1
Flat Minima
31
test set error (where the test set is drawn from f ), whereas E( f \ D0 , h) is the maximal test set error (obtained by using a maximal test set). From equations A.15 and A.21, we obtain the expected test set error E(·, h) for some h, given D0 : X
p(D \ D0 | D0 )E(D \ D0 , h) =
D\D0
1X p( f | D0 )E( f \ D0 , h) . 2 f ∈T
(A.22)
From equations A.22 and A.12, we obtain the expected Ero (·,D0 ,M(D0 ),h): Ero (·, D0 , M(D0 ), h) ≈
1 2
GM(D0 ) (h | D0 )
P f ∈T
(A.23) p( f | D0 )E( f \ D0 , h) .
For GM(D0 ) (h | D0 ) we obtain in this noise-free case GM(D0 ) (h | D0 ) =
X
δ argminh0 ∈M(D0 )
(A.24)
p( f | D0 )E( f \ D0 , h0 ) − h .
f ∈T
The lowest expected test set error is measured by D0 , h). See equation A.22.
1 2
P
f ∈T
p( f | D0 )E( f \
A.1.12. Noisy Data and Noise-Free Data: Conclusion. For both the noisefree and the noisy case, equation A.14 shows that given D0 and h, the expected test set error depends on prior target probability p( f ). A.1.13. Choice of Prior Belief. Now we select some p( f ), our prior belief in target f . We introduce a formalism similar to Wolpert’s (1994a ). p( f ) is defined as the probability of obtaining f = net(w) by choosing a w randomly according to p(w). R Let us first look at Wolpert’s formalism: p( f ) = dwp(w)δ(net(w) − f ). By restricting W to Winj , he obtains an injective function netinj : Winj → NET : netinj (w) = net(w) , which is net restricted to Winj . netinj is surjective (because net is surjective): Z δ(netinj (w) − f ) |det net0inj (w)|dw p(w) (A.25) p( f ) = |det net0inj (w)| W Z δ(g − f ) = dg p(net−1 inj (g)) |det net0inj (net−1 NET inj (g))| =
p(net−1 inj ( f )) |det net0inj (net−1 inj ( f ))|
,
32
Sepp Hochreiter and Jurgen ¨ Schmidhuber
where |det net0inj (w)| is the absolute Jacobian determinant of netinj , evaluated at w. If there is a locally flat net(w) = f (flat around w), then p( f ) is high. However, we prefer to follow another path. Our algorithm (flat minimum search) tends to prune a weight wi if net(w) is very flat in wi ’s direction. It prefers regions where det net0 (w) = 0 (where many weights lead to the same net function). Unlike Wolpert’s approach, ours distinguishes the probabilities of targets f = net(w) with det net0 (w) = 0. The advantage is that we do not only search for net(w) that are flat in one direction but for net(w) that are flat in many directions (this corresponds to a higher probability of the corresponding targets). Define net−1 (g) := {w ∈ W | net(w) = g}
(A.26)
and X
p(net−1 (g)) :=
p(w) .
(A.27)
w∈net−1 (g)
We have P p( f ) = P
g∈NET f ∈T
P
p(net−1 (g))δ(g − f )
g∈NET
p(net−1 (g))δ(g
− f)
= P
p(net−1 ( f )) . (A.28) −1 f ∈T p(net ( f ))
net partitions W into equivalence classes. To obtain p( f ), we compute the probability of w being in the equivalence class {w | net(w) = f }, if randomly chosen according to p(w). An equivalence class corresponds to a net function; net maps all w of an equivalence class to the same net function. A.1.14. Relation to FMS Algorithm. FMS (from Section 3) works locally in weight space W. Let w0 be the actual weight vector found by FMS (with h = net(w0 )). Recall the definition of GM(D0 ) (h | D0 ) (see equations A.17 and A.18); we want to find a hypothesis h that best approximates those f with large p( f ) (the test data have a high probability of being drawn from such targets). We will see that those f = net(w) with flat net(w) locally have high probability p( f ). Furthermore we will see that a w0 close to w with flat net(w) has flat net(w0 ) too. To approximate such targets f , the only thing we can do is find a w0 close to many w with net(w) = f and large p( f ). To justify this approximation (see definition of p( f | D0 ) while recalling that h ∈ GM(D0 ) ), we assume that the noise has mean 0 and that small noise is more likely than large noise (e.g., gaussian, Laplace, Cauchy distributions). To restrict p( f ) = p(net(w)) to a local range in W, we define regions of equal net functions F(w) = {w¯ | ∀τ 0 ≤ τ ≤ 1, w + τ (w¯ − w) ∈ W : net(w) = net(w + τ (w¯ − w))}. Note that F(w) ⊂ net−1 (net(w)). If net(w) is flat along long distances in many directions w¯ − w, then F(w) has many elements. Locally in weight space, at w0 with h = net(w0 ), for γ > 0 we define: if the
Flat Minima
33
¯ = f } exists, then minimum w = argminw¯ {kw¯ − w0 k | kw¯ − w0 k < γ , net(w) pw0 ,γ ( f ) = c p(F(w)), where c is a constant. If this minimum does not exist, then pw0 ,γ ( f ) = 0. pw0 ,γ ( f ) locally approximates p( f ). During search for w0 (corresponding to a hypothesis h = net(w0 )), to decrease locally the expected test set error (see equation A.15), we want to enter areas where many large F(w) are near w0 in weight space. We wish to decrease the test set error, which is caused by drawing data from highly probable targets f (those with large pw0 ,γ ( f )). We do not know, however, which w’s are mapped to target’s f by net(·). Therefore, we focus on F(w) (w near w0 in weight space), instead of pw0 ,γ ( f ). Assume kw0 − wk is small enough to allow for a Taylor expansion, and that net(w0 ) is flat in direction (w¯ − w0 ): net(w) = net(w0 + (w − w0 )) = net(w0 ) + ∇net(w0 )(w − w0 ) 1 + (w − w0 )H(net(w0 ))(w − w0 ) + · · · , 2 where H(net(w0 )) is the Hessian of net(·) evaluated at w0 , ∇net(w)(w¯ − w0 ) = ∇net(w0 )(w¯ − w0 ) + O(w − w0 ), and (w¯ − w0 )H(net(w))(w¯ − w0 ) = (w¯ − w0 )H(net(w0 ))(w¯ − w0 ) + O(w − w0 ) (analogously for higher-order derivatives). We see that in a small environment of w0 , there is flatness in direction (w¯ − w0 ) too. And, if net(w0 ) is not flat in any direction, this property also holds within a small environment of w0 . Only near w0 with flat net(w0 ), there may exist w with large F(w). Therefore, it is reasonable to search for a w0 with h = net(w0 ), where net(w0 ) is flat within a large region. This means searching for the h determined by GM(D0 ) (· | D0 ) of equation A.17. Since h ∈ M(D0 ), E(D0 , net(w0 )) ≤ Etol holds, we search for a w0 living within a large connected region, where for all w within this region P E(net(w0 ), net(w), X) = x∈X knet(w0 )(x) − net(w)(x)k2 ≤ ², where ² is defined in Section 2. To conclude, we decrease the relative overfitting error and the underfitting error by searching for a flat minimum (see the definition of flat minima in Section 2). A.1.15. Practical Realization of the Gibbs Variant. 1. Select α and Etol (α, β), thus implicitly choosing β. 2. Compute the set M(D0 ). 3. Assume we know how data are obtained from target f ; that is, we know p(D0 | f ), p(D\D0 | f ), and the prior p( f ). Then we can compute GM(D0 ) (· | D0 ) and G(· | D0 ). 4. Start with β = 0 and increase β until equation A.8 holds. Now we know the β from the implicit choice above.
34
Sepp Hochreiter and Jurgen ¨ Schmidhuber
5. Since we know all we need to compute p(h | D0 ), select some h according to this distribution. A.1.16. Three Comments on Certain FMS Limitations. 1. FMS only approximates the Gibbs variant given by the definition of GM(D0 ) (h | D0 ) (see equations A.17 and A.18). We only locally approximate p( f ) in weight space. If f = net(w) is locally flat around w, then there exist units or weights that can be given with low precision (or can be removed). If there are other weights wi with net(wi ) = f , then one may assume that there are also points in weight space near such wi where weights can be given with low precision (think of, for example, symmetrical exchange of weights and units). We assume the local approximation of p( f ) is good. The most probable targets represented by flat net(w) are approximated by a hypothesis h, which is also represented by a flat net(w0 ) (where w0 is near w in weight space). To allow for approximation of net(w) by net(w0 ), we have to assume that the hypothesis set H is dense in the target set T. If net(w0 ) is flat in many directions, then there are many net(w) = f that share this flatness and are well approximated by net(w0 ). The only reasonable thing FMS can do is to make net(w0 ) as flat as possible in a large region around w0 , to approximate the net(w) with large prior probability (recall that flat regions are approximated by axis-aligned boxes, as discussed in Section 7.2). This approximation is fine if net(w0 ) is smooth enough in “unflat” directions (small changes in w0 should not result in very different net functions). 2. Concerning point 3 above. p( f | D0 ) depends on p(D0 | f ) (how the training data are drawn from the target; see equation A.13). GM(D0 ) (h | D0 ) depends on p( f | D0 ) and p(D\D0 | f ) (how the test data are drawn from the target). Since we do not know how the data are obtained, the quality of the approximation of the Gibbs algorithm may suffer from noise that has not mean 0, or from large noise being more probable than small noise. Of course, if the choice of prior belief does not match the true target distribution, the quality of GM(D0 ) (h | D0 )’s approximation will suffer as well. 3. Concerning point 5 above. FMS outputs only a single h instead of p(h | D0 ). This issue is discussed in Section 7.3. A.1.17. Conclusion. Our FMS algorithm from Section 3 only approximates the Gibbs algorithm variant. Two important assumptions are made. The first is that an appropriate choice of prior belief has been made. The second is that the noise on the data is not too “weird” (mean 0, small noise more likely). The two assumptions are necessary for any algorithm based
Flat Minima
35
on an additional error term besides the training error. The approximations are that p( f ) is approximated locally in weight space, and flat net(w) are approximated by flat net(w0 ) with w0 near w’s. Our Gibbs variant takes into account that FMS uses only X0 for computing flatness. A.2. Why Does the Hessian Decrease? This section shows that secondorder derivatives of the output function vanish during flat minimum search. This justifies the linear approximations in Section 4. A.2.1. Intuition. We show that the algorithm tends to suppress the following values: unit activations, first-order activation derivatives, and the sum of all contributions of an arbitrary unit activation to the net output. Since weights, inputs, activation functions, and their first- and second-order derivatives are bounded, the entries in the Hessian decrease where the corresponding |δwij | increase. A.2.2. Formal Details. We consider a strictly layered feedforward network with K output units and g layers. We use the same activation function f for all units. For simplicity, in what follows we focus on a single input vector xp . xp (and occasionally w itself) will be notationally suppressed. We have ( yj ∂yl 0 = f (sl ) P ∂ym ∂wij m wlm wij
)
for i = l for i 6= l
,
(A.29)
P where ya denotes the activation of the ath unit, and sl = m wlm ym . The last term of equation 3.1 (the “regulator”) expresses output sensitivity (to be minimized) with respect to simultaneous perturbations of all weights. “Regulation” is done by equalizing the sensitivity of the output units with respect to the weights. The “regulator” does not influence the same particular units or weights for each training example. It may be ignored for the purposes of this section. Of course, the same holds for the first (constant) term in equation 3.1. We are left with the second term. With equation A.29, we obtain: X
log
i,j
X k
=2
Ã
∂ok ∂wij
!2
X
(fan-in of unit k) log | f 0 (sk )|
unit k in the gth layer
+2
X unit j in the (g−1)th layer
(fan-out of unit j) log |y j |
36
Sepp Hochreiter and Jurgen ¨ Schmidhuber
X
+
(fan-in of unit j) log
unit j in the (g−1)th layer
f 0 (sk )wkj
¢2
k
X
+2
X¡
(fan-in of unit j) log | f 0 (sj )|
unit j in the (g−1)th layer
X
+2
(fan-out of unit j) log |y j |
unit j in the (g−2)th layer
X
+
(fan-in of unit j) log
unit j in the (g−2)th layer
à f 0 (sk )
×
X
X k
!2 f 0 (sl )wkl wlj
l
X
+2
(fan-in of unit j) log | f 0 (sj )|
unit j in the (g−2)th layer
X
+2
(fan-out of unit j) log |y j |
unit j in the (g−3)th layer
X
+
log
i,j, where unit i in a layer 0, i pi = 1 and the state Xi (t) of the ith neuron at time t is updated to the new state according to N X aij rj Xj (t) + bi Xi (t + 1) = f j=1
but keep all other states unchanged: Xj (t + 1) = Xj (t), j 6= i. Here A = (aij , i, j = 1, . . . , N) is a symmetric N × N matrix, ri ≥ 0, i = 1, . . . , N and f (x) a continuous function that is strictly increasing if α ≤ x ≤ β and f (x) = α if x ≤ α, f (x) = β if x ≥ β. Note that f is not differentiable, and this is the difficulty for proving the global stability of the asynchronous dynamics. Without loss of generality we suppose that α < 0 = f (0) < β. A Lyapunov function for the stochastic process X(t) = (Xi (t), i = 1, . . . , N) is called a supermartingale (Chow and Teicher 1988), defined as follows: Definition 1.
A stochastic process L(X(t)) is called a supermartingale if
E(L(X(t + 1)) | Ft ) ≤ L(X(t))
(2.1)
where L is a measurable function, Ft = σ (X(1), . . . , X(t)) the sigma algebra generated by X(s), s = 1, . . . , t and E(· | Ft ) is the conditional expectation with respect to the sigma algebra Ft . The meaning of the definition of a supermartingale is clear. In average, that is, E(L(X(t + 1)) | Ft ) the function L decreases with respect to its dynamic evolution. Certainly when the dynamics is deterministic, equation 2.1 reduces to the condition of a usual Lyapunov function L(X(t + 1)) ≤ L(X(t)). Define L(X(t)) =
N Z X j=1
Xj (t) 0
rj f −1 (y) dy −
N N X 1X aji Xj (t)Xi (t)rj ri − rj bj Xj (t). 2 j,i=1 j=1
Notice that when f is differentiable, L coincides with the Lyapunov function used by Marcus and Westervelt (1989) to study the time evolution of iterated map networks, with the function in Herz and Marcus (1993) for distributed dynamics, and with Hopfield’s energy function for binary neurons and continuous neurons (Feng and Tirozzi 1995; Grossberg 1988; Hopfield 1982, 1984). Here another question arises: Does the asynchronous dynamics reach the set of attractors within finite time? In other words, is the system
Lyapunov Functions for Neural Nets
45
dissipative? For this purpose we introduce τ (²) := inf{t : ρ(X(t), A) ≤ ²}, which is the first time for the process X(t) to enter the ²-neighborhood of the attractors A of the asynchronous dynamics X(t), where ρ is the Euclidean distance on RN . Theorem 1. L(X(t)) is a supermartingale and ∀² > 0, τ (²) < ∞ provided that aii ≥ 0, i = 1, . . . , N. Proof. In terms of the definition of the asynchronous dynamics, which only changes state of neuron i with probability pi at a time step, we have E(L(X(t + 1)) | Ft ) =
1 1X aij ri rj Xi (t)Xj (t) − akk rk2 Xk2 (t + 1) 2 2 k=1 i,j6=k X Z Xi (t) X ri bi Xi (t) + ri f −1 (y) dy − N X
pk −
i6=k
i6=k
0
− bk rk Xk (t + 1) N X − aik ri rk Xk (t + 1)Xi (t) i=1 i6=k
Z
#
Xk (t+1)
+
rk f
0
−1
(y) dy .
(2.2)
Therefore E(L(X(t + 1)) | Ft ) − L(X(t)) N N X 1 X pk − ajk rj rk (Xk (t + 1) − Xk (t))Xj (t) − rk2 akk (Xk2 (t + 1) − Xk2 (t)) = 2 j=1 k=1 j6=k
Z
Xk (t+1)
+ 0
rk f −1 (y) dy −
Z 0
Xk (t)
rk f −1 (y) dy − bk rk Xk (t + 1) + bk rk Xk (t)
N N X X 1 = pk − ajk rj rk (Xk (t + 1) − Xk (t))Xj (t) − rk2 akk (Xk (t + 1) − Xk (t))2 2 j=1 k=1 Z Xk (t+1) Z Xk (t) −1 −1 + rk f (y) dy − rk f (y) dy − bk rk Xk (t + 1) + bk rk Xk (t) . 0
0
46
Jianfeng Feng
Figure 1: An explanation of the Legendre-Fenchel transformation. It is easily RX RY RX R f (X) −1 f (y) dy. seen that XY = 0 f (x) dx + 0 f −1 (y) dy = 0 f (x) dx + 0
In terms of the Legendre-Fenchel transformation, which in our case is fairly straightforward (see Fig. 1), Z
Xk (t+1)
f
−1
Z (y) dy +
0
uk (t)
0
f (x) dx = uk (t)Xk (t + 1),
t ≥ 0,
where uk (t) :=
N X
akj rj rk Xj (t) + bk
j=1
we have that E(L(X(t + 1)) | Ft ) − L(X(t)) " Z Z N uk (t) X p k rk − f (x) dx + ≤ k=1
0
0
uk (t−1)
# f (x) dx + Xk (t)(uk (t) − uk (t − 1))
≤ 0 since the function f is a strictly increasing function. So the function L(X(t))
Lyapunov Functions for Neural Nets
47
is a Lyapunov function (supermartingale) of the dynamics X(t). Next we consider the second conclusion of the theorem. As long as X(t) 6∈ S² := {x; ρ(x, A) ≤ ², x ∈ RN }, g(X(t)) := E(L(X(t + 1) | Ft )) − L(X(t)) < 0. Hence there is a δ = δ(²) > 0 such that g(X(t)) < −δ for X(t) ∈ [α, β]N \ S² . Therefore the process M(t) = L(X(t)) + tδ is also a bounded supermartingale. According to Doob’s (Chow and Teicher 1988) theorem, M(τ ∧ t) is again a bounded supermartingale. From the convergence theorem for supermartingales (Chow and Teicher 1988), it follows that lim M(τ ∧ t) = M < ∞
t→∞
a.s.
Thus we conclude that lim M(τ ∧ t) = lim (L(X(τ ∧ t)) + δ · (τ ∧ t)) < ∞
t→∞
t→∞
a.s.
Note that L(X(τ ∧ t)) is itself a bounded supermartingale; therefore, limt→∞ (τ ∧ t) < ∞ a.s. which implies τ < ∞ a.s. 3 Synchronous Dynamics Consider the following synchronous dynamics: Y(t + 1) = F(AR(Y(t))0 + B), t = 0, . . . ,
(3.1)
where Y(t) = (Yi (t), i = 1, . . . , N) ∈ RN for (·)0 representing the transpose of a vector, R = (ri δij ) for δij = 1 if i = j and δij = 0 if i 6= j, ri ≥ 0, i, j = 1, . . . , N, B = (bi , i = 1, . . . , N) ∈ RN and F = ( f, . . . , f ). As one may expect the behavior of the synchronous dynamics defined by equation 3.1 is substantially different from that of the asynchronous dynamics. In the case of asynchronous dynamics, we know from Theorem 1 that the set of attractors of the dynamics are all fixed-point attractors. Here, however, in addition to stable fixed points, there are attractors of two-state limit cycles for the synchronous dynamics, which is the same situation as in the Hopfield model. Due to the compactness of the state space of the dynamics (equation 3.1) we assert that the synchronous dynamics (equation 3.1) is dissipative. The proof of the following theorem is omitted since it is similar to that of Theorem 1.
48
Jianfeng Feng
Theorem 2.
The function
V(Y(t)) = −
X
aij ri rj Yi (t)Yj (t + 1) −
i,j
+
XZ i
0
Yi (t)
ri f −1 (u) du +
X
ri bi (Yi (t) + Yi (t + 1))
XZ i
Yi (t+1)
ri f −1 (u) du
0
is a Lyapunov function of the synchronous dynamics. The set of all fixed-point attractors of the asynchronous or synchronous dynamics can be divided into two categories: saturated attractors and unsaturated attractors. It is reported that saturated attractors are most likely to be observed in numerical simulations in Linsker’s model and the BSB model and are investigated in Feng et al. (1995, 1996). 4 Conclusions I construct Lyapunov functions for a class of asynchronous and synchronous dynamics, which include many models in neural networks as a special case. For the two most commonly utilized dynamics of neural networks with discrete time, asynchronous and synchronous dynamics, my results indicate that in general it is impossible to define a Lyapunov function of the model under consideration depending on only one state. For the asynchronous dynamics, a restriction on the matrix or on the interaction between neurons is needed for the existence of a Lyapunov function, while there may exist two-state limit cycles for the synchronous dynamics. This note constructs Lyapunov functions of the asynchronous and synchronous dynamics with nondifferentiable characteristics and may help our understanding of the dynamic properties of related models. Acknowledgments This article was partially supported by the A. von Humboldt Foundation of Germany. I thank an anonymous referee for bringing the references Herz and Marcus (1993) and Marcus and Westervelt (1989) to my attention. References Albeverio, S., Feng, J., and Qian, M. 1995. Role of noises in neural networks. Phys. Rev. E. 52, 6593–6606. Chow, S. Y., and Teicher, H. 1988. Probability Theory. Springer-Verlag, New York. Feng, J., Pan, H., and Roychowdhury, V. P. 1995. A rigorous analysis of Linskertype Hebbian learning. In Advances in Neural Information Processing Systems
Lyapunov Functions for Neural Nets
49
7, G. Tesauro, D. S. Touretzky, and T. K. Leen, eds., pp. 319–326. MIT Press, Cambridge, MA. Feng, J., Pan, H., and Roychowdhury, V. P. 1996. On neurodynamics with limiter function and Linsker’s developmental model. Neural Computation 8(5), 1003– 1019. Feng, J., and Tirozzi, B. 1995. The SLLN for the free-energy of the Hopfield and spin glass model. Helvetica Physica Acta 68, 365–379. Feng, J., and Tirozzi, B. 1996. An application of the saturated attractor analysis to three typical models. In Cybernetic and Systems ’96, R. Trappl, ed., pp. 1102– 1107. World Scientific, Singapore. Golden, R. M. 1993. Stability and optimization analyses of the generalized BrainState-in-a-Box neural network model. Journal of Mathematical Psychology 37, 282–298. Goles, E., and Martinez, S. 1990. Neural and Automata Networks. Kluwer, Dordrecht. Grossberg, S. 1988. Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Networks 1, 17–61. Herz, A. V. M., and Marcus, C. M. 1993. Distributed dynamics in neural networks. Phys. Rev. E 47, 2155–2161. Hopfield, J. J. 1982. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558. Hopfield, J. J. 1984. Neurons with graded response have computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA 81, 3088–3092. Hui, S., and Zak, S. H. 1992. Dynamical analysis of the Brain-State-in-a-Box (BSB) neural models. IEEE Transactions on Neural Networks 3, 86–94. Linsker, R. 1986. From basic network principle to neural architecture (series). Proc. Natl. Acad. Sci. USA 83, 7508–7512, 8390–8394, 8779–8783. Marcus, C. M., and Westervelt, R. M. 1989. Dynamics of iterated map networks. Phys. Rev. A 40, 501–504.
Received October 23, 1995; accepted April 9, 1996.
Communicated by George Gerstein
Detecting Synchronous Cell Assemblies with Limited Data and Overlapping Assemblies Gary Strangman Department of Cognitive and Linguistic Sciences, Brown University, Providence, RI 02912 USA
Two statistical methods—cross-correlation (Moore et al. 1966) and gravity clustering (Gerstein et al. 1985)—were evaluated for their ability to detect synchronous cell assemblies from simulated spike train data. The two methods were first analyzed for their temporal sensitivity to synchronous cell assemblies. The presented approach places a lower bound on the amount of data required to detect a synchronous assembly. On average, both methods required the same minimum amount of recording time to detect significant pairwise correlations, but the gravity method exhibited less variance in the recording time. The precise length of recording depends on the consistency with which a neuron fires synchronously with the assembly but was independent of the assembly firing rate. Next, the statistical methods were tested with respect to their ability to differentiate two distinct assemblies that overlapped in time and space. Both statistics could adequately differentiate two overlapping synchronous assemblies. For cross-correlation, this ability deteriorates quickly when considering three or more simultaneously active, overlapping assemblies, whereas the gravity method should be more flexible in this regard. The work demonstrates the difficulty of detecting assembly phenomena from simultaneous neuronal recordings. Other statistical methods and the detection of other types of assemblies are also discussed.
1 Introduction Although electrophysiological recordings have been a part of neuroscience research for over a century (Caton 1875), only in the past 20 years has there been considerable technological growth in multiple simultaneous electrode recordings (Pickard and Welberry 1976; Gross et al. 1977; McNaughton et al. 1983; Wilson and McNaughton 1993; Nicolelis et al. 1993; Nordhausen et al. 1994). Recently it has even become feasible to record individual spike trains from over 100 neurons simultaneously (e.g., Wilson and McNaughton 1993; Falk et al. 1993). Such technological advances do more than simply facilitate data collection. With the data from many simultaneously recorded neurons, one can begin to ask so-called functional questions about the brain. Neural Computation 9, 51–76 (1997)
c 1997 Massachusetts Institute of Technology °
52
Gary Strangman
One critical functional question stems from Barlow (1972, 1992), who postulates that each cell is an independent representational unit. In contrast with this view, several other theorists (including James 1890; Hebb 1949; Palm 1982; Edelman 1987) have predicted that neurons depend on one another for representation; that is, their responses are organized into functional assemblies. Evidence is beginning to surface in favor of the latter hypothesis—that is, assembly-based neuronal organization. When Georgopoulos et al. (1986) recorded from monkeys performing reaching tasks, they found data consistent with arm movement direction being encoded across populations of direction-selective neurons. Using information theory, Gochin et al. (1994) provide evidence that neurons in inferotemporal cortex also use some type of distributed (i.e., assembly-based) coding scheme, in this case for visually presented shapes. Data in Nicolelis et al. (1993) suggest that distributed representations exist for whiskers in the rat somatosensory thalamus as well. And it has even been suggested that neural codes in monkey frontal cortex can be found only by investigating synchrony across ensembles of neurons (Vaadia et al. 1995). Though these results support the existence of cell assemblies, in almost all cases it remains to be demonstrated precisely how groups of neurons work together to form these functional units. That is, what particular type of distributed code or cell assembly is implemented in the brain? Although this question is unresolved, many hypotheses have been introduced. Perhaps the most straightforward distributed code is synchrony, where neurons of an assembly fire synchronized volleys of spikes. In this case, assembly membership is revealed as temporally correlated neural discharges. It is worth dwelling briefly on the synchronous assembly hypothesis because such assemblies will form the basis for the analysis presented in this chapter. The simplest possible synchronous cell assembly is one where all assembly-cell spike trains are identical; that is, every neuron in the assembly fires at precisely the same time as every other neuron in the assembly. For a slightly more general synchronous cell assembly, relax the temporal precision constraint somewhat and thus introduce temporal jitter into otherwise identical spike trains. An even more general definition also relaxes the identity constraint. Thus, individual neurons are allowed to discharge at times when the assembly as a whole is silent (e.g., random background spikes in the spike trains) and/or neurons can “skip” some proportion of assembly discharge times (a measure of consistency). As an alternative to synchronous assemblies, it is possible that phaselocked oscillations in neuronal firing could bind active neurons together into assemblies. Eckhorn et al. (1988) and Gray et al. (1989) have suggested this, among others. More complex assembly definitions are also possible, including lateral inhibition networks, local feedback loops (Kubie 1930), attractor networks (e.g., Anderson et al. 1977; Hopfield 1982), or other spatiotemporal activation patterns (e.g., synfire chains; Abeles 1982, 1991).
Detecting Synchronous Cell Assemblies
53
Assuming that one has hypothesized a particular assembly type and provided that adequate numbers of neurons have been sampled (Strangman 1996), the following question remains: What are the appropriate analytical tools to test whether the hypothesized assembly is present in the data? Although many tools exist, few have been evaluated for their abilities to find assembly phenomena. The goal of this paper is to evaluate two statistics toward this end. Before selecting such tools, however, an assembly type must be established because the assembly type determines the relevant parameters for statistical analysis. Synchronous assemblies were the chosen type of functional assembly for two main reasons: (1) synchrony is commonly suggested as an important parameter of brain function, and (2) synchronous assemblies are one of the few types of functional units that require no additional a priori knowledge about the neurons. Assembly hypotheses like local feedback loops, miniature classifier networks, cellular automata, or even simple lateral inhibition all require some knowledge about the role of each recorded neuron in the computation (inhibiting versus inhibited, etc.) in order to classify correctly neurons as belonging to one or another active assembly. Any analysis of this sort requires single-unit spike train data known a priori to contain an assembly. This precludes the use of electrophysiological data, and therefore the required data were generated by a computer model. The synchronous assemblies (defined earlier) were modeled with the assembly jitter controlled by a parameter ε and the firing consistency controlled by the parameter κ (see Section 2.1). These simulated spike trains then formed the input to the statistical methods. The goal of this work was to compare two different statistical techniques for their abilities to detect assembly phenomena. Cross-correlation (Moore et al. 1966) and the gravity method (Gerstein et al. 1985) were chosen because synchronous assemblies are defined by coincident spiking, and these two statistics are fundamentally coincidence detectors. The two methods were evaluated for their temporal sensitivity to assembly activity and for their ability to differentiate two simultaneously active assemblies present in the same recorded region. 2 Simulation Data and Analysis Procedures 2.1 Simulated Synchronous Assemblies. Eighty spike trains were generated each time cell assembly data were needed. The simulated neurons were given properties that roughly correspond to pyramidal cells of motor cortex, though the results are not specific to any particular brain region. As will be shown, the implemented firing rates are immaterial to the analysis. The generation method is similar to that of Aertsen and Gerstein (1985). Each binary unit fired at an average background rate randomly selected between 4 and 10 Hz. The precise firing times were modeled as independent Poisson processes (equal probability of a spike at every time step) modified
54
Gary Strangman
A
B
C
D
Figure 1: Illustration of the spike train generation process for assembly units. A source spike train is generated (A), and then spikes are selected with a probability κ. Spike train (B) shows one possible result of this process for κ = 0.4. These spikes are then added to the spike train generated by the background firing rate of a unit (C), resulting in the final spike train (D). Any jitter would be incorporated after creating spike train (B) by moving each spike to the left or right by no more than ε msec.
by a 3 msec absolute refractory period following each spike. This completely describes the nature of spike trains for units uninvolved in assembly activity. For simulated units participating in assembly activity, an additional procedure was implemented (see Fig. 1). First, a source spike train was generated. This spike train also had a Poisson distribution of spike times and a 3 msec absolute refractory period but fired at an average rate, %0 , between 150 Hz and 200 Hz (randomly selected for each assembly). During assembly activity, each source spike was added to a given unit’s spike train with a probability κ. After the addition, jitter was introduced by moving each added spike slightly ahead or behind its original time of occurrence, t. This was done by adding a random value in the range [−ε, ε] to the original source spike time t, and thus ε controlled the amount of jitter in the synchrony. Finally, any spikes occurring during the 3 msec absolute refractory period of another spike were eliminated as a cleanup process. This results in a synchronous assembly unit firing at an average rate of κ%0 Hz plus the unit’s background firing rate. A high source-unit firing rate was used so that κ times this rate was still considerably larger than the 4–10 Hz base firing rates even when κ = 0.2. At values of κ near 1, the units fire at unrealistically high rates (up to 150–200 Hz). However, as will be reported in Section 3, (and derived in Appendix A), the minimum required recording time is—somewhat unintuitively—independent of the assembly firing rate, making the realism of these rates immaterial.
Detecting Synchronous Cell Assemblies
55
When two assemblies were needed for analysis, two independent source spike trains were computed. Units participating in both assemblies (50% of the simulated neurons) fired to the proportion κ of both assemblies. This was accomplished by applying the addition, jitter, and cleanup process twice— one time for each source spike train. All resulting spike trains contained additional internal structure only as occurred by chance. Again, 80 spike trains were generated per simulation, and the simulation time step was set at 0.2 msec. Data generation was performed on a VAX 6000-510 (Digital Equipment Corporation). Since the modeled spike trains depended heavily on the builtin random number generator, a battery of 10 statistical tests (Knuth 1981) was performed to ensure adequate randomness. All tests found the generator not significantly different from purely random (Students t-test: p > 0.2 for all ten tests). 2.2 Statistical Methods. Many statistics have been developed for multiple spike train analysis. Many of these analyze pairs of neurons (e.g., Moore et al. 1966; Gerstein and Perkel 1969; Gerstein et al. 1989; Gochin et al. 1989; Aertsen et al. 1989; Aertsen and Gerstein 1991; Yamada et al. 1993). Some techniques compare trios of neurons (e.g., Perkel et al. 1975; Abeles et al. 1994) or compare all recorded neurons simultaneously (e.g., Gerstein et al. 1978, 1985; Aertsen et al. 1987; Bernard et al. 1993; Gochin et al. 1994; Nicolelis and Chapin 1994; Abeles et al. 1995). In this article, two statistical methods were chosen for comparative analysis. The first method selected was cross-correlation (Moore et al. 1966), which compares pairs of spike trains. If the two spike trains are considered as functions f and g then the crosscorrelation of these two functions can be computed as follows: Z ∞ f (t + λ)g(t) dt Corλ ( f, g) = −∞
where λ is called the lag of function f relative to function g and can be positive or negative. A cross-correlation histogram (or cross-correlogram, CCH) is made by using discrete function evaluation points, selecting a histogram bin width, and summing (instead of integrating) the product of f (t + λ)g(t) into the bin covering the point λ. This process is repeated for all interesting values of λ. Cross-correlation was selected mainly because of its universal use in spike-train analysis. However, this method is also important because nearly all pairwise statistics are based on correlation and because rigorous study has provided it with adequate significance tests of the resulting cross-correlation histogram peaks (e.g., Perkel et al. 1967; Gerstein and Perkel 1972; Gochin et al. 1989). Computationally, each cross-correlogram requires O(N2 ) units of computing time (N being the number of spike trains analyzed), times the number of histograms that must be calculated, N(N − 1)/2, or O(N2 ). This makes the cross-correlation computations overall O(N4 ).
56
Gary Strangman
The second method selected was the gravity method (Gerstein et al. 1985), which analyzes all N simultaneously recorded neurons at once. This is accomplished in an N-dimensional space, where particles, one for each recorded neuron, originate at the corners of an N-dimensional hypercube (i.e., all N particles are initially equidistant). These particles then interact “gravitationally,” where the attractive force between two particles is proportional to the “charge” on the particles. A particle’s charge is instantaneously incremented whenever the neuron corresponding to that particle fires a spike, and the charge decays exponentially over time as controlled by a parameter, τ . Thus, for neurons that fire roughly synchronously, their corresponding particles will be charged simultaneously and therefore will attract one another. The result is a clustering of particles in the N-dimensional space, where clusters correspond to those groups of neurons that were simultaneously active. The method is adjusted to make the particle aggregation independent of average firing rate, to prevent net aggregation among independently firing neurons, and to eliminate attractive forces when particles reached a minimum distance. Such modifications are all described in Gerstein et al. (1985). The precise equations were as follows. For particle i at position xi (t) at time t, the position in the N-dimensional space at time t + h is calculated as xi (t + h) = xi (t) + hσg [qi (t) − τ ]fi (t), where σg is the aggregation parameter, qi (t) is the charge on particle i at time t, τ is the average charge on all particles (equal to the charge-decay constant), and fi (t) is the force on particle i at time t. This force is given by X [qj (t) − τ ] ri (j), fi (t) = j6=i
where ri (j) is the direction of the force on neuron i produced by neuron j and is given by the following ri (j) = [xj (t) − xi (t)]/sij (t), with sij (t) being the Euclidean distance between particles i and j at time t. Finally, the charge increment for neuron i, 1qi , had to be normalized for different mean firing rates among the various neurons. This is accomplished by making 1qi directly proportional to some measure of neuron i’s interspike interval (ISI). Two measures were used: (1) 1qi was set equal to the cumulative mean ISI (as in Gerstein et al. 1985), or (2) 1qi was set equal to the most recent ISI. Gravity analyses were selected as the comparison statistic because of its unique representations (spatial instead of histogram based) and because it is designed to analyze an arbitrary number of spike trains simultaneously. Computationally, the gravity method requires O(N3 ) units of computing time and, like the mass of correlograms above, requires further analysis
Detecting Synchronous Cell Assemblies
57
of the clustered particles to determine the precise nature of any recorded assembly. 2.3 Temporal Sensitivity Analyses. To evaluate the temporal sensitivity of cross-correlation and the gravity method, a synchronous assembly with zero jitter (ε = 0 msec) was generated and then analyzed. Zero jitter results in perfectly synchronous assembly-unit spikes, which are easiest for the methods to detect. Thus, the analysis should place a lower bound on the temporal capabilities of each method at each level of consistency, κ. Three seconds of data were simulated for 80 spike trains, 60 of which formed a single synchronous assembly. This was done for each of five values of κ. The resulting data were first analyzed by cross-correlations using successively longer portions of the synchronous data as input. The lengths of data analyzed were 25, 50, 100, 200, 300, . . . , 3000 msec. Twenty pairs of spike trains were selected at random for cross-correlation analysis, each pair analyzed using each length of data (up to a maximum of 32 correlograms per pair of neurons, or 6400 correlograms total). Histograms were made using a 2 msec bin width and were tested for a significant central peak using the smoothed-difference cross-correlogram (Gochin et al. 1989) at the 99% confidence level. This particular test avoids the large bin variance generated by subtracting the standard shift predictor histogram (Perkel et al. 1967). When the synchronous peak remained statistically significant for three consecutive data lengths, the time required to obtain the first significant peak was used as the “empirical” cross-correlation minimum amount of recording time to reach significance. This represented the minimum recording time required to detect significant assembly membership, Tmin . The temporal sensitivity analysis of the gravity method was performed on the same simulated data. The analyzed particle pairs were the same 20 neuron pairs used in the cross-correlograms, and the Euclidean distance between pairs of particles in the N-dimensional space was measured. In the absence of any rigorous statistical test for particle clusters, gravity Tmin was simply defined as the time it took for the particles to approach within five units of one another, given a starting position 100 units apart (with an absolute minimum particle separation of one unit). Implementing the gravity method requires setting two parameter values: the aggregation parameter, σg , and the charge decay constant, τ . The value of τ is not critical as long as it is kept small relative to the average interspike interval. However, false-positive or false-negative clustering can result from too large or too small a σg , respectively. Since there was prior knowledge available about which simulated neurons were members of the assembly, σg was optimized to afford maximal aggregation speed (i.e., to minimize recording time without giving false-positive clustering). The optimal value for these analyses was σg = 8.0 × 10−6 . In the usual experimental situation, lacking a priori knowledge of which cells are assembly cells, σg , can be a liability of the gravity method. Finally, 1qi was set to the cumulative mean ISI,
58
Gary Strangman
because this reduces the noise in the resulting interparticle distance plots. This normalization exploits the change in firing rate when assemblies turn on, but it also amplifies the clustering effects of synchrony, maximizing the speed of low-noise clustering (for further discussions about normalization, see Gerstein and Aertsen 1985). Since the method of evaluating significant assembly correlation differed dramatically between the two methods (smoothed-difference cross-correlogram significance versus a Euclidean distance of five units), there was no expectation that the absolute values of cross-correlation and gravity Tmin would be directly comparable. Planned comparisons therefore included dependence on firing consistency, κ, and the relative variability of the two methods. 2.4 Analysis of Two Assemblies. Cross-correlation and gravity clustering were also compared for their ability to detect and separate units from two active synchronous assemblies. Each statistic had to classify correctly the simulated neurons according to their assembly of origin. Two overlapping synchronous assemblies were simulated, again with 80 spike trains. Spike trains for units 1–9 and 71–80 fired at their base rates, 10–19 fired at base rates plus spikes for the first assembly, 61–70 fired at base rates plus spikes for the second assembly, and 20–60 fired at base rates plus spikes from both assemblies. Jitter for both assemblies was set at ±1 msec (i.e., ε = 1 msec). Cross-correlation and gravity methods were implemented on the entire overlapping assembly data set, and the methods were compared with respect to assembly detection and neuron classification errors. Parameter settings were as follows: unit consistency, κ = 0.4; the cross-correlogram bin width, 1 = 2 msec (i.e., 2ε); the gravity coalescence parameter (σg ) was 8.0 × 10−5 ; and the decay parameter (τ ) was set at 2 msec. The value of 1qi was set equal to the most recent ISI in neuron i to make the method most sensitive to synchrony and less so to changes in unit firing rates. 3 Results 3.1 Theoretical Temporal Sensitivity. The temporal sensitivity of cross-correlation to synchronous assemblies can be formally derived in a fashion parallel to Aertsen and Gerstein’s (1985) analysis of effective connectivity. (Refer to Appendix A for the complete formal analysis.) The result is an expression for the minimum amount of recording time required to detect significant pairwise spike-time correlation, Tmin : Tmin =
41 κ2
(or Tmin =
4σ 2 , if σ > 1), κ 21
(3.1)
where 1 is the cross-correlogram bin width, κ is the consistency with which the neurons fire in synchrony with the assembly, and σ is the width of
Detecting Synchronous Cell Assemblies
59
Table 1: Comparison of Minimum Recording Time Required to Detect a Perfectly Synchronous Assembly Using Cross-Correlation and Gravity Methods.
κ
Theoretical CCH Tmin (msec)
0.2 0.4 0.6 0.8 1.0
200 50 22 13 8
Simulated data: a Cross-Correlation Tmin (msec) Mean ± s.d. Range 95% limit 760 ± 400 200 ± 120 91 ± 100 51 ± 23 28 ± 8
100–1800 25–400 25–400 25–100 25–50
1200 400 300 100 50
Simulated data: b Gravity Tmin (msec) Mean ± s.d. Range 95% limit 760 ± 170 194 ± 52 97 ± 36 53 ± 27 25 ± 6
364–946 73–330 32–152 23–99 13–32
907 247 152 99 32
bin width, 1, was 2 msec. parameters: σg = 8.0 × 10−6 , τ = 2 msec.
a Correlogram b Gravity
the synchrony window (equal to 2ε). The derivation assumes only that a neuron’s firing rate when the assembly is active is much larger than background firing rates. (If this assumption is relaxed, the theoretical value of Tmin increases. Thus, Tmin truly represents the minimum amount of recording time.) From equation 3.1, the theoretical minimum recording time required to detect synchronous assembly correlation depends on 1 and 1/κ 2 and—perhaps unintuitively—is independent of the assembly firing rate %0 . This independence can be understood as follows: although increased firing rates provide more synchronous events per second, such rates also result in a higher background correlation level (and thus more noise). It turns out that these two effects cancel as long as background firing rates are much less than assembly firing rates. However, consistency does matter for minimum recording times; that is, the consistency with which units of an assembly fire synchronously with the assembly strongly influences Tmin . The first two columns of Table 1 show the theoretical Tmin values calculated by equation 3.1 for a range of consistency values (κ), with 1(= σ ) = 2 msec.
3.2 Empirical Temporal Sensitivity. To test these theoretical values, simulated synchronous assemblies were analyzed by both crosscorrelation and gravity methods, using five values of κ. Three typical crosscorrelograms and one gravity interparticle distance plot are shown in Figure 2 for κ = 0.4, all analyzing the same data set where ε = 0 (perfectly synchronous assembly cell spiking). The cross-correlograms show a single pair of simulated units correlated on 300, 400, and 500 msec of synchronized data, respectively. The central peak first becomes significant when the assembly has been active for 400 msec. The gravity plot shows three pairs of nonassembly units (top three traces), three pairs of assembly units (in-
60
Gary Strangman
Figure 2: Temporal sensitivity of cross-correlation and gravity methods to a single synchronous assembly, κ = 0.4. Three cross-correlograms (1 = 2 msec) are shown for a single pair of neurons, using progressively longer portions of the synchronous data (top = 300 msec, middle = 400 msec, bottom = 500 msec; 70 Hz average firing rate). The middle correlogram was the first significant central peak using the significance test in Gochin et al. (1989) at the 99% level. That is, the analyzed pair of simulated spike trains required 400 msec of data before their (known) correlation with the assembly was determined to be significant. (On average, for κ = 0.4, the neurons required approximately 200 msec; this pair had weaker average correlation, due to random variation around κ.) The gravity analysis of the same data is shown for comparison, using three assembly–assembly pairs, three assembly–nonassembly pairs, and three nonassembly–nonassembly pairs of neurons (σg = 8.0 × 10−6 , τ = 2 msec). Though no generalized significance tests have been developed for interparticle distance measurements of the gravity method, the assembly cells are distinctly separable by approximately 100 msec.
Detecting Synchronous Cell Assemblies
61
cluding the pair used for cross-correlations; bottom three traces), and three assembly–nonassembly pairs (middle three traces). The mean Tmin for 20 assembly pairs using cross-correlation was greater than the theoretical value by a constant factor of approximately 4 at all levels of κ (see Table 1, column 3). The factor of 4 is attributed to (1) the significance criterion used (99% plus three consecutive significant correlograms), which is a much more stringent signal detection condition than used in Appendix A, and (2) the fact that the background firing rates were not strictly zero. Importantly, the predicted linear trend of Tmin versus 1/κ 2 was highly significant (relative to no trend; F(1, 95) = 156.9, p ¿ 0.0001). Despite this, a broad range of experimental Tmin ’s was observed, particularly at smaller values of κ (Table 1, column 4). Since the assembly cells were perfectly synchronous (no jitter; ε = 0), only the individual unit 4–10 Hz background firing rates and random variation around κ remained as variables. To determine the minimum recording time for neurophysiological experiments, one is not concerned with the average length of recording required to detect assembly cells; rather, one is interested in the length of recording required to detect, say, 95% of the assembly cells. Such a limit for the assemblies used here is presented in Table 1, column 5. The 95% recording time boundary is typically two standard deviations above the average Tmin at all levels of κ. The same set of spike trains was then analyzed by the gravity method. The resulting gravity Tmin values were almost identical with those obtained for cross-correlation methods, despite the extremely disparate methods for determining significant correlation. No significant differences were found in a two-tailed Wilcoxon t-test, at any level of κ (p > 0.2), suggesting that gravity and cross-correlation are equally sensitive to synchronous assemblies in the temporal realm. Again, the 1/κ 2 trend was highly significant (F(1, 95) = 1099.5, p ¿ 0.0001). It is of note, however, that the variance was less pronounced in the gravity analysis than with cross-correlation. In addition, the 95% limit occurred at somewhat less than two standard deviations above the mean Tmin s. In practice, this means that gravity interparticle distance measurements required shorter recordings to detect 95% of significant assembly neuron correlations (Table 1, column 8). Optimal tuning of the gravity method (not usually possible) and the cumulative average ISI normalization procedure for 1qi (which was somewhat sensitive to the change in unit firing rates) slightly weakens the result that the gravity method is more sensitive to synchronous assemblies. A reasonable conclusion, therefore, is that the gravity method is no less sensitive to synchronous assemblies than cross-correlation. 3.3 Multiple Assemblies. Assuming that some kind of intermediatelevel structures (cell assemblies) exist in the functioning nervous system, are there any consequences of more than one assembly being active at the same time? Position emission tomography and functional magnetic re-
Figure 3: Ten of the 20 unique spatial arrangements of three cell assemblies. Examples 9 and 10 show two cases where the cells of one assembly (assembly A) must be spatially discontiguous to achieve the given geometric arrangements (i.e., assembly A must have at least two members).
resonance imaging scans have suggested that multiple regions of the cortex are active during overt or imagined behavior (e.g., Roland 1982; Grafton et al. 1991; Rao et al. 1993; Constable et al. 1993). If multiple assemblies are active at the same time, it is possible that some of them are active in the same area of cortex (Vaadia et al. 1989). Indeed, many theories of brain function assume precisely this type of representation (e.g., Hebb 1949; Hinton et al. 1986), and in fact any distributed code used by the brain would be extremely inefficient if this were not the case. Thus, distributed neural coding and multiple assemblies together could result in individual neurons being simultaneously activated in association with two or more assemblies. The complications introduced by this possibility are substantial. If two assemblies are simultaneously active in the same general brain area, there are three distinct spatial arrangements: (1) the two assemblies are nonoverlapping (that is, no neuron participates in the activity of both assemblies), (2) the assemblies partially overlap, or (3) the neurons of one assembly form a subset of the neurons of the second assembly. Disambiguation of the two assemblies thus involves classifying four types of neurons: those not involved in either assembly, those involved in assembly A only, those involved in assembly B only, and those associated with both assemblies. If more than two assemblies are simultaneously active, it can be shown that the number of possible spatial arrangements increases faster than 2^(2^(n−1)), where n is the number of simultaneously active assemblies (Appendix B). In the case of three simultaneously active assemblies, there are at least 20 distinct geometrical arrangements to be differentiated (see Fig. 3); four assemblies can have a minimum of 266 distinct arrangements.
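For concreteness, the counting procedure described in Appendix B can be sketched as follows. This is my reconstruction, not the author's program; a literal reading of the four criteria enumerates labeled arrangements, so the totals it produces are not guaranteed to match the quoted values of 20 and 266 exactly (the paper's notion of "unique" may further identify arrangements that differ only by assembly labels):

```python
from itertools import combinations, product

def count_arrangements(n):
    """Enumerate binary vectors over the 2**n - 1 nonempty intersection
    regions and apply the four criteria of Appendix B (literal reading)."""
    regions = [frozenset(c) for r in range(1, n + 1)
               for c in combinations(range(n), r)]     # singletons first ("leftmost bits")
    count = 0
    for bits in product((0, 1), repeat=len(regions)):
        on = [reg for b, reg in zip(bits, regions) if b]
        if len(on) < n:                                # criterion 3: at least n "on" bits
            continue
        if set().union(*on) != set(range(n)):          # criterion 2: no assembly is empty
            continue
        if not any(len(reg) == 1 for reg in on):       # criterion 4: some exclusive region
            continue
        count += 1                                     # criterion 1 is built into the encoding
    return count
```

The enumeration is feasible up to about n = 4 (2^15 vectors); beyond that the vector space grows as 2^(2^n − 1).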
Assemblies exist in time as well as in space, and thus they can also differ in the temporal sequencing of their activity. Even if an assembly can activate at only three times (before, after, or at roughly the same time as another assembly), there are at least

n! + Σ_{i=1}^{n−1} [n! (n − i)!] / [(n − i − 1)! (i + 1)!]

permutations of assembly onset times, where n is again the number of simultaneously active assemblies. The same number of possibilities would exist for assembly offset times. Thus, for two assemblies (n = 2), there are three possible geometrical arrangements, and 3 × 3 possible temporal (on × off) arrangements. This results in some 27 ways in which the assemblies can interact in the spatiotemporal domain. Three assemblies have 3380 interaction possibilities, while four assemblies can interact in more than 1,266,426 spatiotemporally unique ways. A more finely divided temporal spectrum rapidly expands these possibilities; the counts quoted here can be reproduced with the short sketch below.
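As a check on the arithmetic (my sketch; the spatial-arrangement counts 3, 20, and 266 are taken from the text as givens):

```python
from math import factorial

def onset_permutations(n):
    """Onset orderings when each assembly can begin before, after, or
    simultaneously with the others (formula in the text)."""
    total = factorial(n)
    for i in range(1, n):
        total += (factorial(n) * factorial(n - i)
                  // (factorial(n - i - 1) * factorial(i + 1)))
    return total

spatial = {2: 3, 3: 20, 4: 266}            # spatial counts quoted in the text
for n in (2, 3, 4):
    # spatial arrangements x onset permutations x offset permutations
    print(n, spatial[n] * onset_permutations(n) ** 2)
# prints: 2 27 / 3 3380 / 4 1266426
```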
Thus, multiple active assemblies are capable of vast numbers of interactions. It remains to be determined, however, whether this can cause problems in practice. To test this, cross-correlation and gravity methods were used to separate two partially overlapping (simulated) synchronous assemblies. In one analysis, the two simulated assemblies activated 700 msec apart, while in the second case both assemblies activated simultaneously. Figure 4 shows the analysis results when assembly A began firing at 1000 msec and assembly B at 1700 msec. Both assemblies continued to fire until the end of the simulated data (4000 msec). Cross-correlating a pair of simulated neurons from two different assemblies, using the first 3000 msec of the spike train, shows no significant peak (leftmost column). Cross-correlating pairs of neurons from the same assembly shows intermediate-level peaks (around 40 coincident spikes; middle four columns). Cross-correlating a pair of neurons participating in both assemblies (firing to the proportion κ of the spikes from assembly A plus a similar proportion from assembly B) resulted in a substantially larger peak (approximately 70 coincident spikes; rightmost column). Hence, assuming assembly effects are more or less additive, one can classify all four types of neurons: neuron pairs with the highest peaks belong to the intersection of the two assemblies; medium-height peaks indicate neuron pairs from the same group, and flat correlograms indicate that the two neurons are from different assemblies or that at least one neuron is a member of neither assembly. Given a method for determining relative peak height significance (i.e., whether two significant peaks are different from one another) and careful bookkeeping, one can completely categorize all simulated neurons. Determining the relative temporal onsets of the two assemblies would require scrolling correlograms. The gravity interparticle (Euclidean) distance plots for the same set of simulated spike trains are shown in the third row of Figure 4.
Figure 4: A comparison of cross-correlation and gravity methods when detecting two asynchronously activated cell assemblies. Circles represent the assemblies, with schematic electrodes indicating the region from which a unit is sampled. Assembly A (solid circle) begins firing 700 msec before assembly B (dashed circle). For cross-correlation (second row) Δ = 2 msec, and for the gravity analysis (third row) σg = 8.0 × 10⁻⁵ and τ = 2 msec. See text.
Three neuron pairs are plotted in each graph to give a rough characterization of the variance. The first observation is that assembly A–assembly B neuron pairs aggregate substantially (Fig. 4, column 1), so there is no trivial separation of units from different assemblies. This occurs because units of the two assemblies are firing at approximately 70 Hz, and therefore a unit from A fires only within 7.1 msec of any unit in B, on average. Not shown is the fact that pairs of completely independent neurons (firing at 4–10 Hz) remain 100 units apart, while nonassembly–assembly pairs aggregate to approximately 75 units of separation by 4000 msec. This serves to separate pairs of units where at least one has no assembly affiliations. Pairs where at least one neuron is involved with assembly B alone (columns 3 and 5) start aggregating at 1700 msec (when assembly B activates). All other pairs begin aggregating at 1000 msec. By considering the onset times of aggregation and calculating every possible pairwise distance, it is difficult but possible to separate assembly B units as well as units involved in both assemblies ("AB units"). The remaining units are then classified as assembly A units. Finally, notice the aggregation artifact in column 3 at 840 msec. This occurred because the two units of that pair fired within 1 msec of one another and prior to that event had not fired for 630 and 372 msec, respectively. Thus both gained a large charge when they coincidentally fired in near synchrony. Such artifacts are partly a consequence of the rate normalization, whereby neurons that fire infrequently receive large charge increments.
Figure 5: A comparison of cross-correlation and gravity methods when detecting two synchronously activated cell assemblies. Circles represent the assemblies, with schematic electrodes indicating the region from which a unit is sampled. Both assemblies begin firing at 1000 msec. Again, for cross-correlation Δ = 2 msec and for gravity σg = 8.0 × 10⁻⁵ and τ = 2 msec. See text.
Such artifacts can be reduced, but not eliminated, by a smaller value for σg (slower aggregation) or a smaller τ (a narrower synchrony window). Figure 5 shows exactly the same analysis as in Figure 4 but performed on spike trains wherein assemblies A and B activate simultaneously at 1000 msec. Aside from the small (but significant) central cross-correlation peaks in columns 2 and 3, the results of this analysis are identical to those in Figure 4. This demonstrates cross-correlation's ability to separate the two assemblies and highlights its insensitivity to assembly onset times. Again, scrolling cross-correlograms could be used, but this would be cumbersome for 80 recorded neurons (3160 cross-correlograms per time step). The gravity analysis was less successful in this situation. First, there is considerable cross-assembly aggregation again (see Fig. 5, column 1), which prevents trivial separation of the two types of assembly units. Second, the analysis was considerably more affected by aggregation artifacts. When the two assemblies first activate, particles in both assemblies are near-synchronously charged, and these charges are large due to large prior interspike intervals. The result is sudden and dramatic particle aggregation among some assembly neurons (e.g., Fig. 5, columns 3 and 4). Again, one could decrease σg or τ, but this slows the aggregation process, making more recording necessary. At the parameter values used (σg = 8.0 × 10⁻⁵, τ = 2 msec), the two assemblies were not quite differentiable. Unlike the situation in Figure 4, here the two assemblies activated at the same time, and thus separations
based on the initiation time of aggregation are impossible. A smaller value for τ reduces this A–B aggregation but makes the method less sensitive to near-synchrony. If the assembly firing rates are reduced considerably (e.g., to 10–20 Hz, or κ = 0.06), the A–B pairs aggregate much less, and thus the assembly units can be classified into their respective assemblies by computing all pairwise distances. Such a rate reduction, however, was unnecessary for cross-correlation.

4 Discussion

In summary, detecting assembly activity in simultaneous neural recordings was more difficult than expected. First, cross-correlation and the gravity method may require several hundred milliseconds of assembly data in order to reveal significant correlations, even for relatively consistent assembly participation. Highly inconsistent participation may require neural recordings in excess of several seconds. Unfortunately, consistency is not a directly measurable quantity. Second, although cross-correlation and gravity were both able to differentiate units from two simultaneously active synchronous assemblies, separating three or more assemblies may well be impossible for cross-correlation. The gravity method appears somewhat more flexible in this regard. These points, as well as some of their consequences, will now be considered in more detail.

4.1 Temporal Sensitivity. The formal analysis of cross-correlated spike trains from synchronous assemblies (see Appendix A) showed that the minimum recording time to reach significance (Tmin) is strongly dependent on κ, the consistency with which a neuron participated in the assembly. Tests of cross-correlation on simulated data supported the theoretical predictions. No formal results were obtained for the minimum recording times required for the gravity method, though the near-identical empirical Tmin's suggest that gravity analyses will behave similarly to cross-correlation and thus be dependent on κ, not on assembly firing rates. Using equation 3.1, and thereby assuming that assembly firing rates are significantly elevated with respect to background rates, it is theoretically possible to predict the minimum recording time required to detect a synchronous assembly in a planned experiment. Prediction relies on an estimate of κ, however, which may be unavailable. In such a case, equation 3.1 can be solved for κ, and the recording time from a significant correlogram peak can be inserted for Tmin. This will give a value for κmin, which reflects the minimum consistency with which neurons must be participating in an assembly. This value, then, can be used as a reference when making further estimates of Tmin. It is noteworthy that despite perfectly synchronous assembly data and finely tuned analysis parameters to capture only those neurons participating in the assembly, the minimum recording time to detect 95% of the assembly
neuron pairs could be quite long. A minimum of 1200 msec of recording was necessary when κ = 0.2. There are two major consequences of long recording times. First, such long periods require that the assembly remain active for the entire period, or that multiple trials be run (and thus the total assembly-active recording time must equal the derived minimum recording time). Keeping the assembly operating is usually beyond experimental control. On the other hand, summing across multiple trials has its own, familiar drawbacks. Specifically, it assumes that each assembly is activated on essentially every trial and that assembly activity is stationary across time. Neither assumption is reasonable for experimental paradigms involving learning, and the first assumption may never be reasonable. The alternative to assuming that assemblies are consistently activated on successive trials is to select the data to be analyzed for assembly phenomena using some objective criteria. This approach can be successful, but it risks possible circularity by predicting effects in data that were selected for precisely those effects. The second major consequence of long minimum recording times is that they make detection of certain types of assemblies impossible using these methods. A useful illustration is a wave of activity spreading across the cortex, such as that generated by a cortical seizure. Since this is a traveling wave, individual neurons participate in this event only briefly. From the point of view of a stationary microelectrode, perhaps only three or four spikes will be recorded as the wave of activity passes by. This may not be enough data to detect that a wave has passed at all. Traveling waves are usually considered pathological (Petsche and Sterc 1968), but it is certainly possible that analogous waves could form a functional part of the brain. Blum (1967), Walter (1968), Ermentrout and Cowan (1979), and others all suggested this possibility, and there is electroencephalogram evidence for traveling waves in the human cortex under normal conditions (Hughes 1995). This is not the only type of synchronous assembly that is undetectable by the correlation methods described here. For example, neurons activated in association with one (synchronous) assembly and then with another would also result in neurons only transiently correlated with any given group of cells. Both examples point to fundamental difficulties in investigating certain types of assembly behavior with these techniques. Other statistics, new or old, can avoid this limitation if they are able to exploit different parameters of assembly activity (e.g., Abeles and Gerstein 1988; Abeles et al. 1993, 1994).

4.2 Multiple Assemblies. As the number of active assemblies within a cortical area increases linearly, the number of possible spatiotemporal arrangements of those assemblies increases much faster than factorially. This huge number of possible spatiotemporal interactions is quite advantageous from the point of view of behavioral flexibility. At the same time, it is a serious problem for the neurophysiologist who is attempting to pin down the
spatiotemporal interaction actually in use. Cross-correlation can separate two overlapping synchronous assemblies, but this depends on a method for determining whether two independently significant correlogram peaks are significantly different from one another. More generally, when trying to separate neurons of n simultaneously active assemblies, n different correlogram peak heights will have to be differentiable. Currently no such method exists. Furthermore, in the typical experimental situation, the precise number of assemblies recorded from (i.e., the value of n) is not known in advance. In practice, these last two constraints make it unlikely that cross-correlation can differentiate three or more simultaneously active assemblies. Perhaps more problematic, cross-correlation scales very poorly. For N neurons, N(N − 1)/2 cross-correlograms must be calculated and compared. To obtain even minimal temporal resolution, cross-correlograms must be calculated using a scrolling time window. In this case, N(N − 1)/2 cross-correlograms must be calculated for each time step. It is apparent that cross-correlation quickly becomes untenable; for 80 neurons and 10 time steps (a resolution of 400 msec when using 4 sec of data), some 31,600 cross-correlograms must be evaluated and compared. The time problem, but not the large number of neuron pairs, can be avoided by using the joint peri-stimulus time histogram (JPSTH; Aertsen et al. 1989), which is closely related to cross-correlation. Using the JPSTH method requires only 3160 figures, but comparison of these figures becomes considerably more complicated. The gravity method is somewhat more flexible than cross-correlation for multiple assembly analysis. The nature of the method lends itself to determining relative assembly onset times. Furthermore, it is not hampered as cross-correlation is with regard to knowing the value of n and can indeed be applied regardless of the number of active assemblies. The simultaneous-onset, high-firing-rate assemblies represent a fairly extreme situation, so the difficulties of the gravity method there are of only limited consequence. In theory, the gravity method is limited by one's ability to analyze particle clusters in an N-dimensional space. Techniques for such analyses are plentiful. In practice, limitations on the gravity method center around the aggregation parameters and the firing rate normalization procedure. As mentioned, large clustering artifacts can be minimized (but not eliminated) by shrinking σg or τ. This approach, however, makes the method less sensitive overall and thus requires more recording data. That σg is free to vary can also make it difficult to determine which clusters are significant. If rate normalization uses an average value for a neuron's firing rate (instead of the instantaneous rate), artifacts are spread over time but not eliminated. Large increases in neuronal firing rates and sudden changes from asynchronous to synchronous firing (both of which appeared here) are most likely to cause such artifacts, though artifacts can occur spontaneously (see Fig. 4, column 3).
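The paper does not give the gravity update equations explicitly, so the following is only a loose reconstruction after Gerstein et al. (1985): each neuron is a particle in N-dimensional space carrying a charge that jumps at its spikes and decays with time constant τ, and pairs of particles attract in proportion to the product of their rate-normalized charges, scaled by σg. The function name, unit charge increments, mean-subtraction normalization, and Euler update are my simplifications:

```python
import numpy as np

def gravity(spike_trains, t_end_ms, sigma_g=8.0e-5, tau_ms=2.0, dt=1.0):
    """Toy gravity transform. spike_trains: list of arrays of spike times (msec).
    Returns the matrix of final pairwise interparticle distances."""
    n = len(spike_trains)
    pos = (100.0 / np.sqrt(2.0)) * np.eye(n)    # pairwise distances start at 100 units
    q = np.zeros(n)
    decay = np.exp(-dt / tau_ms)                # charge decay parameter tau
    for t in np.arange(0.0, t_end_ms, dt):
        q *= decay
        for i, st in enumerate(spike_trains):   # charge jumps at each spike
            q[i] += np.count_nonzero((st >= t) & (st < t + dt))
        qn = q - q.mean()                       # crude firing-rate normalization
        new_pos = pos.copy()
        for i in range(n):
            diff = pos - pos[i]                 # vectors toward the other particles
            dist = np.linalg.norm(diff, axis=1)
            dist[i] = 1.0                       # self term contributes zero anyway
            w = sigma_g * dt * qn[i] * qn / dist
            new_pos[i] += (w[:, None] * diff).sum(axis=0)
        pos = new_pos
    return np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
```

Even in this crude form, the parameter trade-off discussed above is visible: shrinking sigma_g or tau_ms suppresses the coincidence artifacts but slows aggregation, so more data are needed.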
4.3 Other Assembly Types. The results here addressed only synchronous cell assemblies. However, there are many other plausible types of neuronal assemblies. Network-level lateral inhibition, attractor networks, and cellular automata are just a few from a wide array of alternative assembly definitions. Some assembly types may require the development of entirely new statistical tests or the application of existing techniques to a new domain. Ideally one wants to find and use the most sensitive statistic for uncovering such assemblies. Regrettably, there is no general solution to this problem. Instead, each candidate statistic must be evaluated with respect to its ability to detect each hypothesized assembly type. Techniques like principal components analysis (Nicolelis and Chapin 1994), firing space descriptions (Gerstein and Gochin 1992; Gochin et al. 1994), and JPSTH (Aertsen et al. 1989) are steps in this direction, but such techniques remain to be examined for their sensitivity to assembly phenomena. There is one final aspect of cell assembly phenomena that should be considered. It is possible that various types of functional cell assemblies are implemented simultaneously. For example, the cortex may subserve lateral inhibition networks and synchronous assemblies, with one type of assembly overlapping the other. This type of interaction is perhaps currently beyond the realm of parsimony but may need to be considered in the future. Critically, one would have to consider (1) what type(s) of assembly are present and (2) how these assembly types might interact and interfere with one another at the cellular level. Such issues are bound to become crucial as knowledge about assembly phenomena in the brain accumulates.
4.4 Conclusions. So how do we experimentally test the cell assembly hypotheses of Hebb, Abeles, and others? It would seem that four steps are necessary: (1) select an experimental paradigm to activate the hypothesized assemblies, (2) record from a sufficient number of cells in the investigated region (Strangman 1996), (3) select the statistical technique that is best able to detect the hypothesized type of assembly, and (4) record sufficient amounts of data to detect any existing assembly activity. This paper bears on the last two requirements. The results suggest that cross-correlation and the gravity method have approximately equal temporal sensitivities with regard to synchronous assemblies. The equations here can be used to help estimate the amount of data that needs to be collected. Furthermore, when dealing with multiple, overlapping assemblies, it appears that the gravity method is more flexible than cross-correlation. Ideally, one desires methods that take full advantage of the parallelism in such data. Knowing that the gravity method approaches this ideal more closely than cross-correlation may help bridge the gap between assembly-level theories of brain function and the simultaneous electrode recording data now becoming available.
Figure 6: Theoretical cross-correlation histogram for synchronous cell assemblies (adapted from Aertsen and Gerstein 1985). Here, d is the height (in spikes) of the cross-correlogram peak, σ is the width of this peak (in msec), e0 is the average correlation level for two independent neurons, e is the average correlation level when a synchronous assembly is active, and s is the noise in correlation level e.
Appendix A

The formal analysis presented here for cross-correlation closely follows the analysis of effective connectivity in cross-correlograms as given by Aertsen and Gerstein (1985). Refer to Figure 6 for notation as well as for the theoretical cross-correlation curve of two neurons active as part of the same synchronous assembly. In the absence of an active assembly, the two neurons being recorded will have an average base-level cross-correlation, e0, such that, ignoring statistical variation, e0 = ρ1ρ2TΔ, where ρ1 and ρ2 are the neuron background firing rates, T is the length of recording, and Δ is the cross-correlogram bin width (see Table 2 for a summary of all variables). If one neuron to be correlated is part of the active synchronous assembly and the other is a neuron firing at its background rate, the average correlation level, e, is e = ρ1′ρ2TΔ, where the prime indicates the (higher) firing rate of an assembly neuron when the assembly is operational. Similarly, the average correlation level for two assembly neurons, e, is

e = ρ1′ρ2′TΔ = (ρ1 + κρ0)(ρ2 + κρ0)TΔ, (A.1)

where ρ0 is the firing rate of the assembly as a whole.
Table 2: Variables.

d: Height of a cross-correlogram peak (in spikes)
e0: Average correlation level for two independent neurons (in spikes)
e: Average correlation level during cell assembly activity (in spikes)
s: Noise associated with the correlation level e (in spikes)
T: Length of spike train recording (in msec)
Tmin: Minimum length of spike train recording (in msec) necessary to detect a synchronous assembly
κ: Consistency parameter: probability that a unit fires when the assembly source unit discharges
ρi: Average firing rate of nonassembly units or neurons (in spikes/sec); subscript 0 indicates the average firing rate of the assembly source spike train
ρi′: Firing rate of an assembly unit or neuron, = ρi + κρ0 (in spikes/sec)
ε: The synchrony jitter parameter (in msec)
σ: Width of the synchrony window from assembly firing (= 2ε, in msec)
Δ: Cross-correlation histogram bin width (in msec)
σg: The gravity aggregation parameter (Gerstein et al. 1985 uses σ)
τ: The gravity charge decay parameter (in msec)
Δqi: The gravitational charge increment at the time of a spike in neuron i
Assume, then, that all spikes defined as synchronous (i.e., firing within ±ε msec of one another) are contained in a single correlogram bin (so 2ε = σ ≤ Δ). For example, if ε = 2 msec (e.g., Abeles 1982, 1991), one would set Δ ≥ 4 msec. In this case d, the height of the synchronous correlogram peak, is given by

d = κ²ρ0T if σ ≤ Δ; otherwise, d = κ²ρ0T (Δ/σ). (A.2)
Using a standard signal detection criterion,

|d| ≥ 2s, (A.3)

where s is the magnitude of the noise (s = √e for a Poisson process). Taking equation A.3 and using e from equation A.1 and the first expression for d in equation A.2, one can solve for the recording time T, the time required to
detect significant correlation:

T ≥ 4Δρ1′ρ2′ / (κ⁴ρ0²) = 4Δ(ρ1 + κρ0)(ρ2 + κρ0) / (κ⁴ρ0²) when σ ≤ Δ. (A.4)
This requires an estimate of the average firing rate of the assembly as a whole, ρ0. Assuming the base firing rates ρ1 and ρ2 are small relative to κρ0, they drop out and T can be expressed as

Tmin = 4Δ/κ² (or Tmin = 4σ²/(κ²Δ) if σ > Δ), (A.5)
where greater-than-or-equal-to has been replaced with equals, and thus T is changed to the minimum required recording time to detect significant correlation, Tmin. So assuming that ρ1, ρ2 ≪ κρ0, it follows that Tmin is independent of the assembly firing rate. This follows from equations A.1 through A.3: the ratio of d to 2√e (the assembly detectability) is essentially unchanged by variations in ρ0, and this is precisely true when ρ1 = ρ2 = 0. (See Gerstein and Aertsen 1985 for related calculations regarding the gravity method.)
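Equation A.5, and its inversion used in Section 4.1 to obtain κmin, is simple enough to state in code. This is an illustrative sketch (function names and defaults are mine, not from the paper):

```python
import math

def t_min_ms(kappa, delta_ms=4.0, sigma_ms=4.0):
    """Minimum recording time, equation A.5; assumes background rates
    are negligible relative to kappa * rho_0."""
    if sigma_ms <= delta_ms:
        return 4.0 * delta_ms / kappa ** 2
    return 4.0 * sigma_ms ** 2 / (kappa ** 2 * delta_ms)

def kappa_min(t_obs_ms, delta_ms=4.0):
    """Solve equation A.5 for kappa, given the recording time at which a
    correlogram peak first became significant (the kappa_min of Section 4.1)."""
    return math.sqrt(4.0 * delta_ms / t_obs_ms)

# e.g., t_min_ms(0.4, delta_ms=2.0) -> 50.0 msec of theory; the simulations
# reported times roughly a factor of 4 larger (about 200 msec, Figure 2).
```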
Appendix B

Counting unique assembly Venn diagrams depends on the constraints used to define "unique." The counting here was accomplished by a computer algorithm that evaluated vectors of binary numbers. The vectors were composed of 2ⁿ − 1 elements, where n is the number of assemblies and 2ⁿ − 1 is equal to the maximum possible number of nonempty assembly intersections. For three assemblies, the bits in the vector represented assembly intersections A, B, C, A∩B, A∩C, B∩C, and A∩B∩C, in this order (where A∩B̄∩C̄ is written as A, and so on). Each "on" bit in a 2ⁿ − 1 element vector means the respective assembly intersection has more than zero members. Thus, 0100011 corresponds to the set {B, B∩C, A∩B∩C} and represents a nonempty assembly A being a proper subset of assembly C, which is in turn a proper subset of assembly B. All possible binary vectors of 2ⁿ − 1 elements were evaluated according to the most conservative Venn diagram counting procedure. This requires that four criteria be met. First, any arrangements differing only by how many cells (> 0), or which particular cells, appear in each region of the diagram are classified as the same arrangement. This is built into the vector representation above. Second, each assembly must exist in the final intersection group. For example, for three groups A, B, and C, the set {B, B∩C, A∩B∩C} is valid, while the set {A, B, A∩B} is not a valid arrangement for three assemblies (the latter implies that assembly C is empty, meaning that there are actually only two assemblies). Third, there must be at least n distinct intersections, where n is the number of active assemblies (i.e., {A∩B∩C} is not valid, because it represents three coincident assemblies). Thus, there must be at least n "on" bits in the vector. Fourth, one assembly must have neurons that are part of only that assembly (thus avoiding degenerate cases of two completely coincident assemblies). That is, at least one of the leftmost n bits in the vector must be on. Criteria 3 and 4 are of questionable value, and eliminating one or both slightly increases the number of unique geometric arrangements beyond the numbers reported here.

Acknowledgments

This article is based on work supported by a National Science Foundation Graduate Fellowship. I am indebted to J. A. Anderson for his insight into the importance of this problem. I also thank J. P. Donoghue, D. Ascher, and B. W. Connors for their helpful suggestions regarding earlier versions of this paper.

References

Abeles, M. 1982. Local Cortical Circuits: An Electrophysiological Study. Springer-Verlag, Berlin.
Abeles, M. 1991. Corticonics. Cambridge University Press, Cambridge.
Abeles, M., and Gerstein, G. L. 1988. Detecting spatiotemporal firing patterns among simultaneously recorded single neurons. J. Neurophysiol. 60, 909–924.
Abeles, M., Bergman, H., Margalit, E., and Vaadia, E. 1993. Spatiotemporal firing patterns in the frontal cortex of behaving monkeys. J. Neurophysiol. 70(4), 1629–1638.
Abeles, M., Prut, Y., Bergman, H., and Vaadia, E. 1994. Synchronization in neuronal transmission and its importance for information processing. Prog. Brain Res. 102, 395–404.
Abeles, M., Bergman, H., Gat, I., Meilijson, I., Seidemann, E., Tishby, N., and Vaadia, E. 1995. Cortical activity flips among quasi-stationary states. Proc. Nat. Acad. Sci. 92, 8616–8620.
Aertsen, A. M. H. J., and Gerstein, G. L. 1985. Evaluation of neuronal connectivity: Sensitivity of cross-correlation. Brain Res. 340, 341–354.
Aertsen, A. M. H. J., and Gerstein, G. L. 1991. Dynamic aspects of neuronal cooperativity: Fast stimulus-locked modulations of effective connectivity. In Neuronal Cooperativity, J. Krüger, ed. Springer-Verlag, Berlin.
Aertsen, A. M. H. J., Bonhoeffer, T., and Krüger, J. 1987. Coherent activity in neuronal populations: Analysis and interpretation. In Physics of Cognitive Processes, E. R. Caianiello, ed. World Scientific, Singapore.
Aertsen, A. M. H. J., Gerstein, G. L., Habib, M. K., and Palm, G. 1989. Dynamics of neuronal firing correlation: Modulation of "effective connectivity." J. Neurophysiol. 61, 900–917.
Anderson, J. A., Silverstein, J. W., Ritz, S. A., and Jones, R. S. 1977. Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psych. Rev. 84, 413–451.
Barlow, H. B. 1972. Single units and sensation: A neuron doctrine for perceptual psychology? Perception 1, 371–394.
Barlow, H. B. 1992. The biological role of neocortex. In Information Processing in the Cortex, A. Aertsen and V. Braitenberg, eds., pp. 53–80. Springer-Verlag, Berlin.
Bernard, C., Axelrad, H., and Giraud, B. G. 1993. Effects of collateral inhibition in a model of the immature rat cerebellar cortex: Multineuron correlations. Cog. Brain Res. 1, 100–122.
Blum, H. 1967. A new model of global brain function. Perspect. Biol. Med. 10, 381–408.
Caton, R. 1875. The electric currents of the brain. British Med. J. 2, 278.
Constable, R. T., McCarthy, G., Allison, T., Anderson, A. W., and Gore, J. C. 1993. Functional brain imaging at 1.5T using conventional gradient echo MR imaging techniques. Magn. Reson. Imaging 11, 451–459.
Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M., and Reitboeck, H. J. 1988. Coherent oscillations: A mechanism of feature linking in the visual cortex? Biol. Cybern. 60, 121–130.
Edelman, G. M. 1987. Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, New York.
Ermentrout, G. B., and Cowan, J. D. 1979. Temporal oscillations in neuronal nets. J. Math. Biol. 7, 265–280.
Falk, C. X., Wu, J. Y., Cohen, L. B., and Tang, A. C. 1993. Nonuniform expression of habituation in the activity of distinct classes of neurons in the Aplysia abdominal ganglion. J. Neurosci. 13(9), 4072–4081.
Georgopoulos, A. P., Schwartz, A. B., and Kettner, R. E. 1986. Neuronal population coding of movement direction. Science 233, 1416–1419.
Gerstein, G. L., and Aertsen, A. M. H. J. 1985. Representation of cooperative firing activity among simultaneously recorded neurons. J. Neurophysiol. 54, 1513–1528.
Gerstein, G. L., Bedenbaugh, P., and Aertsen, A. M. H. J. 1989. Neuronal assemblies. IEEE Trans. Biomed. Eng. 36, 4–14.
Gerstein, G. L., and Gochin, P. M. 1992. Neuronal population coding and the elephant. In Information Processing in the Cortex, A. Aertsen and V. Braitenberg, eds., pp. 139–160. Springer-Verlag, Berlin.
Gerstein, G. L., and Perkel, D. H. 1969. Simultaneously recorded trains of action potentials: Analysis and functional interpretation. Science 164, 828–830.
Gerstein, G. L., and Perkel, D. H. 1972. Mutual temporal relationships among neuronal spike trains. Biophys. J. 12, 453–473.
Gerstein, G. L., Perkel, D. H., and Subramanian, K. H. 1978. Identification of functionally related neural assemblies. Brain Res. 140, 43–62.
Gerstein, G. L., Perkel, D. H., and Dayhoff, J. E. 1985. Cooperative firing activity in simultaneously recorded populations of neurons: Detection and measurement. J. Neurosci. 5, 881–889.
Gochin, P. M., Kaltenbach, J. A., and Gerstein, G. L. 1989. Coordinated activity of neuron pairs in anesthetized rat dorsal cochlear nucleus. Brain Res. 497, 1–11.
Gochin, P. M., Columbo, M., Dorfman, G. A., Gerstein, G. L., and Gross, C. G. 1994. Neural ensemble coding in inferior temporal cortex. J. Neurophysiol. 71, 2325–2337.
Grafton, S. T., Woods, R. P., Mazziotta, J. C., and Phelps, M. E. 1991. Somatotopic mapping of the primary motor cortex in humans: Activation studies with cerebral blood flow and positron emission tomography. J. Neurophysiol. 66, 735–743.
Gross, G. W., Rieske, E., Kreutzberg, G. W., and Meyer, A. 1977. A new fixed-array multi-microelectrode system designed for long-term monitoring of extracellular single unit neuronal activity in vitro. Neurosci. Lett. 6, 101–105.
Gray, C. M., König, P., Engel, A. K., and Singer, W. 1989. Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature 338, 334–337.
Hebb, D. O. 1949. The Organization of Behavior. John Wiley, New York.
Hinton, G. E., McClelland, J. L., and Rumelhart, D. E. 1986. Distributed representations. In Parallel Distributed Processing, D. E. Rumelhart and J. L. McClelland, eds., vol. 1, pp. 77–109. MIT Press, Cambridge, MA.
Hopfield, J. J. 1982. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. 79, 2554–2558.
Hughes, J. R. 1995. The phenomenon of travelling waves: A review. Clin. Electroencephalogr. 26, 1–6.
James, W. 1892. Psychology (Briefer Course). Holt, New York.
Knuth, D. E. 1981. Seminumerical Algorithms. Addison-Wesley, Reading, MA.
Kubie, L. S. 1930. A theoretical application to some neurological problems of the properties of excitation waves which move in closed circuits. Brain 53, 166–177.
McNaughton, B. L., O'Keefe, J., and Barnes, C. A. 1983. The stereotrode: A new technique for simultaneous isolation of several single units in the central nervous system from multiple unit records. J. Neurosci. Methods 8, 391–397.
Moore, G. P., Perkel, D. H., and Segundo, J. P. 1966. Statistical analysis and functional interpretation of neuronal spike data. Ann. Rev. Physiol. 28, 493–522.
Nicolelis, M. A., and Chapin, J. K. 1994. Spatiotemporal structure of somatosensory responses of many-neuron ensembles in the rat ventral posterior medial nucleus of the thalamus. J. Neurosci. 14(6), 3511–3532.
Nicolelis, M. A., Lin, R. C. S., Woodward, D. J., and Chapin, J. K. 1993. Dynamic distributed properties of many-neuron ensembles in the ventral posterior medial thalamus of awake rats. Proc. Natl. Acad. Sci. USA 90, 2212–2216.
Nordhausen, C. T., Rousche, P. J., and Normann, R. A. 1994. Optimizing recording capabilities of the Utah Intracortical Electrode Array. Brain Res. 637, 27–36.
Palm, G. 1982. Neural Assemblies: An Alternative Approach to Artificial Intelligence. Springer-Verlag, Berlin.
Perkel, D. H., Gerstein, G. L., and Moore, G. P. 1967. Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophys. J. 7, 419–440.
Perkel, D. H., Gerstein, G. L., Smith, M. S., and Tatton, W. G. 1975. Nerve impulse patterns: A quantitative display technique for three neurons. Brain Res. 100, 271–296.
Petsche, H., and Sterc, J. 1968. The significance of the cortex for the travelling phenomenon of brain waves. Electroenceph. Clin. Neurophysiol. 25, 11–22.
Pickard, R. S., and Welberry, J. R. 1976. Printed circuit microelectrodes and their application to honeybee brain. J. Exp. Biol. 64, 39–44.
Rao, S. M., Binder, J. R., Bandettini, P. A., Hammeke, T. A., Yetkin, F. Z., Jesmarowicz, A., Lisk, L. M., Morris, G. L., Mueller, W. M., Estkowski, L. D., Wong, W. C., Haughton, U. M., and Hyde, J. S. 1993. Functional magnetic resonance imaging of complex human movements. Neurology 43, 2311–2318.
Roland, P. E. 1982. Cortical regulation of selective attention in man: A regional cerebral blood flow study. J. Neurophysiol. 48, 1059–1078.
Strangman, G. 1996. Searching for cell assemblies: How many electrodes do I need? J. Computational Neuroscience 3, 111–124.
Vaadia, E., Bergman, H., and Abeles, M. 1989. Neuronal activities related to higher brain functions—theoretical and experimental implications. IEEE Trans. Biomed. Eng. 36, 25–35.
Vaadia, E., Haalman, I., Abeles, M., Bergman, H., Prut, Y., Slovin, H., and Aertsen, A. 1995. Dynamics of neuronal interactions in monkey cortex in relation to behavioral events. Nature 373, 515–518.
Walter, W. G. 1968. Electric signs of expectancy and decision in the human brain. In Cybernetic Problems in Bionics, H. L. Oestreocher and D. R. Moore, eds., pp. 361–396. Gordon and Breach Science Publishers, New York.
Wilson, M. A., and McNaughton, B. L. 1993. Dynamics of the hippocampal ensemble code for space. Science 261, 1055–1057.
Yamada, S., Nakashima, M., Matsumoto, K., and Shiono, S. 1993. Information theoretic analysis of action potential trains. Biol. Cybern. 68, 215–220.
Received June 26, 1995; accepted April 11, 1996.
Communicated by Bard Ermentrout
A Simple Neural Network Exhibiting Selective Activation of Neuronal Ensembles: From Winner-Take-All to Winners-Share-All Tomoki Fukai Department of Electronics, Tokai University, Kitakaname 1117, Hiratsuka, Kanagawa, Japan Laboratory for Neural Modeling, Frontier Research Program, RIKEN (Institute of Physical and Chemical Research), 2-1 Hirosawa, Wako, Saitama 351-01, Japan
Shigeru Tanaka Laboratory for Neural Modeling, Frontier Research Program, RIKEN (Institute of Physical and Chemical Research), 2-1 Hirosawa, Wako, Saitama 351-01, Japan
A neuroecological equation of the Lotka-Volterra type for mean firing rate is derived from the conventional membrane dynamics of a neural network with lateral inhibition and self-inhibition. Neural selection mechanisms employed by the competitive neural network receiving external inputs are studied with analytic and numerical calculations. A remarkable finding is that the strength of lateral inhibition relative to that of self-inhibition is crucial for determining the steady states of the network among three qualitatively different types of behavior. Equal strength of both types of inhibitory connections leads the network to the well-known winner-take-all behavior. If, however, the lateral inhibition is weaker than the self-inhibition, a certain number of neurons are activated in the steady states, or the number of winners is in general more than one (the winners-share-all behavior). On the other hand, if the self-inhibition is weaker than the lateral one, only one neuron is activated, but the winner is not necessarily the neuron receiving the largest input. It is suggested that our simple network model provides a mathematical basis for understanding neural selection mechanisms.

1 Introduction

It is believed that information processing is performed by many neurons in a highly parallel distributed manner. For example, about half of the neocortex was shown to be involved in visual information processing in the macaque monkey (Van Essen et al. 1984). This implies that coactivation of a large population of neurons is a fundamental strategy employed by the
brain in sensory information processing and motor control. In fact, there is evidence that the neuronal population coding is adopted in various regions of the brain, including the primary visual cortex (Gilbert and Wiesel 1990), the superior colliculus for saccadic eye movements (Wurtz et al. 1980; Van Opstal and Van Gisbergen 1989), the inferotemporal cortex for human face recognition of the monkey (Young and Yamane 1992), the motor cortex decoding arm movements of the monkey (Georgopoulos et al. 1993), and the CA1 region of rat hippocampus encoding place information (Wilson and McNaughton 1993). The question is how a particular population of neurons is selected for engagement in an information processing task. Furthermore, a similar neural selection seems to provide a functional basis for active information processing of the brain such as the planning and execution of voluntary movement. It seems that voluntary movement is initiated by a selective transformation of neural activities coded in the several motor cortices into temporal sequences of motor commands through the subcortical nuclei (Hikosaka 1991; Tamori and Tanaka 1993). Actually, nerve projections from basal ganglia, which receive inputs from the motor cortices, converge to parts of the thalamus (Alexander et al. 1986). This implies that the number of active neurons involved in voluntary movement is reduced in the course of generating temporal sequences of neural activities related to motor actions from spatially distributed information in the motor cortices. In other words, it is likely that the subcortical nuclei are involved in the selection of neural activities. Because similar circuits are observed in almost the entire cerebral cortex besides the motor cortices (Alexander et al. 1986), such neural selection might be common in the processing of temporal information as in language generation and logical thinking. In this sense, it is of importance to understand mechanisms of neural selection in order to build computational models for cognitive functions as well as motor functions. A rule for selection requires some competition in general. In order to model the neural competition that might be induced in the brain, we propose a simple model describing the competition of neural activities in terms of excitatory inputs and all-to-all recurrent inhibitory connections including self-inhibition. Competitive behavior of neural systems has been repeatedly reported in the literature (Grossberg 1973; Cohen and Grossberg 1983; Yuille and Grzywacz 1989; Coultrip et al. 1992; Ermentrout 1992; Kaski and Kohonen 1994). In this paper, particular attention is paid to qualitative and quantitative changes in the selection of winners as the strength of the lateral inhibition relative to that of the self-inhibition is varied. When the ratio of strength of the lateral inhibition to that of the self-inhibition is larger than unity, only one neuron is selected to be active. On the other hand, when the ratio is smaller than unity, a certain number of neurons are selected to be active. We examine properties of such “winners-share-all” solutions to the dynamic equation of neural activities, which is derived from the conventional equations of the membrane dynamics and the sigmoidal output
function in order to make analytical studies possible. Our equation can be classified as one type of the Lotka-Volterra equations that have been used for the description of ecological systems. Thus, we may say that temporal information is generated through selection by the ecological system of neural activities. In the literature, the K-winner-take-all behavior is known as the solution that admits K (≥ 1) winners for competition in inhibitory neural networks similar to the present one (Majani et al. 1989). However, in the K-winner networks, the solutions could be systematically studied only for homogeneous external inputs (i.e., when they are neuron independent), and their dynamic behavior with nonhomogeneous inputs is in general complicated (Wolfe et al. 1991). Hence particular attention was paid to the behavior in which winners were selected according to the initial conditions. On the other hand, we show that winners are always selected according to the magnitudes of external stimuli for arbitrary initial states when each neuron receives a sufficiently strong inhibitory feedback. Such competitive behavior implies that the neural network always evolves into an internal state well representing the structure of the external world. In this paper, we analyze the stable fixed-point solutions, which can be classified into the winners-share-all or winner-take-all type, to our dynamic equation. Simulated results are also shown for both types of solutions. Then, limiting ourselves to the winner-take-all solutions, we examine analytically the relaxation of one fixed point to another when the external inputs suddenly change.

2 Derivation of the Ecological Equation for Neurodynamics

For the sake of later exact analysis of competitive network behavior, we derive a Lotka-Volterra equation for firing activity from the standard equation for the membrane dynamics. Such a Lotka-Volterra dynamics was first discussed by Cowan (1968, 1970). According to the conventional model for neural dynamics, the membrane potential and firing rate of the ith neuron are expressed as follows:

dui /dt = −λui + (input current), zi = f (ui − θ), (2.1)

where f (ui − θ) represents a nonlinear output function. If we assume that this function is given by the logistic function

f (x) = f0 [1 + exp(−βx)]⁻¹, (2.2)
the dynamics of the firing rate can be expressed by

dzi /dt = [βzi (f0 − zi )/f0 ] [−λθ + (λ/β) log((f0 − zi )/zi ) + (input current)]. (2.3)

The input current is assumed to be given by afferent inputs and lateral connections:

(input current) = γ + Wi − Σi′ Vii′ zi′ , (2.4)

where γ represents the input current induced by the afferent inputs nonspecific to each cell, and Wi represents the input current generated by the specific afferent inputs. Vii′ is the strength of the lateral inhibitory connections. If the firing rate is always much smaller than its maximum value f0 , this equation can be simplified as

dzi /dt = zi [1 + Wi − Σi′ Vii′ zi′ ] + [λ/(β f0 (γ − λθ))] zi (f0 − zi ) log((f0 − zi )/zi ), (2.5)

by omitting the factor (f0 − zi )/f0 in the first term without changing a qualitative feature of the dynamics. Here γ − λθ was assumed to be positive, and the time t, specific input current Wi , and lateral inhibition Vii′ were rescaled as β(γ − λθ)t → t, Wi /(γ − λθ) → Wi , and Vii′ /(γ − λθ) → Vii′ , respectively. The second term, indicating that the nonlinear output function has two asymptotes, takes positive values for zi ≪ f0 , while it takes negative values for zi close to f0 . Note that the boundedness of the membrane potential in the original equation 2.1 is preserved owing to the presence of this term. Since our system has global inhibitory connections, for which divergence in activity never occurs, we do not need to retain the negativity of the term for zi close to f0 . Thus, for simplicity, the term can be replaced by a positive constant ε. As we will see later, this replacement does not change the qualitative behavior of the dynamics. Finally we obtain the following equation for neurodynamics:

dzi /dt = zi [1 + Wi − Σi′ Vii′ zi′ ] + ε, (2.6)
which is a type of the Lotka-Volterra equation. In the following, we consider an N-neuron system that has self-inhibition as well as uniform lateral inhibition:

Vii′ = 1 if i = i′, Vii′ = k if i ≠ i′, (2.7)
where k is a positive constant representing the relative strength of the lateral inhibitory connections to the self-inhibitory ones. The self-inhibition is introduced since biological neural networks often have local inhibitory interneurons that deliver feedback inhibition to the cells activating those interneurons (Shepherd 1990). The lateral inhibition is assumed to be uniform for the sake of mathematical simplicity. Thus equation 2.6 is written as

dzi /dt = zi (1 + Wi − zi − k Σj≠i zj ) + ε, i = 1, . . . , N, (2.8)
which yields the basic equation for the present analytic study of neurodynamics.

3 The Dynamic Properties and the Steady States of the Model

The network model (see equation 2.8) with k = 1 was first derived for a self-organization model of primary visual cortex by one of the present authors (Tanaka 1990) to describe the competitive evolution of densities of synapses in local cortical regions. It is also possible to interpret the network model formally as a type of shunting inhibition model (Grossberg 1973; Cohen and Grossberg 1983). By using new variables xi = zi − µ with ε = (1 + Nk − k)µ², we can rewrite equation 2.8 as

dxi /dt = −(1 + Nk − k)µ(ε)xi + (1 + Wi − xi )(µ(ε) + xi ) − (xi + µ(ε)) k Σj≠i xj , (3.1)
which represents a shunting model with linear-response cells receiving cell-nonspecific excitatory input µ(ε), positive feedback xi , and lateral inhibition k Σj≠i xj . From equation 3.1, we can immediately see that each xi fluctuates within the finite interval [−µ(ε), 1 + Wi ] if its initial state lies in the same interval. This implies that zi also fluctuates in a finite interval of positive values. An unusual feature of this shunting inhibition model is that the afferent input Wi appears in the cutoff factor for excitatory input, and accordingly the cutoff activity level is stimulus dependent. Also, xi is usually interpreted as the membrane potential in the shunting models, whereas it may be interpreted as the firing rate in the present model. Cohen and Grossberg (1983) showed that nonlinear systems like equation 3.1 have global Lyapunov functions, which ensure absolute stability of dynamic evolution of the systems. In fact, by introducing zi = yi², we can transform equation 2.8 into the following gradient dynamics in which an energy function E is minimized with time:

dyi /dt = −∂E/∂yi , (3.2)
with the energy function given by

E({yi }) = −Σi [(1 + Wi )/4] yi² + [(1 − k)/8] Σi yi⁴ + (k/8) (Σi yi²)² − (ε/2) Σi log yi . (3.3)
This ensures the global stability of steady-state solutions to equation 2.8. The time evolution of the network state with k = 1 was reported to exhibit the winner-take-all (WTA) behavior in which activities of all cells except one receiving a maximum input are eliminated (Tanaka 1990). The other cases of 0 < k < 1 and k > 1 also show intriguing properties, which we call winners-share-all (WSA) and variant winner-take-all (VWTA), respectively. The WSA behavior admits more than one winner receiving inputs larger than some critical value determined by the distribution of strength of the inputs. The VWTA behavior admits only one winner, but it can be any cell receiving an input larger than some critical value. We first discuss the two cases of 0 < k < 1 and k > 1 since these cases are of primary concern in the present paper and can be dealt with in a similar manner. The case of k = 1 is also reviewed to complete the survey of the network dynamics.

3.1 WSA Case (0 < k < 1). We will derive an equilibrium solution, which represents the WSA behavior, to equation 2.8 when the strength of lateral inhibitory connections satisfies 0 < k < 1. We assume that ε and all Wi's are positive. It is also assumed that ε satisfies dε ≪ Wi for all i, with d being the number of winners in the steady state. We first examine solutions for ε = 0 and then analyze those for ε ≠ 0 in a perturbative manner. We can explicitly study the stability of the solutions only for ε = 0. Therefore the results of the stability analysis hold only when ε is not too large. Studying this restricted case, however, should be sufficient for practical purposes since ε can be infinitesimally small in many applications of the present model. We can assume, without loss of generality, that the external input Wi's obey the following inequality regarding their magnitudes:
(3.4)
A stable fixed-point solution {zi(D)} given by dzi /dt = 0 (i = 1, . . . , N) for k ≠ 1 and ε = 0 satisfies

Σj∈D Kij zj(D) = 1 + Wi for i ∈ D, and zi(D) = 0 for i ∉ D, (3.5)
where D = {1, 2, . . . , d} indicates the set of winners, and the d × d matrix K̂ = (Kij ) has unit diagonal entries and k everywhere off the diagonal:

Kii = 1, Kij = k (i ≠ j). (3.6)
When k ≠ 1, K̂ is nonsingular and has the inverse matrix with entries

(K̂⁻¹)ii = (k − α)/((k − 1)α), (K̂⁻¹)ij = k/((k − 1)α) (i ≠ j), (3.7)
where α = dk + 1 − k > 0. Now defining ζi for i = 1, 2, . . . , N by

ζi = 1/α + Wi /(1 − k) + [kd/((k − 1)α)] ⟨W⟩D , (3.8)

⟨W⟩D = (1/d) Σj∈D Wj , (3.9)

we obtain the stable fixed-point solution for k ≠ 1 and ε = 0 as follows:

zi(D) = ζi δiD , with δiD = 1 for i ∈ D and δiD = 0 for i ∉ D. (3.10)
The winners receiving larger inputs acquire larger values for the nonvanishing zi(D)'s as long as 0 < k < 1 is satisfied. Thus for this range of the magnitude of lateral inhibition, the network system exhibits the WSA behavior. Now we examine the stability conditions of solution 3.10. Substituting zi (t) = zi(D) + δzi(D)(t) into equation 2.8 and retaining the terms linear in the fluctuation δzi(D)(t), we obtain

dδzi(D)/dt = −ζi (δzi(D) + k Σj≠i δzj(D)) + o(δz(D)²) for i ∈ D,
dδzi(D)/dt = (1 − k) ζi δzi(D) + o(δzi(D)²) for i ∉ D. (3.11)
Then the stability of the solution for d > 1 is determined by the eigenvalues of the N × N matrix M̂ with entries

M̂ii = −ζi and M̂ij = −kζi (j ≠ i) for i ∈ D; M̂ii = (1 − k)ζi and M̂ij = 0 (j ≠ i) for i ∉ D. (3.12)
All eigenvalues of M̂ should be negative for the stability of the lowest-order solution. This requires that ζi be positive for the d winners and negative for the (N − d) losers. Therefore the stability condition is consistent with the condition zi(D) > 0 (i ∈ D) for the existence of the WSA solution. The condition can be expressed as

1 + Wmin > [kd/(1 − k)] (⟨W⟩D − Wmin ), (3.13)
where Wmin is the smallest external input among the Wi's for i ∈ D: Wmin = Wd by assumption 3.4. Since the right-hand side of equation 3.13 is an increasing function of d while the left-hand side is a decreasing function, there exists a critical upper bound dc above which condition 3.13 is not satisfied, or equivalently ζdc+1 , ζdc+2 , . . . , ζN < 0 (see Appendix A). This critical value gives the actual number of winners. Condition 3.13 indicates that the number of winners decreases as the relative strength k of the lateral inhibition approaches unity. The solution with no winner, d = 0, is forbidden even for k → 1⁻. From equations 3.8 and 3.12 with d = 0, it is found that all the ζi = (1 − k)⁻¹(1 + Wi )'s would have to be negative in that case to ensure the stability of such a solution. This, however, never occurs for 0 < k < 1. Indeed, the network model exhibits the WTA behavior for k = 1, as shown later, and hence the dynamic behavior of the network continuously changes from WSA to WTA as k → 1⁻. From evolution equation 2.8 with ε = 0, we see that once some cells become losers with zi(D) = 0, they can never become winners, even when the magnitudes of the Wi's are changed to take a different order. Thus positive ε should be retained to ensure that the network undergoes transitions in its activity pattern to adapt to changes in external inputs. The perturbative steady-state solution in the leading order of ε is easily obtained as

zi(D) = (ζi + εσi )δiD − [ε/((1 − k)ζi )] (1 − δiD ) + O(ε²), (3.14)
where

σi = [1/((1 − k)α)] (α ζi⁻¹ − k Σj∈D ζj⁻¹). (3.15)
The stability condition at the zeroth order of $\varepsilon$ ($\zeta_i > 0$ for $i \in D$ and $\zeta_i < 0$ for $i \notin D$) ensures that the activities of losers are slightly raised from the zero level by positive $\varepsilon$. The averaged activity of the winners is also raised by $(\varepsilon/d)\sum_{j\in D}\sigma_j = (\varepsilon/d\alpha)\sum_{j\in D}\zeta_j^{-1}$. In Figure 1a, we show an example of the time course of the network state obtained for $0 < k < 1$ by numerical simulations of the model network with 30 cells. Initial states at $t = 0$ were randomly selected in the interval $[0, 1]$. The inhibitory interactions rapidly reduce the net activity of the whole network system in the initial transient of the time evolution, and then the surviving net activity is gradually distributed to winners according to the mechanism of the WSA.

3.2 VWTA Case (k > 1). For a stronger lateral inhibition, $k > 1$, the solution given by equation 3.14 is stable at the zeroth order of $\varepsilon$ if the conditions $\zeta_i < 0$ for $i \in D$ and $\zeta_i > 0$ for $i \notin D$ are obeyed. It is easy to see that the number of winners cannot be more than one: it is not possible to satisfy the former condition for the winner $z_d$ receiving the smallest external input among the winners, because $\alpha\zeta_d = 1 + W_d + \frac{k}{k-1}\bigl(\sum_{j\in D} W_j - dW_d\bigr)$ cannot be negative. Thus all the solutions with $d > 1$ are unstable. For $d = 1$, the stability analysis shows that all $\zeta_i$'s must be positive. Let cell $a$ be the winner. Then $\zeta_a = 1 + W_a > 0$ by the assumptions made previously, and furthermore, the condition $k > 1$ can make $\zeta_j = 1 + (kW_a - W_j)/(k-1)$ positive for all $j \neq a$ even when $W_a$ is not the largest among all inputs. Thus the network exhibits the WTA behavior in the sense that only a single cell can survive with a nonvanishing activity at equilibrium. This WTA behavior, however, is rather different from the one usually assumed: any cell $a$ other than the one receiving the largest input can be the winner, as long as it satisfies

$$k(1 + W_a) > 1 + W_j \tag{3.16}$$
for all $j \neq a$. In the time course of the network dynamics, a winner is selected from among those satisfying equation 3.16 according to the initial state given to the network at $t = 0$. Thus the results of selection in general depend on both external inputs and initial conditions. We call this type of WTA behavior the variant winner-take-all behavior to distinguish it from the conventional one, in which the winner is always the cell receiving the largest input. In fact, behavior similar to VWTA has already been extensively discussed for a competitive shunting network (Grossberg and Levine 1975) and the so-called K-winner-take-all network (Majani et al. 1989; Wolfe et al. 1991), which
Figure 1: The time courses of cell activities obtained by numerical simulations of the model network with N = 30 for (a) k = 0.85, (b) k = 1.06, and (c) k = 1. The distribution of {Wi } used in the simulations is shown in the form of a bar graph in (b). Several cells receiving the largest external inputs are numbered according to the magnitudes of the inputs.
Figure 2: The dynamical flow of the network state is shown for the network of two cells. W1 > W2 and k(1+W2 ) > 1+W1 . The filled circles represent attractors, while the empty one represents an unstable fixed point. A winner is selected according to initial states of time evolution under the operation of the VWTA mechanism.
is similar to the present one except that external inputs are often assumed to be neuron independent. Since our primary aim was to show that the special range ($k < 1$) of the ratio between the self-inhibition and the uniform lateral inhibition achieves neural activity selection based on the magnitudes of external inputs, independent of initial conditions, we show only a few examples of the VWTA behavior without an extensive analysis of it. Figure 1b shows an example of the time evolution of the $z_i(t)$'s for a network exhibiting the VWTA behavior. The cell receiving the third-largest input defeated the others and became the winner in this case. To obtain better insight into the VWTA behavior, we show in Figure 2 the dynamical flow of the network state for $N = 2$. Both $(1 + W_1, 0)$ and $(0, 1 + W_2)$ are attractors if $k(1 + W_2) > 1 + W_1$ is satisfied (i.e., the network is subject to the VWTA behavior); otherwise, the former is the only attractor (i.e., it is subject to the WTA behavior).
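The two-attractor flow of Figure 2 is easy to reproduce numerically. The following is a minimal sketch, assuming the general-$k$ dynamics has the uniform-inhibition Lotka-Volterra form $\dot z_i = z_i(1 + W_i - z_i - k\sum_{j\neq i} z_j) + \varepsilon$, consistent with equation 3.17 at $k = 1$; the parameter values are illustrative.

```python
import numpy as np

def simulate(z0, W, k, eps=1e-6, dt=1e-3, steps=200_000):
    # Euler integration of z_i' = z_i(1 + W_i - z_i - k*sum_{j!=i} z_j) + eps
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        lateral = k * (z.sum() - z)        # uniform all-to-all lateral inhibition
        z += dt * (z * (1.0 + W - z - lateral) + eps)
    return z

W = np.array([0.5, 0.4])                   # W1 > W2
k = 1.5                                    # k(1 + W2) = 2.1 > 1 + W1 = 1.5: VWTA
print(simulate([0.9, 0.1], W, k))          # -> close to (1 + W1, 0)
print(simulate([0.1, 0.9], W, k))          # -> close to (0, 1 + W2)
```

Both cells satisfy condition 3.16 here, so either can win; only the initial state selects between the two attractors of Figure 2.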
3.3 WTA Case (k = 1). Now we discuss the behavior of the equation for $k = 1$, which is described by

$$\frac{dz_i}{dt} = z_i\left(1 - \sum_{i'=1}^{N} z_{i'} + W_i\right) + \varepsilon \qquad (1 \le i \le N). \tag{3.17}$$

We can obtain fixed points by setting the time derivative $dz_i/dt$ equal to 0 in equation 3.17. Let us assume that $W_i \neq W_{i'}$ for any $i$ and $i'$ such that $i \neq i'$. By considering the first order in the perturbation expansion in $\varepsilon$, we obtain $N$ fixed points, $\vec{z}^{(n)}$ $(n = 1, \ldots, N)$:

$$z_i^{(n)} = (1 + W_n)\,\delta_{i,n} + \varepsilon\left(\frac{1}{1 + W_n} - \sum_{j\neq n}\frac{1}{W_n - W_j}\right)\delta_{i,n} + \frac{\varepsilon\,(1 - \delta_{i,n})}{W_n - W_i} + o(\varepsilon^2). \tag{3.18}$$
Next, let us examine linear stability around a particular fixed point, $\vec{z}^{(n)}$. When we let $z_i = z_i^{(n)} + \delta z_i^{(n)}$, we can derive a linearized equation for $\delta z_i^{(n)}$ from equation 2.1:

$$\frac{d\,\delta z_i^{(n)}}{dt} = \begin{cases} -(1 + W_n)\sum_j \delta z_j^{(n)} - \dfrac{\varepsilon}{1 + W_n}\,\delta z_n^{(n)} + o(\varepsilon^2), & \text{if } i = n,\\[2mm] -(W_n - W_i)\,\delta z_i^{(n)} - \dfrac{\varepsilon}{W_n - W_i}\sum_j \delta z_j^{(n)} + o(\varepsilon^2), & \text{otherwise.} \end{cases} \tag{3.19}$$

Since the coefficient $1 + W_n$ is always positive, the fixed point under consideration is stable if $W_n$ is the maximum of all $W_i$'s. Thus we can say that neural activities play a survival game based on competitive exclusion, which finally leads to a winner-take-all state. Figure 1c shows an example of the time evolution of the $z_i(t)$'s for the network showing the WTA behavior. It is seen that the cell receiving the largest input becomes the only winner. When the magnitudes of the external stimuli are suddenly changed, the network state relaxes to a new equilibrium state in which some of the old winners are superseded by losers. The relaxation time for this process is an important quantity that determines the response time of the neural system. Due to the particularly simple structure of the equilibrium states in the WTA case, the relaxation time can be analytically estimated as

$$\tau \cong \frac{1}{\Delta W'}\,\log\!\left(\frac{\Delta W'}{2\varepsilon}\right) \tag{3.20}$$

in a certain limited case for infinitesimal values of $\varepsilon$ (see Appendix B). Here $\Delta W'$ represents the difference between the largest and second-largest inputs in the new external stimulus environment.
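A rough numerical check of estimate 3.20 can be made by integrating equation 3.17 through a sudden swap of the two largest inputs. The sketch below uses illustrative values; since equation 3.20 holds only in a certain limited case, the measured time and the analytic estimate are expected to agree only in order of magnitude.

```python
import numpy as np

def relaxation_time(W_old, W_new, eps=1e-6, dt=1e-3, t_max=1000.0):
    # Start near the old WTA equilibrium: old winner at 1 + max(W_old),
    # losers at the O(eps) level; then integrate equation 3.17 with W_new.
    z = np.full(len(W_old), eps)
    z[np.argmax(W_old)] = 1.0 + W_old.max()
    new_winner = np.argmax(W_new)
    half = 0.5 * (1.0 + W_new.max())       # the eta = b/2 criterion of Appendix B
    t = 0.0
    while z[new_winner] < half and t < t_max:
        z += dt * (z * (1.0 + W_new - z.sum()) + eps)
        t += dt
    return t

W_old = np.array([0.10, 0.05, 0.00])
W_new = np.array([0.05, 0.10, 0.00])       # the two leading inputs swap roles
dW, eps = 0.05, 1e-6                       # largest minus second-largest input
print(relaxation_time(W_old, W_new, eps))  # measured relaxation time
print(np.log(dW / (2 * eps)) / dW)         # estimate 3.20, roughly 2e2
```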
3.4 Simulations of the Original Competitive Network. Transforming the network model (equation 2.1) into the Lotka-Volterra equation allows us to study the competitive behavior analytically. This transformation is justified when the ratio $\lambda/\beta$ is negligibly small. To examine to what extent the present Lotka-Volterra system approximates the original network model, simulations were conducted for the network model with the connections given by equation 2.7.

Some facts should be noted before the simulation results are presented. First, the distinction between winners and losers is not as manifest in the original model as in the Lotka-Volterra system (with $\varepsilon = 0$), since the output values $f(u_i)$ never reach zero for finite values of the membrane potential. In the present simulations, a neuron unit is regarded as a loser if its output in equilibrium is less than $10^{-3}$. Second, in some cases the winners for equation 2.8 exhibit output values beyond the range $z_i < f_0$ to which the behavior of the original model is limited. This is because we omitted the factor $(f_0 - z_i)/f_0$ in equation 2.3 when deriving the Lotka-Volterra system. Therefore the qualitative behavior of equation 2.1 should also be examined in such a case.

Figures 3a and 3b show the numerically obtained boundaries at which different types of solutions switch to one another in the original model. The solid curves represent the boundaries between the WTA and VWTA solutions, while the dashed curves indicate those between the WSA and WTA solutions. The former curves were determined as the boundaries below which an initial configuration such as $u_2(0) = W_2$, $u_1(0) = u_3(0) = \cdots = 0$ evolves into a final configuration with $u_1(t)$ the largest of all. The latter curves were obtained by simulations of equation 2.1 with initial configurations such as $u_1(0) = u_2(0) = \cdots = 0$. The locations of the boundary curves depend on the criterion used to distinguish winners from losers. In the corresponding Lotka-Volterra system, the critical value of $k$ separating the WTA and WSA solutions is $k_- \approx 0.91$, while that separating the WTA and VWTA solutions is $k_+ \approx 1.1$, for the external inputs assumed in the simulations (see Section 4 for details of $k_\pm$). In Figure 3a, $f_0 = 2$ and all outputs of the Lotka-Volterra system are within the range $0 < z < f_0$, while in Figure 3b, $f_0 = 1$ and some outputs are not within the range. Figure 3a shows that the behavior of the original network for different values of $k$ is appropriately described by the Lotka-Volterra system as long as $\lambda$ is sufficiently small. Furthermore, the two neural networks exhibit qualitatively the same dynamic behavior in Figure 3b. Note that the solid and dashed curves intersect each other at a certain value of $\lambda$. Beyond this value, the WTA behavior occurs in an initial-value-dependent manner in the region between the solid and dashed curves. In other words, the sole winner in this region can be either the neuron with the largest input or one with the second-largest input, depending on initial values. Such network behavior is classified as VWTA behavior by definition, and the VWTA and WSA behaviors switch at the boundaries given by the solid curves.
Figure 3: The behavior obtained at equilibrium from simulations of the original model given in equation 2.1. Five neurons were used, and their external inputs were 0.9, 0.7, 0.5, 0.3, and 0.1. The two parameters $\beta$ and $\gamma$ were fixed at 5 and 0.4, respectively, while $\lambda$ was varied. The asymptote of the response function was (a) $f_0 = 2$ and (b) $f_0 = 1$. In the latter case, $1 + W_i > f_0$ for any $i$.
Figure 4: A schematic diagram showing the switchings between different types of solutions to our Lotka-Volterra equation. The critical values of the ratio k of strength of lateral inhibition to that of self-inhibition are given in the text.
4 Discussion

For convenience in mathematical analysis, we studied the solutions to our system separately for the three cases $0 < k < 1$, $k = 1$, and $k > 1$. For a given distribution of afferent inputs satisfying equation 3.4, the critical values of $k$ at which switchings from one class of solutions to another occur can be obtained as follows. Setting $d = 2$ in equation 3.13 and taking only $W_1$ and $W_2$ into account, we can easily see that the switching between the WTA behavior and the WSA behavior with two winners takes place at $k = k_- \equiv (1 + W_1)/(1 + 2W_1 - W_2)$. Furthermore, setting $d = N$ in equation 3.13, we find that all the neurons remain activated for $k$ less than $k_L \equiv (1 + W_N)/(1 + W_N + \sum_{j=1}^{N} W_j - NW_N)$. Thus no neural selection occurs for $0 < k < k_L$. Note that $0 < k_L < k_- < 1$. Similarly, by setting $a = 2$ and $j = 1$ in equation 3.16, we can immediately see that the neuron receiving the second-largest input can be a winner when $k$ is larger than $k_+ \equiv (1 + W_1)/(1 + W_2)$, where $k_+ > 1$. It is also noted that any neuron can be a winner if $k > (1 + W_1)/(1 + W_N)$. Thus our competitive network exhibits the WSA behavior for $k_L < k < k_-$, the WTA behavior for $k_- < k < k_+$, the VWTA behavior for $k > k_+$, and no selection for $0 < k < k_L$ (see Fig. 4).

Assuming different magnitudes for the self- and lateral inhibitions seems to be biologically reasonable. In the local feedback loop in biological neural networks, an interneuron may inhibit the adjacent principal cells that activate it. The strength of this self-inhibition of the principal cells is likely to differ from that of lateral inhibition. Uniform strength was assumed for the lateral inhibition for the sake of rigorous analysis of the population dynamics of neurons. Under these assumptions, the model predicts that competitive neural networks tend to exhibit the WSA behavior when the self-inhibition is stronger than the lateral one. In particular, the model suggests a possible functional role for the self-inhibition: with it, activity selection occurs in a stimulus-dependent manner, independent of initial conditions.

Let us compare the performance of the present network with that of networks with a conventional on-center off-surround type of interaction,
which has been fully discussed by Grossberg (1973) using shunting inhibition models. Regarding neural connectivity, our network differs from the conventional one only in the extent of the off-surround inhibition: all-to-all inhibitory connections are assumed in our model. Thus competition occurs not in the vicinity of edges, where the intensities of input signals change dramatically, but among all input signals involved in a stimulus pattern. Based on the analysis presented here, for large values of $k$ ($k > k_-$), only one neuron remains active and the others become silent. This means that the network enhances the contrast between input signals in a WTA manner when the off-surround inhibition is strong. On the other hand, when the value of $k$ is in the interval $k_L < k < k_-$, which represents not-so-strong off-surround inhibition, the network enhances the contrast in a WSA manner. For extremely small values of $k$ ($k < k_L$), no tuning occurs, and the output activity pattern of the network is not very different from the input pattern.

The biological significance of our network model may be seen in the neural selection performed by the superior colliculus (SC). In the SC, there is a map that represents the displacement vector of saccadic eye movement (Robinson 1972). A class of neurons in the intermediate layer of the SC exhibits burst firing just before the onset of saccadic eye movement (Wurtz and Munoz 1994). To generate a specific eye movement, the selection of the neural activity that encodes a specific displacement vector must occur. It is expected that lateral inhibition within the SC may serve this neural selection. According to Wurtz et al. (1980), the lateral inhibition is so widely distributed in the SC that the inhibitory effect extends beyond 50 degrees in the visual space. This implies that inhibitory connections among neurons in the intermediate layer may be regarded as all-to-all connections, as we assumed in our model. Based on this idea of long-range lateral inhibition, a model proposed by Van Opstal and Van Gisbergen (1989) successfully demonstrated the generation of saccadic eye movement. Recently, Optican (1994) proposed a network model of saccade generation in which burst cells in the intermediate layer of the SC interact with one another through mutual inhibition. Although our mathematical model is much simpler than the biological networks in the SC, it is expected to provide an analytical understanding of the mechanism underlying the activity selection that may actually occur in the SC.

The extension of the competitive neural network from the WTA type to the WSA type also has the following theoretical and biological significance. Sensory information is likely to be encoded in the cortex by a large population of neurons. In neuronal population coding, a single neuron may respond weakly to more than one specific sensory stimulus; that is, a stimulus can coactivate a large number of neurons. As a result, the populations of neurons responding to different stimuli will largely overlap with one another, and the cortical representation of the stimulus space inevitably becomes rather complicated. The WSA mechanism in this study seems to provide a simple and appropriate theoretical basis for exploring the
information-theoretical implications of such a complicated neuronal population coding. It is of particular interest to investigate how, and to what extent, a self-organized cortical map of sensory information is changed if the WSA instead of the WTA behavior is employed in a layer of feature-detecting cells. The present model network will be useful for clarifying these questions because it yields a mathematically simple and tractable description of the feature-detecting layer. Furthermore, the neural selection mechanism based on the WTA for $k = 1$ has recently been employed in modeling the basal ganglia and motor cortices, which are involved in the generation of voluntary movement (Tamori and Tanaka 1993). It is worthwhile to examine how a new selection rule based on the WSA behavior enhances the model performance in encoding and decoding temporal information on voluntary movement.

5 Conclusion

We derived a Lotka-Volterra equation for the mean firing rate from a standard membrane equation for a network composed of mutually inhibiting neurons. By employing the equation for neural selection, we proved that the ratio $k$ of the strength of lateral inhibition to that of self-inhibition is of paramount importance in categorizing the network behavior into three different types: WSA, WTA, and VWTA. The WSA behavior implies that the number of winners is in general larger than one and increases as $k$ decreases. The VWTA behavior gives one winner, but it is not necessarily the cell that receives the maximum input.

Appendix A: The Number of Winners for Equidistantly Distributed $\{W_i\}$

We show a simple case where the number of winners ($d_c$) can be explicitly determined from condition 3.13. Assume that the external inputs are distributed equidistantly according to $W_i = W_{i-1} + \Delta$ with a constant $\Delta$ for any $i$. Then a short manipulation gives the following expression for $d_c$:

$$d_c = \left[\,\frac{3}{2} - \frac{1}{k} + \sqrt{\left(\frac{3}{2} - \frac{1}{k}\right)^{2} + 2\left(\frac{1}{k} - 1\right)\left(1 + \frac{1 + W_1}{\Delta}\right)}\;\right], \tag{A.1}$$
where $[\cdots]$ stands for the Gauss notation (the integer part). We can see that $d_c$ approaches unity from above as $k \to 1$.
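A quick numerical reading of equation A.1 (a sketch; the values of $W_1$, $\Delta$, and $k$ are illustrative):

```python
import numpy as np

def d_c(k, W1, Delta):
    # Equation A.1 for equidistant inputs; [.] is the Gauss (integer-part) bracket
    t = 1.5 - 1.0 / k
    s = np.sqrt(t * t + 2.0 * (1.0 / k - 1.0) * (1.0 + (1.0 + W1) / Delta))
    return int(np.floor(t + s))

for k in (0.999, 0.95, 0.90, 0.80):
    print(k, d_c(k, W1=0.5, Delta=0.05))   # number of winners: 1, 2, 3, 4
```

The winner count equals 1 just below $k = 1$ and grows as $k$ decreases, as stated above.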
Appendix B: Nonlinear Relaxation Time for Synaptic Modification

We are interested in how the neural activities relax from an old stable state to a new stable state when the configuration of the afferent inputs $\{W_i\}$ slightly changes to $\{W_i'\}$ at $t = 0$. Without loss of generality, we can assume that $W_1$ is the maximum and $W_2$ the second maximum of all $W_i$'s for $t < 0$, and that $W_2'$ is the maximum and $W_1'$ the second maximum of all $W_i'$'s for $t > 0$.

First, we discuss the linear stability around the old stable fixed point $\vec{z}^{(1)}$ for $t > 0$. The corresponding linearized equation is

$$\frac{d\,\delta z_i^{(1)}}{dt} = \begin{cases} (W_1' - W_1)(1 + W_1) - (1 + W_1)\sum_j \delta z_j^{(1)} + (W_1' - W_1)\,\delta z_1^{(1)} + o(\varepsilon), & \text{for } i = 1,\\[2mm] (W_i' - W_1)\,\delta z_i^{(1)} + o(\varepsilon), & \text{otherwise.} \end{cases} \tag{B.1}$$

Comparing the order of magnitude of the first term for $i = 1$ in equation B.1 with that of the other terms, we find that this term determines the dynamical behavior at this stage. Thus we find that the system moves linearly in time from $\vec{z}^{(1)}$ toward the following new fixed point $\vec{z}^{(1)'}$, which is not stable:

$$z_i^{(1)'} = (1 + W_1')\,\delta_{i,1} + \varepsilon\left(\frac{1}{1 + W_1'} - \sum_{j\neq 1}\frac{1}{W_1' - W_j'}\right)\delta_{i,1} + \frac{\varepsilon}{W_1' - W_i'}\,(1 - \delta_{i,1}) + o(\varepsilon^2). \tag{B.2}$$
This relaxation requires a time $\tau_1$, which can be roughly estimated by

$$\tau_1 = \bigl(z_1^{(1)'} - z_1^{(1)}\bigr)\Bigl/\frac{d\,\delta z_1^{(1)}}{dt} = \frac{1}{1 + W_1} + o(\varepsilon). \tag{B.3}$$
This time scale implies that the system relaxes very rapidly to the nearest new fixed point, which is not stable.

Next the system moves from $\vec{z}^{(1)'}$ to a new stable fixed point $\vec{z}^{(2)'}$. In this process, we may expect that the only relevant variables are $z_1(t)$ and $z_2(t)$; hence we can neglect the time evolution of the other $z_i(t)$'s. Consequently, we can reduce equation B.1 to the following equations:

$$\frac{dz_1}{dt} = z_1(1 - z_1 - z_2 - r + W_1') + \varepsilon, \tag{B.4a}$$

$$\frac{dz_2}{dt} = z_2(1 - z_1 - z_2 - r + W_2') + \varepsilon, \tag{B.4b}$$

where $r = \sum_{i\neq 1,2} z_i = o(N\varepsilon)$. Using the new variables $\xi = z_1 + z_2$ and $\eta = z_2$, we obtain

$$\frac{d\xi}{dt} = \xi(1 - r + W_1' - \xi) + 2\varepsilon + \Delta W'\,\eta, \tag{B.5a}$$

$$\frac{d\eta}{dt} = \eta(1 - r + W_2' - \xi) + \varepsilon, \tag{B.5b}$$

where $\Delta W' = W_2' - W_1'$. The problem here is how this system changes from one state, $\xi \approx o(1)$ and $\eta = z_2^{(1)'} \approx \varepsilon/\Delta W'$, at $t = 0$ to the other state, $\xi \approx o(1)$ and $\eta = z_2^{(2)'} \approx o(1)$, for $t > 0$. We regard the last two terms $2\varepsilon + \Delta W'\eta$ in equation B.5a as a small perturbation. Since $\xi$ rapidly follows the change in $\eta$, $\xi$ can be approximated by

$$\xi \cong 1 - r + W_1' + \frac{\Delta W'\,\eta + 2\varepsilon}{1 - r + W_1'}. \tag{B.6}$$
If we substitute the right-hand side of equation B.6 for $\xi$ in equation B.5b, we obtain

$$\frac{d\eta}{dt} = a\,\eta\,(b - \eta) + o(\varepsilon), \tag{B.7}$$

where $a = \Delta W'/(1 - r + W_1')$ and $b = 1 - r + W_1'$. This differential equation has the asymptotic solution

$$\eta(t) = b\,\frac{c(t)}{1 + c(t)}, \tag{B.8}$$

where $c(t)$ satisfies the equation $dc(t)/dt = a\,c(t) + o(\varepsilon)$. The solution to this equation is $c(t) = (c_0 + \varepsilon/a)e^{at} - o(\varepsilon)$, where $c_0$ is the initial value of $c(t)$ and is given by $c_0 \cong \varepsilon/\Delta W'$. Let us define the relaxation time $\tau_2$ at this stage (Suzuki 1978) by $c(\tau_2) = 1$: after time $\tau_2$, $\eta$ reaches half of its maximum, $b/2$, from the initial infinitesimal value. Therefore, the relaxation time is given by

$$\tau_2 \cong \frac{1}{\Delta W'}\,\log\!\left(\frac{\Delta W'}{2\varepsilon}\right). \tag{B.9}$$
From the relaxation times $\tau_1$ and $\tau_2$ obtained above, it is evident that $\tau_2$ is much larger than $\tau_1$ for infinitesimal values of $\varepsilon$. Consequently, the total relaxation time $\tau$ can be given by $\tau_2$.

References

Alexander, G. E., DeLong, M. R., and Strick, P. L. 1986. Parallel organization of functionally segregated circuits linking basal ganglia and cortex. Ann. Rev. Neurosci. 9, 357–381.

Cohen, M., and Grossberg, S. 1983. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Systems, Man and Cybernetics 13, 815–826.

Coultrip, R., Granger, R., and Lynch, G. 1992. A cortical model of winner-take-all competition via lateral inhibition. Neural Networks 5, 47–54.
Cowan, J. D. 1968. Statistical mechanics of nervous nets. In Neural Networks, E. R. Caianiello, ed., pp. 181–188. Springer-Verlag, Berlin.

Cowan, J. D. 1970. A statistical mechanics of nervous activity. In Some Mathematical Questions in Biology, AMS Lectures on Mathematics in the Life Sciences 2, pp. 1–57. American Mathematical Society, Providence, RI.

Ermentrout, B. 1992. Complex dynamics in winner-take-all neural nets with slow inhibition. Neural Networks 5, 415–431.

Georgopoulos, A. P., Taira, M., and Lukashin, A. 1993. Cognitive neurophysiology of the motor cortex. Science 260, 47–52.

Gilbert, C. D., and Wiesel, T. N. 1990. The influence of contextual stimuli on the orientation selectivity of cells in primary visual cortex of the cat. Vision Res. 30, 1689–1701.

Grossberg, S. 1973. Contour enhancement, short term memory, and constancies in reverberating neural networks. Studies in Appl. Math. 52, 213–257.

Grossberg, S., and Levine, D. 1975. Some developmental and attentional biases in the contrast enhancement and short term memory in recurrent neural networks. Studies in Appl. Math. B53, 341–380.

Hikosaka, O. 1991. Basal ganglia—possible role in motor coordination and learning. Current Opinion in Neurobiology 1, 638–643.

Kaski, S., and Kohonen, T. 1994. Winner-take-all networks for physiological models of competitive learning. Neural Networks 7, 973–984.

Majani, E., Erlanson, R., and Abu-Mostafa, Y. 1989. On the K-winners-take-all network. In Advances in Neural Information Processing Systems I, D. S. Touretzky, ed., pp. 634–642. Morgan Kaufmann, Los Altos, CA.

Optican, L. M. 1994. Control of saccade trajectory by the superior colliculus. In Contemporary Ocular Motor and Vestibular Research: A Tribute to David A. Robinson, A. F. Fuchs, T. Brandt, U. Büttner, and D. S. Zee, eds., pp. 98–105. Springer-Verlag, Berlin.

Robinson, D. A. 1972. Eye movements evoked by collicular electrical stimulation in the alert monkey. Vision Research 12, 1285–1302.

Shepherd, G. M. 1990. The Synaptic Organization of the Brain. 3d ed. Oxford University Press, New York.

Suzuki, M. 1978. Theory of instability, nonlinear Brownian motion and formation of macroscopic order. Phys. Lett. 67A, 339–341.

Tamori, Y., and Tanaka, S. 1993. A model for functional relationships between cerebral cortex and basal ganglia in voluntary movement. Society for Neuroscience Abstracts 19, 547.

Tanaka, S. 1990. Theory of self-organization of cortical maps: Mathematical framework. Neural Networks 3, 625–640.

Van Essen, D. C., Newsome, W. T., and Maunsell, J. H. R. 1984. The visual field representation in striate cortex of the macaque monkey: Asymmetries, anisotropies and individual variability. Vision Res. 24, 429–448.

Van Opstal, A. J., and Van Gisbergen, J. A. M. 1989. A nonlinear model for collicular spatial interactions underlying the metrical properties of electrically elicited saccades. Biol. Cybern. 60, 171–183.

Wilson, M. A., and McNaughton, B. 1993. Dynamics of the hippocampal ensemble code for space. Science 261, 1055–1058.
Wolfe, W. J., Mathis, D., Anderson, C., Rothman, J., Gotter, M., Brady, G., Walker, R., Duane, G., and Alaghband, G. 1991. K-winner networks. IEEE Trans. Neural Networks 2, 310–315.

Wurtz, R. H., and Munoz, D. P. 1994. Organization of saccade related neurons in monkey superior colliculus. In Contemporary Ocular Motor and Vestibular Research: A Tribute to David A. Robinson, A. F. Fuchs, T. Brandt, U. Büttner, and D. S. Zee, eds., pp. 520–527. Springer-Verlag, Berlin.

Wurtz, R. H., Richmond, B. J., and Judge, S. J. 1980. Vision during saccadic eye movements III: Visual interactions in monkey superior colliculus. J. Neurophysiol. 43, 1168–1181.

Young, M. P., and Yamane, S. 1992. Sparse population coding of faces in the inferotemporal cortex. Science 256, 1327–1331.

Yuille, A. L., and Grzywacz, N. M. 1989. A winner-take-all mechanism based on presynaptic inhibition feedback. Neural Comp. 1, 334–347.
Received April 25, 1995; accepted April 11, 1996.
Communicated by Eric Baum
Playing Billiards in Version Space

Pál Ruján
Fachbereich 8 Physik and ICBM, Postfach 2503, Carl-von-Ossietzky Universität, 26111 Oldenburg, Germany
A ray-tracing method inspired by ergodic billiards is used to estimate the theoretically best decision rule for a given set of linearly separable examples. For randomly distributed examples, the billiard estimate of the single Perceptron with best average generalization probability agrees with known analytic results, while for real-life classification problems, the generalization probability is consistently enhanced when compared to the maximal stability Perceptron.

1 Introduction

Neural networks can be used both for concept learning (classification) and for function interpolation and/or extrapolation. Two basic mathematical methods seem to be particularly adequate for studying neural networks: geometry (especially combinatorial geometry) and probability theory (statistical physics). Geometry is illuminating, and probability theory is powerful. In this article I consider what is perhaps the simplest neural network, the venerable Perceptron (Rosenblatt 1958): given a set of examples falling into two classes, find the linear discriminant surface separating the two sets, if one exists. In this context, my goal is twofold: (1) give the optimal (or Bayesian) decision theory a geometric interpretation and (2) use that geometric content to design a deceptively simple method for computing the single Perceptron with the best possible generalization probability.

As shown in Figure 1, a Perceptron is a network consisting of $N$ (binary or real) inputs $x_i$ and a single binary output neuron. The output neuron sums all inputs with the corresponding synaptic weights $w_i$ and then performs a threshold operation according to

$$\sigma = \mathrm{sign}\left(\sum_{i=1}^{N} x_i w_i - \theta\right) = \mathrm{sign}\bigl[(\vec{x}, \vec{w}) - \theta\bigr] = \pm 1. \tag{1.1}$$

In general, the synaptic weights and the threshold can be arbitrary real numbers. A network where $\vec{w}$ has only binary components is called a binary Perceptron. The output unit has binary values $\sigma = \pm 1$ and labels the class to which the input vector belongs. Both the weight vector $\vec{w} = (w_1, w_2, \ldots, w_N)$
Figure 1: The Perceptron as a neural network.
and the threshold $\theta$ should be learned from a set of $M$ examples whose class is known (the training set). The Perceptron and its variants (Hertz et al. 1991) are among the few networks for which basic properties like the maximal information capacity (how many random¹ examples can be faultlessly stored in the network) or the classification error of certain learning algorithms (how many examples are needed in order to achieve a given generalization probability) can be obtained analytically. Such calculations are done by first defining a learning protocol. For example, one assumes that the examples are independently and identically sampled from some stationary probability distribution; that one has access to a very large set of such examples for both training and testing purposes; that no matter how many examples one generates, there is always a single Perceptron network solving the problem without errors; and so forth. Almost all statistical mechanical calculations also require the thermodynamic limit $N \to \infty$, $M \to \infty$, $\alpha = M/N = \mathrm{const}$. These assumptions are needed for technological reasons—some integrals

¹ By random vectors we mean in the following vectors sampled independently and identically from a constant distribution defined on the surface of the unit sphere $(\vec{x}, \vec{x}) = 1$.
must be concretely evaluated—but also because otherwise the problem is mathematically not well defined. In most practical situations, however, one is confronted with situations that do not satisfy some of these assumptions: one has a relatively small set of examples; their distribution is unknown and often not stationary; the examples are not independently sampled; and so forth. In a strict sense, such problems are mathematically not quantifiable. However, even a small set of examples contains some information about the nature of the problem (Pólya 1968). The basic question is then, among all possible networks trained on this set, which one has possibly the best generalization probability? As explained below, understanding the geometry of the version space leads to the correct answer and to algorithms for computing it.

Our protocol assumes that both the training and test examples are drawn independently and identically from the distribution $P(\vec{\xi})$. The simplest way to generate such a rule is to define a teacher network with the same structure as shown in Figure 1, providing the correct class label $\sigma = \pm 1$ for each input vector. Given a set of $M$ examples $\{\vec{\xi}^{(\nu)}\}_{\nu=1}^{M}$, $\vec{\xi}^{(\nu)} \in \mathbb{R}^N$, and their corresponding binary class labels $\sigma_\nu = \pm 1$, the task of the learning algorithm is to find a student network that mimics the teacher by choosing the network parameters that correctly classify the training examples. Equation 1.1 implies that the learning task consists of finding a hyperplane $(\vec{w}, \vec{x}) = \theta$ separating the positive from the negative labeled examples $\vec{x} \in \{\vec{\xi}^{(\nu)}\}_{\nu=1}^{M}$:

$$(\vec{w}, \vec{\xi}^{(\alpha)}) \ge \theta_1, \quad \sigma_\alpha = 1,$$
$$(\vec{w}, \vec{\xi}^{(\beta)}) \le \theta_2, \quad \sigma_\beta = -1, \tag{1.2}$$
$$\theta_1 \ge \theta \ge \theta_2.$$

The typical situation is that either none or infinitely many such solutions exist. The vector space whose points are the vectors $(\vec{w}, \theta)$ is called the version space, and its subspace satisfying equation 1.2 the solution polyhedron. Most often one wishes to minimize the average probability of class error for a new $\vec{x}$ vector drawn from $P(\vec{\xi})$. In other situations another network, one that on average makes more errors but avoids very "bad" ones, is preferable. Yet another problem occurs when the data provided by the teacher are noisy (Opper and Haussler 1992). In what follows we shall restrict ourselves to the standard problem of minimal average classification error.

In theory, one knows that for a given training set, the optimal Bayes decision (Opper and Haussler 1991) implies an average over all $\{\vec{w}, \theta\}$ solutions satisfying equation 1.2. Since each solution is perfectly consistent with the training set, in the absence of any other a priori knowledge, one must consider them as equally probable. This is not the case, for instance, in the presence of noise. The best strategy is then to associate with each point in the version space a Boltzmann weight whose energy is the squared error function and whose temperature depends on the noise strength (Opper and Haussler 1992).
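To make equation 1.2 concrete, here is a minimal sketch that tests whether a candidate $(\vec{w}, \theta)$ belongs to the version space of a teacher-labeled sample; for simplicity a single threshold $\theta$ is used, and all function and variable names are my own:

```python
import numpy as np

def in_version_space(w, theta, xs, sigmas):
    # (w, x) - theta must have the sign of the label for every training example
    margins = xs @ w - theta
    return bool(np.all(sigmas * margins > 0.0))

rng = np.random.default_rng(0)
teacher = rng.normal(size=5)
teacher /= np.linalg.norm(teacher)
xs = rng.normal(size=(20, 5))
sigmas = np.sign(xs @ teacher)             # labels produced by the teacher, theta = 0
print(in_version_space(teacher, 0.0, xs, sigmas))   # True: the teacher is a solution
```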
For examples independently and identically drawn from a constant distribution ($P(\vec{\xi}) = \mathrm{const}$), Watkin (1993) has shown that in the thermodynamic limit the single Perceptron corresponding to the center of mass of the solution polyhedron has the same generalization probability as the Bayes decision. On the practical side, known learning algorithms like Adaline (Widrow and Hoff 1960; Diedrich and Opper 1987) or the maximal stability Perceptron (MSP) (Vapnik 1982; Anlauf and Biehl 1989; Ruján 1993) are not Bayes optimal. In fact, if all input vectors have the same length, then the maximal stability Perceptron corresponds to the center of the largest spherical cone inscribed into the solution polyhedron, as shown in Section 4.

There have been several attempts at developing learning algorithms approximating the Bayes decision. Watkin (1993) used a randomized AdaTron algorithm to sample the version space. More recently, Bouten et al. (1995) defined a class of convex functions whose minimum lies within the solution polyhedron. By changing the function's parameters, one can obtain a minimum that is a good approximation of the known Bayes solution. This idea is somewhat similar to the notion of the analytic center of a convex polytope introduced by Sonnevend (1988) and used extensively for designing fast linear programming algorithms (Vaidya 1990). A very promising but not yet fully exploited approach (Opper 1993) uses the Thouless-Anderson-Palmer (TAP) equations originally developed for the spin-glass problem.

In this paper I propose a quite different method, based on an analogy to classical, ergodic billiards. Considering the solution polyhedron as a dynamic system, a long trajectory is generated and used to estimate the center of mass of the billiard. The same idea can be applied to other optimization problems as well. Questions related to the theory of billiards are briefly considered in Section 2. Section 3 sets the stage by presenting an elementary geometric analysis in two dimensions. The more elaborate geometry of the Perceptron version space is discussed in Section 4. An elementary algorithmic implementation for open polyhedral cones and their projective geometric closures is summarized in Section 5. Numerical results and a comparison to known analytic bounds and to other learning algorithms can be found in Section 6. Conclusions and further prospects are summarized in Section 7.

2 Billiards

A billiard is usually defined as a closed space region (compact set) $P \subset \mathbb{R}^N$. The boundaries of a billiard are usually piecewise smooth functions. Within these boundaries, a point mass (ball) moves freely, except for the elastic collisions with the enclosing walls. Hence, the absolute value of the momentum is preserved, and the phase space, $B = P \times S^{N-1}$, is the direct product of $P$ and $S^{N-1}$, the surface of the $N$-dimensional unit velocity sphere. Such a simple Hamiltonian dynamics defines a flow and its
Poincaré map an automorphism. Mathematicians have defined a finely tuned hierarchy of notions related to such dynamic systems. For instance, simple ergodicity as implied by the Birkhoff-Khinchin theorem means that the average of any integrable function defined on the phase space over a single but very long trajectory equals the spatial mean (except for a set of zero measure). Furthermore, integrable functions invariant under the dynamics must be constant. From a practical point of view, this means that almost all very long trajectories will cover the phase space uniformly. Properties like mixing (Kolmogorov mixing) are stronger than ergodicity: they require the flow to mix uniformly different subsets of $B$. In hyperbolic systems one can go even further, construct Markov partitions defined on symbolic dynamics, and eventually prove related central limit theorems.

Not all convex billiards are ergodic. Notable exceptions are ellipsoidal billiards, which can be solved by a proper separation of variables (Moser 1980). Already Jacobi knew that a trajectory started close to and along the boundaries of an ellipse cannot reach a central region bounded by the so-called caustics. In addition to billiards that can be solved by a separation of variables, there are a few other exactly soluble polyhedral billiards (Kuttler and Sigillito 1984). Such solutions are intimately related to the reflection method—the billiard tiles the entire space perfectly. A notable example is the equilateral triangle billiard, first solved by Lamé in 1852. Other examples of such integrable billiards can be obtained by mapping an exactly soluble one-dimensional, many-particle system into a one-particle, high-dimensional billiard (Krishnamurthy et al. 1982). Apart from these exceptions, small perturbations in the form of the billiard usually destroy integrability and lead to chaotic behavior. For example, the stadium billiard (two half-circles joined by two parallel lines) is ergodic in a strong sense; the metric entropy is nonvanishing (Bunimovich 1979). The dynamics induced by the billiard is hyperbolic if at any point in phase space there are both expanding (unstable) and shrinking (stable) manifolds. A famous example is Sinai's reformulation of the Lorentz gas problem. Deep mathematical methods were needed to prove the Kolmogorov mixing property and to construct the Markov partitions for the symbolic dynamics of such systems (Bunimovich and Sinai 1980).

The question of whether a particular billiard is ergodic can be decided in principle by solving the Schrödinger problem for a free particle trapped in the billiard box. If the eigenfunctions corresponding to the high-energy modes are roughly constant, then the billiard is ergodic. Only a few general results are known for such quantum problems. In fact, I am not aware of theoretical results concerning the ergodic properties of convex polyhedral billiards in high dimensions. If all angles of the polyhedra are rational, then the billiard is weakly ergodic in the sense that the velocity direction will reach only rational angles (relative to the initial direction). In general, as long as two neighboring trajectories collide with the same polyhedral faces,
their distance will grow only linearly. Once they are far enough apart to collide with different faces of the polyhedron, their distance will abruptly increase. Hence, except for very special cases with high symmetry, it seems unlikely that the high-dimensional convex polyhedra generated by the training examples will fail to be ergodic.

3 A Simple Geometric Problem

For the sake of simplicity, let us illustrate the approach in a two-dimensional setting. In the next section we will show how the concepts developed here generalize to the more challenging Perceptron problem.

Let $P$ be a closed convex polygon and $\vec{v}$ a given unit vector, defining a particular direction in $\mathbb{R}^2$. The direction of the vector $\vec{v}$ can also be described by the angle $\phi$ it makes with the x-axis. Next, construct the line perpendicular to $\vec{v}$ that halves the area of the polygon $P$, $A_1 = A_2$ (see Fig. 2a). In the next section we will show that this geometric construction is the analog of the Bayes decision in version space. Choosing a set of vectors $\vec{v}$ oriented at different angles leads to the set of "Bayes lines" seen in Figure 2b. It is obvious that these lines do not intersect at one single point. The same task is computationally not feasible in a high-dimensional version space. It is certainly more economical to compute and store a single point $\vec{r}_0$, which optimally represents the full information contained in Figure 2b.

As evident from Figure 2b, the majority of lines passing through $\vec{r}_0$ will not partition $P$ into equal areas but will make some mistakes, denoted by $\Delta A = A_1 - A_2 \neq 0$. $\Delta A$ depends on both the direction of $\vec{v}$ and the polygon $P$, $\Delta A = \Delta A(\vec{v}, P)$. Different optimality criteria can be formulated depending on the actual application. The usual approach corresponds to minimizing the squared area-difference averaged over all possible $\vec{v}$ directions:

$$\vec{r}_0 = \arg\min\,\langle(\Delta A)^2\rangle = \arg\min \int (\Delta A)^2(\vec{v})\,p(\vec{v})\,d\vec{v}, \tag{3.1}$$
where $p(\vec{v})$ is some a priori known distribution of the vectors $\vec{v}$. Another possible criterion optimizes the worst-case loss over all directions:

$$\vec{r}_1 = \arg\inf\sup\,\bigl\{(\Delta A)^2\bigr\}. \tag{3.2}$$
In what follows, the point $\vec{r}_0$ is called the Bayes point. The calculation of the Bayes point according to equation 3.1 is computationally feasible, but a lot of computer power is still needed. A good estimate
Figure 2: (a) Halving the area along a given direction. (b) The resulting Bayes lines.
of the Bayes point is given by the center of mass of the polygon $P$:

$$\vec{S} = \frac{\int_P \vec{r}\,\rho(\vec{r})\,dA}{\int_P \rho(\vec{r})\,dA}, \tag{3.3}$$

where $\rho(\vec{r}) = \mathrm{const}$ is the surface mass density.
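For constant density, the integrals in equation 3.3 reduce to the standard shoelace centroid formula. A minimal sketch, applied to the example polygon used in Table 1 below:

```python
import numpy as np

def centroid(poly):
    # Area centroid of a simple polygon (shoelace formula, constant density)
    x, y = np.asarray(poly, dtype=float).T
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y                # signed twice-areas of the edge fans
    A = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * A)
    cy = ((y + yn) * cross).sum() / (6.0 * A)
    return cx, cy

P = [(1, 0), (4, 6), (9, 4), (11, 0), (9, -2)]
print(centroid(P))                         # -> (6.1250, 1.6667), cf. Table 1
```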
Table 1: Exact Coordinates (x, y) of the Bayes Point for the Polygon P = (1, 0), (4, 6), (9, 4), (11, 0), (9, −2) and Its Various Estimates: Center of Mass, Maximal Inscribed Circle, Trajectories of Different Lengths.

Method                       x        σx      y        σy      ⟨ΔA²⟩
Bayes point                  6.1048   —       1.7376   —       1.4425
Center of mass               6.1250   —       1.6667   —       1.5290
Largest inscribed circle     5.4960   —       2.0672   —       7.3551
Billiard—10 collisions       6.0012   0.701   1.6720   0.490   7.0265
Billiard—10² collisions      6.1077   0.250   1.6640   0.095   2.6207
Billiard—10³ collisions      6.1096   0.089   1.6686   0.027   1.6774
Billiard—10⁴ collisions      6.1232   0.028   1.6670   0.011   1.5459
Billiard—10⁵ collisions      6.1239   0.010   1.6663   0.004   1.5335
Billiard—10⁶ collisions      6.1247   0.003   1.6667   0.003   1.5295
Table 1 presents exact numerical results for the x and y coordinates of both the Bayes point and the center of mass. The center of mass is an excellent approximation of the Bayes point. In very high dimensions, as shown by T. Watkin (1993), $\vec{r}_0 \to \vec{S}$.

A polyhedron can be described by either the list of its vertices or the set of vectors normal to its facets. The transition from one representation to the other requires exponentially many arithmetic operations as a function of dimension. Therefore, in typical classification tasks, this transformation is not practicable. For "round" polygons (polyhedra), the center of the smallest circumscribed or the largest inscribed circle (sphere) is a good choice for approximating the Bayes point in the vertex and the facet representation, respectively. Since in our case the polyhedron generalizing $P$ is determined by a set of normal vectors, only the largest inscribed circle (sphere) is a feasible approximation (see Fig. 3). The numerical values of the $(x_R, y_R)$ coordinates of its center point are displayed in Table 1. A better approximation would be to compute the center of the largest-volume inscribed ellipsoid (Ruján, in preparation), a problem also common in nonlinear optimization (Khachiyan and Todd 1993). The best-known algorithms are of order $O(M^{3.5})$ operations, where $M$ is in our case the number of examples; additional logarithmic factors have been neglected (for details, see Khachiyan and Todd 1993).

The purpose of this paper is to show that a reasonable estimate of $\vec{r}_0$ can be obtained faster by following the (ergodic) trajectory of an elastic ball inside the polygon, as shown in Figure 4a for four collisions. Figure 4b shows the trajectory of Figure 4a from another perspective, by performing an appropriate reflection at each collision edge. A trajectory is periodic if
Figure 3: The largest inscribed circle. The center of mass (cross) is plotted for comparison.
after a finite number of such reflections the polygon $P$ is mapped onto itself. Fully integrable systems correspond in this respect to polygons that fill the whole space without holes (that this geometric point of view applies also to other fully integrable systems is nicely exposed in Sutherland 1985). If the dynamics is ergodic in the sense discussed in Section 2, then a long enough trajectory should cover the surface of the polygon without holes. By computing the total center of mass of the trajectory, one should then obtain a good estimate of the center of mass of the polygon. The question of whether a billiard formed by a "generic" convex polytope is ergodic is to my knowledge not solved. Extensive numerical calculations are possible only in low dimensions. The extent to which the trajectory covers $P$ after 1000 collisions is visualized in Figure 5. By continuing this procedure, one can convince oneself that all holes are filled up, so that the trajectory will visit every point inside $P$. The next question is whether the area of $P$ is homogeneously covered by the trajectory. The numerical results summarized in Table 1 were obtained by averaging over 100 different trajectories for each given length. As the length of the trajectory is increased, these averages converge to the center of mass. Also displayed are the empirical standard deviations $\sigma_{x,y}$ of the trajectory center-of-mass coordinates and the average squared area-difference, $\langle\Delta A^2\rangle$.
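The two-dimensional experiment is compact enough to sketch in full. The code below bounces a ball inside the polygon of Table 1 and, following the weighting scheme described in Section 5, assigns to each segment midpoint a mass equal to the segment length; for a generic initial direction the estimate approaches the center of mass. The starting point and collision count are my own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([(1, 0), (4, 6), (9, 4), (11, 0), (9, -2)], dtype=float)
edges = [(a, b - a) for a, b in zip(P, np.roll(P, -1, axis=0))]

r = P.mean(axis=0)                          # some interior starting point
phi = rng.uniform(0.0, 2.0 * np.pi)
v = np.array([np.cos(phi), np.sin(phi)])    # unit initial direction

com, mass = np.zeros(2), 0.0
for _ in range(100_000):
    # first positive crossing of an edge line = the exit wall (convexity)
    tau_min, n_hit = np.inf, None
    for a, e in edges:
        n = np.array([e[1], -e[0]])         # normal of the edge line
        vn = n @ v
        if abs(vn) < 1e-12:
            continue
        tau = (n @ (a - r)) / vn            # flight time to this line
        if 1e-9 < tau < tau_min:
            tau_min, n_hit = tau, n / np.linalg.norm(n)
    r_new = r + tau_min * v                 # collision point
    com += tau_min * 0.5 * (r + r_new)      # midpoint weighted by segment length
    mass += tau_min
    r = r_new
    v = v - 2.0 * (v @ n_hit) * n_hit       # elastic reflection

print(com / mass)                           # -> ~(6.125, 1.667), cf. Table 1
```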
Figure 4: (a) A trajectory with four collisions. (b) Its “straightened” form.
Figure 5: A trajectory with 1000 collisions. To improve the resolution, a dotted line has been used.
4 The Geometry of Perceptrons

Consider a set of $M$ training examples, consisting of $N$-dimensional vectors $\vec{\xi}^{(\nu)}$ and their classes $\sigma_\nu$, $\nu = 1, \ldots, M$. Now let us introduce the $(N+1)$-dimensional vectors $\vec{\zeta}^{(\nu)} = (\sigma_\nu \vec{\xi}^{(\nu)}, -\sigma_\nu)$ and $\vec{W} = (\vec{w}, \theta)$. In this representation, equation 1.2 becomes equivalent to a standard set of linear inequalities:

$$(\vec{W}, \vec{\zeta}^{(\nu)}) \ge \Delta > 0. \tag{4.1}$$
The parameter $\Delta = \min_{\{\nu\}} (\vec{W}, \vec{\zeta}^{(\nu)})$ is called the stability of the linear inequality system. A bigger $\Delta$ implies a solution that is more robust against small changes in the example vectors.

The vector space whose points are the examples $\vec{\zeta}$ is the example space. In this space, $\vec{W}$ corresponds to the normal vector of an $(N+1)$-dimensional hyperplane. The version space is the space of the vectors $\vec{W}$. Hence, a given example vector $\vec{\zeta}$ corresponds here to the normal of a hyperplane. The inequalities (equation 4.1) define a convex polyhedral cone whose boundary hyperplanes are determined by the training set $\{\vec{\zeta}^{(\nu)}\}_{\nu=1}^{M}$.

How can one use the information contained in the example set for making the best average prediction on the class label of a new vector drawn from
the same distribution? Each newly presented example $\vec{\zeta}^{(\mathrm{new})}$ corresponds to a hyperplane in version space. The direction of its normal is defined up to a $\sigma = \pm 1$ factor, the corresponding class. The best possible decision for the classification of the new example follows the Bayes scheme: for each new (test) example, generate the corresponding hyperplane in version space. If this hyperplane does not intersect the solution polyhedron, consider the normal to be positive when pointing to the part of the version space containing the solution polyhedron; hence, all Perceptrons satisfying equation 4.1 will classify the new example unanimously. If the hyperplane cuts the solution polyhedron in two parts, point the normal toward the bigger half. Therefore, the decision that minimizes the average generalization error is given by evaluating the average measure of pro versus the average measure of contra votes of all Perceptrons making no errors on the training set. We see that the Bayes decision is analogous to the geometric problem described in the previous section. However, the solution polyhedral cone is either open or is defined on the unit $(N+1)$-dimensional hypersphere (see also Fig. 6b for an illustration).

The practical question is how to find simple approximations for the Bayes point. One possibility is to consider the Perceptron with maximal stability, defined as

$$\vec{W}_{\mathrm{MSP}} = \arg\max_{\vec{W}} \Delta; \qquad (\vec{W}, \vec{W}) = 1, \tag{4.2}$$

or, equivalently, as

$$\vec{w}_{\mathrm{MSP}} = \arg\max_{\vec{w}} \{\theta_1 - \theta_2\}; \qquad (\vec{w}, \vec{w}) = 1, \tag{4.3}$$
where $\theta = (\theta_1 + \theta_2)/2$ and $\Delta = (\theta_1 - \theta_2)/2$. The quadratic conditions $(\vec{W}, \vec{W}) = 1$ [$(\vec{w}, \vec{w}) = 1$] are necessary because otherwise one could multiply $\vec{w}$ and $\theta_{1,2}$ by a large number, making $\theta_1 - \theta_2$ and thus $\Delta$ arbitrarily large. Equation 4.3 has a very simple geometric interpretation, shown in Figure 6. Consider the convex hulls of the vectors $\vec{\xi}^{(\alpha)}$ belonging to the positive examples, $\sigma_\alpha = 1$, and of those in the negative class, $\vec{\xi}^{(\beta)}$, $\sigma_\beta = -1$, respectively. According to equation 4.3, the Perceptron with maximal stability corresponds to the slab of maximal width one can put between the two convex hulls (the "maximal dead zone" [Lampert 1969]). Geometrically, this problem is equivalent (dual) to finding the direction of the shortest line segment connecting the two convex hulls (the minimal connector problem). Since the dual problem minimizes a quadratic function subject to linear constraints, it is a quadratic programming problem. By choosing $\theta = (\theta_1 + \theta_2)/2$ (dotted line in Fig. 6a), one obtains the MSP.

The direction of $\vec{w}$ is determined by at most $N + 1$ vertices taken from both convex hulls, called active constraints. Figure 6a shows a simple two-
dimensional example; the active constraints are labeled A, B, and C, respectively. The version space is three-dimensional, as shown in Figure 6b. The three planes represent the constraints imposed by the examples A (left plane), B (right plane), and C (lower plane). The bar points from the origin to the point defined by the MSP solution. The sphere corresponds to the normalization constraint. If the example vectors $\vec{\xi}^{(\nu)}$ all have the same length $\sqrt{N}$, then equations 4.1 and 4.2 imply that the distances between $\vec{W}_{\mathrm{MSP}}$ and the hyperplanes corresponding to active constraints are all equal to $\Delta_{\max}/\sqrt{N}$. All other hyperplanes participating in the polyhedral cone are farther away. Accordingly, the MSP corresponds to the center of the largest circle inscribed into the spherical triangle defined by the intersection of the unit sphere with the solution polyhedron—the point where the bar intersects the sphere in Figure 6b. A fast algorithm for computing the minimal connector, requiring on average $O(N^2 M)$ operations and $O(N^2)$ storage, can be found in Ruján (1993).

5 How to Play Billiards in Version Space

Each billiard game starts by first placing the ball(s) on the table. This is not always a trivial task. In our case, the MSP algorithm (Ruján 1993) does it or signals that a solution does not exist. The trajectory is initiated by generating at random a unit direction vector $\vec{v}$ in version space.

The basic step consists of finding out where—on which hyperplane—the next collision will take place. The idea is to compute how much time the ball needs until it eventually hits each one of the $M$ hyperplanes. Given a point $\vec{W} = (\vec{w}, \theta)$ in version space and a unit direction vector $\vec{v}$, let us denote the distance along the hyperplane normal $\vec{\zeta}$ by $d_n$ and the component of $\vec{v}$ perpendicular to the hyperplane by $v_n$. In this notation, the flight time needed to reach this plane is given by

$$d_n = (\vec{W}, \vec{\zeta}), \qquad v_n = (\vec{v}, \vec{\zeta}), \qquad \tau = -\frac{d_n}{v_n}. \tag{5.1}$$

After computing all $M$ flight times, one looks for the smallest positive one, $\tau_{\min} = \min_{\{\nu\}} \tau > 0$. The collision will take place on the corresponding hyperplane. The new point $\vec{W}'$ and the new direction $\vec{v}'$ are calculated as

$$\vec{W}' = \vec{W} + \tau_{\min}\,\vec{v}, \tag{5.2}$$
$$\vec{v}' = \vec{v} - 2 v_n\,\vec{\zeta}. \tag{5.3}$$
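A minimal sketch of one such step, transcribing equations 5.1 to 5.3 (the rows of `zetas` are the example vectors $\vec{\zeta}^{(\nu)}$, assumed here to be normalized so that reflection 5.3 preserves the speed; the escape test anticipates the flipper rule described below):

```python
import numpy as np

def bounce(W, v, zetas, tau_max=1e4):
    d = zetas @ W                          # d_n = (W, zeta)            (5.1)
    vn = zetas @ v                         # v_n = (v, zeta)
    with np.errstate(divide="ignore", invalid="ignore"):
        tau = -d / vn                      # flight times to all hyperplanes
    tau = np.where(tau > 1e-12, tau, np.inf)
    nu = int(np.argmin(tau))               # wall hit first
    if tau[nu] > tau_max:
        return None                        # escape detected: restart from the MSP
    W_new = W + tau[nu] * v                #                            (5.2)
    v_new = v - 2.0 * vn[nu] * zetas[nu]   # elastic reflection         (5.3)
    return W_new, v_new
```

Iterating this map and accumulating the normalized segment midpoints, weighted by their lengths as described next, yields the center-of-mass estimate.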
This procedure is illustrated in Figure 7a. In order to estimate the center of mass of the trajectory, one must first normalize both $\vec{W}$ and $\vec{W}'$. By assuming
Figure 6: (a) The Perceptron with maximal stability in example space. (b) The solution polyhedron (only the bounding examples A, B, and C are shown). See text for details.
Figure 7: Bouncing in version space. (a) Euclidean geometry. (b) Spherical geometry.
a constant line density, one assigns to the (normalized!) center of the segment, $(\vec{W}' + \vec{W})/2$, the length of the vector $\vec{W}' - \vec{W}$. This is then added to the actual center of mass—as when adding two parallel forces of different lengths. In high dimensions ($N > 5$), however, the difference between the mass of the full $(N+1)$-dimensional solution polyhedron and the mass of its bounding $N$-dimensional boundaries becomes negligible. Hence, we could just as well record the collision points, assign them the same mass density, and construct their average.

Note that by continuing the trajectory beyond the first collision plane, one can also sample regions of the solution space where the network $\vec{W}$ makes one, two, and more mistakes (the number of mistakes equals the number of crossed boundary planes). This additional information can be used for taking an optimal decision when the examples are noisy (Opper and Haussler 1992).

Since the polyhedral cone is open, the implementation of this algorithm must take into account the possibility that the trajectory might escape to infinity. The minimal flight time then becomes very large: $\tau > \tau_{\max}$. When this exception is detected, a new trajectory is started from the MSP point in yet another random direction. Hence, from a practical point of view, the polyhedral solution cone is closed by a spherical shell with radius $\tau_{\max}$ acting as a special "scatterer." This flipper procedure is iterated until enough data are gathered.
If we are class conscious and want to remain in the billiard club, we must do a bit more. The solution polyhedral cone can be closed by normalizing the version space vectors. The billiard is now defined on a curved space. However, the same strategy works here as well if between subsequent collisions one follows geodesics instead of straight lines. Figure 7b illustrates the change in direction for a small time step, leading to the well-known geodesic differential equations on the unit sphere:

$$\dot{\vec{W}} = \vec{v}, \tag{5.4}$$
$$\dot{\vec{v}} = -\vec{W}. \tag{5.5}$$
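Between two collisions, these equations integrate in closed form: the trajectory is a great circle. A small sketch of one geodesic step, assuming $\vec{W}$ and $\vec{v}$ are orthonormal:

```python
import numpy as np

def geodesic_step(W, v, t):
    # Exact solution of equations 5.4-5.5 on the unit sphere
    return W * np.cos(t) + v * np.sin(t), -W * np.sin(t) + v * np.cos(t)
```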
The solution of these equations costs additional resources. Actually, solving the differential equations is strictly necessary only when there are no bounding planes on the actual horizon (assuming light travels along Euclidean straight lines). Once one or more boundaries are "visible," the shortest flight time can be evaluated directly, since in the two geometries the flight time is monotonically deformed. Even so, the flipper procedure is obviously faster. Both variants deliver interesting additional information, like the mean escape time of a trajectory or the number of times a given border plane has been bounced on. The collision frequency classifies the training examples according to their "surface" area in the solution polyhedron—a good measure of their relative "importance."

6 Results and Performance

This section contains the results of numerical experiments performed to test the billiard algorithm. First, the ball is placed inside the billiard with the MSP algorithm (as described in Ruján 1993). Next, a number of typically $O(N^2)$ collision points are generated. Since the computation of one collision point requires $M$ scalar products of $N$-dimensional vectors, the total load of this algorithm is $O(MN^3)$. The choice of $N^2$ collision points is somewhat arbitrary and is based on the following considerations. By using the billiard method, one generates many collision points lying on the borders of the solution polyhedron. We could try to use this information for approximating the solution polyhedron with an ellipsoidal cone. The number of free parameters involved in the fit is of the order $O(N^2)$; hence, at least a constant times that many points are needed. Such a fitted ellipsoid also delivers an estimate of the decision uncertainty. If one is not interested in this information, it is enough to monitor how one or more projections of the center-of-mass estimate change as the number of collisions increases. Once these projections become stable, the program can be stopped. T. Watkin (1993) argues that the number of sampling points should be of $O(1)$. I find his arguments unconvincing. For example, Figure 8 shows how
Figure 8: Average of the generalization probability as a function of the number of collisions, N = 100, M = 1000.
the average generalization probability changes as the number of collisions increases up to $N^2$ steps. Finding a point within the solution polyhedron is algorithmically equivalent to solving a linear programming problem. Therefore, his method of sampling the version space with randomly started AdaTron gradient descent (or any other Perceptron learning method) requires at least $O(N^2 M)$ operations per sampling point, compared to $O(NM)$ per collision in the billiard method.

In a first test, a known "teacher" Perceptron $\vec{T}$ was used to label the randomly generated examples for training. The generalization probability $G(\alpha)$, $\alpha = M/N$, was then computed by measuring the overlap between the resulting solution ("student") Perceptron and the teacher Perceptron:

$$G(\alpha) = 1 - \frac{1}{\pi}\cos^{-1}(\vec{T}, \vec{w}); \qquad (\vec{T}, \vec{T}) = (\vec{w}, \vec{w}) = 1. \tag{6.1}$$
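A direct transcription of equation 6.1 (a sketch; both vectors are assumed normalized, and the example numbers are illustrative):

```python
import numpy as np

def generalization(T, w):
    # G = 1 - arccos(T . w) / pi for unit vectors T, w   (equation 6.1)
    return 1.0 - np.arccos(np.clip(T @ w, -1.0, 1.0)) / np.pi

rng = np.random.default_rng(2)
T = rng.normal(size=100); T /= np.linalg.norm(T)
w = T + 0.1 * rng.normal(size=100); w /= np.linalg.norm(w)
print(generalization(T, w))                # close to 1 for a well-aligned student
```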
The numerical results obtained from 10 different realizations for N = 100 are compared with the theoretical Bayes learning curve in Figure 9. Figure 10 shows a comparison between the billiard results and the MSP results. Although the differences seem small compared to the error bars, the billiard solution was in all realizations consistently superior to the MSP. Figure 11 shows how the number of constraints (examples) bordering the solution polyhedron changes with increasing α = M/N.
Figure 9: The theoretical Bayes learning curve (solid line) versus billiard results obtained in 10 independent trials. $G(\alpha)$ is the generalization probability, $\alpha = M/N$, $N = 100$.
As the number of examples increases, the probability of escape from the solution polyhedron decreases, and the network reaches its storage capacity. Figure 12 shows the average number of collisions before escape as a function of the classification error, parameterized through $\alpha = M/N$. Therefore, by measuring either the escape rate or the number of "active" examples, we can estimate the generalization error without using test examples. Note, however, that such calibration graphs should be calculated for each specific distribution of the input vectors.

Randomly generated training examples lead to rather isotropic polyhedra, as illustrated by the small difference between the Bayes and the MSP learning curves (see Fig. 10). Therefore, we expect that the billiard approach leads to bigger improvements when applied to real-life problems with strongly anisotropic solution polyhedra. Similar improvements can be expected when using constructive methods for multilayer Perceptrons that iteratively use the Perceptron algorithm (Marchand et al. 1989). Such procedures have been used, for example, for classifying handwritten digits (Knerr et al. 1990). For all such example sets available to me, the introduction of the billiard algorithm on top of the MSP leads to consistent improvements of up to 5% in classification probability.
Figure 10: Average generalization probability, same parameters as in Figure 9. Lower curve: the maximal stability Perceptron algorithm. Upper curve: the billiard algorithm.
A publicly available data set known as the sonar problem (Gorman and Sejnowski 1988; Fahlman N.d.) considers the problem of deciding between rocks and mines from sonar data. The input space is N = 60 dimensional, and the whole set consists of 111 mine and 97 rock examples. We go down the list of rock signals putting alternate members into the training and test sets; we do the same with the set of mine signals. Since the data sets are sorted by increasing azimuth, this gives us training and test sets of equal size (104 signals each) with the population of azimuth angles matched as closely as possible.

Gorman and Sejnowski (1988) consider a three-layer feedforward neural network architecture with different numbers of hidden units, trained by the backpropagation algorithm. Their results are summarized and compared to ours in Table 2. By applying the MSP algorithm, we first found that the whole data (training plus test) set is linearly separable. Second, by using the MSP on the training set we obtain a 77.5% classification rate on the test set (compared to 73.1% in Gorman and Sejnowski 1988). Playing billiards leads to an 83.4% classification rate (in both cases, the training set was faultlessly classified). This amount of improvement is also typical for other applications.
Figure 11: Number of different identified polyhedral borders versus G(α), N = 100. Diamonds: maximal stability Perceptron, saturating at 101. Crosses: billiard algorithm.
Figure 12: Mean number of collisions before escaping the solution polyhedral cone versus G(α), N = 100.
Table 2: Results for the Sonar Classification Problem from Gorman and Sejnowski (1988).

Hidden units    % right on training set    SD     % right on test set    SD
0               79.3                       3.4    73.1                   4.8
0-MSP           100.0                      —      77.5                   —
0-Billiard      100.0                      —      83.4                   —
2               96.2                       2.2    85.7                   6.3
3               98.1                       1.5    87.6                   3.0
6               99.4                       0.9    89.3                   2.4
12              99.8                       0.6    90.4                   1.8
24              100.0                      0.0    89.2                   1.4

Note: 0-MSP is the maximal stability Perceptron; 0-Billiard is the Bayes billiard estimate.
The number of active examples (those contributing to the solution) was 42 for the MSP and 55 during the billiard.

By computing the MSP and the Bayes Perceptron, we did not use any information available on the test set. On the contrary, many networks trained by backpropagation are slightly adjusted to the test set by changing network parameters, such as output unit biases and/or activation function decision bounds. Other training protocols also allow such adjustments or generate a population of networks, from which a "best" one is chosen based on test set results. Although such adaptive behavior might be advantageous in many practical applications, it can be misleading when trying to infer the real capability of the trained network. In the sonar problem, for instance, we know that a set of Perceptrons separating faultlessly the whole data (training plus test) set is included in the version space. Hence, one could use the billiard or another method to find it. This would be an extreme example of "adapting" our solution to the test set. Such a procedure is especially dangerous when the test set is not "typical." Since in the sonar problem the data were divided into two equal sets, by exchanging the roles of the training and test sets one would expect results of similar quality. However, in this case, much weaker results (a 73.3% classification rate) are obtained. This shows that the two sets do not contain the same amount of information about the common Perceptron solution.

Looking at the results of Table 2, it is hard to understand how 104 training examples could substantiate the excellent average test set accuracy of 90.4 ± 1.8% for networks with 12 hidden units (841 free parameters). The method of
structural risk minimization (Vapnik 1982) uses uniform bounds for the generalization error in networks with different Vapnik-Chervonenkis (1971) dimensions. To establish a classification probability of 90%, one needs about 10 times more training examples for the linearly separable class of functions. A network with 12 hidden units certainly has a much larger capacity and requires correspondingly more examples. One could argue that each of the 12 hidden units has solved the problem on its own, so that the network acts as a committee machine. However, such a majority decision should be at best comparable to the Bayes estimate.
7 Conclusions and Prospects

The study of dynamic systems has led to many interesting practical applications in time-series analysis, coding, and chaos control. The elementary application of Hamiltonian dynamics presented in this paper demonstrates that the center of mass of a long dynamic trajectory bouncing back and forth between the walls of the convex polyhedral solution cone leads to a good estimate of the Bayes decision rule for linearly separable problems. Somewhat similar ideas have recently been applied to constrained nonlinear programming (Coleman and Li 1994).

Although the geometric view presented in this paper overemphasizes the role played by extremal (active) vertices of the example set and faultlessly trained Perceptrons, it is not difficult to make these estimates more robust. For example, some of the extremal examples can be removed to test, and improve, the stability of the MSP. Similarly, the solution polyhedron can be expanded to include solutions with nonzero training errors, allowing for learning noisy examples.

Since the Perceptron problem is equivalent to the linear inequalities problem (equation 4.1), it has the same algorithmic complexity as linear programming. The theory of convex polyhedra plays a central role in both mathematical programming and solving NP-hard problems such as the traveling salesman problem (Lawler and Lenstra 1984; Padberg and Rinaldi 1987). Viewed from this perspective, the ergodic theory of convex polyhedral billiards might provide new, effective tools for solving different combinatorial optimization problems (Ruján, in preparation). A big advantage of such ray-tracing algorithms is that they can be run in parallel by following several trajectories at the same time. The success of further applications depends on methods of making such simple dynamics strongly mixing. On the theoretical side, more general results, applicable to large classes of convex polyhedral billiards, are called for. In particular, a good estimate of the average escape (or typical mixing) time is needed in order to bound the average behavior of future "ergodic" algorithms.
Acknowledgments

I thank Manfred Opper for the analytic Bayes data and Bruno Eckhardt for discussions on billiards. For a LaTeX original of this article with PostScript figures and a C implementation of the billiard algorithm, point your browser at http://www.icbm.uni-oldenburg.de/~rujan/rujan.html.

References

Anlauf, J., and Biehl, M. 1989. The AdaTron: An adaptive Perceptron algorithm. Europhysics Letters 10, 687.
Bouten, M., Schietse, J., and Van den Broeck, C. 1995. Gradient descent learning in Perceptrons: A review of possibilities. Physical Review E 52, 1958.
Bunimovich, L. A. 1979. On the ergodic properties of nowhere dispersing billiards. Communications in Mathematical Physics 65, 295.
Bunimovich, L. A., and Sinai, Y. G. 1980. Markov partitions for dispersed billiards. Communications in Mathematical Physics 78, 247.
Coleman, T. F., and Li, Y. 1994. On the convergence of interior-reflective Newton methods for nonlinear minimization subject to bounds. Mathematical Programming 67, 189.
Diederich, S., and Opper, M. 1987. Learning of correlated patterns in spin-glass networks by local learning rules. Physical Review Letters 58, 949.
Fahlman, S. E. N.d. Collection of neural network data. Included in the public domain software am-6.0.
Gorman, R. P., and Sejnowski, T. J. 1988. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks 1, 75.
Hertz, J., Krogh, A., and Palmer, R. G. 1991. Introduction to the Theory of Neural Computation. Addison-Wesley, Reading, MA.
Khachiyan, L. G., and Todd, M. J. 1993. On the complexity of approximating the maximal inscribed ellipsoid for a polytope. Mathematical Programming 61, 137.
Knerr, S., Personnaz, L., and Dreyfus, G. 1990. Single layer learning revisited: A stepwise procedure for building and training a neural network. In Neurocomputing, F. Fogelman and J. Hérault, eds. Springer-Verlag, Berlin.
Krishnamurthy, H. R., Mani, H. S., and Verma, H. C. 1982. Exact solution of the Schrödinger equation for a particle in a tetrahedral box. Journal of Physics A 15, 2131.
Kuttler, J. R., and Sigillito, V. G. 1984. Eigenvalues of the Laplacian in two dimensions. SIAM Review 26, 163.
Lampert, P. F. 1969. Designing pattern categories with extremal paradigm information. In Methodologies of Pattern Recognition, M. S. Watanabe, ed., p. 359. Academic Press, New York.
Lawler, E. L., Lenstra, J. K., Rinnooy Kan, A. H. G., and Shmoys, D. B., eds. 1984. The Traveling Salesman Problem. John Wiley, New York.
Marchand, M., Golea, M., and Ruján, P. 1989. Convergence theorem for sequential learning in two-layer perceptrons. Europhysics Letters 11, 487.
Moser, J. 1980. Geometry of quadrics and spectral theory. In The Chern Symposium, Y. Hsiang et al., eds., p. 147. Springer-Verlag, Berlin.
Opper, M. 1993. Bayesian learning. Talk at Neural Networks for Physicists 3, Minneapolis, MN.
Opper, M., and Haussler, D. 1991. Generalization performance of Bayes optimal classification algorithm for learning a Perceptron. Physical Review Letters 66, 2677.
Opper, M., and Haussler, D. 1992. Calculation of the learning curve of Bayes optimal classification algorithm for learning a Perceptron with noise. In Proceedings of the IVth Annual Workshop on Computational Learning Theory (COLT91), Santa Cruz 1991, pp. 61–87. Morgan Kaufmann, San Mateo, CA.
Padberg, M., and Rinaldi, G. 1987. Optimization of a 532-city symmetric traveling salesman problem by branch and cut. Operations Research Letters 6, 1.
Padberg, M., and Rinaldi, G. 1988. A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. IASI preprint R. 247.
Pólya, G. 1968. Mathematics and Plausible Reasoning. Vol. 2: Patterns of Plausible Inference. 2d ed. Princeton University Press, Princeton, NJ.
Rosenblatt, F. 1958. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review 65, 386.
Ruján, P. 1993. A fast method for calculating the Perceptron with maximal stability. Journal de Physique (Paris) I 3, 277.
Ruján, P. In preparation. Ergodic billiards and combinatorial optimization.
Sonnevend, Gy. 1988. New algorithms based on a notion of "centre" (for systems of analytic inequalities) and on rational extrapolation. In Trends in Mathematical Optimization: Proceedings of the 4th French-German Conference on Optimization, Irsee 1986, K. H. Hoffmann, J.-B. Hiriart-Urruty, C. Lemarechal, and J. Zowe, eds., ISNM vol. 84. Birkhäuser, Basel.
Sutherland, B. 1985. An introduction to the Bethe Ansatz. In Exactly Solvable Problems in Condensed Matter and Relativistic Field Theory, B. S. Shastry, S. S. Jha, and V. Singh, eds., p. 1. Lecture Notes in Physics 242. Springer-Verlag, Berlin.
Vaidya, P. M. 1990. A new algorithm for minimizing a convex function over convex sets. In Proceedings of the 30th Annual FOCS Symposium, Research Triangle Park, NC, 1989, pp. 338–343. IEEE Computer Society Press, Los Alamitos, CA.
Vapnik, V. N. 1982. Estimation of Dependences Based on Empirical Data. Springer-Verlag, Berlin.
Vapnik, V. N., and Chervonenkis, A. Y. 1971. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications 16, 264.
Watkin, T. 1993. Optimal learning with a neural network. Europhysics Letters 21, 871.
Widrow, B., and Hoff, M. E. 1960. Adaptive switching circuits. In 1960 IRE WESCON Convention Record. IRE, New York.
Received August 14, 1995; accepted April 24, 1996.
Communicated by Raymond Watrous
Partial BFGS Update and Efficient Step-Length Calculation for Three-Layer Neural Networks

Kazumi Saito
Ryohei Nakano
NTT Communication Science Laboratories, 2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02 Japan
Second-order learning algorithms based on quasi-Newton methods have two problems. First, standard quasi-Newton methods are impractical for large-scale problems because they require N² storage space to maintain an approximation to an inverse Hessian matrix (N is the number of weights). Second, a line search to calculate a reasonably accurate step length is indispensable for these algorithms. In order to provide desirable performance, an efficient and reasonably accurate line search is needed. To overcome these problems, we propose a new second-order learning algorithm. The descent direction is calculated on the basis of a partial Broyden-Fletcher-Goldfarb-Shanno (BFGS) update with 2Ns memory space (s ≪ N), and a reasonably accurate step length is efficiently calculated as the minimal point of a second-order approximation to the objective function with respect to the step length. Our experiments, which use a parity problem and a speech synthesis problem, have shown that the proposed algorithm outperforms major learning algorithms. Moreover, it turned out that an efficient and accurate step-length calculation plays an important role in the convergence of quasi-Newton algorithms, and that the partial BFGS update greatly saves storage space without losing convergence performance.

1 Introduction

The backpropagation (BP) algorithm (Rumelhart et al. 1986) has been applied to various classes of problems, and its usefulness has been proved. However, even with a momentum term, this algorithm often requires a large number of iterations for convergence. Moreover, the user is required to determine appropriate parameters by trial and error. To overcome these drawbacks, a learning rate maximization method (LeCun et al. 1993), learning rate adaptation rules (Jacobs 1988; Silva and Almeida 1990), smart algorithms such as QuickProp (Fahlman 1988) and RPROP (Riedmiller and Braun 1993), and second-order learning algorithms (Watrous 1987; Barnard 1992; Battiti 1992; Møller 1993b) based on nonlinear optimization techniques (Gill et al. 1981) have been proposed. Each has achieved a certain degree of
success. Among these approaches, we believe that second-order learning algorithms should be investigated further because they theoretically have excellent convergence properties (Gill et al. 1981). At present, however, they have two problems.

One is scalability. Second-order algorithms based on Levenberg-Marquardt or quasi-Newton methods cannot suitably scale up to large problems. Levenberg-Marquardt algorithms generally require a large amount of computation until convergence, even for midscale problems that involve many hundreds of weights. They require O(N²m) operations to calculate the descent direction during any one iteration; N denotes the number of weights and m the number of examples. Standard quasi-Newton algorithms (Watrous 1987; Barnard 1992) are rarely applied to large-scale problems that involve more than many thousands of weights because they require N² storage space to maintain an approximation to an inverse Hessian matrix. In order to cope with this problem, the OSS algorithm (Battiti 1989) adopted the memoryless update (Gill et al. 1981); however, OSS may not provide desirable performance because the descent direction is calculated on the basis of a fast but rough approximation.

The other problem is the burden of step-length computation. A line search to calculate an adequate step length is indispensable for second-order algorithms that are based on quasi-Newton or conjugate gradient methods. Since an inaccurate line search may yield undesirable performance, a reasonably accurate line search is needed. However, an exact line search based on existing methods generally requires a nonnegligible computational load, at least a few function and/or gradient evaluations. Since it is widely recognized that successful convergence of conjugate gradient methods relies heavily on the accuracy of the line search, it seems rather difficult to increase the speed of conjugate gradient algorithms. In the SCG algorithm (Møller 1993b), although the step length is estimated by using the model trust region approach (Gill et al. 1981), this method can be regarded as an efficient one-step line search based on a one-sided difference equation. Fortunately, successful convergence of quasi-Newton algorithms is theoretically guaranteed even with an inaccurate line search if certain conditions are satisfied (Powell 1976), but efficient convergence cannot always be expected. Thus, we believe that if the step length can be calculated with greater efficiency and reasonable accuracy, quasi-Newton algorithms will work better in two respects: processing efficiency and convergence.

For large-scale problems that include a large number of redundant training examples, on-line algorithms (LeCun et al. 1993), which perform a weight update for each single example, will work more efficiently than an off-line counterpart. Although second-order algorithms are basically not suited to on-line updating, waiting for a sweep of many examples before updating is not reasonable. Toward this improvement, several attempts have been made to introduce "pseudo-on-line" updating into second-order algorithms, where updates are performed on smaller subsets of data (Møller
1993c; Kuhn and Herzberg 1990). Thus, if a better second-order algorithm is developed, its pseudo-on-line version will work more efficiently.

This paper is organized as follows. Section 2 describes a new second-order learning algorithm based on a quasi-Newton method, called BPQ, where the descent direction is calculated on the basis of a partial BFGS update and a reasonably accurate step length is efficiently calculated as the minimal point of a second-order approximation. Section 3 evaluates BPQ's performance in comparison with other major learning algorithms.

2 BPQ Algorithm

2.1 The Problem. Let {(x_1, y_1), . . . , (x_m, y_m)} be a set of examples, where x_t denotes an n-dimensional input vector and y_t a target value corresponding to x_t. In a three-layer neural network, let h be the number of hidden units, w_i (i = 1, . . . , h) the weight vector between all the input units and hidden unit i, and w_0 = (w_00, . . . , w_0h)^T the weight vector between all the hidden units and the output unit; w_i0 is a bias term, and x_t0 is set to 1. Note that a^T denotes the transpose of a vector a. Hereafter, the vector consisting of all parameters, (w_0^T, . . . , w_h^T)^T, is simply expressed as Φ; let N (= nh + 2h + 1) be the dimension of Φ. Then, learning in the three-layer neural network can be defined as the problem of minimizing the following objective function:

$$f(\Phi) = \frac{1}{2}\sum_{t=1}^{m} (y_t - z_t)^2, \tag{2.1}$$

where $z_t = z(x_t; \Phi) = w_{00} + \sum_{i=1}^{h} w_{0i}\,\sigma(w_i^T x_t)$, and $\sigma(u)$ represents a sigmoidal function, $\sigma(u) = 1/(1 + e^{-u})$. Note that we do not employ a nonlinear transformation at the output unit because it is not essential from the viewpoint of function approximation.

2.2 Quasi-Newton Method. A second-order Taylor expansion of f(Φ + ΔΦ) about Φ is given as f(Φ) + (∇f(Φ))^T ΔΦ + ½(ΔΦ)^T ∇²f(Φ) ΔΦ. If ∇²f(Φ) is positive definite, the minimal point of this expansion is given by ΔΦ = −(∇²f(Φ))⁻¹ ∇f(Φ). Newton techniques minimize the objective function f(Φ) by iteratively calculating the descent direction ΔΦ (Gill et al. 1981). However, since O(N³) operations are required to calculate (∇²f(Φ))⁻¹ directly, we cannot expect these techniques to scale up to large problems. Quasi-Newton techniques, on the other hand, calculate a matrix H through iterations in order to approximate (∇²f(Φ))⁻¹. The basic algorithm is described as follows (Gill et al. 1981):

Step 1: Initialize Φ_1, set H_1 = I (I: identity matrix), and set k = 1.
Step 2: Calculate the current descent direction: ΔΦ_k = −H_k g_k, where g_k = ∇f(Φ_k).
Step 3: Terminate the iteration if a stopping criterion is satisfied.
Step 4: Calculate the step length λ_k that minimizes f(Φ_k + λΔΦ_k).
Step 5: Update the weights: Φ_{k+1} = Φ_k + λ_k ΔΦ_k.
Step 6: If k ≡ 0 (mod N), set H_{k+1} = I; otherwise, update H_{k+1}.
Step 7: Set k = k + 1 and return to Step 2.
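Read as code, the loop above is short; the following sketch is a generic driver, where `f`, `grad`, `line_search`, and `update_H` are caller-supplied placeholders (for example, `update_H` could implement equation 2.2 below), not functions from the paper:

```python
# A minimal sketch of Steps 1-7; f, grad, line_search, update_H are
# caller-supplied placeholders.
import numpy as np

def quasi_newton(f, grad, phi, line_search, update_H, tol=1e-8, max_iter=10000):
    N = phi.size
    H = np.eye(N)                                   # Step 1
    g = grad(phi)
    for k in range(1, max_iter + 1):
        if np.linalg.norm(g) < tol:                 # Step 3: stopping criterion
            break
        d = -H @ g                                  # Step 2: descent direction
        lam = line_search(f, grad, phi, d)          # Step 4: step length
        phi = phi + lam * d                         # Step 5: weight update
        g_next = grad(phi)
        H = np.eye(N) if k % N == 0 else update_H(H, lam * d, g_next - g)  # Step 6
        g = g_next                                  # Step 7: next iteration
    return phi
```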
2.3 Existing Methods for Calculating Descent Directions. Several methods for updating H_{k+1} have been proposed. Among them, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update (Fletcher 1980) has been the most successful in a number of studies. Putting p_k = λ_k ΔΦ_k and q_k = g_{k+1} − g_k, the BFGS formula is given as

$$H_{k+1} = H_k - \frac{p_k q_k^T H_k + H_k q_k p_k^T}{p_k^T q_k} + \left(1 + \frac{q_k^T H_k q_k}{p_k^T q_k}\right) \frac{p_k p_k^T}{p_k^T q_k}. \tag{2.2}$$
In large-scale problems that involve more than many thousands of weights, maintaining the approximation matrix H becomes impractical because it requires N² storage space. In order to cope with this problem, the OSS (one-step secant) algorithm (Battiti 1989) adopted the memoryless BFGS update (Gill et al. 1981). By always taking the previous matrix H_k as the identity matrix (H_k = I), the descent direction of Step 2 is calculated as

$$\Delta\Phi_{k+1} = -g_{k+1} + \frac{p_k q_k^T g_{k+1} + q_k p_k^T g_{k+1}}{p_k^T q_k} - \left(1 + \frac{q_k^T q_k}{p_k^T q_k}\right) \frac{p_k p_k^T g_{k+1}}{p_k^T q_k}. \tag{2.3}$$

Clearly, equation 2.3 can be calculated with O(N) multiplications and storage space. However, OSS may not provide desirable performance because the descent direction is calculated on the basis of the above substitution. In our early experiments, this method worked rather poorly in comparison with the partial BFGS update proposed in the next section.

2.4 New Method for Calculating Descent Directions. In this section, we propose a partial BFGS update with 2Ns memory (s ≪ N), where the search directions are exactly equivalent to those of the original BFGS update during the first s + 1 iterations. The partiality parameter s is the length of the history described below. Putting r_k = H_k q_k, we have

$$H_k g_{k+1} = H_k q_k + H_k g_k = r_k - \frac{p_k}{\lambda_k}.$$

Then, the descent direction based on equation 2.2 can be calculated as

$$\Delta\Phi_{k+1} = -H_{k+1} g_{k+1} = -r_k + \frac{p_k}{\lambda_k} + \frac{p_k r_k^T g_{k+1} + r_k p_k^T g_{k+1}}{p_k^T q_k} - \left(1 + \frac{q_k^T r_k}{p_k^T q_k}\right) \frac{p_k p_k^T g_{k+1}}{p_k^T q_k}. \tag{2.4}$$
After calculating r_k, equation 2.4 can be evaluated using O(N) multiplications and storage space, just like the memoryless BFGS update. Now, we show that it is possible to calculate r_k within O(Ns) multiplications and 2Ns storage space by using the partial BFGS update. Here, we assume k ≤ s. When k = 1, r_1 (= H_1 q_1 = g_2 − g_1) can be calculated by subtractions alone. When k > 1, we assume that each of r_1, . . . , r_{k−1} has been calculated and stored. Note that for i < k,

$$\alpha_i = \frac{1}{p_i^T q_i} \quad \text{and} \quad \beta_i = \alpha_i \left(1 + \alpha_i q_i^T r_i\right)$$

have already been calculated during the iterations. Thus, by recursively applying equation 2.2, r_k can be calculated with O(Nk) multiplications and 2Nk storage space, as follows:

$$r_k = H_k q_k = H_{k-1} q_k - \alpha_{k-1} p_{k-1} r_{k-1}^T q_k - \alpha_{k-1} r_{k-1} p_{k-1}^T q_k + \beta_{k-1} p_{k-1} p_{k-1}^T q_k = q_k + \sum_{i=1}^{k-1} \left( -\alpha_i p_i r_i^T q_k - \alpha_i r_i p_i^T q_k + \beta_i p_i p_i^T q_k \right). \tag{2.5}$$
Next, when the number of iterations exceeds s + 1, we have two alternatives: restarting the update by discarding the accumulated vectors or continuing the update by using the latest vectors. In both cases, equation 2.5 can be calculated within O(Ns) multiplications and 2Ns storage space. Therefore, it has been shown that equation 2.4 can be calculated within O(Ns) multiplications and 2Ns storage space. In our experiments, the former update was employed because the latter worked poorly when s was small.

Although the idea of partial quasi-Newton methods has been briefly described (Luenberger 1984), strictly speaking, our update is different. The earlier proposal intended to store the vectors p_i and q_i, but our update stores the vectors p_i and r_i.

The immediate advantage of this partial update over the original BFGS update is its applicability to large-scale problems. Even if N is very large, by setting s to an adequately small value with respect to the amount of available storage space, our update will work. On the other hand, the probable advantage over the memoryless BFGS update is a superior convergence property. However, this claim should be examined through a wide range of experiments. Note that when s = 0, the partial BFGS update always gives the gradient direction; when s = 1, it corresponds to the memoryless BFGS update.
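The recursion of equation 2.5 and the direction of equation 2.4 translate directly into a history-based routine; a minimal sketch, where the list-based bookkeeping of (p_i, r_i, α_i, β_i) is an illustrative choice (the caller clears the lists to restart once the history length reaches s):

```python
# A minimal sketch of the partial BFGS direction (eqs. 2.4 and 2.5).
# P, R, alphas, betas hold the stored history (p_i, r_i, alpha_i, beta_i).
import numpy as np

def r_from_history(qk, P, R, alphas, betas):
    """r_k = H_k q_k, built recursively from the stored history (eq. 2.5)."""
    r = qk.copy()
    for p, ri, a, b in zip(P, R, alphas, betas):
        r += -a * p * (ri @ qk) - a * ri * (p @ qk) + b * p * (p @ qk)
    return r

def partial_bfgs_direction(g_next, pk, qk, lam, P, R, alphas, betas):
    """Descent direction dPhi_{k+1} = -H_{k+1} g_{k+1} (eq. 2.4)."""
    rk = r_from_history(qk, P, R, alphas, betas)
    a = 1.0 / (pk @ qk)                       # alpha_k
    b = a * (1.0 + a * (qk @ rk))             # beta_k
    d = (-rk + pk / lam
         + a * (pk * (rk @ g_next) + rk * (pk @ g_next))
         - b * (pk @ g_next) * pk)
    P.append(pk); R.append(rk); alphas.append(a); betas.append(b)
    return d
```

Each history item contributes O(N) work, so the whole direction costs O(Ns) multiplications and 2Ns storage, as stated above.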
2.5 Existing Methods for Calculating Step Lengths. In Step 4, since λ is the only variable in f, we can express f(Φ + λΔΦ) simply as ζ(λ). Calculating the λ that minimizes ζ(λ) is called a line search. Among a number of possible line search methods, we considered the following three typical methods: a fast but inaccurate one, a moderately accurate one, and an exact but slow one.

The first method is explained below. By using quadratic interpolation (Gill et al. 1981; Battiti 1989), we can derive a fast line search method that only guarantees ζ(λ) < ζ(0). Let λ_1 be an initial value for the step length. If ζ(λ_1) < ζ(0), λ_1 becomes the resultant step length; otherwise, by considering a quadratic function h(λ) that satisfies the conditions h(0) = ζ(0), h(λ_1) = ζ(λ_1), and h′(0) = ζ′(0), we get the following approximation of ζ(λ):

$$\zeta(\lambda) \approx h(\lambda) = \zeta(0) + \zeta'(0)\lambda + \frac{\zeta(\lambda_1) - \zeta(0) - \zeta'(0)\lambda_1}{\lambda_1^2}\,\lambda^2.$$

Since ζ(λ_1) ≥ ζ(0) and ζ′(0) < 0, the minimal point of h(λ) is given by

$$\lambda_2 = -\frac{\zeta'(0)\,\lambda_1^2}{2\left(\zeta(\lambda_1) - \zeta(0) - \zeta'(0)\lambda_1\right)}. \tag{2.6}$$
Note that 0 < λ_2 < λ_1 is guaranteed by equation 2.6. Thus, by iterating this process until ζ(λ_ν) < ζ(0), we can always find a λ_ν that satisfies ζ(λ_ν) < ζ(0), where ν denotes the number of iterations. Here, the initial value λ_1 is set to 1 because the optimal step length is near 1 when H closely approximates (∇²f(Φ))⁻¹. Hereafter, a quasi-Newton algorithm based on the original or partial BFGS update in combination with this fast but inaccurate line search is called BFGS1.
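A minimal sketch of this backtracking step, assuming `zeta` is a callable and `dzeta0` the precomputed slope ζ′(0) (names are illustrative):

```python
# Fast but inaccurate line search: shrink lambda via eq. 2.6 until
# zeta(lambda) < zeta(0).
def fast_line_search(zeta, zeta0, dzeta0, lam=1.0, max_iter=50):
    for _ in range(max_iter):
        val = zeta(lam)
        if val < zeta0:
            return lam
        lam = -dzeta0 * lam**2 / (2.0 * (val - zeta0 - dzeta0 * lam))  # eq. 2.6
    return lam
```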
By using quadratic extrapolation (Fletcher 1980), we can derive a moderately accurate line search method that guarantees ζ′(λ_ν) > γ_1 ζ′(λ_{ν−1}) as well as ζ(λ_ν) < ζ(λ_{ν−1}), where γ_1 is a small constant (e.g., 0.1). Namely, when λ_ν does not satisfy the stopping criterion, if ζ(λ_ν) ≥ ζ(λ_{ν−1}), then λ_{ν+1} is calculated using equation 2.6; otherwise, by considering an extrapolation of the slopes ζ′(λ_{ν−1}) and ζ′(λ_ν), the appropriate expression for the minimizing value λ_{ν+1} is

$$\lambda_{\nu+1} = \lambda_\nu - \zeta'(\lambda_\nu)\,\frac{\lambda_\nu - \lambda_{\nu-1}}{\zeta'(\lambda_\nu) - \zeta'(\lambda_{\nu-1})}. \tag{2.7}$$

However, if ζ′(λ_ν) ≤ ζ′(λ_{ν−1}), then λ_{ν+1} does not exist; thus, λ_{ν+1} is set to λ_{ν−1} + γ_2(λ_ν − λ_{ν−1}), where γ_2 is an adequate value (e.g., 9). In this method, λ_0 is set to 0, and λ_1 is estimated as min(1, −2ζ′(0)⁻¹(f(Φ_current) − f(Φ_previous))). Note that using this estimate of λ_1 for the first method is not a good strategy because that method has no extrapolation process, and λ_1 can be a very small value. Hereafter, a quasi-Newton algorithm based on the original or partial
BFGS update in combination with this moderately accurate line search is called BFGS2.

An exact but slow line search method can be constructed by iteratively applying the BFGS2 line search until a stopping criterion is met, for example, ‖ζ′(λ_ν)‖ < 10⁻⁸. The difference from the BFGS2 method is that equation 2.7 is used for interpolation as well as extrapolation. Hereafter, a quasi-Newton algorithm based on the original or partial BFGS update in combination with this exact but slow line search is called BFGS3.

2.6 New Method for Calculating Step Lengths. Here we propose a new method for calculating a reasonably accurate step length λ in Step 4.

2.6.1 Basic Procedure. A second-order Taylor approximation of ζ(λ) is given as

$$\zeta(\lambda) \approx \zeta(0) + \zeta'(0)\lambda + \frac{1}{2}\zeta''(0)\lambda^2.$$

When ζ′(0) < 0 and ζ″(0) > 0, the minimal point of this approximation is given by

$$\lambda = -\frac{\zeta'(0)}{\zeta''(0)} \left( = -\frac{(\nabla f(\Phi))^T \Delta\Phi}{(\Delta\Phi)^T \nabla^2 f(\Phi)\,\Delta\Phi} \right). \tag{2.8}$$
Other cases will be considered in the next section. For the three-layer neural networks defined by equation 2.1, we can efficiently calculate ζ′(0) and ζ″(0) as follows. By differentiating ζ(λ) and substituting 0 for λ, we obtain

$$\zeta'(0) = -\sum_{t=1}^{m} (y_t - z_t)\,z_t' \quad \text{and} \quad \zeta''(0) = \sum_{t=1}^{m} \left( (z_t')^2 - (y_t - z_t)\,z_t'' \right).$$
Since the derivative of z_t = z(x_t; Φ) along the search direction is defined as z_t′ = (d/dλ) z(x_t; Φ + λΔΦ)|_{λ=0}, we obtain

$$z_t' = \Delta w_{00} + \sum_{i=1}^{h} \left( \Delta w_{0i}\,\sigma_{it} + w_{0i}\,\sigma_{it}' \right) \quad \text{and} \quad z_t'' = \sum_{i=1}^{h} \left( 2\,\Delta w_{0i}\,\sigma_{it}' + w_{0i}\,\sigma_{it}'' \right),$$

where σ_it = σ(w_i^T x_t), σ_it′ = σ_it(1 − σ_it)(Δw_i)^T x_t, σ_it″ = σ_it′(1 − 2σ_it)(Δw_i)^T x_t, and Δw_ij denotes the change of w_ij calculated in Step 2.

Now we consider the computational complexity of calculating the step length using equation 2.8. Clearly, (Δw_i)^T x_t must be calculated for each pair of hidden unit i and input x_t; thus, since the number of hidden units is h and the number of inputs is m, at least nhm multiplications are required. Since the order of multiplications required to calculate the remainder is O(hm), and N = nh + 2h + 1, the total complexity of the calculation is Nm + O(hm).

This algorithm can be generalized to multilayered networks with other differentiable activation functions; thus, it is applicable to recurrent networks (Rumelhart et al. 1986). Consider a hidden unit τ connected to the output layer. We assume that its output value is defined by v = a(w^T u), where u is the vector of output values of the units connected to unit τ, w is the vector of weights attached to these connections, and a(·) is an activation function. Then the first- and second-order derivatives of v with respect to λ are calculated as

$$v' = a'(w^T u)\left( \Delta w^T u + w^T u' \right), \qquad v'' = a''(w^T u)\left( \Delta w^T u + w^T u' \right)^2 + a'(w^T u)\left( 2\,\Delta w^T u' + w^T u'' \right).$$

Thus, by successively applying these formulas in reverse order, we can calculate the step length for multilayered networks. Note that if u_i is the output value of an input unit, then u_i′ = u_i″ = 0.
This algorithm can be generalized to multilayered networks with other differentiable activation functions; thus, it is applicable to recurrent networks (Rumelhart et al. 1986). Consider a hidden unit τ connected to the output layer. We assume that its output value is defined by v = a(wT u), where u is the output values of the units connected to the unit τ , w equals the weights attached to these connections, and a(·) is an activation function. Then the first- and second-order derivatives of v with respect to λ are calculated as v0 = a0 (wT u)(1wT u + wT u0 ), v00 = a00 (wT u)(1wT u + wT u0 )2 + a0 (wT u)(21wT u0 + wT u00 ). Thus, by successively applying these formulas in reverse, we can calculate the step length for multilayered networks. Note that if ui is an output value of an input unit, then u0i = u00i = 0. 2.6.2 Coping with Undesirable Cases. In the above, we assumed ζ 0 (0) < 0. When ζ 0 (0) > 0, the value of the objective function cannot be reduced along the search direction; thus, we set 1Φk to −∇ f (Φk ) and restart the update by discarding the accumulated vectors (p, r). Note that ζ 0 (0) < 0 is guaranteed by such a setting unless k∇ f (Φk )k = 0 because ζ 0 (0) = (∇ f (Φk ))T 1Φk = −k∇ f (Φk )k2 < 0. When ζ 0 (0) < 0 and ζ 00 (0) ≤ 0, equation 2.8 gives a negative value or infinity. To avoid this situation, we employ the Gauss-Newton technique. The first-order approximation of zt = z(xt ; Φ + λ1Φ) is zt + z0t λ. Then, ζ (λ) of the next iteration can be approximated by ζ (λ) ≈
m m 1X 1X (yt − (zt + z0t λ))2 = ζ (0) + ζ 0 (0)λ + (z0 )2 λ2 . 2 t=1 2 t=1 t
The minimal point of this approximation is given by ζ 0 (0) λ = − Pm 0 2 . t=1 (zt )
(2.9)
Clearly, equation 2.9 always gives a positive value when ζ′(0) < 0. In many cases, it is useful in practice to limit the maximum change in Φ during any one iteration (Gill et al. 1981). Here, if ‖λΔΦ‖ > 1.0, λ is set to ‖ΔΦ‖⁻¹. Since λ is calculated on the basis of an approximation, we cannot always reduce the value of the objective function ζ(λ). When ζ(λ) ≥ ζ(0), we employ the fast line search given by equation 2.6.

2.6.3 Summary of Step-Length Calculation. By integrating the above procedures, we can specify Step 4 as follows.
Step 4.1: If ζ′(0) > 0, set ΔΦ_k = −∇f(Φ_k) and restart the update.
Step 4.2: If ζ″(0) > 0, calculate λ using equation 2.8; otherwise, calculate λ using equation 2.9.
Step 4.3: If ‖λΔΦ_k‖ > 1.0, set λ = ‖ΔΦ_k‖⁻¹.
Step 4.4: If ζ(λ) ≥ ζ(0), recalculate λ using equation 2.6 until ζ(λ) < ζ(0).

Hereafter, the quasi-Newton algorithm based on our partial BFGS update in combination with our step-length calculation is called BPQ (BP based on a partial Quasi-Newton method). Incidentally, a modified OSS algorithm, OSS2 (a combination of the memoryless BFGS update and our step-length calculation), may be another good algorithm and will be evaluated in the experiments.

2.7 Computational Complexity. We consider the computational complexity of BPQ and other algorithms with respect to one iteration in which every training example is presented once. In off-line BP, the complexity (number of multiplications) of calculating the objective function is nhm + O(hm), and the complexity for the gradient vector is nhm + O(hm). Thus, since N = nh + 2h + 1, the complexity for off-line BP is 2Nm + O(hm). Here, the complexity of a weight update is just N (or 2N if a momentum term is used) and is safely negligible. In this article, we define one iteration of on-line BP as m updates sweeping all examples once. In addition to the above complexity, since on-line BP performs a weight update for each single example, the learning rate is multiplied by each element of the gradient vectors Nm times in one iteration; thus, the complexity for on-line BP is 3Nm + O(hm). In the case of on-line BP with a momentum term, the momentum factor is also multiplied by each element of the previous modification vectors Nm times in one iteration; thus, the complexity for on-line momentum BP is 4Nm + O(hm).

In addition to the complexity of off-line BP, BPQ calculates the descent direction based on the partial BFGS update with a history of at most s iterations and also calculates the step length. The complexity of the former calculation is O(Ns) (see Section 2.4), and that of the latter is Nm + O(hm) (see Section 2.6.1). Here, note that the computational complexity of calculating the objective function can be reduced from Nm + O(hm) to O(hm) for three-layer networks. This is because in the next iteration, the output value of each hidden unit is given by σ(w_i^T x_t + λ(Δw_i)^T x_t), but (Δw_i)^T x_t was already calculated when the step length was computed. Thus, the total complexity for BPQ is 2Nm + O(Ns) + O(hm). To reduce the generalization error for an unseen example, m should be larger than N. Since s is smaller than N, the complexity of O(Ns) usually becomes much smaller than that of 2Nm, and the complexity of BPQ remains almost equivalent to that of off-line BP.

A general method for calculating the denominator of equation 2.8 has been proposed (Pearlmutter 1994; Møller 1993a); after calculating ∇²f(Φ)ΔΦ,
the denominator is calculated by using an inner product. The result is mathematically the same as our step-length calculation, but the computational complexity of this method is much larger than that of our method, at least in the case of three-layer networks, as shown below. By using Pearlmutter's operator, defined by ∂f(Φ + λΔΦ)/∂λ|_{λ=0} …

there exist w_L > 0 and w_U > 0 such that for all values of w with |w| ∈ (w_L, w_U), ε_1 − ν ≤ f(w) ≤ ε_1 + ν. This is the penalty corresponding to a nonzero connection in the network. In this interval, almost no distinction is made between the penalty for smaller and larger weights. For example, when ν = 10⁻², the penalty for any weight with magnitude in the interval (w_L, w_U) = (0.95, 31.64) is within 10% of ε_1. By decreasing the value of ε_2, the interval over which the penalty value is approximately equal to ε_1 can be made wider, as shown in Figure 2c, where ε_2 = 10⁻⁶. A weight is prevented from taking a value that is too large, since the quadratic term becomes dominant for values of w greater than w_U. The value of w_L can be made arbitrarily small by increasing the value of the parameter β. The derivative of the function f(w) near zero is relatively large (Figs. 2b and 2d). This gives a small weight w a stronger tendency to decay to zero.
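The quoted numbers pin down a concrete two-term shape; the following sketch assumes the per-weight penalty f(w) = ε_1 βw²/(1 + βw²) + ε_2 w², an inference from the quoted interval and parameters rather than a verbatim copy of equation 2.18, and reproduces the cited behavior:

```python
# A sketch of a two-term penalty consistent with the behavior described
# above; the exact functional form is an assumption (cf. eq. 2.18).
import numpy as np

def penalty(w, e1=0.1, e2=1e-5, b=10.0):
    bw2 = b * w**2
    return e1 * bw2 / (1.0 + bw2) + e2 * w**2

# The first term saturates at e1 (a per-connection "existence" cost);
# the quadratic term dominates once |w| exceeds w_U.
w = np.array([0.95, 31.64])
print(penalty(w))   # both values lie within nu = 1e-2 of e1 = 0.1
```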
3 Experimental Results

We selected well-known problems to test the pruning algorithm described in the previous section: the contiguity problem, the 4-bit and 5-bit parity problems, and the monks problems. Since we were interested in finding the simplest network architecture that could solve these problems, all available input data were used for training and pruning. An exception was the monks problems: for the three monks problems, the division of the data sets into training and testing sets is fixed (Thrun et al. 1991). There was one output unit in the networks. For all problems, thresholds or biases in the hidden units were incorporated into the network by appending a 1 to each input pattern. The threshold value of the output unit was set to 0 to simplify the implementation of the training and pruning algorithms.

Only two different sets of parameters were involved in the error function. They corresponded to the weights of the connections between the input layer and the hidden layer, and between the hidden layer and the output layer. During training, the gradient of the error function was computed by taking partial derivatives of the function with respect to these parameters only. During pruning, conditions were checked to determine whether a connection could be removed. There was no special condition for checking whether a hidden unit bias could be removed. Note that if, after pruning, there exists a hidden unit connected only to the output unit and to the input unit with a constant input value of 1 for all patterns, then the weights of the two arcs connected to this hidden unit determine a nonzero threshold value at the output unit.

The function θ(w, v) (cf. equation 2.18) was minimized by a variant of the quasi-Newton algorithm, the BFGS method. It has been shown that quasi-Newton algorithms, such as the BFGS method, can speed up the training process significantly (Watrous 1987). At each iteration of the algorithm, a positive definite matrix that approximates the inverse of the Hessian of the function to be minimized is computed. The positive definiteness of this matrix ensures that a descent direction can be generated. Given a descent direction, a step size is computed via an inexact line search algorithm. Details of this algorithm can be found in Dennis and Schnabel (1983).

For all the experiments reported here, we used the same values for the parameters involved in the function θ(w, v). These were ε_1 = 0.1, ε_2 = 10⁻⁵, and β = 10. During the training process, the BFGS algorithm was terminated when the following condition was satisfied: ‖∇θ(w, v)‖ ≤ 10⁻⁸ max{1, ‖(w, v)‖}, where ‖∇θ(w, v)‖ is the 2-norm of the gradient of θ(w, v). At this point, the
accuracy of the network was checked. The value of η_1 (cf. equation 2.14) was 0.35. If the classification rate of the network met the prespecified required accuracy, then the pruning process was initiated. The required accuracy of the original fully connected network was 100% for all problems except the monks 3 problem, for which the acceptable accuracy rate was 95%. We did not attempt to get a 100% accuracy rate for this problem because of the presence of six incorrect classifications (out of 122 patterns) in the training set. The parameter η_2 used during the pruning process was set at 0.10.

Typically, at the start of the pruning process, many weights would be eliminated because they satisfied the condition in equation 2.15. Subsequently, weights were removed one at a time based on the magnitude of the product |v_pm w_ml|. Before the actual pruning and retraining was done, the current network was saved. Should pruning additional weights and retraining the network fail to give a new set of weights that met the prespecified accuracy requirement, the saved network would be considered the smallest network for this particular run of the experiment.

3.1 The Contiguity Problem. This problem has been the subject of several studies on network pruning (Thodberg 1991; Gorodkin et al. 1993). The input patterns were binary strings of length 10. Of all possible 1024 patterns, only those with two or three clumps were used. A clump was defined as a block of 1's in the string separated by at least one 0. The target value t_i for each of the 360 patterns with two clumps was 0, and the target value for each of the 432 patterns with three clumps was 1. A sparse symmetric network architecture with nine hidden units and a total of 37 weights has been suggested for this problem (Gorodkin et al. 1993).

Two experiments were carried out using this data set as input. In the first experiment, 50 neural networks having six hidden units were used as the starting networks. In the second experiment, the same number of networks, each with nine hidden units, were used. The accuracy rate of all these networks, which were trained starting from different random initial weights, was 100%. The numbers of connections and hidden units left in the networks after pruning are tabulated in Table 1.

Thodberg (1991) trained 38 fully connected networks with 15 hidden units using 100 patterns as training data. The trained networks were then pruned by a brute-force scheme, and the average number of connections left in the pruned networks was 32. This number, however, did not include the biases in the networks. Employing the optimal brain damage pruning scheme, Gorodkin et al. (1993) obtained pruned networks with a number of weights ranging from 36 to more than 90. These networks were trained on 140 examples. In contrast, the maximum number of connections left in our networks was 33, and only one of the pruned networks had this many connections. The results of the experiments with different initial numbers of hidden units show the robustness of our pruning algorithm.
Table 1: Average Number of Connections and Hidden Units in 50 Networks Trained to Solve the Contiguity Problem after Pruning.

Number of starting hidden units                  6             9
Number of connections before pruning             72            108
Average number of connections after pruning      28.44 (1.33)  29.18 (1.32)
Average number of hidden units after pruning     5.98 (0.14)   7.62 (0.75)

Note: Figures in parentheses indicate standard deviations.
Starting with six or nine hidden units, the networks were pruned until, on average, 29 connections were left.

3.2 The Parity Problem. The parity problem is a well-known difficult problem that has often been used to test the performance of neural network training algorithms. The input set consists of 2^N patterns in N-dimensional space, and each pattern is an N-bit binary vector. The target value t_i is equal to 1 if the number of 1's in the pattern is odd, and 0 otherwise. We used the 4-bit and 5-bit parity problems to test our pruning algorithm.

For the 4-bit parity problem, three sets of experiments were conducted. In the first set, 50 networks, each with four hidden units, were used as the starting networks. In the second and third sets of experiments, the initial numbers of hidden units were five and six. Similarly, three sets of experiments were conducted for the 5-bit parity problem. The initial numbers of hidden units in the 50 starting networks for each set of experiments were five, six, and seven, respectively. The average numbers of weights and hidden units of the networks after pruning are shown in Table 2. Many experiments using backpropagation networks assume that N hidden units are needed to solve the N-bit parity problem. Previously published neural network pruning algorithms have also failed to obtain networks with fewer than N hidden units (Chung and Lee 1992; Hanson and Pratt 1989). In fact, three hidden units are sufficient for both the 4-bit and 5-bit parity problems.
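The parity data set itself is a one-liner to enumerate; a minimal sketch:

```python
# The N-bit parity data set: all 2^N binary patterns; target 1 iff the
# number of 1's is odd.
from itertools import product

def parity_data(N):
    X = [list(bits) for bits in product((0, 1), repeat=N)]
    y = [sum(x) % 2 for x in X]
    return X, y

X4, y4 = parity_data(4)   # the 16 patterns of the 4-bit problem
```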
Table 2: Average Number of Connections and Hidden Units Obtained from 50 Networks for the 4-Bit and 5-Bit Parity Problems after Pruning.

Parity 4
Number of starting hidden units                  4             5             6
Number of connections before pruning             24            30            36
Average number of connections after pruning      17.40 (1.55)  18.12 (1.60)  18.76 (2.30)
Average number of hidden units after pruning     3.42 (0.50)   3.64 (0.66)   3.98 (0.77)

Parity 5
Number of starting hidden units                  5             6             7
Number of connections before pruning             35            42            49
Average number of connections after pruning      21.80 (3.14)  21.84 (3.02)  22.34 (3.17)
Average number of hidden units after pruning     3.58 (0.81)   3.68 (0.84)   3.82 (0.83)

Note: Figures in parentheses indicate standard deviations.
3.3 The Monks Problems. The monks problems (Thrun et al. 1991) are set in an artificial robot domain, in which robots are described by six attributes:

A1: head shape ∈ {round, square, octagon};
A2: body shape ∈ {round, square, octagon};
A3: is smiling ∈ {yes, no};
A4: holding ∈ {sword, balloon, flag};
A5: jacket color ∈ {red, yellow, green, blue};
A6: has tie ∈ {yes, no}.

The learning tasks of the three monks problems are binary classifications, each given by the following logical description of a class:

• Monks 1 problem: (head shape = body shape) or (jacket color = red). From the 432 possible samples, 124 were randomly selected for the training set.

• Monks 2 problem: Exactly two of the six attributes have their first value. From the 432 samples, 169 were selected randomly for the training set.

• Monks 3 problem: (Jacket color is green and holding a sword) or (jacket color is not blue and body shape is not octagon). From the 432 samples, 122 were selected randomly for training, and among them there were 5% misclassifications (i.e., noise in the training set).
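Encoded as integer attributes (first listed value = 0), the target concepts are easy to enumerate over the full 432-sample space; a minimal sketch for monks 1 and monks 2:

```python
# The 432-robot sample space and the monks 1 and monks 2 target concepts;
# attribute values are encoded by position (first listed value = 0).
from itertools import product

SIZES = (3, 3, 2, 3, 4, 2)                            # |A1| .. |A6|
samples = list(product(*(range(n) for n in SIZES)))   # 432 robots

monks1 = [int(a1 == a2 or a5 == 0)                    # head=body or jacket red
          for (a1, a2, a3, a4, a5, a6) in samples]
monks2 = [int(sum(v == 0 for v in s) == 2)            # exactly two first values
          for s in samples]
```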
The testing set for the three problems consisted of all 432 samples. Two sets of experiments were conducted on the monks 1 and monks 2 problems. In the first set, 50 networks, each with three hidden units, were used as the starting networks for pruning. In the second set, an equal number of networks with five hidden units were used. These networks correctly classified all patterns in the training set. For the monks 3 problem, in addition to these two sets of experiments, 50 networks with just one hidden unit were also trained as starting networks. All starting networks for monks 3 had an accuracy rate of at least 95% on the training data. The required accuracy of the pruned networks on the training patterns was 100% for the monks 1 and monks 2 problems and 95% for the monks 3 problem.

The results of the experiments for the monks 1 and monks 2 problems are summarized in Table 3. The results for the monks 3 problem are tabulated in Table 4. The P-values in these tables were computed to test whether the accuracy of the networks on the testing set increased significantly after pruning. With the exception of the networks with one hidden unit for the monks 3 problem, the P-values clearly show that pruning did increase the predictive accuracy of the networks.

There was not much difference in the average number of connections left after pruning between networks that originally had three hidden units and those with five hidden units. For the monks 3 problem, when the starting networks had only one hidden unit, the average number of connections left was 9.92, significantly fewer than the averages obtained from networks with three and five starting hidden units. Part of this difference can be accounted for by the number of hidden units still left in the pruned networks. Most of the networks with three or five starting hidden units still had three hidden units after pruning. The connections from these hidden units to the output unit added on average two more connections to the total.

The smallest pruned networks with 100% accuracy on the training and testing sets for the monks 1 and monks 2 problems had 11 and 15 connections, respectively. For the monks 3 problem, the pruning algorithm found a network with only 6 connections. This network was able to identify the 6 noise cases in the training set: the predicted outputs for these noise patterns were the opposite of the target outputs, and hence an accuracy rate of 95.08% was obtained on the training set. Since there was no noise in the testing data, the predictive accuracy rate was 100%. The network is depicted in Figure 3. For comparison, the backpropagation with weight decay and cascade-correlation algorithms both achieved 97.2% accuracy on the testing set (Thrun et al. 1991).

Some interesting observations on the relationship between the number of hidden units and the generalization capability of a network were made by Weigend (1993). One of these observations is that large networks perform better than smaller networks, provided that the training of the networks is stopped early. To determine when the training should be terminated, a second data set, the cross-validation set, is used. Our results on the monks
Table 3: Average Number of Connections and Predictive Accuracy Obtained from 50 Networks for the Monks 1 and Monks 2 Problems after Pruning.

Monks 1
Number of starting hidden units                      3              5
Number of connections before pruning                 57             95
Average accuracy on testing set before pruning (%)   99.10 (2.66)   99.52 (1.44)
Average number of connections after pruning          12.96 (1.63)   12.54 (1.05)
Average accuracy on testing set after pruning (%)    99.82 (0.74)   100.00 (0.00)
P-value                                              0.033          0.009

Monks 2
Number of starting hidden units                      3              5
Number of connections before pruning                 57             95
Average accuracy on testing set before pruning (%)   98.91 (1.47)   98.13 (2.32)
Average number of connections after pruning          16.76 (1.81)   16.86 (1.40)
Average accuracy on testing set after pruning (%)    99.44 (0.78)   99.39 (0.70)
P-value                                              0.012          0.001

Note: P-values are computed for testing whether the predictive accuracy rates of the networks increase significantly after pruning. Figures in parentheses indicate standard deviations.
Figure 3: A network with six weights for the monks 3 problem. The number next to a connection shows the weight for that connection. The accuracy rates on the training set and the testing set are 95.08% and 100%, respectively.
Table 4: Average Number of Connections and Predictive Accuracy Obtained from 50 Networks for the Monks 3 Problem after Pruning.

Monks 3
Number of starting hidden units                       1             3             5
Number of connections before pruning                  19            57            95
Average accuracy on training set before pruning (%)   95.85 (0.69)  99.02 (1.09)  99.92 (0.38)
Average accuracy on testing set before pruning (%)    96.07 (2.43)  93.43 (1.91)  92.82 (1.53)
Average number of connections after pruning           9.92 (2.06)   14.64 (2.79)  14.86 (2.19)
Average accuracy on training set after pruning (%)    95.82 (0.69)  96.00 (1.07)  96.46 (1.12)
Average accuracy on testing set after pruning (%)     96.11 (2.55)  94.12 (2.95)  93.85 (2.21)
P-value                                               0.468         0.082         0.003

Note: P-values are computed for testing whether the predictive accuracy rates of the networks increase significantly after pruning. Figures in parentheses indicate standard deviations.
problems are mixed. For the monks 1 problem, networks with five hidden units have higher predictive accuracy than those with three hidden units. For the monks 2 problem, the reverse is true. For the monks 3 problem, where there is noise in the data, there is a trend for larger networks to have poorer predictive accuracy rates than smaller ones, even after the networks have been pruned. Note that these statistics were obtained from networks that had been trained to reach a local minimum of the augmented error function. One clear conclusion that can be drawn from the data in Tables 3 and 4 is that pruned networks have better generalization capability than fully connected networks.

4 Discussion and Final Remarks

We presented a penalty-function approach for feedforward neural network pruning. The penalty function consists of two components. The first component discourages the use of unnecessary connections, and the second component prevents the weights of these connections from taking excessively large values. Simple criteria for eliminating network connections are also given. The two components of the penalty function have been used individually in the past. However, applying our approach, which combines the penalty function and the magnitude-based weight elimination criteria, we are able to get smaller networks than those reported in the literature. We
have also experimented with pruning using one of the two penalty parameters (ε_1 and ε_2) set to zero; the results were not as good as those reported in Section 3.

The approach described in this article works for networks with one hidden layer. However, it is possible to extend the algorithm so that it works for a network with N > 1 hidden layers. Let us label the input layer L_0, the output layer L_{N+1}, and the hidden layers L_1, L_2, . . . , L_N. Connections are present only between units in layers L_i and L_{i+1}. A criterion for the removal of a connection between unit H_1 in layer L_i and unit H_2 in layer L_{i+1} can be developed to make sure that the changes in the predicted outputs are not more than η_2. The effect of such a removal will be propagated layer by layer from H_2 to the output units in L_{N+1}. Hence, the criterion for a removal will involve the weights from unit H_2 to layer L_{i+2} and all the weights from layer L_{i+2} onward.

The effectiveness of the pruning algorithm using our proposed penalty function has been demonstrated by its performance on three well-known problems. Using the same set of penalty parameters, networks with few connections that solve the three test problems (the contiguity problem, the parity problems, and the monks problems) have been obtained. The small number of connections in the pruned networks allows us to develop an algorithm to extract compact rules from networks that have been trained to solve real-world classification problems (Setiono 1997).
References

Ash, T. 1989. Dynamic node creation in backpropagation networks. Connection Science 1(4), 365–375.
Chauvin, Y. 1989. A back-propagation algorithm with optimal use of hidden units. In Advances in Neural Information Processing Systems, Vol. 1, pp. 519–526. Morgan Kaufmann, San Mateo, CA.
Chung, F. L., and Lee, T. 1992. A node pruning algorithm for backpropagation networks. Int. J. Neural Systems 3(3), 301–314.
Dennis, J. E., and Schnabel, R. B. 1983. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Englewood Cliffs, NJ.
Fahlman, S. E., and Lebiere, C. 1990. The cascade-correlation learning architecture. In Advances in Neural Information Processing Systems, Vol. 2, pp. 524–532. Morgan Kaufmann, San Mateo, CA.
Frean, M. 1990. The upstart algorithm: A method for constructing and training feedforward neural networks. Neural Computation 2(2), 198–209.
Gorodkin, J., Hansen, L. K., Krogh, A., and Winther, O. 1993. A quantitative study of pruning by optimal brain damage. Int. J. Neural Systems 4(2), 159–169.
Grossman, S. I. 1995. Multivariable Calculus, Linear Algebra, and Differential Equations. Saunders College Publishing, Orlando, FL.
Hanson, S. J., and Pratt, L. Y. 1989. Comparing biases for minimal network construction with back-propagation. In Advances in Neural Information Processing Systems, Vol. 1, pp. 177–185. Morgan Kaufmann, San Mateo, CA.
Hassibi, B., and Stork, D. G. 1993. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, Vol. 5, pp. 164–171. Morgan Kaufmann, San Mateo, CA.
Hertz, J., Krogh, A., and Palmer, R. G. 1991. Introduction to the Theory of Neural Computation. Addison-Wesley, Redwood City, CA.
Hinton, G. E. 1989. Connectionist learning procedures. Artificial Intelligence 40, 185–234.
Karnin, E. D. 1990. A simple procedure for pruning back-propagation trained neural networks. IEEE Trans. Neural Networks 1(2), 239–242.
Lang, K. J., and Witbrock, M. J. 1988. Learning to tell two spirals apart. In Proc. of the 1988 Connectionist Models Summer School, pp. 52–59. Morgan Kaufmann, San Mateo, CA.
Le Cun, Y., Denker, J. S., and Solla, S. A. 1990. Optimal brain damage. In Advances in Neural Information Processing Systems, Vol. 2, pp. 598–605. Morgan Kaufmann, San Mateo, CA.
Mezard, M., and Nadal, J. P. 1989. Learning in feedforward layered networks: The tiling algorithm. Journal of Physics A 22(12), 2191–2203.
Mozer, M. C., and Smolensky, P. 1989. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Advances in Neural Information Processing Systems, Vol. 1, pp. 107–115. Morgan Kaufmann, San Mateo, CA.
Setiono, R. 1997. Extracting rules from neural networks by pruning and hidden-unit splitting. Neural Computation 9, 205–225.
Tenorio, M. F., and Lee, W. 1990. Self-organizing network for optimum supervised learning. IEEE Trans. Neural Networks 1(1), 100–110.
Thodberg, H. H. 1991. Improving generalization of neural networks through pruning. Int. J. Neural Systems 1(4), 317–326.
Thrun, S. B., Bala, J., Bloedorn, E., Bratko, I., Cestnik, B., Cheng, J., De Jong, K., Džeroski, S., Fahlman, S. E., Fisher, D., Hamann, R., Kaufman, K., Keller, S., Kononenko, I., Kreuziger, J., Michalski, R. S., Mitchell, T., Pachowicz, P., Reich, Y., Vafaie, H., Van de Welde, W., Wenzel, W., Wnek, J., and Zhang, J. 1991. The MONK's problems: A performance comparison of different learning algorithms. Preprint CMU-CS-91-197, Carnegie Mellon University, Pittsburgh, PA.
van Ooyen, A., and Nienhuis, B. 1992. Improving the convergence of the backpropagation algorithm. Neural Networks 5, 465–471.
Watrous, R. 1987. Learning algorithms for connectionist networks: Applied gradient methods of nonlinear optimization. In Proc. IEEE First International Conference on Neural Networks, pp. 619–627. IEEE Press, New York.
Weigend, A. S. 1993. On overfitting and the effective number of hidden units. In Proc. of the 1993 Connectionist Models Summer School, pp. 335–342. Lawrence Erlbaum, Hillsdale, NJ.
204
Rudy Setiono
Weigend, A. S., Rumelhart, D. E., and Huberman, B. A. 1990. Back-propagation, weight-elimination and time series prediction. In Proc. of the 1990 Connectionist Models Summer School, pp. 105–116. Morgan Kaufmann, San Mateo, CA.
Received November 23, 1994; accepted March 13, 1996.
Communicated by Jude Shavlik
Extracting Rules from Neural Networks by Pruning and Hidden-Unit Splitting Rudy Setiono Department of Information Systems and Computer Science, National University of Singapore, Kent Ridge, Singapore 0511, Republic of Singapore
An algorithm for extracting rules from a standard three-layer feedforward neural network is proposed. The trained network is first pruned, not only to remove redundant connections in the network but, more important, to detect the relevant inputs. The algorithm generates rules from the pruned network by considering only a small number of activation values at the hidden units. If the number of inputs connected to a hidden unit is sufficiently small, then rules that describe how each of its activation values is obtained can be readily generated. Otherwise the hidden unit is split and treated as a set of output units, with each output unit corresponding to an activation value; a hidden layer is inserted, and a new subnetwork is formed, trained, and pruned. This process is repeated until every hidden unit in the network has a relatively small number of input units connected to it. Examples of how the proposed algorithm works are shown using real-world data arising from molecular biology and signal processing. Our results show that, for these complex problems, the algorithm can extract reasonably compact rule sets that have high predictive accuracy rates.
1 Introduction

One of the most popular applications of feedforward neural networks is distinguishing patterns in two or more disjoint sets. The neural network approach has been applied to solve pattern classification problems in diverse areas such as finance, engineering, and medicine. While the predictive accuracy obtained by neural networks is usually satisfactory, it is often said that a neural network is practically a black box. Even for a network with only a single hidden layer, it is generally impossible to explain why a certain pattern is classified as a member of one set and another pattern as a member of another set, due to the complexity of the network. In many applications, however, it is desirable to have rules that explicitly state under what conditions a pattern is classified as a member of one set or another. Several approaches have been developed for extracting rules
from a trained neural network. Saito and Nakano (1988) proposed a medical diagnostic expert system based on a multilayer neural network. They treated the network as a black box and used it only to observe the effects on the network output caused by changes in the inputs. Two methods for extracting rules from neural networks are described in Towell and Shavlik (1993). The first method is the subset algorithm (Fu 1991), which searches for subsets of connections to a unit whose summed weight exceeds the bias of that unit. The second method, the MofN algorithm, clusters the weights of a trained network into equivalence classes. The complexity of the network is reduced by eliminating unneeded clusters and by setting all weights in each remaining cluster to the average of the cluster's weights. Rules with weighted antecedents are obtained from the simplified network by translation of the hidden units and output units. Towell and Shavlik applied these two methods to knowledge-based neural networks that had been trained to recognize genes in DNA sequences. The topology and the initial weights of a knowledge-based network are determined using problem-specific a priori information. More recently, a method that uses sampling and queries was proposed (Craven and Shavlik 1994). Instead of searching for rules in the network, the problem of rule extraction is viewed as a learning task. The target concept is the function computed by the network, and the network input features are the inputs for the learning task. Conjunctive rules are extracted from the neural network with the help of two oracles. Thrun (1995) describes a rule-extraction algorithm that analyzes the input-output behavior of a network using validity interval (VI) analysis. VI analysis divides the activation range of each network unit into intervals, such that all of the network's activation values must lie within the intervals. The boundaries of these intervals are obtained by solving linear programs. Two approaches to generating rule conjectures, specific-to-general and general-to-specific, are described. The validity of these conjectures is checked with VI analysis. In this article, we propose an algorithm to extract rules from a pruned network. We assume the network to be a standard feedforward backpropagation network with a single hidden layer that has been trained to meet a prespecified accuracy requirement. The assumption that the network has only a single hidden layer is not restrictive: theoretical studies have shown that networks with a single hidden layer can approximate arbitrary decision boundaries (Hornik 1991), and experimental studies have demonstrated the effectiveness of such networks (de Villiers and Barnard 1992). The process of extracting rules from a trained network can be made much easier if the complexity of the network has first been reduced. The pruning process attempts to eliminate as many connections as possible from the network while trying to maintain the prespecified accuracy rate. It is expected that fewer connections will result in more concise rules. No initial knowledge of the problem domain is required. Relevant and irrelevant attributes
of the data are distinguished during the pruning process. Those that are relevant are kept; the others are automatically discarded. A distinguishing feature of our algorithm is that the activation values at the hidden units are clustered into discrete values. When only a small number of inputs feed a hidden unit, it is not difficult to extract rules that describe how each of the discrete activation values is obtained. When many inputs are connected to a hidden unit, its discrete activation values are used as the target outputs of a new subnetwork. A new hidden layer is inserted between the inputs and this new output layer. The new network is then trained and pruned by the same pruning technique as in the original network. The process of splitting the hidden units and creating new networks is repeated until each hidden unit in the network has only a small number of inputs connected to it, say, no more than five. We describe our rule-extraction algorithm in Section 2. A network that has been trained and pruned to solve the splice-junction problem (Lapedes et al. 1989) is used to illustrate in detail how the algorithm works. Each pattern in the data set for this problem is described by a 60-nucleotide-long DNA sequence; the learning task is to recognize the type of boundary at the center of the sequence. This boundary can be an exon-intron boundary, an intron-exon boundary, or neither. In Section 3 we present the results of applying the proposed algorithm to a second problem, the sonar target classification problem (Gorman and Sejnowski 1988). The learning task is to distinguish between sonar returns from metal cylinders and those from cylindrically shaped rocks. Each pattern in this data set is described by a set of 60 real numbers between 0 and 1. The data sets for both the splice-junction and the sonar target classification problems were obtained via ftp from the University of California–Irvine repository (Murphy and Aha 1992). Discussion of the proposed algorithm and comparison with related work are presented in Section 4. Finally, a brief conclusion is given in Section 5.
2 Extracting Rules from a Pruned Network The algorithm for rule extraction from a pruned network basically consists of two main steps. The first step clusters the activation values of the hidden units into a small number of clusters. If the number of inputs connected to a hidden unit is relatively large, the second step of the algorithm splits the hidden unit into a number of units. The number of clusters found in the first step determines the number of new units. We form a new network by treating each new unit as an output unit and adding a new hidden layer. We train and prune this network and repeat the process if necessary. The process of forming a new network by treating a hidden unit as a new set of output units and adding a new hidden layer terminates when, after pruning, each hidden unit in the network has a small number of input units connected to it. The outline of the algorithm is as follows.
Rule-extraction (RX) algorithm.
1. Train and prune the neural network.
2. Discretize the activation values of the hidden units by clustering.
3. Using the discretized activation values, generate rules that describe the network outputs.
4. For each hidden unit:
• If the number of input connections is less than an upper bound UC, then extract rules that describe the activation values in terms of the inputs.
• Else form a subnetwork:
(a) Set the number of output units equal to the number of discrete activation values. Treat each discrete activation value as a target output.
(b) Set the number of input units equal to the number of inputs connected to the hidden unit.
(c) Introduce a new hidden layer. Apply RX to this subnetwork.
5. Generate rules that relate the inputs and the outputs by merging the rules generated in Steps 3 and 4.
In Step 3 of algorithm RX, we assume that the number of hidden units and the number of clusters in each hidden unit are relatively small. If there are H hidden units in the pruned network, and Step 2 of RX finds Ci clusters in hidden unit i = 1, 2, . . . , H, then there are C1 × C2 × · · · × CH possible combinations of clustered hidden-unit activation values. For each of these combinations, the network output is computed using the weights of the connections from the hidden units to the output units. Rules that describe the network outputs in terms of the clustered activation values are generated using the X2R algorithm (Liu and Tan 1995). X2R is designed to generate rules from small data sets with discrete attribute values. It chooses the pattern with the highest frequency, generates the shortest rule for this pattern by checking the information provided by one data attribute at a time, and repeats the process until rules that cover all the patterns are generated. The rules are then grouped according to the class labels, and redundant rules are eliminated by pruning. When there is no noise in the data, X2R generates rules with perfect accuracy on the training patterns. If the number of inputs of a hidden unit is less than UC, Step 4 of the algorithm also applies X2R to generate rules that describe the discrete activation values of that hidden unit in terms of the input units. Step 5 of RX merges the two sets of rules generated by X2R. This is done by substituting the conditions of the rules involving clustered activation values generated in Step 3 by the rules generated in Step 4.
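The recursive control flow of RX can be summarized in a short sketch. This is only a schematic outline, not the author's implementation: train_and_prune and x2r are caller-supplied placeholders for the training-and-pruning procedure and the X2R rule generator, and the net object they return is assumed to expose the small interface used below.

def rx(train_and_prune, x2r, inputs, targets, uc=5):
    # Steps 1-2: train and prune a network, then cluster each remaining
    # hidden unit's activation values (see Section 2.2).
    net = train_and_prune(inputs, targets)
    clusters = {h: cluster_activations(net.activations(h), eps=0.5)
                for h in net.hidden_units}
    # Step 3: label every combination of clustered activations with the
    # network output it produces, and let X2R generalize the table.
    output_rules = x2r(enumerate_outputs(net, clusters))
    # Step 4: describe each hidden unit's clusters, recursing when the
    # unit still has too many inputs.
    hidden_rules = {}
    for h in net.hidden_units:
        if len(net.inputs_of(h)) < uc:
            hidden_rules[h] = x2r(net.activation_table(h, clusters[h]))
        else:
            # Split the unit: its cluster indices become one-hot targets
            # of a fresh subnetwork over the inputs feeding it.
            sub_inputs, sub_targets = net.split_unit(h, clusters[h])
            hidden_rules[h] = rx(train_and_prune, x2r, sub_inputs,
                                 sub_targets, uc)
    # Step 5: substitute the hidden-unit rules into the output rules.
    return merge_rules(output_rules, hidden_rules)

Here cluster_activations, enumerate_outputs, and merge_rules are likewise placeholders; concrete sketches of the clustering and of the combination enumeration appear later in this section.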
We shall illustrate how algorithm RX works using the splice-junction domain in the next section.
2.1 The Splice-Junction Problem. We trained a network to solve the splice-junction problem. This real-world problem, arising in molecular biology, has been widely used as a test data set for knowledge-based neural network training and rule-extraction algorithms (Towell and Shavlik 1993). The total number of patterns in the data set is 3175, with each pattern consisting of 60 attributes. Each attribute corresponds to a DNA nucleotide and takes one of the following four values: G, T, C, or A. Each pattern is classified as IE (intron-exon boundary), EI (exon-intron boundary), or N (neither) according to the type of boundary at the center of the DNA sequence. Of the 3175 patterns, 1006 were used as training data. The four attribute values G, T, C, and A were coded as {1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, and {0, 0, 0, 1}, respectively. Including the input for bias, 241 input units were present in the original neural network. The target output for each pattern in class IE was {1, 0, 0}, in class EI {0, 1, 0}, and in class N {0, 0, 1}. Five hidden units were sufficient to give the network an accuracy rate of more than 95% on the training data. We considered a pattern of type IE to be correctly classified as long as the largest activation value was found in the first output unit. Similarly, patterns of type EI and N were considered correctly classified if the largest activation value was obtained in the second and third output unit, respectively. The transfer functions used were the hyperbolic tangent function at the hidden layer and the sigmoid function at the output layer. The cross-entropy error measure was used as the objective function during training. To encourage weight decay, a penalty term was added to this objective function; the details of this penalty term and of the pruning algorithm are described in Setiono (1997). Aiming to achieve an accuracy rate comparable to those reported in Towell and Shavlik (1993), we terminated the pruning algorithm when the accuracy of the network dropped below 92%. The resulting pruned network is depicted in Figure 1. There was a large reduction in the number of network connections: of the original 1220 weights in the network, only 16 remained. Only 10 input units (including the bias input unit) remained out of the original 241 inputs. These inputs corresponded to 7 of the 60 attributes present in the original data. Two of the five original hidden units were also removed. As we shall see in the next subsection, the number of input units and hidden units plays a crucial role in determining the complexity of the rules extracted from the network. The accuracy of the pruned network is summarized in Table 1.
1 There are 15 more patterns in the data set that we did not use because one or more attributes has a value that is not G, T, C, or A.
Figure 1: Pruned network for the splice-junction problem. Following convention, the original attributes have been numbered sequentially from −30 to −1 and 1 to 30. If the expression @n = X is true, then the corresponding input value is 1; otherwise it is 0.

Table 1: Accuracy of the Pruned Network on the Training and Testing Data Sets.

                 Training data                      Testing data
Class    Errors (total)   Accuracy (%)     Errors (total)   Accuracy (%)
IE       13 (243)         94.65            32 (765)         95.82
EI       11 (228)         95.18            42 (762)         94.49
N        53 (535)         90.09            150 (1648)       90.90
Total    77 (1006)        92.35            224 (3175)       92.94
2.2 Clustering Hidden-Unit Activations. The range of the hyperbolic tangent function is the interval [−1, 1]. If this function is used as the transfer function at the hidden layer, a hidden-unit activation value may lie anywhere within this interval. However, it is possible to replace each activation value by a discrete value without causing too much deterioration in the accuracy of the network. This is done by the following clustering algorithm.
Hidden-Unit Activation-Values Clustering Algorithm
1. Let ε ∈ (0, 1). Let D be the number of discrete activation values in the hidden unit. Let δ1 be the activation value in the hidden unit for the first pattern in the training set. Let H(1) = δ1, count(1) = 1, and sum(1) = δ1; set D = 1.
2. For each pattern pi, i = 2, 3, . . . , k, in the training set:
• Let δ be its activation value.
• If there exists an index j̄ such that |δ − H(j̄)| = min_{j ∈ {1, 2, . . . , D}} |δ − H(j)| and |δ − H(j̄)| ≤ ε, then set count(j̄) := count(j̄) + 1 and sum(j̄) := sum(j̄) + δ; else set D := D + 1, H(D) = δ, count(D) = 1, and sum(D) = δ.
3. Replace each H(j) by the average of all activation values that have been clustered into cluster j: H(j) := sum(j)/count(j), j = 1, 2, . . . , D.
Step 1 initializes the algorithm: the first activation value forms the first cluster. Step 2 checks whether subsequent activation values can be clustered into one of the existing clusters. The distance between the activation value under consideration and its nearest cluster, |δ − H(j̄)|, is computed. If this distance is at most ε, then the activation value is placed in cluster j̄; otherwise it forms a new cluster. Once the discrete values of all hidden units have been obtained, the accuracy of the network is checked again with the activation values at the hidden units replaced by their discretized values. An activation value δ is replaced by H(j̄), where the index j̄ is chosen such that j̄ = argmin_j |δ − H(j)|. If the accuracy of the network falls below the required level, then ε must be decreased and the clustering algorithm run again. For a sufficiently small ε, it is always possible to match the accuracy of the network with continuous activation values, although the resulting number of different discrete activations can be impractically large. The best ε value is one that gives a high accuracy rate after the clustering and at the same time generates as few clusters as possible. A simple way of obtaining a good value for ε is by searching in the interval (0, 1): the number of clusters and the accuracy of the network can be checked for all values ε = iξ, i = 1, 2, . . . , where ξ is a small positive scalar, such as 0.10. Note also that it is not necessary to fix the same value of ε for all hidden units. For the pruned network depicted in Figure 1, we found that the value ε = 0.6 worked well for all three hidden units.
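For concreteness, here is a minimal Python sketch of this clustering step (our own code, not the author's implementation; the function and variable names are our choices):

def cluster_activations(activations, eps):
    # One pass over the activation values; centers holds the provisional
    # cluster values H, which are replaced by averages at the end.
    centers, counts, sums = [], [], []
    for delta in activations:
        if centers:
            j = min(range(len(centers)), key=lambda i: abs(delta - centers[i]))
            if abs(delta - centers[j]) <= eps:
                counts[j] += 1
                sums[j] += delta
                continue
        centers.append(delta)  # start a new cluster
        counts.append(1)
        sums.append(delta)
    return [s / c for s, c in zip(sums, counts)]

def discretize(delta, centers):
    # Nearest-center assignment used when rechecking the network's accuracy.
    return min(centers, key=lambda h: abs(delta - h))

Rerunning cluster_activations with a smaller eps corresponds to the decrease of ε described above.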
Table 2: Accuracy of the Pruned Network on the Training and Testing Data Sets with Discrete Activation Values at the Hidden Units.

                 Training data                      Testing data
Class    Errors (total)   Accuracy (%)     Errors (total)   Accuracy (%)
IE       16 (243)         93.42            40 (765)         94.77
EI       8 (228)          96.49            30 (762)         96.06
N        52 (535)         90.28            142 (1648)       91.38
Total    76 (1006)        92.45            212 (3175)       93.32
The results of the clustering algorithm were as follows:
1. Hidden unit 1: There were two discrete values, −0.04 and −0.99. Of the 1006 training data, 573 patterns had the first value, and 433 patterns had the second value.
2. Hidden unit 2: There were three discrete values, 0.96, −0.10, and −0.88. The distribution of the training data was 760, 72, and 174, respectively.
3. Hidden unit 3: There were two discrete values, 0.96 and 0. Of the 1006 training data, 520 patterns had the first value, and 486 patterns had the second value.
The accuracy of the network with these discrete activation values is summarized in Table 2. With ε = 0.6, the accuracy was not the same as that achieved by the original network; in fact, it was slightly higher on both the training data and the testing data. It appears that a much smaller number of possible values for the hidden-unit activations reduces overfitting of the data. With a sufficiently small value for ε, it is always possible to maintain the accuracy of a network after its activation values have been discretized. However, there is no guarantee that the accuracy can increase in general. Two discrete values at hidden unit 1, three at hidden unit 2, and two at hidden unit 3 produced 12 possible outputs for the network. These outputs are listed in Table 3.
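The construction of Table 3 amounts to propagating every combination of clustered activation values through the hidden-to-output weights. A minimal sketch (our own code; W and b stand for the hidden-to-output weight matrix and the output biases of the pruned network, which are not listed in the article, and a sigmoid output layer is assumed):

import itertools
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def enumerate_outputs(clusters, W, b):
    # clusters[i] lists the discrete activation values of hidden unit i;
    # W[k][i] is the weight from hidden unit i to output unit k.
    table = []
    for combo in itertools.product(*clusters):
        outputs = [sigmoid(sum(W[k][i] * a for i, a in enumerate(combo)) + b[k])
                   for k in range(len(W))]
        predicted = max(range(len(outputs)), key=outputs.__getitem__)
        table.append((combo, outputs, predicted))
    return table

For the pruned splice-junction network this yields the 2 × 3 × 2 = 12 labeled combinations of Table 3, which are then handed to X2R.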
Table 3: Output of the Network with Discrete Activation Values at the Hidden Units.

Hidden unit activations        Predicted output              Classification
  1       2       3             1      2      3
−0.04    0.96    0.96          0.44   0.01   0.20            IE
−0.04    0.96    0.00          0.44   0.01   0.99            N
−0.04   −0.10    0.96          0.44   0.62   0.00            EI
−0.04   −0.10    0.00          0.44   0.62   0.42            EI
−0.04   −0.88    0.96          0.44   0.99   0.00            EI
−0.04   −0.88    0.00          0.44   0.99   0.02            EI
−0.99    0.96    0.96          0.00   0.01   0.91            N
−0.99    0.96    0.00          0.00   0.01   1.00            N
−0.99   −0.10    0.96          0.00   0.62   0.06            EI
−0.99   −0.10    0.00          0.00   0.62   0.97            N
−0.99   −0.88    0.96          0.00   0.99   0.00            EI
−0.99   −0.88    0.00          0.00   0.99   0.43            EI
Let us define the notation α(i, j) to denote hidden unit i taking its jth activation value. X2R generated the following rules, which classify each of the predicted output classes:
• If α(1, 1) and α(2, 1) and α(3, 1), then output = IE.
• Else if α(2, 3), then output = EI.
• Else if α(1, 1) and α(2, 2), then output = EI.
• Else if α(1, 2) and α(2, 2) and α(3, 1), then output = EI.
• Default output: N.
Hidden unit 1 has only two inputs, one of which is the bias input, and hidden unit 3 has only one input. Very simple rules in terms of the original attributes that describe the activation values of these hidden units were generated by X2R. The rules for these hidden units are:
• If @-1=G, then α(1, 1); else α(1, 2).
• If @-2=A, then α(3, 1); else α(3, 2).
If the expression @n=X is true, then the corresponding input value is 1; otherwise the input value is 0. The seven input units feeding the second hidden unit can generate 64 possible instances. The three inputs for attribute @5 (@5=T, @5=C, and @5=A) can produce 4 different combinations—(0, 0, 0), (0, 0, 1), (0, 1, 0), or (1, 0, 0)—while the other 4 inputs can generate 16 different possible combinations. Rules that define how each of the three activation values of this hidden unit is obtained are not trivial to extract from the network. How this difficulty can be overcome is described in the next subsection.
2.3 Hidden-Unit Splitting and Creation of a Subnetwork. If we set the value of UC in algorithm RX to be less than 7, then in order to extract rules that describe how the three activation values of the second hidden unit are obtained, a subnetwork will be created.
Since seven inputs determined the three activation values, the new network had the same set of inputs. The number of output units corresponded to the number of activation values; in this case, three output units were needed. Each of the 760 patterns whose activation value was equal to 0.96 was assigned a target output of {1, 0, 0}. Patterns with activation values of −0.10 and −0.88 were assigned target outputs of {0, 1, 0} and {0, 0, 1}, respectively. In order to extract rules with the same accuracy rate as the network accuracy summarized in Table 2, the new network was trained to achieve a 100% accuracy rate; five hidden units were sufficient to give the network this rate. The pruning process was terminated as soon as the accuracy dropped below 100%. When only 7 of the original 240 inputs were selected, many patterns had duplicates. To reduce the time needed to train and prune the new network, all duplicates were removed, and the 61 unique patterns left were used as training data. Since the network was trained to achieve a 100% accuracy rate, correct predictions of these 61 patterns guaranteed that the same rate of accuracy was obtained on all 1006 patterns. The pruned network is depicted in Figure 2. Three of the five hidden units remained after pruning. Of the original 55 connections in the fully connected network, 16 were still present. The most interesting aspect of the pruned network was that the activation values of the first hidden unit were determined solely by four inputs, while those of the second hidden unit were determined by the remaining three inputs. The activation values in the three hidden units were clustered. The value ε = 0.2 was found to be small enough for the network with discrete activation values to maintain a 100% accuracy rate and large enough that there were not too many different discrete values. The results were as follows:
1. Hidden unit 1: There were three discrete values: 0.98, 0.00, and −0.93. Of the original 1006 (61 training) patterns, 593 (39) patterns had the first value, 227 (15) the second value, and 186 (7) the third value.
2. Hidden unit 2: There were three discrete values: 0.88, 0.33, and −0.99. The distribution of the training data was 122 (8), 174 (8), and 710 (45), respectively.
3. Hidden unit 3: There was only one discrete value: 0.98.
Since there were three different activation values at each of hidden units 1 and 2 and only one value at hidden unit 3, there were nine possible outputs for the network. These possible outputs are summarized in Table 4.
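Forming the subnetwork's training set is mechanical: each pattern keeps only the inputs feeding the split unit, its target is a one-hot vector indexing the cluster of its activation value, and duplicate rows are dropped. A small sketch under the same caveats as before (the names are ours):

def subnetwork_training_set(input_rows, activations, centers):
    # input_rows[p] holds only the inputs feeding the split hidden unit;
    # activations[p] is that unit's continuous activation for pattern p.
    seen, rows, targets = set(), [], []
    for x, a in zip(input_rows, activations):
        key = tuple(x)
        if key in seen:  # drop duplicate patterns (1006 -> 61 here)
            continue
        seen.add(key)
        j = min(range(len(centers)), key=lambda i: abs(a - centers[i]))
        target = [0] * len(centers)
        target[j] = 1  # one-hot target for the j-th discrete value
        rows.append(list(x))
        targets.append(target)
    return rows, targets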
Figure 2: Pruned network for the second hidden unit of the original network depicted in Figure 1.
Table 4: Output of the Network with Discrete Activation Values at the Hidden Units for the Pruned Network in Figure 2.

Hidden unit activations        Predicted output              Classification
  1       2       3             1      2      3
 0.98    0.88    0.98          0.01   0.66   0.00            α(2, 2)
 0.98    0.33    0.98          0.99   0.21   0.00            α(2, 1)
 0.98   −0.99    0.98          1.00   0.00   0.00            α(2, 1)
 0.00    0.88    0.98          0.00   0.66   0.98            α(2, 3)
 0.00    0.33    0.98          0.00   0.21   0.02            α(2, 2)
 0.00   −0.99    0.98          1.00   0.00   0.00            α(2, 1)
−0.93    0.88    0.98          0.00   0.66   1.00            α(2, 3)
−0.93    0.33    0.98          0.00   0.21   1.00            α(2, 3)
−0.93   −0.99    0.98          1.00   0.00   0.00            α(2, 1)
In a similar fashion as before, let β(i, j) denote hidden unit i taking its jth activation value. The following rules, which classify each of the predicted output classes, were generated by X2R from the data in Table 4:
• If β(2, 3), then α(2, 1).
• Else if β(1, 1) and β(2, 2), then α(2, 1).
• Else if β(1, 1) and β(2, 1), then α(2, 2).
• Else if β(1, 2) and β(2, 2), then α(2, 2).
• Default output: α(2, 3).
X2R extracted rules, in terms of the original attributes, describing how each of the six different activation values of the two hidden units was obtained. These rules are given in Appendix A as level 3 rules. The complete rules generated from the splice-junction domain after the merging of rules in Step 5 of the RX algorithm are also given in Appendix A. Since the subnetwork created for the second hidden unit of the original network had been trained to achieve a 100% accuracy rate and the value of ε used to cluster the activation values was also chosen to retain this level of accuracy, the accuracy rates of the rules were exactly the same as those of the network with discrete hidden-unit activation values listed in Table 2: 92.45% on the training data set and 93.32% on the test data set. The results of our experiment suggest that the extracted rules are able to perfectly mimic the network from which they are extracted. In general, if there is a sufficient number of discretized hidden-unit activation values and the subnetworks are trained and pruned to achieve a 100% accuracy rate, then the rules extracted will have the same accuracy rate as the network on the training data. It is possible to mimic the behavior of the pruned network with discrete activation values even on future patterns not in the training data set. Let S be the set of all patterns that can be generated by the inputs connected to a hidden unit. When the discrete activation values in the hidden unit are determined only by a subset of S, it is still possible to extract rules that will cover all possible instances that may be encountered in the future. For each pattern that is not represented in the training data, its continuous activation values are computed using the weights of the pruned network. Each of these activation values is assigned the discrete value of the hidden unit that is closest to it. Once the complete rules have been obtained, the network topology, the weights on the network connections, and the discrete hidden-unit activation values need not be retained. For our experiment on the splice-junction domain, we found that rules that cover all possible instances were generated without having to compute the activation values of patterns not already present in the training data. This was due to the relatively large number of patterns used during training and the small number of inputs found relevant for determining the hidden-unit activation values.
3 Experiment on a Data Set with Continuous Attributes
In the previous section, we gave a detailed description of how rules can be extracted from a data set with only nominal attributes. In this section we shall illustrate how rules can also be extracted from a data set having continuous attributes.
Table 5: Relevant Attributes Found by ChiMerge and Their Subintervals.

Attribute   Subintervals
A35         [0, 0.1300), [0.1300, 1]
A36         [0, 0.5070), [0.5070, 1]
A44         [0, 0.4280), [0.4280, 0.7760), [0.7760, 1]
A45         [0, 0.2810), [0.2810, 1]
A47         [0, 0.0610), [0.0610, 0.0910), [0.0910, 0.1530), [0.1530, 1]
A48         [0, 0.0762), [0.0762, 1]
A49         [0, 0.0453), [0.0453, 1]
A51         [0, 0.0215), [0.0215, 1]
A52         [0, 0.0054), [0.0054, 1]
A54         [0, 0.0226), [0.0226, 1]
A55         [0, 0.0047), [0.0047, 0.0057), [0.0057, 0.0064), [0.0064, 0.0127), [0.0127, 1]
The sonar-returns classification problem (Gorman and Sejnowski 1988) was chosen for this purpose. The data set consisted of 208 sonar returns, each represented by 60 real numbers between 0.0 and 1.0. The task was to distinguish between returns from a metal cylinder and those from a cylindrically shaped rock. Returns from a metal cylinder were obtained at aspect angles spanning 90 degrees and those from rocks at aspect angles spanning 180 degrees. Gorman and Sejnowski (1988) used three-layer feedforward neural networks with a varying number of hidden units to solve this problem. Two series of experiments on this data set were reported: an aspect-angle independent series and an aspect-angle dependent series. In the aspect-angle independent series, the training patterns and the testing patterns were selected randomly from the total set of 208 patterns. In the aspect-angle dependent series, the training and the testing sets were carefully selected to contain patterns from all available aspect angles. It is not surprising that the performance of networks in the aspect-angle dependent series of experiments was better than that in the aspect-angle independent series. We chose the aspect-angle dependent data set to test the rule-extraction algorithm. The training set consisted of 49 sonar returns from metal cylinders and 55 sonar returns from rocks, and the testing set consisted of 62 sonar returns from metal cylinders and 42 sonar returns from rocks. In order to facilitate rule extraction, the numeric attributes of the data were first converted into discrete attributes. This was done by a modified version of the ChiMerge algorithm (Kerber 1992). The training patterns were first sorted according to the value of the attribute being discretized. Initial subintervals were formed by placing each unique value of the attribute in its own subinterval. The χ² value of each pair of adjacent subintervals was then computed, and the pair of adjacent subintervals with the lowest χ² value was merged.
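A minimal sketch of this merging loop (our own code, neither Kerber's nor the author's implementation; the is_consistent argument stands for the modified stopping criterion described in the next paragraph, namely that no two patterns of different classes may receive identical discretized values):

def chi2(counts_a, counts_b):
    # Chi-square statistic for two adjacent subintervals; counts_a and
    # counts_b are per-class frequency lists of equal length.
    total_a, total_b = sum(counts_a), sum(counts_b)
    n = total_a + total_b
    stat = 0.0
    for ca, cb in zip(counts_a, counts_b):
        col = ca + cb
        if col == 0:
            continue
        ea = max(total_a * col / n, 1e-6)  # expected frequencies
        eb = max(total_b * col / n, 1e-6)
        stat += (ca - ea) ** 2 / ea + (cb - eb) ** 2 / eb
    return stat

def chimerge(boundaries, class_counts, is_consistent):
    # boundaries[i] is the cut between subintervals i and i + 1;
    # class_counts[i] is the per-class frequency list of subinterval i.
    while len(class_counts) > 1:
        scores = [chi2(class_counts[i], class_counts[i + 1])
                  for i in range(len(class_counts) - 1)]
        i = min(range(len(scores)), key=scores.__getitem__)
        merged = [a + b for a, b in zip(class_counts[i], class_counts[i + 1])]
        trial_counts = class_counts[:i] + [merged] + class_counts[i + 2:]
        trial_bounds = boundaries[:i] + boundaries[i + 1:]
        if not is_consistent(trial_bounds):
            break  # a further merge would make the discretized data inconsistent
        class_counts, boundaries = trial_counts, trial_bounds
    return boundaries, class_counts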
In Kerber's original algorithm, merging continues until all pairs of subintervals have χ² values exceeding a user-defined threshold. Instead of setting a fixed threshold value as a stopping condition for merging, we continued merging as long as there was no inconsistency in the discretized data. By inconsistency, we mean that two or more patterns from different classes are assigned identical discretized values. Using this strategy, we found that the values of many attributes were merged into just one subinterval. Attributes whose values had been merged into one interval could be removed without introducing any inconsistency in the discretized data set. Let us denote the original 60 attributes by A1, A2, . . . , A60. After discretization, eleven of these attributes were discretized into two or more discrete values. These attributes and the subintervals found by the ChiMerge algorithm are listed in Table 5. The thermometer coding scheme (Smith 1993) was used for the discretized attributes. Using this coding scheme, 28 binary inputs were needed; let us denote these inputs by I1, I2, . . . , I28. All patterns with attribute A35 less than 0.1300 were coded by I1 = 0, I2 = 1, while those with attribute value greater than or equal to 0.1300 were coded by I1 = 1, I2 = 1. The other 10 attributes were coded in similar fashion. A network with six hidden units and two output units was trained using the 104 binarized-feature training patterns. One additional input for the hidden-unit thresholds was added to the network, giving a total of 29 inputs. After training was completed, network connections were removed by pruning. The pruning process was continued until the accuracy of the network dropped below 90%, and the smallest network with an accuracy of more than 90% was saved for rule extraction. Two hidden units and 15 connections were present in the pruned network. Ten inputs—I1, I3, I8, I11, I12, I14, I20, I22, I25, and I26—were connected to the first hidden unit. Only one input, I17, was connected to the second hidden unit. Two connections connected each of the two hidden units to the two output units. The hidden-unit activation-values clustering algorithm found three discrete activation values, −0.95, 0.94, and −0.04, at the first hidden unit. The numbers of training patterns having these activation values were 58, 45, and 1, respectively. At the second hidden unit there was only one value, −1. The overall accuracy of the network was 91.35%: all 49 metal cylinder returns in the training data were correctly classified, and 9 rock returns were incorrectly classified as metal cylinder returns. By computing the predicted outputs of the network with discretized hidden-unit activation values, we found that as long as the activation value of the first hidden unit equals −0.95, a pattern is classified as a sonar return from a metal cylinder. Because ten inputs were connected to this hidden unit, a new network was formed to allow us to extract rules. It was sufficient to distinguish between activation values equal to −0.95 and those not equal to −0.95, that is, 0.94 and −0.04. Hence the number of output units in the new network was two.
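The thermometer code itself is easy to reproduce. A hedged sketch follows (our own code; we infer from the A35 example above that a value falling in the m-th subinterval, counting from the lowest, sets the m rightmost of that attribute's bits):

import bisect

def thermometer_encode(value, cutpoints):
    # cutpoints are the interior boundaries of an attribute's subintervals,
    # e.g., [0.1300] for A35 in Table 5. An attribute with k subintervals
    # gets k bits; A35 < 0.1300 -> (0, 1) and A35 >= 0.1300 -> (1, 1).
    m = bisect.bisect_right(cutpoints, value) + 1  # 1-based subinterval index
    k = len(cutpoints) + 1
    return [0] * (k - m) + [1] * m

Concatenating the codes of the 11 attributes in Table 5 gives 2 + 2 + 3 + 2 + 4 + 2 + 2 + 2 + 2 + 2 + 5 = 28 bits, matching the inputs I1, . . . , I28.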
Figure 3: Pruned network trained to distinguish between patterns having discretized activation values equal to −0.95 and those not equal to −0.95.
Samples with activation values equal to −0.95 were assigned target values of {1, 0}; all other patterns were assigned target values of {0, 1}. The number of hidden units was six. The pruned network is depicted in Figure 3. Note that this network's accuracy rate was 100%: it correctly distinguished all 58 training patterns having activation values equal to −0.95 from those having activation values 0.94 or −0.04. In order to preserve this accuracy rate, six and three discrete activation values were required at the two hidden units left in the pruned network. At the first hidden unit, the activation values found were −1.00, −0.54, −0.12, 0.46, −0.76, and 0.11. At the second hidden unit, the values were −1.00, 1.00, and 0.00. All 18 possible combinations of these values were given as input to X2R. The target for each combination was computed using the weights on the connections from the hidden units to the output units of the network. Let β(i, j) denote hidden unit i taking its jth activation value. The following rules describing the classification of the sonar returns were generated by X2R:
• If β(1, 1), then output = Metal.
• Else if β(2, 2), then output = Metal.
• Else if β(1, 2) and β(2, 3), then output = Metal.
• Else if β(1, 3) and β(2, 3), then output = Metal.
• Else if β(1, 5), then output = Metal.
• Default output: Rock.
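Such an ordered rule list is simply a first-match-wins chain. A sketch of how it would be applied (our own encoding, with b1 and b2 the 1-based cluster indices of the two hidden units):

def classify_sonar(b1, b2):
    # Ordered rules extracted by X2R; the first matching condition wins,
    # and Rock is the default class.
    if b1 == 1:
        return "Metal"
    if b2 == 2:
        return "Metal"
    if b1 == 2 and b2 == 3:
        return "Metal"
    if b1 == 3 and b2 == 3:
        return "Metal"
    if b1 == 5:
        return "Metal"
    return "Rock"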
The activation values of the first hidden unit were determined by five inputs: I1, I8, I12, I25, and I26. Those of the second hidden unit were determined by only four inputs: I3, I11, I14, and I22. Four activation values at the first hidden unit (−1.00, −0.54, −0.12, and −0.76) and two values at the second hidden unit (1.00 and 0.00) were involved in the rules that classified a pattern as a return from a metal cylinder. The rules that determined these activation values were as follows:
• Hidden unit 1 (initially, β(1, 1) = β(1, 2) = β(1, 3) = β(1, 5) = false):
— If (I25 = 0) and (I26 = 1), then β(1, 1).
— If (I8 = 1) and (I25 = 1), then β(1, 2).
— If (I8 = 0) and (I12 = 1) and (I26 = 0), then β(1, 2).
— If (I8 = 0) and (I12 = 1) and (I25 = 1), then β(1, 3).
— If (I1 = 0) and (I12 = 0) and (I26 = 0), then β(1, 3).
— If (I8 = 1) and (I26 = 0), then β(1, 5).
• Hidden unit 2 (initially, β(2, 2) = β(2, 3) = false):
— If (I3 = 0) and (I14 = 1), then β(2, 2).
— If (I22 = 1), then β(2, 2).
— If (I3 = 0) and (I11 = 0) and (I14 = 0) and (I22 = 0), then β(2, 3).
With the thermometer coding scheme used to encode the original continuous data, it is easy to obtain rules in terms of the original attributes and their values. The complete set of rules generated for the sonar-returns data set is given in Appendix B. A total of eight rules that classify a return as that from a metal cylinder were generated. Two metal returns and three rock returns satisfied the conditions of one of these eight rules. With this rule removed, the remaining seven rules correctly classified 96 of the 104 training data (92.31%) and 101 of the 104 testing data (97.12%).
4 Discussion and Comparison with Related Work
We have shown how rules can be extracted from a trained neural network without making any assumptions about the network's activations or having initial knowledge about the problem domain. If some knowledge is available, however, it can always be incorporated into the network. For example, connections in the network from inputs thought to be irrelevant can be given large penalty parameters during training, while those thought to be relevant can be given zero or small penalty parameters (Setiono 1997). Our algorithm does not require a thresholded activation function to force the activation values to be zero or one (Towell and Shavlik 1993; Fu 1991), nor does it require the weights of the connections to be restricted to a certain range (Blassig 1994).
Craven and Shavlik (1994) mentioned that one of the difficulties of search-based methods such as those of Saito and Nakano (1988) and Fu (1991) is the complexity of the rules. Some rules are found deep in the search space, and these search methods, having exponential complexity, may not be effective. Our algorithm, in contrast, finds intermediate rules embedded in the hidden units by training and pruning new subnetworks. The topology of any subnetwork is always the same, a three-layer feedforward one, regardless of how deep these rules are in the search space. Each hidden unit, if necessary, is split into several output units of a new subnetwork. When more than one new network is created, each can be trained independently. As the activation values of a hidden unit are normally determined by a subset of the original inputs, we can substantially speed up the training of a subnetwork by using only the relevant inputs and removing duplicate patterns. The use of a standard three-layer network is another advantage of our method; any off-the-shelf training algorithm can be used. The accuracy and the number of rules generated by our algorithm are better than those obtained by C4.5 (Quinlan 1993), a popular machine learning algorithm that extracts rules from decision trees. For the splice-junction problem, C4.5's accuracy rate on the testing data was 92.13%, and the number of rules generated was 43. Our algorithm achieved about 1% higher accuracy (93.32%) with far fewer rules (10). The MofN algorithm achieved a 92.8% average accuracy rate, with 20 rules and more than 100 conditions per rule; these figures were obtained from 10 repetitions of tenfold cross-validation on a set of 1000 randomly selected training patterns (Towell and Shavlik 1993). Using 30% of the 3190 samples for training, 10% for cross-validation, and 60% for testing, the gradient descent symbolic rule generation method was reported to achieve an accuracy of 93.25%, with three intermediate rules for the hidden units and two rules for the two output units defining the classes EI and IE (Blassig 1994). For the sonar-returns classification problem, the accuracy of the rules generated by our algorithm on the testing set was 2.88% higher (97.12% versus 94.24%) than that of C4.5. The number of rules generated by RX was only seven, three fewer than the number of rules generated by C4.5.
5 Conclusion
Neural networks are often viewed as black boxes. Although their predictive accuracy is high, one usually cannot understand why a particular outcome is predicted. In this paper, we have attempted to open up these black boxes. Two factors make this possible. The first is a robust pruning algorithm. When redundant weights are eliminated, redundant input and hidden units are identified and removed from the network. Removal of these redundant units significantly simplifies the process of rule extraction and the extracted rules themselves. The second factor is the clustering of the hidden-unit activation values. The fact that the number of distinct activation values at
the hidden units can be made small enough enables us to extract simple rules. An important feature of our rule-extraction algorithm is its recursive nature. When, after pruning, a hidden unit is still connected to a relatively large number of inputs, a new network is formed in order to generate rules that describe that unit's activation values, and the rule-extraction algorithm is applied to this new network. By merging the rules extracted from the new network and the rules extracted from the original network, hierarchical rules are generated. The viability of the proposed method has been shown on two sets of real-world data. Compact sets of rules that achieve a high accuracy rate were extracted for these data sets.
Appendix A
The following rules are generated by the algorithm for the splice-junction domain (initial values for β(i, j) and α(i, j) are false):
• Level 3 (rules that define the activation values of the subnetwork depicted in Fig. 2):
1. If (@4=A ∧ @5=C) ∨ (@4≠A ∧ @5=G), then β(1, 2); else if (@4=A ∧ @5=G), then β(1, 3); else β(1, 1).
2. If (@1=G ∧ @2=T ∧ @3=A), then β(2, 1); else if (@1=G ∧ @2=T ∧ @3≠A), then β(2, 2); else β(2, 3).
• Level 2 (rules that define the activation values of the second hidden unit of the network depicted in Fig. 1): If β(2, 3) ∨ (β(1, 1) ∧ β(2, 2)), then α(2, 1); else if (β(1, 1) ∧ β(2, 1)) ∨ (β(1, 2) ∧ β(2, 2)), then α(2, 2); else α(2, 3).
• Level 1 (rules extracted from the pruned network depicted in Fig. 1): If (@-1=G ∧ α(2, 1) ∧ @-2=A), then IE; else if α(2, 3) ∨ (@-1=G ∧ α(2, 2)) ∨ (@-1≠G ∧ α(2, 2) ∧ @-2=A), then EI; else N.
On removal of the intermediate rules in levels 2 and 3, we obtain the following equivalent set of rules:
IE :- @-2 'AGH----'.
IE :- @-2 'AG-V---'.
IE :- @-2 'AGGTBAT'.
IE :- @-2 'AGGTBAA'.
IE :- @-2 'AGGTBBH'.
EI :- @-2 '--GT-AG'.
EI :- @-2 '--GTABG'.
EI :- @-2 '--GTAAC'.
EI :- @-2 '-GGTAAW'.
EI :- @-2 '-GGTABH'.
EI :- @-2 '-GGTBBG'.
EI :- @-2 '-GGTBAC'.
EI :- @-2 'AHGTAAW'.
EI :- @-2 'AHGTABH'.
EI :- @-2 'AHGTBBG'.
EI :- @-2 'AHGTBAC'.
N.
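Each of these rules is a template match over the seven nucleotides at positions −2, −1, 1, . . . , 5, using the letter codes explained after Table A.1. A small sketch of such a matcher (our own code, not part of the original algorithm):

AMBIGUITY = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "B": "CGT", "W": "AT", "H": "ACT", "V": "ACG", "-": "ACGT",
}

def matches(window, pattern):
    # window: the nucleotides at positions -2, -1, 1, ..., 5;
    # pattern: a template such as 'AGGTBAT'.
    return all(base in AMBIGUITY[code] for base, code in zip(window, pattern))

def classify_boundary(window, rules):
    # rules: ordered list of (label, pattern) pairs; N is the default class.
    for label, pattern in rules:
        if matches(window, pattern):
            return label
    return "N"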
Table A.1: Accuracy of the Extracted Rules on the Splice-Junction Training and Testing Data Sets.

                 Training data                      Testing data
Class    Errors (total)   Accuracy (%)     Errors (total)   Accuracy (%)
IE       17 (243)         93.00            48 (765)         93.73
EI       10 (228)         95.61            40 (762)         94.75
N        49 (535)         90.84            134 (1648)       91.50
Total    76 (1006)        92.45            222 (3175)       93.01
An extended version of standard Prolog notation has been used to express the rules concisely. For example, the first rule indicates that an A in position −2, a G in position −1, and an H in position 1 will result in the classification of the boundary type as intron-exon. Following convention, the letter B means C or G or T; W means A or T; H means A or C or T; and V means A or C or G. The character '-' indicates any one of the four nucleotides A, C, G, or T. Six of the sixteen rules each classify no more than three of the 1006 patterns in the training set; the accuracy rates in Table A.1 were obtained using only the remaining 10 rules for classification.
Appendix B
The following rules were generated by the algorithm for the sonar-returns classification problem (the original attributes are assumed to have been labeled A1, A2, . . . , A60):
Metal :- A36 < 0.5070 and A48 >= 0.0762.
Metal :- A54 >= 0.0226.
Metal :- A45 >= 0.2810 and A55 < 0.0057.
Metal :- A55 < 0.0064 and A55 >= 0.0057.
Metal :- A36 < 0.5070 and A45 < 0.2810 and A47 < 0.0910 and A47 >= 0.0610 and A48 < 0.0762 and A54 < 0.0226 and A55 < 0.0057.
Metal :- A35 < 0.1300 and A36 < 0.5070 and A47 < 0.0610 and A48 < 0.0762 and A54 < 0.0226 and A55 < 0.0057.
Metal :- A36 < 0.5070 and A45 >= 0.2810 and A47 < 0.0910 and A48 < 0.0762 and A54 < 0.0226 and A55 >= 0.0064. (∗)
Metal :- A36 < 0.5070 and A45 < 0.2810 and A47 < 0.0910 and A47 >= 0.0610 and A48 < 0.0762 and A54 < 0.0226 and A55 >= 0.0064. (∗∗)
Rock.
Table B.1: Accuracy of the Extracted Rules on the Sonar-Returns Training and Testing Data Sets.

                 Training data                      Testing data
Class    Errors (total)   Accuracy (%)     Errors (total)   Accuracy (%)
Metal    2 (49)           95.92            0 (62)           100.00
Rock     6 (55)           89.09            3 (42)           92.86
Total    8 (104)          92.31            3 (104)          97.12
The rule marked (∗) does not classify any pattern in the training set. The conditions of the rule marked (∗∗) are satisfied by two metal returns and three rock returns in the training set. The accuracy rates in Table B.1 are obtained using the six unmarked rules. Acknowledgments I thank my colleague Huan Liu for providing the results of C4.5 and the two anonymous reviewers for their comments and suggestions, which greatly improved the presentation of this article.
References

Blassig, R. 1994. GDS: Gradient descent generation of symbolic classification rules. In Advances in Neural Information Processing Systems, Vol. 6, pp. 1093–1100. Morgan Kaufmann, San Mateo, CA.
Craven, M. W., and Shavlik, J. W. 1994. Using sampling and queries to extract rules from trained neural networks. In Proc. of the Eleventh International Conference on Machine Learning. Morgan Kaufmann, San Mateo, CA.
de Villiers, J., and Barnard, E. 1992. Backpropagation neural nets with one and two hidden layers. IEEE Trans. on Neural Networks 4(1), 136–141.
Fu, L. 1991. Rule learning by searching on adapted nets. In Proc. of the Ninth National Conference on Artificial Intelligence, pp. 590–595. AAAI Press/MIT Press, Menlo Park, CA.
Gorman, R. P., and Sejnowski, T. J. 1988. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks 1, 75–89.
Hornik, K. 1991. Approximation capabilities of multilayer feedforward neural networks. Neural Networks 4, 251–257.
Kerber, R. 1992. ChiMerge: Discretization of numeric attributes. In Proc. of the Tenth National Conference on Artificial Intelligence, pp. 123–128. AAAI Press/MIT Press, Menlo Park, CA.
Lapedes, A., Barnes, C., Burks, C., Farber, R., and Sirotkin, K. 1989. Application of neural networks and other machine learning algorithms to DNA sequence analysis. In Computers and DNA, pp. 157–182. Addison-Wesley, Redwood City, CA.
Liu, H., and Tan, S. T. 1995. X2R: A fast rule generator. In Proceedings of IEEE International Conference on Systems, Man and Cybernetics, pp. 1631–1635. IEEE Press, New York.
Murphy, P. M., and Aha, D. W. 1992. UCI repository of machine learning databases. Machine-readable data repository, University of California, Irvine, CA.
Quinlan, J. R. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA.
Saito, K., and Nakano, R. 1988. Medical diagnosis expert system based on PDP model. In Proc. IEEE International Conference on Neural Networks, pp. 1255–1262. IEEE Press, New York.
Setiono, R. 1997. A penalty-function approach for pruning feedforward neural networks. Neural Computation 9, 185–204.
Smith, M. 1993. Neural Networks for Statistical Modelling. Van Nostrand Reinhold, New York.
Thrun, S. 1995. Extracting rules from artificial neural networks with distributed representations. In Advances in Neural Information Processing Systems, Vol. 7. MIT Press, Cambridge, MA.
Towell, G. G., and Shavlik, J. W. 1993. Extracting refined rules from knowledge-based neural networks. Machine Learning 13(1), 71–101.

Received November 23, 1994; accepted March 13, 1996.
REVIEW
Communicated by Geoffrey Hinton
Probabilistic Independence Networks for Hidden Markov Probability Models Padhraic Smyth Department of Information and Computer Science, University of California at Irvine, Irvine, CA 92697-3425 USA and Jet Propulsion Laboratory 525-3660, California Institute of Technology, Pasadena, CA 91109 USA
David Heckerman Microsoft Research, Redmond, WA 98052-6399 USA
Michael I. Jordan Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas, including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper presents a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.

1 Introduction

For multivariate statistical modeling applications, such as hidden Markov modeling (HMM) for speech recognition, the identification and manipulation of relevant conditional independence assumptions can be useful for model building and analysis. There has recently been a considerable amount
of work exploring the relationships between conditional independence in probability models and structural properties of related graphs. In particular, the separation properties of a graph can be directly related to conditional independence properties in a set of associated probability models. The key point of this article is that the analysis and manipulation of generalized HMMs (more complex HMMs than the standard first-order model) can be facilitated by exploiting the relationship between probability models and graphs. The major advantages to be gained are in two areas:
• Model description. A graphical model provides a natural and intuitive medium for displaying dependencies that exist between random variables. In particular, the structure of the graphical model clarifies the conditional independencies in the associated probability models, allowing model assessment and revision.
• Computational efficiency. The graphical model is a powerful basis for specifying efficient algorithms for computing quantities of interest in the probability model (e.g., calculation of the probability of observed data given the model). These inference algorithms can be specified automatically once the initial structure of the graph is determined.
We will refer to both probability models and graphical models. Each consists of structure and parameters. The structure of the model consists of the specification of a set of conditional independence relations for the probability model or a set of (missing) edges in the graph for the graphical model. The parameters of both the probability and graphical models consist of the specification of the joint probability distribution: in factored form for the probability model and defined locally on the nodes of the graph for the graphical model. The inference problem is that of the calculation of posterior probabilities of variables of interest given observable data and a specification of the probabilistic model. The related task of maximum a posteriori (MAP) identification is the determination of the most likely state of a set of unobserved variables, given observed variables and the probabilistic model. The learning or estimation problem is that of determining the parameters (and possibly structure) of the probabilistic model from data. This article reviews the applicability and utility of graphical modeling to HMMs and various extensions of HMMs. Section 2 introduces the basic notation for probability models and associated graph structures. Section 3 summarizes relevant results from the literature on probabilistic independence networks (PINs), in particular, the relationships that exist between separation in a graph and conditional independence in a probability model. Section 4 interprets the standard first-order HMM in terms of PINs. In Section 5 the standard algorithm for inference in a directed PIN is discussed and applied to the standard HMM in Section 6. A result of interest is that the forward-backward (F-B) and Viterbi algorithms are shown to be special cases of this inference algorithm. Section 7 shows that the inference
algorithms for undirected PINs are essentially the same as those already discussed for directed PINs. Section 8 introduces more complex HMM structures for speech modeling and analyzes them using the graphical model framework. Section 9 reviews known estimation results for graphical models and discusses their potential implications for practical problems in the estimation of HMM structures, and Section 10 contains summary remarks.

2 Notation and Background

Let U = {X1, X2, . . . , XN} represent a set of discrete-valued random variables. For the purposes of this article we restrict our attention to discrete-valued random variables; however, many of the results stated generalize directly to continuous and mixed sets of random variables (Lauritzen and Wermuth 1989; Whittaker 1990). Let lowercase xi denote one of the values of variable Xi; the notation Σx1 is taken to mean the sum over all possible values of X1. Let p(xi) be shorthand for the particular probability p(Xi = xi), whereas p(Xi) represents the probability function for Xi (a table of values, since Xi is assumed discrete), 1 ≤ i ≤ N. The full joint distribution function is p(U) = p(X1, X2, . . . , XN), and u = (x1, x2, . . . , xN) denotes a particular value assignment for U. Note that this full joint distribution p(U) = p(X1, X2, . . . , XN) provides all the possible information one needs to calculate any marginal or conditional probability of interest among subsets of U. If A, B, and C are disjoint sets of random variables, the conditional independence relation A ⊥ B | C is defined such that A is independent of B given C, that is, p(A, B | C) = p(A | C)p(B | C). Conditional independence is symmetric. Note also that marginal independence (no conditioning) does not in general imply conditional independence, nor does conditional independence in general imply marginal independence (Whittaker 1990). With any set of random variables U we can associate a graph G defined as G = (V, E). V denotes the set of vertices or nodes of the graph such that there is a one-to-one mapping between the nodes in the graph and the random variables, that is, V = {X1, X2, . . . , XN}. E denotes the set of edges, {e(i, j)}, where i and j are shorthand for the nodes Xi and Xj, 1 ≤ i, j ≤ N. Edges of the form e(i, i) are not of interest and thus are not allowed in the graphs discussed in this article. An edge may be directed or undirected. Our convention is that a directed edge e(i, j) is directed from node i to node j, in which case we sometimes say that i is a parent of its child j. An ancestor of node i is a node that has as a child either i or another ancestor of i. A subset of nodes A is an ancestral set if it contains its own ancestors. A descendant of i is either a child of i or a child of a descendant of i. Two nodes i and j are adjacent in G if E contains the undirected or directed edge e(i, j). An undirected path is a sequence of distinct nodes {1, . . . , m} such that there exists an undirected or directed edge for each pair of nodes
{l, l + 1} on the path. A directed path is a sequence of distinct nodes {1, . . . , m} such that there exists a directed edge for each pair of nodes {l, l + 1} on the path. A graph is singly connected if there exists only one undirected path between any two nodes in the graph. An (un)directed cycle is a path such that the beginning and ending nodes on the (un)directed path are the same.

If E contains only undirected edges, then the graph G is an undirected graph (UG). If E contains only directed edges, then the graph G is a directed graph (DG). Two important classes of graphs for modeling probability distributions that we consider in this article are UGs and acyclic directed graphs (ADGs)—directed graphs having no directed cycles. We note in passing that there exists a theory for graphical independence models involving both directed and undirected edges (chain graphs, Whittaker 1990), but these are not discussed here.

For a UG G, a subset of nodes C separates two other subsets of nodes A and B if every path joining every pair of nodes i ∈ A and j ∈ B contains at least one node from C. For ADGs, analogous but somewhat more complicated separation properties exist. A graph G is complete if there are edges between all pairs of nodes. A cycle in an undirected graph is chordless if no nodes other than successive pairs in the cycle are adjacent. An undirected graph G is triangulated if and only if the only chordless cycles in the graph contain no more than three nodes. Thus, if one can find a chordless cycle of length four or more, G is not triangulated. A clique in an undirected graph G is a subgraph of G that is complete. A clique tree for G is a tree of cliques such that there is a one-to-one correspondence between the cliques of G and the nodes of the tree.

3 Probabilistic Independence Networks

We briefly review the relation between a probability model p(U) = p(X1, . . . , XN) and a probabilistic independence network structure G = (V, E) where the vertices V are in one-to-one correspondence with the random variables in U. (The results in this section are largely summarized versions of material in Pearl 1988 and Whittaker 1990.)

A PIN structure G is a graphical statement of a set of conditional independence relations for a set of random variables U. Absence of an edge e(i, j) in G implies some independence relation between Xi and Xj. Thus, a PIN structure G is a particular way of specifying the independence relationships present in the probability model p(U). We say that G implies a set of probability models p(U), denoted as PG, that is, p(U) ∈ PG. In the reverse direction, a particular model p(U) embodies a particular set of conditional independence assumptions that may or may not be representable in a consistent graphical form. One can derive all of the conditional independence
Figure 1: An example of a UPIN structure G that captures a particular set of conditional independence relationships among the set of variables {X1 , . . . , X6 }—for example, X5 ⊥ {X1 , X2 , X4 , X6 } | {X3 }.
properties and inference algorithms of interest for U without reference to graphical models. However, as has been emphasized in the statistical and artificial intelligence literature, and as reiterated in this article in the context of HMMs, there are distinct advantages to be gained from using the graphical formalism.

3.1 Undirected Probabilistic Independence Networks (UPINs).

A UPIN is composed of both a UPIN structure and UPIN parameters. A UPIN structure specifies a set of conditional independence relations for a probability model in the form of an undirected graph. UPIN parameters consist of numerical specifications of a particular probability model consistent with the UPIN structure. Terms used in the literature to describe UPINs of one form or another include Markov random fields (Isham 1981; Geman and Geman 1984), Markov networks (Pearl 1988), Boltzmann machines (Hinton and Sejnowski 1986), and log-linear models (Bishop et al. 1973).

3.1.1 Conditional Independence Semantics of UPIN Structures.

Let A, B, and S be any disjoint subsets of nodes in an undirected graph G. G is a UPIN structure for p(U) if for any A, B, and S such that S separates A and B in G, the conditional independence relation A ⊥ B | S holds in p(U). The set of all conditional independence relations implied by separation in G constitutes the (global) Markov properties of G. Figure 1 shows a simple example of a UPIN structure for six variables. Thus, separation in the UPIN structure implies conditional independence in the probability model; i.e., it constrains p(U) to belong to a set of probability models PG that obey the Markov properties of the graph. Note that a complete UG is trivially a UPIN structure for any p(U) in the sense that
there are no constraints on p(U). G is a perfect undirected map for p if G is a UPIN structure for p(U) and all the conditional independence relations present in p(U) are represented by separation in G. For many probability models p there are no perfect undirected maps. A weaker condition is that a UPIN structure G is minimal for a probability model p(U) if the removal of any edge from G implies an independence relation not present in the model p(U); that is, the structure without the edge is no longer a UPIN structure for p(U). Minimality is not equivalent to perfection (for UPIN structures) since, for example, there exist probability models with independencies that cannot be represented as UPINs except for the complete UPIN structure. For example, consider that X and Y are marginally independent but conditionally dependent given Z (e.g., X and Y are two independent causal variables with a common effect Z). In this case the complete graph is the minimal UPIN structure for {X, Y, Z}, but it is not perfect because of the presence of an edge between X and Y.

3.1.2 Probability Functions on UPIN Structures.

Given a UPIN structure G, the joint probability distribution for U can be expressed as a simple factorization:

p(u) = p(x_1, \ldots, x_N) = \prod_{C \in V_C} a_C(x_C),  (3.1)
where VC is the set of cliques of G, xC represents an assignment of values to the variables in a particular clique C, and the aC(xC) are non-negative clique functions. (The domain of each aC(xC) is the set of possible assignments of values to the variables in the clique C, and the range of aC(xC) is the semi-infinite interval [0, ∞).) The set of clique functions associated with a UPIN structure provides the numerical parameterization of the UPIN.

A UPIN is equivalent to a Markov random field (Isham 1981). In the Markov random field literature the clique functions are generally referred to as potential functions. A related terminology, used in the context of the Boltzmann machine (Hinton and Sejnowski 1986), is that of energy function. The exponential of the negative energy of a configuration is a Boltzmann factor. Scaling each Boltzmann factor by the sum across Boltzmann factors (the partition function) yields a factorization of the joint density (the Boltzmann distribution), that is, a product of clique functions.1 The advantage of defining clique functions directly rather than in terms of the exponential of an energy function is that the range of the clique functions can be allowed to contain zero. Thus equation 3.1 can represent configurations of variables having zero probability.

1 A Boltzmann machine is a special case of a UPIN in which the clique functions can be decomposed into products of factors associated with pairs of variables. If the Boltzmann machine is augmented to include “higher-order” energy terms, one for each clique in the graph, then we have a general Markov random field or UPIN, restricted to positive probability distributions due to the exponential form of the clique functions.
Figure 2: A triangulated version of the UPIN structure G from Figure 1.
A model p is said to be decomposable if it has a minimal UPIN structure G that is triangulated (see Fig. 2). A UPIN structure G is decomposable if G is triangulated. For the special case of decomposable models, G can be converted to a junction tree, which is a tree of cliques of G arranged such that the cliques satisfy the running intersection property, namely, that each node in G that appears in any two different cliques also appears in all cliques on the undirected path between these two cliques. Associated with each edge in the junction tree is a separator S, such that S contains the variables in the intersection of the two cliques that it links. Given a junction tree representation, one can factorize p(U) as the product of clique marginals over separator marginals (Pearl 1988):

p(u) = \frac{\prod_{C \in V_C} p(x_C)}{\prod_{S \in V_S} p(x_S)},  (3.2)
where p(xC ) and p(xS ) are the marginal (joint) distributions for the variables in clique C and separator S, respectively, and VC and VS are the set of cliques and separators in the junction tree. This product representation is central to the results in the rest of the article. It is the basis of the fact that globally consistent probability calculations on U can be carried out in a purely local manner. The mechanics of these local calculations will be described later. At this point it is sufficient to note that the complexity of the local inference algorithms scales as the sum of the sizes of the clique state-spaces (where a clique state-space is equal to the product over each variable in the clique of the number of states of each variable). Thus, local clique updating can make probability calculations on U much more tractable than using “brute force” inference if the model decomposes into relatively small cliques.
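To make equation 3.2 concrete, the following minimal sketch (a hypothetical chain X1 - X2 - X3 of binary variables, with cliques {X1, X2} and {X2, X3} and separator {X2}) checks numerically that the product of clique marginals divided by the separator marginal recovers a decomposable joint:

import numpy as np

# Hypothetical decomposable model: a chain X1 - X2 - X3 of binary
# variables, with cliques {X1, X2} and {X2, X3} and separator {X2}.
rng = np.random.default_rng(0)
p12 = rng.random((2, 2))
p12 /= p12.sum()                                     # a joint p(x1, x2)
p3_given_2 = rng.random((2, 2))
p3_given_2 /= p3_given_2.sum(axis=1, keepdims=True)  # p(x3 | x2)

# A joint that satisfies X1 _|_ X3 | X2 by construction.
joint = p12[:, :, None] * p3_given_2[None, :, :]

# Equation 3.2: product of clique marginals over separator marginals.
c12 = joint.sum(axis=2)               # clique marginal p(x1, x2)
c23 = joint.sum(axis=0)               # clique marginal p(x2, x3)
s2 = joint.sum(axis=(0, 2))           # separator marginal p(x2)
reconstructed = c12[:, :, None] * c23[None, :, :] / s2[None, :, None]

assert np.allclose(reconstructed, joint)

The same bookkeeping, one factor per clique and one per separator, underlies the local propagation algorithms of Sections 5 through 7.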
Many probability models of interest may not be decomposable. However, we can define a decomposable cover G′ for p such that G′ is a triangulated, but not necessarily minimal, UPIN structure for p. Since any UPIN G can be triangulated simply by the addition of the appropriate edges, one can always identify at least one decomposable cover G′. However, a decomposable cover may not be minimal in that it can contain edges that obscure certain independencies in the model p; for example, the complete graph is a decomposable cover for all possible probability models p. For efficient inference, the goal is to find a decomposable cover G′ such that G′ contains as few extra edges as possible over the original UPIN structure G. Later we discuss a specific algorithm for finding decomposable covers for arbitrary PIN structures. All singly connected UPIN structures imply probability models PG that are decomposable.

Note that given a particular probability model p and a UPIN G for p, the process of adding extra edges to G to create a decomposable cover does not change the underlying probability model p; that is, the added edges are a convenience for manipulating the graphical representation, but the underlying numerical probability specifications remain unchanged. An important point is that decomposable covers have the running intersection property and thus can be factored as in equation 3.2. Thus local clique updating is also possible with nondecomposable models by this conversion. Once again, the complexity of such local inference scales with the sum of the size of the clique state-spaces in the decomposable cover. In summary, any UPIN structure can be converted to a junction tree permitting inference calculations to be carried out purely locally on cliques.

3.2 Directed Probabilistic Independence Networks (DPINs).

A DPIN is composed of both a DPIN structure and DPIN parameters. A DPIN structure specifies a set of conditional independence relations for a probability model in the form of a directed graph. DPIN parameters consist of numerical specifications of a particular probability model consistent with the DPIN structure. DPINs are referred to in the literature using different names, including Bayes network, belief network, recursive graphical model, causal (belief) network, and probabilistic (causal) network.

3.2.1 Conditional Independence Semantics of DPIN Structures.

A DPIN structure is an ADG GD = (V, E) where there is a one-to-one correspondence between V and the elements of the set of random variables U = {X1, . . . , XN}. It is convenient to define the moral graph GM of GD as the undirected graph obtained from GD by placing undirected edges between all nonadjacent parents of each node and then dropping the directions from the remaining directed edges (see Fig. 3b for an example). The term moral was coined to denote the “marrying” of “unmarried” (nonadjacent) parents. The motivation behind this procedure will become clear when we discuss the differ-
Figure 3: (a) A DPIN structure GD that captures a set of independence relationships among the set {X1 , . . . , X5 }—for example, X4 ⊥ X1 |X2 . (b) The moral graph GM for GD , where the parents of X4 have been linked.
ences between DPINs and UPINs in Section 3.3. We shall also see that this conversion of a DPIN into a UPIN is a convenient way to solve DPIN inference problems by “transforming” the problem into an undirected graphical setting and taking advantage of the general theory available for undirected graphical models. We can now define a DPIN as follows. Let A, B, and S be any disjoint subsets of nodes in GD . GD is a DPIN structure for p(U) if for any A, B, and S such that S separates A and B in GD , the conditional independence relation A ⊥ B|S holds in p(U). This is the same definition as for a UPIN structure except that separation has a more complex interpretation in the directed context: S separates A from B in a directed graph if S separates A from B in the moral (undirected) graph of the smallest ancestral set containing A, B, and S (Lauritzen et al. 1990). It can be shown that this definition of a DPIN structure is equivalent to the more intuitive statement that given the values of its parents, a variable Xi is independent of all other nodes in the directed graph except for its descendants.
Thus, as with a UPIN structure, the DPIN structure implies certain conditional independence relations, which in turn imply a set of probability models p ∈ PGD. Figure 3a contains a simple example of a DPIN structure. There are many possible DPIN structures consistent with a particular probability model p(U), potentially containing extra edges that hide true conditional independence relations. Thus, one can define minimal DPIN structures for p(U) in a manner exactly equivalent to that of UPIN structures: deletion of an edge in a minimal DPIN structure GD implies an independence relation that does not hold in p(U) ∈ PGD. Similarly, GD is a perfect DPIN structure for p(U) if GD is a DPIN structure for p(U) and all the conditional independence relations present in p(U) are represented by separation in GD. As with UPIN structures, minimal does not imply perfect for DPIN structures. For example, consider the independence relations X1 ⊥ X4 | {X2, X3} and X2 ⊥ X3 | {X1, X4}: the minimal DPIN structure contains an edge from X3 to X2 (see Fig. 4b). A complete ADG is trivially a DPIN structure for any probability model p(U).

3.2.2 Probability Functions on DPINs.

A basic property of a DPIN structure is that it implies a direct factorization of the joint probability distribution p(U):

p(u) = \prod_{i=1}^{N} p(x_i | pa(x_i)),  (3.3)

where pa(xi) denotes a value assignment for the parents of Xi.
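As an illustration of equation 3.3, here is a minimal sketch for a hypothetical three-variable ADG X1 → X2 → X3 over binary variables, assembling the joint directly from conditional probability tables:

import numpy as np

# Hypothetical ADG X1 -> X2 -> X3, parameterized by conditional
# probability tables as in equation 3.3.
p_x1 = np.array([0.6, 0.4])                 # p(x1)
p_x2_given_x1 = np.array([[0.7, 0.3],       # p(x2 | x1): rows index x1
                          [0.2, 0.8]])
p_x3_given_x2 = np.array([[0.9, 0.1],       # p(x3 | x2): rows index x2
                          [0.5, 0.5]])

# p(u) = p(x1) p(x2 | x1) p(x3 | x2) for every assignment u = (x1, x2, x3).
joint = (p_x1[:, None, None]
         * p_x2_given_x1[:, :, None]
         * p_x3_given_x2[None, :, :])

# A directed factorization is automatically normalized.
assert np.isclose(joint.sum(), 1.0)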
A probability model p can be written in this factored form in a trivial manner by the conditioning rule. Note that a directed graph containing directed cycles does not necessarily yield such a factorization, hence the use of ADGs.

3.3 Differences between Directed and Undirected Graphical Representations.

It is an important point that directed and undirected graphs possess different conditional independence semantics. There are common conditional independence relations that have perfect DPIN structures but no perfect UPIN structures, and vice versa (see Figure 4 for examples). Does a DPIN structure have the same Markov properties as the UPIN structure obtained by dropping all the directions on the edges in the DPIN structure? The answer is yes if and only if the DPIN structure contains no subgraphs where a node has two or more nonadjacent parents (Whittaker 1990; Pearl et al. 1990). In general, it can be shown that if a UPIN structure G for p is decomposable (triangulated), then it has the same Markov properties as some DPIN structure for p.

On a more practical level, DPIN structures are frequently used to encode causal information, that is, to represent the belief formally that Xi precedes Xj in some causal sense (e.g., temporally). DPINs have found application in causal modeling in applied statistics and artificial intelligence. Their popularity in these fields stems from the fact that the joint probability model can
Figure 4: (a) The DPIN structure to encode the fact that X3 depends on X1 and X2 but X1 ⊥ X2 . For example, consider that X1 and X2 are two independent coin flips and that X3 is a bell that rings when the flips are the same. There is no perfect UPIN structure that can encode these dependence relationships. (b) A UPIN structure that encodes X1 ⊥ X4 |{X2 , X3 } and X2 ⊥ X3 |{X1 , X4 }. There is no perfect DPIN structure that can encode these dependencies.
be specified directly by equation 3.3, that is, by the specification of conditional probability tables or functions (Spiegelhalter et al. 1991). In contrast, UPINs must be specified in terms of clique functions (as in equation 3.1), which may not be as easy to work with (cf. Geman and Geman 1984, Modestino and Zhang 1992, and Vandermeulen et al. 1994 for examples of ad hoc design of clique functions in image analysis). UPINs are more frequently used in problems such as image analysis and statistical physics, where associations are thought to be correlational rather than causal.

3.4 From DPINs to (Decomposable) UPINs.

The moral UPIN structure GM (obtained from the DPIN structure GD) does not imply any new independence relations that are not present in GD. As with triangulation, however, the additional edges may obscure conditional independence relations implicit in the numeric specification of the original probability model p associated with the DPIN structure GD. Furthermore, GM may not be triangulated (decomposable). By the addition of appropriate edges, the moral graph can be converted to a (nonunique) triangulated graph G′, namely, a decomposable cover for GM. In this manner, for any probability model p for which GD is a DPIN structure, one can construct a decomposable cover G′ for p.

This mapping from DPIN structures to UPIN structures was first discussed in the context of efficient inference algorithms by Lauritzen and
Spiegelhalter (1988). The advantage of this mapping derives from the fact that analysis and manipulation of the resulting UPIN are considerably more direct than dealing with the original DPIN. Furthermore, it has been shown that many of the inference algorithms for DPINs are in fact special cases of inference algorithms for UPINs and can be considerably less efficient (Shachter et al. 1994).

4 Modeling HMMs as PINs

4.1 PINs for HMMs.

In hidden Markov modeling problems (Baum and Petrie 1966; Poritz 1988; Rabiner 1989; Huang et al. 1990; Elliott et al. 1995) we are interested in the set of random variables U = {H1, O1, H2, O2, . . . , HN−1, ON−1, HN, ON}, where Hi is a discrete-valued hidden variable at index i, and Oi is the corresponding discrete-valued observed variable at index i, 1 ≤ i ≤ N. (The results here can be directly extended to continuous-valued observables.) The index i denotes a sequence from 1 to N, for example, discrete time steps. Note that Oi is considered univariate for convenience; the extension to the multivariate case with d observables is straightforward but is omitted here for simplicity since it does not illuminate the conditional independence relationships in the HMM.

The well-known simple first-order HMM obeys the following two conditional independence relations:

Hi ⊥ {H1, O1, . . . , Hi−2, Oi−2, Oi−1} | Hi−1,   3 ≤ i ≤ N,  (4.1)

and

Oi ⊥ {H1, O1, . . . , Hi−1, Oi−1} | Hi,   2 ≤ i ≤ N.  (4.2)
We will refer to this “first-order” hidden Markov probability model as HMM(1,1): the notation HMM(K, J) is defined such that the hidden state of the model is represented via the conjoined configuration of J underlying random variables and such that the model has state memory of depth K. The notation will be clearer in later sections when we discuss specific examples with K, J > 1. Construction of a PIN for HMM(1,1) is simple. In the undirected case, assumption 1 requires that each state Hi is connected to Hi−1 only from the set {H1 , O1 , . . . , Hi−2 , Oi−2 , Hi−1 , Oi−1 }. Assumption 2 requires that Oi is connected only to Hi . The resulting UPIN structure for HMM(1,1) is shown in Figure 5a. This graph is singly connected and thus implies a decomposable probability model p for HMM(1,1), where the cliques are of the form {Hi , Oi } and {Hi−1 , Hi } (see Fig. 5b). In Section 5 we will see how the joint probability function can be expressed as a product function on the junction tree, thus leading to a junction tree definition of the familiar F-B and Viterbi inference algorithms.
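The clique structure just described is easy to enumerate. The sketch below (a hypothetical helper, not from the original text) lists the cliques and separator-labeled edges of the HMM(1,1) junction tree of Figure 5b for a chain of length N:

def hmm11_junction_tree(N):
    """Cliques and separator-labeled edges of the HMM(1,1) junction tree
    (cf. Fig. 5b): state cliques {H_{i-1}, H_i} chained together, with
    each output clique {H_i, O_i} attached through the separator {H_i}.
    """
    cliques = [(f"H{i}", f"O{i}") for i in range(1, N + 1)]
    cliques += [(f"H{i}", f"H{i+1}") for i in range(1, N)]
    # Each edge of the tree carries as separator the intersection of the
    # two cliques it joins.
    edges = [((f"H{i}", f"O{i}"), (f"H{i}", f"H{i+1}"), (f"H{i}",))
             for i in range(1, N)]
    edges += [((f"H{i}", f"H{i+1}"), (f"H{i+1}", f"O{i+1}"), (f"H{i+1}",))
              for i in range(1, N)]
    return cliques, edges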
Figure 5: (a) PIN structure for HMM(1,1). (b) A corresponding junction tree.
For the directed case, the connectivity for the DPIN structure is the same. It is natural to choose the directions on the edges between Hi−1 and Hi as going from i − 1 to i (although the reverse direction could also be chosen without changing the Markov properties of the graph). The directions on the edges between Hi and Oi must be chosen as going from Hi to Oi rather than in the reverse direction (see Figure 6a). In reverse (see Fig. 6b) the arrows would imply that Oi is marginally independent of Hi−1 , which is not true in the HMM(1,1) probability model. The proper direction for the edges implies the correct relation, namely, that Oi is conditionally independent of Hi−1 given Hi . The DPIN structure for HMM(1,1) does not possess a subgraph with nonadjacent parents. As stated earlier, this implies that the implied independence properties of the DPIN structure are the same as those of the corresponding UPIN structure obtained by dropping the directions from the edges in the DPIN structure, and thus they both result in the same junction tree structure (see Fig. 5b). Thus, for the HMM(1,1) probability model, the minimal directed and undirected graphs possess the same Markov properties; they imply the same conditional independence relations. Furthermore, both PIN structures are perfect maps for the directed and undirected cases, respectively.
Figure 6: DPIN structures for HMM(1,1). (a) The DPIN structure for the HMM(1,1) probability model. (b) A DPIN structure that is not a DPIN structure for the HMM(1,1) probability model.
4.2 Inference and MAP Problems in HMMs.

In the context of HMMs, the most common inference problem is the calculation of the likelihood of the observed evidence given the model, that is, p(o1, . . . , oN | model), where the o1, . . . , oN denote observed values for O1, . . . , ON. (In this section we will assume that we are dealing with one particular model where the structure and parameters have already been determined, and thus we will not explicitly indicate conditioning on the model.) The “brute force” method for obtaining this probability would be to sum out the unobserved state variables from the full joint probability distribution:

p(o_1, \ldots, o_N) = \sum_{h_1, \ldots, h_N} p(h_1, o_1, \ldots, h_N, o_N),  (4.3)
where hi denotes the possible values of hidden variable Hi. In general, both of these computations scale as O(m^N), where m is the number of states for each hidden variable. In practice, the F-B algorithm (Poritz 1988; Rabiner 1989) can perform these inference calculations with much lower complexity, namely, O(Nm^2). The likelihood of the observed evidence can be obtained with the forward step of the F-B algorithm; calculation of the state posterior probabilities requires both forward and backward steps.
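For a small example, the brute-force sum of equation 4.3 can be written directly; the loop over all m^N hidden sequences is exactly what the F-B recursion avoids. The pi, A, and B tables below are hypothetical stand-ins for p(H1), p(Hi | Hi−1), and p(Oi | Hi):

import itertools
import numpy as np

m, k, N = 2, 3, 4                          # states, symbols, sequence length
rng = np.random.default_rng(1)
pi = rng.dirichlet(np.ones(m))             # p(h1)
A = rng.dirichlet(np.ones(m), size=m)      # A[h, h'] = p(h' | h)
B = rng.dirichlet(np.ones(k), size=m)      # B[h, o] = p(o | h)
obs = [0, 2, 1, 1]                         # observed values o1, ..., oN

# Equation 4.3: sum the full joint over all m**N hidden sequences.
likelihood = 0.0
for hs in itertools.product(range(m), repeat=N):
    p = pi[hs[0]] * B[hs[0], obs[0]]
    for i in range(1, N):
        p *= A[hs[i - 1], hs[i]] * B[hs[i], obs[i]]
    likelihood += p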
The F-B algorithm relies on a factorization of the joint probability function to obtain locally recursive methods. One of the key points in this article is that the graphical modeling approach provides an automatic method for determining such local efficient factorizations, for an arbitrary probabilistic model, if efficient factorizations exist given the conditional independence (CI) relations specified in the model.

The MAP identification problem in the context of HMMs involves identifying the most likely hidden state sequence given the observed evidence. Just as with the inference problem, the Viterbi algorithm provides an efficient, locally recursive method for solving this problem with complexity O(Nm^2), and again, as with the inference problem, the graphical modeling approach provides an automatic technique for determining efficient solutions to the MAP problem for arbitrary models, if an efficient solution is possible given the structure of the model.

5 Inference and MAP Algorithms for DPINs

Inference and MAP algorithms for DPINs and UPINs are quite similar; the UPIN case involves some subtleties not encountered in DPINs, and so discussion of UPIN inference and MAP algorithms is deferred until Section 7. The inference algorithm for DPINs (developed by Jensen et al. 1990, and hereafter referred to as the JLO algorithm) is a descendant of an inference algorithm first described by Lauritzen and Spiegelhalter (1988). The JLO algorithm applies to discrete-valued variables; extensions of the JLO algorithm to gaussian and gaussian-mixture distributions are discussed in Lauritzen and Wermuth (1989). A closely related algorithm, developed by Dawid (1992a), solves the MAP identification problem with the same time complexity as the JLO inference algorithm. We show that the JLO and Dawid algorithms are strict generalizations of the well-known F-B and Viterbi algorithms for HMM(1,1) in that they can be applied to arbitrarily complex graph structures (and thus a large family of probabilistic models beyond HMM(1,1)) and handle missing values, partial inference, and so forth in a straightforward manner.

There are many variations on the basic JLO and Dawid algorithms. For example, Pearl (1988) describes related versions of these algorithms in his early work. However, it can be shown (Shachter et al. 1994) that all known exact algorithms for inference on DPINs are equivalent at some level to the JLO and Dawid algorithms. Thus, it is sufficient to consider the JLO and Dawid algorithms in our discussion as they subsume other graphical inference algorithms.2

2 An alternative set of computational formalisms is provided by the statistical physics literature, where undirected graphical models in the form of chains, trees, lattices, and “decorated” variations on chains and trees have been studied for many years (see, e.g., Itzykson and Drouffé 1991). The general methods developed there, notably the transfer matrix formalism (e.g., Morgenstern and Binder 1983), support exact calculations on general undirected graphs. The transfer matrix recursions and the calculations in the JLO algorithm are closely related, and a reasonable hypothesis is that they are equivalent formalisms. (The question does not appear to have been studied in the general case, although see Stolorz 1994 and Saul and Jordan 1995 for special cases.) The appeal of the JLO framework, in the context of this earlier literature on exact calculations, is the link that it provides to conditional probability models (i.e., directed graphs) and the focus on a particular data structure—the junction tree—as the generic data structure underlying exact calculations. This does not, of course, diminish the potential importance of statistical physics methodology in graphical modeling applications. One area where there is clearly much to be gained from links to statistical physics is the area of approximate calculations, where a wide variety of methods are available (see, e.g., Swendsen and Wang 1987).
The JLO and Dawid algorithms operate as a two-step process:

1. The construction step. This involves a series of substeps where the original directed graph is moralized and triangulated, a junction tree is formed, and the junction tree is initialized.

2. The propagation step. The junction tree is used in a local message-passing manner to propagate the effects of observed evidence, that is, to solve the inference and MAP problems.

The first step is carried out only once for a given graph. The second (propagation) step is carried out each time a new inference for the given graph is requested.

5.1 The Construction Step of the JLO Algorithm: From DPIN Structures to Junction Trees.

We illustrate the construction step of the JLO algorithm using the simple DPIN structure, GD, over discrete variables U = {X1, . . . , X6} shown in Figure 7a. The JLO algorithm first constructs the moral graph GM (see Fig. 7b). It then triangulates the moral graph GM to obtain a decomposable cover G′ (see Fig. 7c). The algorithm operates in a simple, greedy manner based on the fact that a graph is triangulated if and only if all of its nodes can be eliminated, where a node can be eliminated whenever all of its neighbors are pairwise-linked. Whenever a node is eliminated, it and its neighbors define a clique in the junction tree that is eventually constructed. Thus, we can triangulate a graph and generate the cliques for the junction tree by eliminating nodes in some order, adding links if necessary. If no node can be eliminated without adding links, then we choose the node that can be eliminated by adding the links that yield the clique with the smallest state-space.

After triangulation, the JLO algorithm constructs a junction tree from G′ (i.e., a clique tree satisfying the running intersection property). The junction tree construction is based on the following fact:
Figure 7: (a) A simple DPIN structure GD. (b) The corresponding (undirected) moral graph GM. (c) The corresponding triangulated graph G′. (d) The corresponding junction tree.
Define the weight of a link between two cliques as the number of variables in their intersection. Then a tree of cliques will satisfy the running intersection property if and only if it is a spanning tree of maximal weight. Thus, the JLO algorithm constructs a junction tree by choosing successively a link of maximal weight unless it creates a cycle. The junction tree constructed from the cliques defined by the DPIN structure triangulation in Figure 7c is shown in Figure 7d. The worst-case complexity is O(N^3) for the triangulation heuristic and O(N^2 log N) for the maximal spanning tree portion of the algorithm. This construction step is carried out only once as an initial step to convert the original graph to a junction tree representation.
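A minimal sketch of this construction step is given below. The elimination rule here is a simplified variant that greedily minimizes the number of fill-in links (the heuristic described above breaks ties by the smallest clique state-space, which would require the variable cardinalities); the junction tree is then a maximal-weight spanning tree over the resulting cliques:

import itertools

def triangulate(adj):
    """Greedy node elimination (cf. Section 5.1). The eliminated node and
    its current neighbors define a candidate clique; only maximal cliques
    are returned. `adj` maps each node to the set of its neighbors.
    """
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    cliques = []
    while adj:
        def fill_in(v):                           # links needed to eliminate v
            return sum(1 for a, b in itertools.combinations(adj[v], 2)
                       if b not in adj[a])
        v = min(adj, key=fill_in)
        nbrs = adj[v]
        for a, b in itertools.combinations(nbrs, 2):
            adj[a].add(b)                         # pairwise-link the neighbors
            adj[b].add(a)
        cliques.append(frozenset(nbrs | {v}))
        for n in nbrs:
            adj[n].discard(v)                     # eliminate v
        del adj[v]
    return [c for c in cliques if not any(c < d for d in cliques)]

def junction_tree(cliques):
    """Kruskal-style maximal-weight spanning tree over the cliques, where
    the weight of a link is the size of the two cliques' intersection.
    """
    parent = {c: c for c in cliques}
    def find(c):
        while parent[c] != c:
            c = parent[c]
        return c
    tree = []
    for a, b in sorted(itertools.combinations(cliques, 2),
                       key=lambda e: -len(e[0] & e[1])):
        if len(a & b) > 0 and find(a) != find(b):
            parent[find(a)] = find(b)
            tree.append((a, b, a & b))            # third entry: the separator
    return tree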
5.2 Initializing the Potential Functions in the Junction Tree.

The next step is to take the numeric probability specifications as defined on the directed graph GD (see equation 3.3) and convert this information into the general form for a junction tree representation of p (see equation 3.2). This is achieved by noting that each variable Xi is contained in at least one clique in the junction tree. Assign each Xi to just one such clique, and for each clique define the potential function aC(xC) to be either the product of p(Xi | pa(Xi)) over all Xi assigned to clique C, or 1 if no variables are assigned to that clique. Define the separator potentials (in equation 3.2) to be 1 initially.

In the section that follows, we describe the general JLO algorithm for propagating messages through the junction tree to achieve globally consistent probability calculations. At this point it is sufficient to know that a schedule of local message passing can be defined that converges to a globally consistent marginal representation for p; that is, the potential on any clique or separator is the marginal for that clique or separator (the joint probability function). Thus, via local message passing, one can go from the initial potential representation defined above to a marginal representation:

p(u) = \frac{\prod_{C \in V_C} p(x_C)}{\prod_{S \in V_S} p(x_S)}.  (5.1)
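A minimal sketch of the assignment step just described, with hypothetical bookkeeping (`families` maps each Xi to the set {Xi} ∪ pa(Xi); cliques and separators are frozensets of variable names):

def initialize_junction_tree(cliques, separators, families):
    """Section 5.2 initialization: each variable's conditional probability
    table is assigned to exactly one clique containing its family; all
    separator potentials start at 1.
    """
    assignment = {c: [] for c in cliques}
    for xi, family in families.items():
        host = next(c for c in cliques if family <= c)  # at least one exists
        assignment[host].append(xi)
    # a_C is the product of p(Xi | pa(Xi)) over the variables assigned to
    # C (an empty product, i.e., the constant 1, if none); b_S = 1.
    return assignment, {s: 1.0 for s in separators}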
At this point the junction tree is initialized. This operation in itself is not that useful; of more interest is the ability to propagate information through the graph given some observed data and the initialized junction tree (e.g., to calculate the posterior distributions of some variables of interest). From this point onward we will implicitly assume that the junction tree has been initialized as described so that the potential functions are the local marginals.

5.3 Local Message Propagation in Junction Trees Using the JLO Algorithm.

In general p(U) can be expressed as

p(u) = \frac{\prod_{C \in V_C} a_C(x_C)}{\prod_{S \in V_S} b_S(x_S)},  (5.2)
where the aC and bS are nonnegative potential functions (the potential functions could be the initial marginals described above, for example). Note that this representation is a generalization of the representations for p(u) given by equations 3.1 and 3.2. K = ({aC : C ∈ VC}, {bS : S ∈ VS}) is a representation for p(U). A factorizable function p(U) can admit many different representations, that is, many different sets of clique and separator functions that satisfy equation 5.2 given a particular p(U).

The JLO algorithm carries out globally consistent probability calculations via local message passing on the junction tree; probability information is passed between neighboring cliques, and clique and separator potentials are updated based on this local information. A key point is that the cliques and separators are updated in a fashion that ensures that at all times K is a representation for p(U); in other words, equation 5.2 holds at all times. Eventually the propagation converges to the marginal representation given the initial model and the observed evidence.
The message passing proceeds as follows: We can define a flow from clique Ci to Cj in the following manner, where Ci and Cj are two cliques adjacent in the junction tree. Let Sk be the separator for these two cliques. Define

b^*_{S_k}(x_{S_k}) = \sum_{C_i \setminus S_k} a_{C_i}(x_{C_i}),  (5.3)
where the summation is over the state-space of variables that are in Ci but not in Sk, and

a^*_{C_j}(x_{C_j}) = a_{C_j}(x_{C_j}) \lambda_{S_k}(x_{S_k}),  (5.4)

where

\lambda_{S_k}(x_{S_k}) = \frac{b^*_{S_k}(x_{S_k})}{b_{S_k}(x_{S_k})}.  (5.5)

λSk(xSk) is the update factor.
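Equations 5.3 through 5.5 can be collected into a single flow operation. In the sketch below, clique and separator potentials are numpy arrays whose axes follow explicit variable-name lists; this array representation is a hypothetical convenience, not the JLO authors' own data structure:

import numpy as np

def pass_flow(a_ci, vars_ci, a_cj, vars_cj, b_sk, vars_sk):
    """One flow from clique C_i to clique C_j through separator S_k."""
    # Equation 5.3: marginalize a_Ci onto the separator variables.
    drop = tuple(k for k, v in enumerate(vars_ci) if v not in vars_sk)
    keep = [v for v in vars_ci if v in vars_sk]
    b_sk_star = a_ci.sum(axis=drop).transpose(
        [keep.index(v) for v in vars_sk])

    # Equation 5.5: the update factor (0/0 is taken to be 0).
    lam = np.divide(b_sk_star, b_sk,
                    out=np.zeros_like(b_sk_star), where=b_sk != 0)

    # Equation 5.4: absorb the update factor into the receiving clique,
    # broadcasting over the variables of C_j that are not in S_k.
    sep_axes = [vars_cj.index(v) for v in vars_sk]
    a = np.moveaxis(a_cj, sep_axes, list(range(len(vars_sk))))
    a = a * lam[(...,) + (None,) * (a.ndim - lam.ndim)]
    a_cj_star = np.moveaxis(a, list(range(len(vars_sk))), sep_axes)
    return a_cj_star, b_sk_star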
Passage of a flow corresponds to updating the neighboring clique with the probability information contained in the originating clique. This flow induces a new representation K^* = ({a^*_C : C ∈ VC}, {b^*_S : S ∈ VS}) for p(U). A schedule of such flows can be defined such that all cliques are eventually updated with all relevant information and the junction tree reaches an equilibrium state. The most direct scheduling scheme is a two-phase operation where one node is denoted the root of the junction tree. The collection phase involves passing flows along all edges toward the root clique (if a node is scheduled to have more than one incoming flow, the flows are absorbed sequentially). Once collection is complete, the distribution phase involves passing flows out from this root in the reverse direction along the same edges. There are at most two flows along any edge in the tree in a nonredundant schedule. Note that the directionality of the flows in the junction tree need have nothing to do with any directed edges in the original DPIN structure.

5.4 The JLO Algorithm for Inference Given Observed Evidence.

The particular case of calculating the effect of observed evidence (inference) is handled in the following manner: Consider that we observe evidence of the form e = {Xi = x^*_i, Xj = x^*_j, . . .}, and U^e = {Xi, Xj, . . .} denotes the set of variables observed. Let U^h = U \ U^e denote the set of hidden or unobserved variables and u^h a value assignment for U^h. Consider the calculation of p(U^h | e). Define an evidence function g^e(x_i) such that

g^e(x_i) = \begin{cases} 1 & \text{if } x_i = x_i^*, \\ 0 & \text{otherwise.} \end{cases}  (5.6)
Let

f^*(u) = p(u) \prod_{X_i \in U^e} g^e(x_i).  (5.7)
Thus, we have that f^*(u) ∝ p(u^h | e). To obtain f^*(u) by operations on the junction tree, one proceeds as follows: First assign each observed variable Xi ∈ U^e to one particular clique that contains it (this is termed “entering the evidence into the clique”). Let C^E denote the set of all cliques into which evidence is entered in this manner. For each C ∈ C^E let

g_C(x_C) = \prod_{\{i:\, X_i \text{ is entered into } C\}} g^e(x_i).  (5.8)
Thus,

f^*(u) = p(u) \times \prod_{C \in C^E} g_C(x_C).  (5.9)
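A sketch of this evidence-entering step (equations 5.6 through 5.8), using the same hypothetical array representation as in the flow sketch above:

import numpy as np

def enter_evidence(potentials, clique_vars, evidence, card):
    """Multiply the indicator g^e(x_i) of equation 5.6 for each observed
    variable into one clique containing it. `potentials` maps a clique to
    its numpy array, `clique_vars` to its ordered variable list; `card`
    gives the number of states of each variable.
    """
    for var, value in evidence.items():
        clique = next(c for c in potentials if var in clique_vars[c])
        g = np.zeros(card[var])
        g[value] = 1.0                                 # equation 5.6
        axis = clique_vars[clique].index(var)
        shape = [card[var] if k == axis else 1
                 for k in range(potentials[clique].ndim)]
        potentials[clique] = potentials[clique] * g.reshape(shape)
    return potentials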
One can now propagate the effects of these modifications throughout the tree using the collect-and-distribute schedule described in Section 5.3. Let x^h_C denote a value assignment of the hidden (unobserved) variables in clique C. When the schedule of flows is complete, one gets a new representation K^*_f such that the local potential on each clique is f^*(x_C) = p(x^h_C, e), that is, the joint probability of the local unobserved clique variables and the observed evidence (Jensen et al. 1990) (similarly for the separator potential functions). If one marginalizes at the clique over the unobserved local clique variables,

\sum_{x_C^h} p(x_C^h, e) = p(e),  (5.10)
one gets the probability of the observed evidence directly. Similarly, if one normalizes the potential function at a clique to sum to one, one obtains the conditional probability of the local unobserved clique variables given the evidence, p(x^h_C | e).

5.5 Complexity of the Propagation Step of the JLO Algorithm.

In general, the time complexity T of propagation within a junction tree is O(\sum_{i=1}^{N_C} s(C_i)), where NC is the number of cliques in the junction tree and s(Ci) is the number of states in the clique state-space of Ci. Thus, for inference to be efficient, we need to construct junction trees with small clique sizes. Problems of finding optimally small junction trees (e.g., finding the junction tree with the smallest maximal clique) are NP-hard. Nonetheless, the heuristic algorithm for triangulation described earlier has been found to work well in practice (Jensen et al. 1990).
6 Inference and MAP Calculations in HMM(1,1)

6.1 The F-B Algorithm for HMM(1,1) Is a Special Case of the JLO Algorithm.

Figure 5b shows the junction tree for HMM(1,1). One can apply the JLO algorithm to the HMM(1,1) junction tree structure to obtain a particular inference algorithm for HMM(1,1). The HMM(1,1) inference problem consists of being given a set of values for the observable variables,

e = {O1 = o1, O2 = o2, . . . , ON = oN},  (6.1)

and inferring the likelihood of e given the model.
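On the HMM(1,1) junction tree, the collect phase of the JLO algorithm reduces to the familiar forward recursion. A minimal sketch computing p(e) in O(Nm^2), where pi, A, and B are hypothetical tables for p(H1), p(Hi | Hi−1), and p(Oi | Hi):

import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward pass for HMM(1,1): alpha[h] = p(o1, ..., oi, Hi = h).
    Summing the final alpha gives the likelihood p(e) of equation 6.1.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # O(m^2) work per step
    return alpha.sum()

For a small model, this agrees with the brute-force sum of equation 4.3 sketched in Section 4.2.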
As described in the previous section, this problem can be solved exactly by local propagation in any junction tree using the JLO inference algorithm. In Appendix A it is shown that both the forward and backward steps of the F-B procedure for HMM(1,1) are exactly recreated by the more general JLO algorithm when the HMM(1,1) is viewed as a PIN. This equivalence is not surprising since both algorithms are solving exactly the same problem by local recursive updating. The equivalence is useful because it provides a link between well-known HMM inference algorithms and more general PIN inference algorithms. Furthermore, it clearly demonstrates how the PIN framework can provide a direct avenue for analyzing and using more complex hidden Markov probability models (we will discuss such HMMs in Section 8). When evidence is entered into the observable states and assuming m discrete states per hidden variable, the computational complexity of solving the inference problem via the JLO algorithm is O(Nm^2) (the same complexity as the standard F-B procedure).

Note that the obvious structural equivalence between PIN structures and HMM(1,1) has been noted before by Buntine (1994), Frasconi and Bengio (1994), and Lucke (1995) among others; however, this is the first publication of equivalence of specific inference algorithms as far as we are aware.

6.2 Equivalence of Dawid’s Propagation Algorithm for Identifying MAP Assignments and the Viterbi Algorithm.

Consider that one wishes to calculate \hat{f}(u^h, e) = \max_{x_1, \ldots, x_K} p(x_1, \ldots, x_K, e) and also to identify a set of values of the unobserved variables that achieve this maximum, where K is the number of unobserved (hidden) variables. This calculation can be achieved using a local propagation algorithm on the junction tree with two modifications to the standard JLO inference algorithm. This algorithm is due to Dawid (1992a); it is the most general algorithm from a set of related methods. First, during a flow, the marginalization of the separator is replaced by

\hat{b}_S(x_S) = \max_{C \setminus S} a_C(x_C),  (6.2)
where C is the originating clique for the flow. The definition for λS(xS) is also changed in the obvious manner. Second, marginalization within a clique is replaced by maximization:

\hat{f}_C = \max_{u \setminus x_C} p(u).  (6.3)
Given these two changes, it can be shown that if the same propagation operations are carried out as described earlier, the resulting representation \hat{K}_f at equilibrium is such that the potential function on each clique C is

\hat{f}(x_C) = \max_{u^h \setminus x_C} p(x_C^h, e, \{u^h \setminus x_C\}),  (6.4)
where x^h_C denotes a value assignment of the hidden (unobserved) variables in clique C. Thus, once the \hat{K}_f representation is obtained, one can locally identify the values of X^h_C which maximize the full joint probability as

\hat{x}_C^h = \arg\max_{x_C^h} \hat{f}(x_C).  (6.5)
In the probabilistic expert systems literature, this procedure is known as generating the most probable explanation (MPE) given the observed evidence (Pearl 1988). The HMM(1,1) MAP problem consists of being given a set of values for the observable variables, e = {O1 = o1, O2 = o2, . . . , ON = oN}, and inferring

\max_{h_1, \ldots, h_N} p(h_1, \ldots, h_N, e)  (6.6)

or the set of arguments that achieve this maximum.
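Replacing the sums of the forward recursion by maximizations, as in Dawid's algorithm, yields the familiar Viterbi sketch (with the same hypothetical pi, A, and B tables as in the forward sketch above):

import numpy as np

def viterbi(pi, A, B, obs):
    """Viterbi for HMM(1,1): returns the maximum of equation 6.6 together
    with a hidden state sequence achieving it.
    """
    delta = pi * B[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] * A           # scores[h, h'] per transition
        back.append(scores.argmax(axis=0))    # best predecessor of each h'
        delta = scores.max(axis=0) * B[:, o]
    path = [int(delta.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return float(delta.max()), path[::-1]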
Since Dawid’s algorithm is applicable to any junction tree, it can be directly applied to the HMM(1,1) junction tree in Figure 5b. In Appendix B it is shown that Dawid’s algorithm, when applied to HMM(1,1), is exactly equivalent to the standard Viterbi algorithm. Once again the equivalence is not surprising: Dawid’s method and the Viterbi algorithm are both direct applications of dynamic programming to the MAP problem. However, once again, the important point is that Dawid’s algorithm is specified for the general case of arbitrary PIN structures and can thus be directly applied to more complex HMMs than HMM(1,1) (such as those discussed later in Section 8).

7 Inference and MAP Algorithms for UPINs

In Section 5 we described the JLO algorithm for local inference given a DPIN. For UPINs the procedure is very similar except for two changes to
the overall algorithm: the moralization step is not necessary, and initialization of the junction tree is less trivial. In Section 5.2 we described how to go from a specification of conditional probabilities in a directed graph to an initial potential function representation on the cliques in the junction tree. To utilize undirected links in the model specification process requires new machinery to perform the initialization step. In particular we wish to compile the model into the standard form of a product of potentials on the cliques of a triangulated graph (cf. equation 3.1):

p(u) = \prod_{C \in V_C} a_C(x_C).
Once this initialization step has been achieved, the JLO propagation procedure proceeds as before.

Consider the chordless cycle shown in Figure 4b. Suppose that we parameterize the probability distribution on this graph by specifying pairwise marginals on the four pairs of neighboring nodes. We wish to convert such a local specification into a globally consistent joint probability distribution, that is, a marginal representation. An algorithm known as iterative proportional fitting (IPF) is available to perform this conversion. Classically, IPF proceeds as follows (Bishop et al. 1973): Suppose for simplicity that all of the random variables are discrete (a gaussian version of IPF is also available; Whittaker 1990) such that the joint distribution can be represented as a table. The table is initialized with equal values in all of the cells. For each marginal in turn, the table is then rescaled by multiplying every cell by the ratio of the desired marginal to the corresponding marginal in the current table. The algorithm visits each marginal in turn, iterating over the set of marginals. If the set of marginals is consistent with a single joint distribution, the algorithm is guaranteed to converge to the joint distribution. Once the joint is available, the potentials in equation 3.1 can be obtained (in principle) by marginalization. Although IPF solves the initialization problem in principle, it is inefficient.
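Classical IPF is nonetheless easy to state. A minimal sketch for discrete variables with the joint stored as a table (margins are given as sorted tuples of variable indices; the inputs are hypothetical):

import numpy as np

def ipf(card, targets, iters=100):
    """Classical iterative proportional fitting (cf. Bishop et al. 1973).
    `card` lists the variable cardinalities; `targets` maps a sorted tuple
    of variable indices to the desired marginal table on those variables.
    A fixed iteration count stands in for a proper convergence test.
    """
    table = np.ones(card)
    table /= table.sum()                       # start from the uniform table
    for _ in range(iters):
        for margin, q in targets.items():
            axes = tuple(k for k in range(len(card)) if k not in margin)
            current = table.sum(axis=axes)     # current marginal on `margin`
            ratio = np.divide(q, current, out=np.zeros_like(q),
                              where=current != 0)
            shape = [card[k] if k in margin else 1 for k in range(len(card))]
            table = table * ratio.reshape(shape)
    return table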
Jiroušek and Přeučil (1995) developed an efficient version of IPF that avoids the need for both storing the joint distribution as a table and explicit marginalization of the joint to obtain the clique potentials. Jiroušek’s version of IPF represents the evolving joint distribution directly in terms of junction tree potentials. The algorithm proceeds as follows: Let I be a set of subsets of V. For each I ∈ I, let q(xI) denote the desired marginal on the subset I. Let the joint distribution be represented as a product over junction tree potentials (see equation 3.1), where each aC is initialized to an arbitrary constant. Visit each I ∈ I in turn, updating the corresponding clique potential aC (i.e., that potential aC for which I ⊆ C) as follows:

a^*_C(x_C) = a_C(x_C) \frac{q(x_I)}{p(x_I)}.

The marginal p(xI) is obtained via the JLO algorithm, using the current set of clique potentials. Intelligent choices can be made for the order in which to visit the marginals to minimize the amount of propagation needed to compute p(xI). This algorithm is simply an efficient way of organizing the IPF calculations and inherits the latter’s guarantees of convergence. Note that the Jiroušek and Přeučil algorithm requires a triangulation step in order to form the junction tree used in the calculation of p(xI). In the worst case, triangulation can yield a highly connected graph, in which case the Jiroušek and Přeučil algorithm reduces to classical IPF. For sparse graphs, however, when the maximum clique is much smaller than the entire graph, the algorithm should be substantially more efficient than classical IPF. Moreover, the triangulation algorithm itself need only be run once as a preprocessing step (as is the case for the JLO algorithm).

8 More Complex HMMs for Speech Modeling

Although HMMs have provided an exceedingly useful framework for the modeling of speech signals, it is also true that the simple HMM(1,1) model underlying the standard framework has strong limitations as a model of speech. Real speech is generated by a set of coupled dynamical systems (lips, tongue, glottis, lungs, air columns, etc.), each obeying particular dynamical laws. This coupled physical process is not well modeled by the unstructured state transition matrix of HMM(1,1). Moreover, the first-order Markov properties of HMM(1,1) are not well suited to modeling the ubiquitous coarticulation effects that occur in speech, particularly coarticulatory effects that extend across several phonemes (Kent and Minifie 1977). A variety of techniques have been developed to surmount these basic weaknesses of the HMM(1,1) model, including mixture modeling of emission probabilities, triphone modeling, and discriminative training. All of these methods, however, leave intact the basic probabilistic structure of HMM(1,1) as expressed by its PIN structure.

In this section we describe several extensions of HMM(1,1) that assume additional probabilistic structure beyond that assumed by HMM(1,1). PINs provide a key tool in the study of these more complex models. The role of PINs is twofold: they provide a concise description of the probabilistic dependencies assumed by a particular model, and they provide a general algorithm for computing likelihoods. This second property is particularly important because the existence of the JLO algorithm frees us from having to derive particular recursive algorithms on a case-by-case basis.

The first model that we consider can be viewed as a coupling of two HMM(1,1) chains (Saul and Jordan 1995). Such a model can be useful in general sensor fusion problems, for example, in the fusion of an audio signal with a video signal in lipreading. Because different sensory signals generally have different bandwidths, it may be useful to couple separate Markov models that are developed specifically for each of the individual signals. The
alternative is to force the problem into an HMM(1,1) framework by either oversampling the slower signal, which requires additional parameters and leads to a high-variance estimator, or downsampling the faster signal, which generally oversmoothes the data and yields a biased estimator.

Consider the HMM(1,2) structure shown in Figure 8a. This model involves two HMM(1,1) backbones that are coupled together by undirected links between the state variables. Let H^{(1)}_i and O^{(1)}_i denote the ith state and ith output of the “fast” chain, respectively, and let H^{(2)}_i and O^{(2)}_i denote the ith state and ith output of the “slow” chain. Suppose that the fast chain is sampled τ times as often as the slow chain. Then H^{(1)}_{i′} is connected to H^{(2)}_i for i′ equal to τ(i − 1) + 1. Given this value for i′, the Markov model for the coupled chain implies the following conditional independencies for the state variables:

\{H^{(1)}_{i′}, H^{(2)}_i\} ⊥ \{H^{(1)}_1, O^{(1)}_1, H^{(2)}_1, O^{(2)}_1, \ldots, H^{(1)}_{i′−2}, O^{(1)}_{i′−2}, H^{(2)}_{i−2}, O^{(2)}_{i−2}, O^{(1)}_{i′−1}, O^{(2)}_{i−1}\} | \{H^{(1)}_{i′−1}, H^{(2)}_{i−1}\},  (8.1)

as well as the following conditional independencies for the output variables:

\{O^{(1)}_{i′}, O^{(2)}_i\} ⊥ \{H^{(1)}_1, O^{(1)}_1, H^{(2)}_1, O^{(2)}_1, \ldots, H^{(1)}_{i′−1}, O^{(1)}_{i′−1}, H^{(2)}_{i−1}, O^{(2)}_{i−1}\} | \{H^{(1)}_{i′}, H^{(2)}_i\}.  (8.2)
Additional conditional independencies can be read off the UPIN structure (see Figure 8a). As is readily seen in Figure 8a, the HMM(1,2) graph is not triangulated; thus, the HMM(1,2) probability model is not decomposable. However, the graph can be readily triangulated to form a decomposable cover for the HMM(1,2) probability model (see Section 3.1.2). The JLO algorithm provides an efficient algorithm for calculating likelihoods in this graph. This can be seen in Figure 8b, where we show a triangulation of the HMM(1,2) graph. The triangulation adds O(N_h) links to the graph (where N_h is the number of hidden nodes in the graph) and creates a junction tree in which each clique is a cluster of three state variables from the underlying UPIN structure. Assuming m values for each state variable in each chain, we obtain an algorithm whose time complexity is O(N_h m^3). This can be compared to the naive approach of transforming the HMM(1,2) model to a Cartesian product HMM(1,1) model, which not only has the disadvantage of requiring subsampling or oversampling but also has a time complexity of O(N_h m^4).

Directed graph semantics can also play an important role in constructing interesting variations on the HMM theme. Consider Figure 9a, which shows an HMM(1,2) model in which a single output stream is coupled to a pair of underlying state sequences. In a speech modeling application, such a structure might be used to capture the fact that a given acoustic pattern can have multiple underlying articulatory causes. For example, equivalent shifts in
Figure 8: (a) The UPIN structure for the HMM(1,2) model with τ = 2. (b) A triangulation of this UPIN structure.
formant frequencies can be caused by lip rounding or tongue raising; such phenomena are generically referred to as “trading relations” in the speech psychophysics literature (Lindblom 1990; Perkell et al. 1993). Once a particular acoustic pattern is observed, the causes become dependent; thus, for example, evidence that the lips are rounded would act to discount inferences that the tongue has been raised. These inferences propagate forward and backward in time and couple the chains. Formally, these induced dependencies are accounted for by the links added between the state sequences during the moralization of the graph (see Figure 9b). This figure shows that the underlying calculations for this model are closely related to those of the earlier HMM(1,2), but the model specification is very different in the two cases.

Saul and Jordan (1996) have proposed a second extension of the HMM(1,1) model that is motivated by the desire to provide a more effective model of coarticulation (see also Stolorz 1994). In this model, shown in Figure 10, coarticulatory influences are modeled by additional links between output variables and states along an HMM(1,1) backbone. One approach to performing calculations in this model is to treat it as a Kth-order Markov chain and transform it into an HMM(1,1) model by defining higher-order state variables. A graphical modeling approach is more flexible. It is possible, for example, to introduce links between states and outputs K time steps apart without introducing links for the intervening time intervals. More generally, the graphical modeling approach to the HMM(K,1) model allows the specification of different interaction matrices at different time scales; this is awkward in the Kth-order Markov chain formalism. The HMM(3,1) graph is triangulated as is, and thus the time complexity of the JLO algorithm is O(N_h m^3). In general an HMM(K,1) graph creates cliques of size O(m^K), and the JLO algorithm runs in time O(N_h m^K).

As these examples suggest, the graphical modeling framework provides a useful framework for exploring extensions of HMMs. The examples also make clear, however, that the graphical algorithms are no panacea. The m^K complexity of HMM(K,1) will be prohibitive for large K. Also, the generalization of HMM(1,2) to HMM(1,K) (couplings of K chains) is intractable. Recent research has therefore focused on approximate algorithms for inference in such structures; see Saul and Jordan (1996) for HMM(K,1) and Ghahramani and Jordan (1996) and Williams and Hinton (1990) for HMM(1,K). These authors have developed an approximation methodology based on mean-field theory from statistical physics. While discussion of mean-field algorithms is beyond the scope of this article, it is worth noting that the graphical modeling framework plays a useful role in the development of these approximations. Essentially the mean-field approach involves creating a simplified graph for which tractable algorithms are available, and minimizing a probabilistic distance between the tractable graph and the intractable graph. The JLO algorithm is called as a subroutine on the tractable graph during the minimization process.
Figure 9: (a) The DPIN structure for HMM(1,2) with a single observable sequence coupled to a pair of underlying state sequences. (b) The moralization of this DPIN structure.
9 Learning and PINs

Until now, we have assumed that the parameters and structure of a PIN are known with certainty. In this section, we drop this assumption and discuss methods for learning about the parameters and structure of a PIN. The basic idea behind the techniques that we discuss is that there is a true joint probability distribution described by some PIN structure and parameters, but we are uncertain about this structure and its parameters. We
Figure 10: The UPIN structure for HMM(3,1).
are unable to observe the true joint distribution directly, but we are able to observe a set of patterns u1, . . . , uM that is a random sample from this true distribution. These patterns are independent and identically distributed (i.i.d.) according to the true distribution (note that in a typical HMM learning problem, each of the ui consists of a sequence of observed data). We use these data to learn about the structure and parameters that encode the true distribution.

9.1 Parameter Estimation for PINs.

First, let us consider the situation where we know the PIN structure S of the true distribution with certainty but are uncertain about the parameters of S. In keeping with the rest of the article, let us assume that all variables in U are discrete. Furthermore, for purposes of illustration, let us assume that S is an ADG. Let x_i^k and pa(X_i)^j denote the kth value of variable Xi and the jth configuration of the variables pa(Xi) in S, respectively (j = 1, . . . , qi, k = 1, . . . , ri). As we have just discussed, we assume that each conditional probability p(x_i^k | pa(X_i)^j) is an uncertain parameter, and for convenience we represent this parameter as θijk. We use θij to denote the vector of parameters (θij1, . . . , θijri) and θs to denote the vector of all parameters for S. Note that \sum_{k=1}^{r_i} θijk = 1 for every i and j.

One method for learning about the parameters θs is the Bayesian approach. We treat the parameters θs as random variables, assign these parameters a prior distribution p(θs | S), and update this prior distribution with data D = (u1, . . . , uM) according to Bayes’ rule:

p(\theta_s | D, S) = c \cdot p(\theta_s | S)\, p(D | \theta_s, S),  (9.1)
where c is a normalization constant that does not depend on θ s . Because the patterns in D are a random sample, equation 9.1 simplifies to p(θ s | D, S) = c · p(θ s | S)
M Y l=1
p(ul | θ s , S) .
(9.2)
Figure 11: A Bayesian network structure for a two-binary-variable domain {X1 , X2 } showing (a) conditional independencies associated with the random sample assumption and (b) the added assumption of parameter independence. In both parts of the figure, it is assumed that the network structure X1 → X2 is generating the patterns.
Given some prediction of interest that depends on θ_s and S—say, f(θ_s, S)—we can use the posterior distribution of θ_s to compute an expected prediction:

E(f(θ_s, S) | D, S) = ∫ f(θ_s, S) p(θ_s | D, S) dθ_s.   (9.3)

Associated with our assumption that the data D are a random sample from structure S with uncertain parameters θ_s is a set of conditional independence assertions. Not surprisingly, some of these assumptions can be represented as a (directed) PIN that includes both the possible observations and the parameters as variables. Figure 11a shows these assumptions for the case where U = {X_1, X_2} and S is the structure with a directed edge from X_1 to X_2. Under certain additional assumptions, described, for example, in Spiegelhalter and Lauritzen (1990), the evaluation of equation 9.2 is straightforward.
In particular, if each pattern u_l is complete (i.e., every variable is observed), we have

p(u_l | θ_s, S) = ∏_{i=1}^{N} ∏_{j=1}^{q_i} ∏_{k=1}^{r_i} θ_ijk^{δ_ijkl},   (9.4)
where δ_ijkl is equal to one if X_i = x_i^k and pa(X_i) = pa(X_i)^j in pattern u_l and zero otherwise. Combining equations 9.2 and 9.4, we obtain

p(θ_s | D, S) = c · p(θ_s | S) ∏_{i=1}^{N} ∏_{j=1}^{q_i} ∏_{k=1}^{r_i} θ_ijk^{N_ijk},   (9.5)
where N_ijk is the number of patterns in which X_i = x_i^k and pa(X_i) = pa(X_i)^j. The N_ijk are the sufficient statistics for the random sample D. If we assume that the parameter vectors θ_ij, i = 1, ..., N, j = 1, ..., q_i, are mutually independent, an assumption we call parameter independence, then we get the additional simplification

p(θ_s | D, S) = c ∏_{i=1}^{N} ∏_{j=1}^{q_i} [ p(θ_ij | S) ∏_{k=1}^{r_i} θ_ijk^{N_ijk} ].   (9.6)
The assumption of parameter independence for our two-variable example is illustrated in Figure 11b. Thus, given complete data and parameter independence, each parameter vector θ_ij can be updated independently. The update is particularly simple if each parameter vector has a conjugate distribution. For a discrete variable with discrete parents, the natural conjugate distribution is the Dirichlet,

p(θ_ij | S) ∝ ∏_{k=1}^{r_i} θ_ijk^{α_ijk − 1},
in which case equation 9.6 becomes

p(θ_s | D, S) = c ∏_{i=1}^{N} ∏_{j=1}^{q_i} ∏_{k=1}^{r_i} θ_ijk^{N_ijk + α_ijk − 1}.   (9.7)
Other conjugate distributions include the normal Wishart distribution for the parameters of gaussian codebooks and the Dirichlet distribution for the mixing coefficients of gaussian-mixture codebooks (DeGroot 1970; Buntine 1994; Heckerman and Geiger 1995). Heckerman and Geiger (1995) describe a simple method for assessing these priors. These priors have also been used for learning parameters in standard HMMs (e.g., Gauvain and Lee 1994).
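As a concrete illustration of the conjugate update in equation 9.7, here is a minimal sketch of ours (the function name is hypothetical, not from the paper): for each parent configuration, the posterior is Dirichlet with parameters α_ijk + N_ijk.

import numpy as np

def dirichlet_posterior(alpha_ij, counts_ij):
    """Conjugate update of equation 9.7: the posterior over theta_ij is
    Dirichlet with parameters alpha_ijk + N_ijk.  Returns the posterior
    parameters and the posterior-mean estimate of p(x_i^k | pa(X_i)^j)."""
    post = np.asarray(alpha_ij, float) + np.asarray(counts_ij, float)
    return post, post / post.sum()

# Example: a three-valued variable with prior alpha = (1, 1, 1) and
# observed counts N = (8, 1, 3) for one parent configuration.
post, mean = dirichlet_posterior([1, 1, 1], [8, 1, 3])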
Parameter independence is usually not assumed in general for HMM structures. For example, in the HMM(1,1) model, a standard assumption is that p(H_i | H_{i−1}) = p(H_j | H_{j−1}) and p(O_i | H_i) = p(O_j | H_j) for all appropriate i and j. Fortunately, parameter equalities such as these are easily handled in the framework above (see Thiesson 1995 for a detailed discussion). In addition, the assumption that patterns are complete is clearly inappropriate for HMM structures in general, where some of the variables are hidden from observation. When data are missing, the exact evaluation of the posterior p(θ_s | D, S) is typically intractable, so we turn to approximations. Accurate but slow approximations are based on Monte Carlo sampling (e.g., Neal 1993). An approximation that is less accurate but more efficient is one based on the observation that, under certain conditions, the quantity p(θ_s | S) · p(D | θ_s, S) converges to a multivariate gaussian distribution as the sample size increases (see, e.g., Kass et al. 1988; MacKay 1992a, 1992b). Less accurate but more efficient still are approximations based on the observation that the gaussian distribution converges to a delta function centered at the maximum a posteriori (MAP) and eventually the maximum likelihood (ML) value of θ_s. For the standard HMM(1,1) model discussed in this article, where either discrete, gaussian, or gaussian-mixture codebooks are used, an ML or MAP estimate is a well-known efficient approximation (Poritz 1988; Rabiner 1989). MAP and ML estimates can be found using traditional techniques such as gradient descent and expectation-maximization (EM) (Dempster et al. 1977). The EM algorithm can be applied efficiently whenever the likelihood function has sufficient statistics that are of fixed dimension for any data set. The EM algorithm finds a local maximum by initializing the parameters θ_s (e.g., at random or by some clustering algorithm) and repeating E and M steps to convergence. In the E step, we compute the expected sufficient statistic for each of the parameters, given D and the current values for θ_s. In particular, if all variables are discrete, parameter independence is assumed to hold, and all priors are Dirichlet, we obtain

E(N_ijk | D, θ_s, S) = Σ_{l=1}^{M} p(x_i^k, pa(X_i)^j | u_l, θ_s, S).
An important feature of the EM algorithm applied to PINs under these assumptions is that each term in the sum can be computed using the JLO algorithm. The JLO algorithm may also be used when some parameters are equal and when the likelihoods of some variables are gaussian or gaussian-mixture distributions (Lauritzen and Wermuth 1989). In the M step, we use the expected sufficient statistics as if they were actual sufficient statistics and set the new values of θ_s to be the MAP or ML values given these statistics. Again, if all variables are discrete, parameter independence is assumed to
hold, and all priors are Dirichlet, the ML is given by

θ_ijk = E(N_ijk | D, θ_s, S) / Σ_{k=1}^{r_i} E(N_ijk | D, θ_s, S),

and the MAP is given by

θ_ijk = (E(N_ijk | D, θ_s, S) + α_ijk − 1) / Σ_{k=1}^{r_i} (E(N_ijk | D, θ_s, S) + α_ijk − 1).
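A schematic rendering of one EM iteration under these assumptions may be helpful. This is our own sketch; in particular, `expected_counts` stands in for the JLO-based E step described above and is treated as a black box supplied by the caller.

import numpy as np

def em_step(theta, data, expected_counts, alpha=None):
    """One EM iteration for discrete PIN parameters under parameter
    independence and Dirichlet priors.  theta[i] is an array of shape
    (q_i, r_i); expected_counts(i, theta, data) must return the expected
    sufficient statistics E(N_ijk | D, theta, S) of the same shape."""
    new_theta = []
    for i in range(len(theta)):
        EN = expected_counts(i, theta, data)   # E step (JLO as subroutine)
        if alpha is not None:                  # MAP: add alpha_ijk - 1
            EN = EN + alpha[i] - 1.0
        new_theta.append(EN / EN.sum(axis=1, keepdims=True))  # M step
    return new_theta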
9.2 Model Selection and Averaging for PINs. Now let us assume that we are uncertain not only about the parameters of a PIN but also about the true structure of a PIN. For example, we may know that the true structure is an HMM(K, J) structure, but we may be uncertain about the values of K and J. One solution to this problem is Bayesian model averaging. In this approach, we view each possible PIN structure (without its parameters) as a model. We assign prior probabilities p(S) to different models and compute their posterior probabilities given data:

p(S | D) ∝ p(S) p(D | S) = p(S) ∫ p(D | θ, S) p(θ | S) dθ.   (9.8)
As indicated in equation 9.8, we compute p(D | S) by averaging the likelihood of the data over the parameters of S. In addition to computing the posterior probabilities of models, we estimate the parameters of each model either by computing the distribution p(θ | D, S) or using a gaussian, MAP, or ML approximation for this distribution. We then make a prediction of interest based on each model separately, as in equation 9.3, and compute the weighted average of these predictions using the posterior probabilities of models as weights. One complication with this approach is that when data are missing—for example, when some variables are hidden—the exact computation of the integral in equation 9.8 is usually intractable. As discussed in the previous section, Monte Carlo and gaussian approximations may be used. One simple form of a gaussian approximation is the Bayesian information criterion (BIC) described by Schwarz (1978),

log p(D | S) ≈ log p(D | θ̂_s, S) − (d/2) log M,

where θ̂_s is the ML estimate, M is the number of patterns in D, and d is the dimension of S—typically, the number of parameters of S. The first term of this "score" for S rewards how well the data fit S, whereas the second term punishes model complexity.
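The score is a one-line computation; the following sketch is ours (the function name is hypothetical).

import numpy as np

def bic_score(loglik_at_ml, num_params, num_patterns):
    """BIC of Schwarz (1978), as quoted in the text:
    log p(D|S) ~ log p(D | theta_hat, S) - (d/2) log M."""
    return loglik_at_ml - 0.5 * num_params * np.log(num_patterns)

# Example: with M = 1000 patterns, a model with 20 parameters must beat a
# model with 10 parameters by about 34.5 nats of log-likelihood to win.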
Note that this score does not depend on the parameter prior and thus can be applied easily. (One caveat: the BIC score is derived under the assumption that the parameter prior is positive throughout its domain.) For examples of applications of BIC in the context of PINs and other statistical models, see Raftery (1995). The BIC score is the additive inverse of Rissanen's (1987) minimum description length (MDL). Other scores, which can be viewed as approximations to the marginal likelihood, are hypothesis testing (Raftery 1995) and cross-validation (Dawid 1992b). Buntine (in press) provides a comprehensive review of scores for model selection and model averaging in the context of PINs.

Another complication with Bayesian model averaging is that there may be so many possible models that averaging becomes intractable. In this case, we select one or a handful of structures with high relative posterior probabilities and make our predictions with this limited set of models. This approach is called model selection. The trick here is finding a model or models with high posterior probabilities. Detailed discussions of search methods for model selection among PINs are given by, among others, Madigan and Raftery (1994), Heckerman et al. (1995), and Spirtes and Meek (1995). When the true model is some HMM(K, J) structure, we may have additional prior knowledge that strongly constrains the possible values of K and J. Here, exhaustive model search is likely to be practical.

10 Summary

Probabilistic independence networks provide a useful framework for both the analysis and application of multivariate probability models when there is considerable structure in the model in the form of conditional independence. The graphical modeling approach both clarifies the independence semantics of the model and yields efficient computational algorithms for probabilistic inference. This article has shown that it is useful to cast HMM structures in a graphical model framework. In particular, the well-known F-B and Viterbi algorithms were shown to be special cases of more general algorithms from the graphical modeling literature. Furthermore, more complex HMM structures, beyond the traditional first-order model, can be analyzed profitably and directly using generally applicable graphical modeling techniques.

Appendix A: The Forward-Backward Algorithm for HMM(1,1) Is a Special Case of the JLO Algorithm

Consider the junction tree for HMM(1,1) as shown in Figure 5b. Let the final clique in the chain containing (H_{N−1}, H_N) be the root clique. Thus, a
nonredundant schedule consists of first recursively passing flows from each (O_i, H_i) and (H_{i−2}, H_{i−1}) to each (H_{i−1}, H_i) in the appropriate sequence (the "collect" phase) and then distributing flows out in the reverse direction from the root clique. If we are interested only in calculating the likelihood of e given the model, then the distribute phase is not necessary, since we can simply marginalize over the local variables in the root clique to obtain p(e). (Subscripts on potential functions and update factors indicate which variables have been used in deriving that potential or update factor; e.g., f_{O_1} indicates that this potential has been updated based on information about O_1 but not using information about any other variables.)

Assume that the junction tree has been initialized so that the potential function for each clique and separator is the local marginal. Given the observed evidence e, each individual piece of evidence O_i = o*_i is entered into its clique (O_i, H_i) such that each clique marginal becomes f*_{O_i}(h_i, o_i) = p(h_i, o*_i) after entering the evidence (as in equation 5.8).

Consider the portion of the junction tree in Figure 12, and in particular the flow between (O_i, H_i) and (H_{i−1}, H_i). By definition the potential on the separator H_i is updated to

f*_{O_i}(h_i) = Σ_{o_i} f*(h_i, o_i) = p(h_i, o*_i).   (A.1)
The update factor from this separator flowing into clique (H_{i−1}, H_i) is then

λ_{O_i}(h_i) = p(h_i, o*_i) / p(h_i) = p(o*_i | h_i).   (A.2)
This update factor is "absorbed" into (H_{i−1}, H_i) as follows:

f*_{O_i}(h_{i−1}, h_i) = p(h_{i−1}, h_i) λ_{O_i}(h_i) = p(h_{i−1}, h_i) p(o*_i | h_i).   (A.3)
Now consider the flow from clique (H_{i−2}, H_{i−1}) to clique (H_{i−1}, H_i). Let Φ_{i,j} = {O_i, ..., O_j} denote a set of consecutive observable variables and φ*_{i,j} = {o*_i, ..., o*_j} denote a set of observed values for these variables, 1 ≤ i < j ≤ N. Assume that the potential on the separator H_{i−1} has been updated to

f*_{Φ_{1,i−1}}(h_{i−1}) = p(h_{i−1}, φ*_{1,i−1})   (A.4)
by earlier flows in the schedule. Thus, the update factor on separator H_{i−1} becomes

λ_{Φ_{1,i−1}}(h_{i−1}) = p(h_{i−1}, φ*_{1,i−1}) / p(h_{i−1}),   (A.5)
Figure 12: Local message passing in the HMM(1,1) junction tree during the collect phase of a left-to-right schedule. Ovals indicate cliques, boxes indicate separators, and arrows indicate flows.
and this gets absorbed into clique (H_{i−1}, H_i) to produce

f*_{Φ_{1,i}}(h_{i−1}, h_i) = f*_{O_i}(h_{i−1}, h_i) λ_{Φ_{1,i−1}}(h_{i−1})
= p(h_{i−1}, h_i) p(o*_i | h_i) p(h_{i−1}, φ*_{1,i−1}) / p(h_{i−1})
= p(o*_i | h_i) p(h_i | h_{i−1}) p(h_{i−1}, φ*_{1,i−1}).   (A.6)
Finally, we can calculate the new potential on the separator for the flow from clique (H_{i−1}, H_i) to (H_i, H_{i+1}):

f*_{Φ_{1,i}}(h_i) = Σ_{h_{i−1}} f*_{Φ_{1,i}}(h_{i−1}, h_i)   (A.7)
= p(o*_i | h_i) Σ_{h_{i−1}} p(h_i | h_{i−1}) p(h_{i−1}, φ*_{1,i−1})   (A.8)
= p(o*_i | h_i) Σ_{h_{i−1}} p(h_i | h_{i−1}) f*_{Φ_{1,i−1}}(h_{i−1}).   (A.9)
Proceeding recursively in this manner, one finally obtains at the root clique

f*_{Φ_{1,N}}(h_{N−1}, h_N) = p(h_{N−1}, h_N, φ*_{1,N}),   (A.10)
from which one can get the likelihood of the evidence,

p(e) = p(φ*_{1,N}) = Σ_{h_{N−1}, h_N} f*_{Φ_{1,N}}(h_{N−1}, h_N).   (A.11)
We note that equation A.9 directly corresponds to the recursive equation (equation 20 in Rabiner 1989) for the α variables used in the forward phase of the F-B algorithm, the standard HMM(1,1) inference algorithm. In particular, using a "left-to-right" schedule, the updated potential functions on the separators between the hidden cliques, the f*_{Φ_{1,i}}(h_i) functions, are exactly the α variables. Thus, when applied to HMM(1,1), the JLO algorithm produces exactly the same local recursive calculations as the forward phase of the F-B algorithm.

One can also show an equivalence between the backward phase of the F-B algorithm and the JLO inference algorithm. Let the "leftmost" clique in the chain, (H_1, H_2), be the root clique, and define a schedule such that the flows go from right to left. Figure 13 shows a local portion of the clique tree and the associated flows. Consider that the potential on clique (H_i, H_{i+1}) has been updated already by earlier flows from the right. Thus, by definition,

f*_{Φ_{i+1,N}}(h_i, h_{i+1}) = p(h_i, h_{i+1}, φ*_{i+1,N}).   (A.12)
The potential on the separator between (H_i, H_{i+1}) and (H_{i−1}, H_i) is calculated as

f*_{Φ_{i+1,N}}(h_i) = Σ_{h_{i+1}} p(h_i, h_{i+1}, φ*_{i+1,N})   (A.13)
= p(h_i) Σ_{h_{i+1}} p(h_{i+1} | h_i) p(o*_{i+1} | h_{i+1}) p(φ*_{i+2,N} | h_{i+1})   (A.14)
(by virtue of the various conditional independence relations in HMM(1,1))

= p(h_i) Σ_{h_{i+1}} p(h_{i+1} | h_i) p(o*_{i+1} | h_{i+1}) p(φ*_{i+2,N}, h_{i+1}) / p(h_{i+1})   (A.15)
= p(h_i) Σ_{h_{i+1}} p(h_{i+1} | h_i) p(o*_{i+1} | h_{i+1}) f*_{Φ_{i+2,N}}(h_{i+1}) / p(h_{i+1}).   (A.16)
Defining the update factor on this separator yields

λ*_{Φ_{i+1,N}}(h_i) = f*_{Φ_{i+1,N}}(h_i) / p(h_i)   (A.17)
= Σ_{h_{i+1}} p(h_{i+1} | h_i) p(o*_{i+1} | h_{i+1}) f*_{Φ_{i+2,N}}(h_{i+1}) / p(h_{i+1})   (A.18)
= Σ_{h_{i+1}} p(h_{i+1} | h_i) p(o*_{i+1} | h_{i+1}) λ*_{Φ_{i+2,N}}(h_{i+1}).   (A.19)
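The corresponding right-to-left recursion can be sketched the same way (again our own rendering, with the conventions of the collect-phase sketch above); the returned array holds the update factors λ*, which are the β variables identified below.

import numpy as np

def distribute_phase(A, B, obs):
    """Right-to-left recursion of equation A.19 for HMM(1,1).
    lam[i, h] = p(o*_{i+1}, ..., o*_N | h_i = h), with lam[N-1, :] = 1."""
    N, m = len(obs), A.shape[0]
    lam = np.ones((N, m))
    for i in range(N - 2, -1, -1):
        lam[i] = A @ (B[:, obs[i + 1]] * lam[i + 1])   # equation A.19
    return lam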
Figure 13: Local message passing in the HMM(1,1) junction tree during the collect phase of a right-to-left schedule. Ovals indicate cliques, boxes indicate separators, and arrows indicate flows.
This set of recursive equations in λ corresponds exactly to the recursive equation (equation 25 in Rabiner 1989) for the β variables in the backward phase of the F-B algorithm. In fact, the update factors λ on the separators are exactly the β variables. Thus, we have shown that the JLO inference algorithm recreates the F-B algorithm for the special case of the HMM(1,1) probability model.

Appendix B: The Viterbi Algorithm for HMM(1,1) Is a Special Case of Dawid's Algorithm

As with the inference problem, let the final clique in the chain containing (H_{N−1}, H_N) be the root clique and use the same schedule: first a left-to-right collection phase into the root clique, followed by a right-to-left distribution phase out from the root clique. Again it is assumed that the junction tree has been initialized so that the potential functions are the local marginals, and the observable evidence e has been entered into the cliques in the same manner as described for the inference algorithm. We refer again to Figure 12.

The sequence of flow and absorption operations is identical to that of the inference algorithm, with the exception that marginalization operations are replaced by maximization. Thus, the potential on the separator between (O_i, H_i) and (H_{i−1}, H_i) is initially updated to

f̂_{O_i}(h_i) = max_{o_i} p(h_i, o_i) = p(h_i, o*_i).   (B.1)
The update factor for this separator is

λ_{O_i}(h_i) = p(h_i, o*_i) / p(h_i) = p(o*_i | h_i),   (B.2)
and after absorption into the clique (H_{i−1}, H_i) one gets

f̂_{O_i}(h_{i−1}, h_i) = p(h_{i−1}, h_i) p(o*_i | h_i).   (B.3)
Now consider the flow from clique (H_{i−2}, H_{i−1}) to (H_{i−1}, H_i). Let H_{i,j} = {H_i, ..., H_j} denote a set of consecutive hidden variables and h_{i,j} = {h_i, ..., h_j} denote a set of values for these variables, 1 ≤ i < j ≤ N. Assume that the potential on separator H_{i−1} has been updated to

f̂_{Φ_{1,i−1}}(h_{i−1}) = max_{h_{1,i−2}} p(h_{i−1}, h_{1,i−2}, φ*_{1,i−1})   (B.4)
by earlier flows in the schedule. Thus, the update factor for separator H_{i−1} becomes

λ_{Φ_{1,i−1}}(h_{i−1}) = max_{h_{1,i−2}} p(h_{i−1}, h_{1,i−2}, φ*_{1,i−1}) / p(h_{i−1}),   (B.5)
and this gets absorbed into clique (H_{i−1}, H_i) to produce

f̂_{Φ_{1,i}}(h_{i−1}, h_i) = f̂_{O_i}(h_{i−1}, h_i) λ_{Φ_{1,i−1}}(h_{i−1})   (B.6)
= p(h_{i−1}, h_i) p(o*_i | h_i) max_{h_{1,i−2}} p(h_{i−1}, h_{1,i−2}, φ*_{1,i−1}) / p(h_{i−1}).   (B.7)

We can now obtain the new potential on the separator for the flow from clique (H_{i−1}, H_i) to (H_i, H_{i+1}),

f̂_{Φ_{1,i}}(h_i) = max_{h_{i−1}} f̂_{Φ_{1,i}}(h_{i−1}, h_i)   (B.8)
= p(o*_i | h_i) max_{h_{i−1}} { p(h_i | h_{i−1}) max_{h_{1,i−2}} p(h_{i−1}, h_{1,i−2}, φ*_{1,i−1}) }   (B.9)
= p(o*_i | h_i) max_{h_{1,i−1}} { p(h_i | h_{i−1}) p(h_{i−1}, h_{1,i−2}, φ*_{1,i−1}) }   (B.10)
= max_{h_{1,i−1}} p(h_i, h_{1,i−1}, φ*_{1,i}),   (B.11)

which is the result one expects for the updated potential at this clique. Thus, we can express the separator potential f̂_{Φ_{1,i}}(h_i) recursively (via equation B.10) as

f̂_{Φ_{1,i}}(h_i) = p(o*_i | h_i) max_{h_{i−1}} { p(h_i | h_{i−1}) f̂_{Φ_{1,i−1}}(h_{i−1}) }.   (B.12)
This is the same recursive equation as used for the δ variables in the Viterbi algorithm (equation 33a in Rabiner 1989): the separator potentials in Dawid's algorithm using a left-to-right schedule are exactly the same as the δ's used in the Viterbi method for solving the MAP problem in HMM(1,1). Proceeding recursively in this manner, one finally obtains at the root clique

f̂_{Φ_{1,N}}(h_{N−1}, h_N) = max_{h_{1,N−2}} p(h_{N−1}, h_N, h_{1,N−2}, φ*_{1,N}),   (B.13)
from which one can get the likelihood of the evidence given the most likely state of the hidden variables:

f̂(e) = max_{h_{N−1}, h_N} f̂_{Φ_{1,N}}(h_{N−1}, h_N)   (B.14)
= max_{h_{1,N}} p(h_{1,N}, φ*_{1,N}).   (B.15)
Identification of the values of the hidden variables that maximize the evidence likelihood can be carried out in the standard manner as in the Viterbi method, namely, by keeping a pointer at each clique along the flow in the forward direction back to the previous clique and then backtracking along this list of pointers from the root clique after the collection phase is complete. An alternative approach is to use the distribute phase of the Dawid algorithm. This has the same effect: once the distribution flows are completed, each local clique can calculate both the maximum value of the evidence likelihood given the hidden variables and the values of the hidden variables in this maximum that are local to that particular clique.

Acknowledgments

MIJ gratefully acknowledges discussions with Steffen Lauritzen on the application of the IPF algorithm to UPINs. The research described in this article was carried out in part by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

References

Baum, L. E., and Petrie, T. 1966. Statistical inference for probabilistic functions of finite state Markov chains. Ann. Math. Stat. 37, 1554–1563.
Bishop, Y. M. M., Fienberg, S. E., and Holland, P. W. 1973. Discrete Multivariate Analysis: Theory and Practice. MIT Press, Cambridge, MA.
Buntine, W. 1994. Operations for learning with graphical models. Journal of Artificial Intelligence Research 2, 159–225.
Buntine, W. In press. A guide to the literature on learning probabilistic networks from data. IEEE Transactions on Knowledge and Data Engineering.
Dawid, A. P. 1992a. Applications of a general propagation algorithm for probabilistic expert systems. Statistics and Computing 2, 25–36.
Dawid, A. P. 1992b. Prequential analysis, stochastic complexity, and Bayesian inference (with discussion). In Bayesian Statistics 4, J. M. Bernardo, J. Berger, A. P. Dawid, and A. F. M. Smith, eds., pp. 109–125. Oxford University Press, London.
DeGroot, M. 1970. Optimal Statistical Decisions. McGraw-Hill, New York.
Dempster, A., Laird, N., and Rubin, D. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39, 1–38.
Elliott, R. J., Aggoun, L., and Moore, J. B. 1995. Hidden Markov Models: Estimation and Control. Springer-Verlag, New York.
Frasconi, P., and Bengio, Y. 1994. An EM approach to grammatical inference: Input/output HMMs. In Proceedings of the 12th IAPR Intl. Conf. on Pattern Recognition, pp. 289–294. IEEE Computer Society Press, Los Alamitos, CA.
Gauvain, J., and Lee, C. 1994. Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Trans. Sig. Audio Proc. 2, 291–298.
Geman, S., and Geman, D. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Patt. Anal. Mach. Intell. 6, 721–741.
Ghahramani, Z., and Jordan, M. I. 1996. Factorial hidden Markov models. In Advances in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, eds., pp. 472–478. MIT Press, Cambridge, MA.
Heckerman, D., and Geiger, D. 1995. Likelihoods and Priors for Bayesian Networks. MSR-TR-95-54. Microsoft Corporation, Redmond, WA.
Heckerman, D., Geiger, D., and Chickering, D. 1995. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning 20, 197–243.
Hinton, G. E., and Sejnowski, T. J. 1986. Learning and relearning in Boltzmann machines. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, eds., vol. 1, chap. 7. MIT Press, Cambridge, MA.
Huang, X. D., Ariki, Y., and Jack, M. A. 1990. Hidden Markov Models for Speech Recognition. Edinburgh University Press, Edinburgh.
Isham, V. 1981. An introduction to spatial point processes and Markov random fields. International Statistical Review 49, 21–43.
Itzykson, C., and Drouffé, J-M. 1991. Statistical Field Theory. Cambridge University Press, Cambridge.
Jensen, F. V., Lauritzen, S. L., and Olesen, K. G. 1990. Bayesian updating in recursive graphical models by local computations. Computational Statistical Quarterly 4, 269–282.
Jiroušek, R., and Přeučil, S. 1995. On the effective implementation of the iterative proportional fitting procedure. Computational Statistics and Data Analysis 19, 177–189.
Kass, R., Tierney, L., and Kadane, J. 1988. Asymptotics in Bayesian computation. In Bayesian Statistics 3, J. Bernardo, M. DeGroot, D. Lindley, and A. Smith, eds., pp. 261–278. Oxford University Press, Oxford.
Kent, R. D., and Minifie, F. D. 1977. Coarticulation in recent speech production models. Journal of Phonetics 5, 115–117.
Lauritzen, S. L., and Spiegelhalter, D. J. 1988. Local computations with probabilities on graphical structures and their application to expert systems (with discussion). J. Roy. Statist. Soc. Ser. B 50, 157–224.
Lauritzen, S. L., Dawid, A. P., Larsen, B. N., and Leimer, H. G. 1990. Independence properties of directed Markov fields. Networks 20, 491–505.
Lauritzen, S., and Wermuth, N. 1989. Graphical models for associations between variables, some of which are qualitative and some quantitative. Annals of Statistics 17, 31–57.
Lindblom, B. 1990. Explaining phonetic variation: A sketch of the H&H theory. In Speech Production and Speech Modeling, W. J. Hardcastle and A. Marchal, eds., pp. 403–440. Kluwer, Dordrecht.
Lucke, H. 1995. Bayesian belief networks as a tool for stochastic parsing. Speech Communication 16, 89–118.
MacKay, D. J. C. 1992a. Bayesian interpolation. Neural Computation 4, 415–447.
MacKay, D. J. C. 1992b. A practical Bayesian framework for backpropagation networks. Neural Computation 4, 448–472.
Madigan, D., and Raftery, A. E. 1994. Model selection and accounting for model uncertainty in graphical models using Occam's window. J. Am. Stat. Assoc. 89, 1535–1546.
Modestino, J., and Zhang, J. 1992. A Markov random field model-based approach to image segmentation. IEEE Trans. Patt. Anal. Mach. Int. 14(6), 606–615.
Morgenstern, I., and Binder, K. 1983. Magnetic correlations in two-dimensional spin-glasses. Physical Review B 28, 5216.
Neal, R. 1993. Probabilistic inference using Markov chain Monte Carlo methods. CRG-TR-93-1. Department of Computer Science, University of Toronto.
Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
Pearl, J., Geiger, D., and Verma, T. 1990. The logic of influence diagrams. In Influence Diagrams, Belief Nets, and Decision Analysis, R. M. Oliver and J. Q. Smith, eds., pp. 67–83. John Wiley, Chichester, UK.
Perkell, J. S., Matthies, M. L., Svirsky, M. A., and Jordan, M. I. 1993. Trading relations between tongue-body raising and lip rounding in production of the vowel /u/: A pilot motor equivalence study. Journal of the Acoustical Society of America 93, 2948–2961.
Poritz, A. M. 1988. Hidden Markov models: A guided tour. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 1:7–13. IEEE Press, New York.
Rabiner, L. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77, 257–285.
Raftery, A. 1995. Bayesian model selection in social research (with discussion). In Sociological Methodology, P. Marsden, ed., pp. 111–196. Blackwell, Cambridge, MA.
Rissanen, J. 1987. Stochastic complexity (with discussion). Journal of the Royal Statistical Society, Series B 49, 223–239, 253–265.
Saul, L. K., and Jordan, M. I. 1995. Boltzmann chains and hidden Markov models. In Advances in Neural Information Processing Systems 7, G. Tesauro, D. S. Touretzky, and T. K. Leen, eds., pp. 435–442. MIT Press, Cambridge, MA.
Saul, L. K., and Jordan, M. I. 1996. Exploiting tractable substructures in intractable networks. In Advances in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, eds., pp. 486–492. MIT Press, Cambridge, MA.
Schwarz, G. 1978. Estimating the dimension of a model. Annals of Statistics 6, 461–464.
Shachter, R. D., Anderson, S. K., and Szolovits, P. 1994. Global conditioning for probabilistic inference in belief networks. In Proceedings of the Uncertainty in AI Conference 1994, pp. 514–522. Morgan Kaufmann, San Mateo, CA.
Spiegelhalter, D. J., Dawid, A. P., Hutchinson, T. A., and Cowell, R. G. 1991. Probabilistic expert systems and graphical modelling: A case study in drug safety. Phil. Trans. R. Soc. Lond. A 337, 387–405.
Spiegelhalter, D. J., and Lauritzen, S. L. 1990. Sequential updating of conditional probabilities on directed graphical structures. Networks 20, 579–605.
Spirtes, P., and Meek, C. 1995. Learning Bayesian networks with discrete variables from data. In Proceedings of First International Conference on Knowledge Discovery and Data Mining, pp. 294–299. AAAI Press, Menlo Park, CA.
Stolorz, P. 1994. Recursive approaches to the statistical physics of lattice proteins. In Proc. 27th Hawaii Intl. Conf. on System Sciences, L. Hunter, ed., 5:316–325.
Swendsen, R. H., and Wang, J-S. 1987. Nonuniversal critical dynamics in Monte Carlo simulations. Physical Review Letters 58.
Thiesson, B. 1995. Score and information for recursive exponential models with incomplete data. Tech. rep. Institute of Electronic Systems, Aalborg University, Aalborg, Denmark.
Vandermeulen, D., Verbeeck, R., Berben, L., Delaere, D., Suetens, P., and Marchal, G. 1994. Continuous voxel classification by stochastic relaxation: Theory and application to MR imaging and MR angiography. Image and Vision Computing 12(9), 559–572.
Whittaker, J. 1990. Graphical Models in Applied Multivariate Statistics. John Wiley, Chichester, UK.
Williams, C., and Hinton, G. E. 1990. Mean field networks that learn to discriminate temporally distorted strings. In Proc. Connectionist Models Summer School, pp. 18–22. Morgan Kaufmann, San Mateo, CA.
Received February 2, 1996; accepted April 22, 1996.
NOTE
Communicated by Andrew Barto and Michael Jordan
Using Expectation-Maximization for Reinforcement Learning

Peter Dayan
Department of Brain and Cognitive Sciences, Center for Biological and Computational Learning, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
Geoffrey E. Hinton
Department of Computer Science, University of Toronto, Toronto M5S 1A4, Canada
We discuss Hinton’s (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977). 1 Introduction Consider a stochastic learning automaton (e.g., Narendra & Thatachar 1989) whose actions y are drawn from a set Y . This could, for instance, be the set of 2n choices over n separate binary decisions. If the automaton maintains a probability distribution p(y|θ ) over these possible actions, where θ is a set of parameters, then its task is to learn values of the parameters θ that maximize the expected payoff: X p(y | θ )E [r | y]. (1.1) ρ(θ) = hE [r | y]ip(y|θ ) = Y
Here, E[r | y] is the expected reward for performing action y. Apart from random search, simulated annealing, and genetic algorithms, almost all the methods with which we are familiar for attempting to choose appropriate θ in such domains are local in the sense that they make small steps, usually in a direction that bears some relation to the gradient. Examples include the Kiefer-Wolfowitz procedure (Wasan 1969), the ARP algorithm (Barto & Anandan 1985), and the REINFORCE framework (Williams 1992). There are two reasons to make small steps. One is that there is a noisy estimation problem. Typically the automaton will emit single actions y_1, y_2, ... according to p(y | θ), will receive single samples of the reward from the distributions p(r | y_m), and will have to average the gradient over these noisy values. The other reason is that θ might have a complicated effect on which actions are chosen, or the relationship between actions and rewards
might be obscure. For instance, if Y is the set of 2^n choices over n separate binary actions a_1, ..., a_n, and θ = {p_1, ..., p_n} is the collection of probabilities of choosing a_i = 1, so

p(y = {a_1, ..., a_n} | θ) = ∏_{i=1}^{n} p_i^{a_i} (1 − p_i)^{1−a_i},   (1.2)
then the average reward ρ(θ) depends in a complicated manner on the collection of p_i. Gradient ascent in θ would seem the only option because taking large steps might lead to decreases in the average reward. The sampling problem is indeed present, and we will evade it by assuming a large batch size. However, we show that in circumstances such as the n binary action task, it is possible to make large, well-founded changes to the parameters without explicitly estimating the curvature of the space of expected payoffs, by a mapping onto a maximum likelihood probability density estimation problem. In effect, we maximize reward by solving a sequence of probability matching problems, where θ is chosen at each step to match as best it can a fictitious distribution determined by the average rewards experienced on the previous step. Although there can be large changes in θ from one step to the next, we are guaranteed that the average reward is monotonically increasing. The guarantee comes for exactly the same reason as in the expectation-maximization (EM) algorithm (Baum et al. 1970; Dempster et al. 1977) and, as with EM, there can be local optima. The relative payoff procedure (RPP) (Hinton 1989) is a particular reinforcement learning algorithm for the n binary action task with positive r, which makes large moves in the p_i. Our proof demonstrates that the RPP is well founded.

2 Theory

The RPP operates to improve the parameters p_1, ..., p_n of equation 1.2 in a synchronous manner based on substantial sampling. It suggests updating the probability of choosing a_i = 1 to

p′_i = ⟨a_i E[r | y]⟩_{p(y|θ)} / ⟨E[r | y]⟩_{p(y|θ)},   (2.1)
which is the ratio of the mean reward that accrues when a_i = 1 to the net mean reward. If all the rewards r are positive, then 0 ≤ p′_i ≤ 1. This note proves that when using the RPP, the expected reinforcement increases; that is,

⟨E[r | y]⟩_{p(y|θ′)} ≥ ⟨E[r | y]⟩_{p(y|θ)},   where θ′ = {p′_1, ..., p′_n}.
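For small n, the RPP update and the claimed monotonicity are easy to check by exact enumeration. The following sketch is ours, not the note's; it assumes the caller supplies a positive expected-reward function and computes the exact averages of equation 2.1.

import numpy as np
from itertools import product

def rpp_step(p, reward):
    """One synchronous RPP update (equation 2.1) for n binary actions.
    reward(y) plays the role of E[r|y] and must be positive."""
    n = len(p)
    num = np.zeros(n)        # accumulates <a_i E[r|y]>
    rho = 0.0                # accumulates <E[r|y]>
    for y in product([0, 1], repeat=n):
        py = np.prod([p[i] if y[i] else 1 - p[i] for i in range(n)])
        r = py * reward(y)
        rho += r
        num += r * np.array(y)
    return num / rho, rho

# The guarantee of this note: rho is nondecreasing over repeated steps.
p = np.full(3, 0.5)
for _ in range(20):
    p, rho = rpp_step(p, lambda y: 1.0 + sum(y))   # toy positive reward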
The proof rests on the following observation: Given a current value of θ, if one could arrange that

α E[r | y] = p(y | θ′) / p(y | θ)   (2.2)

for some α, then θ′ would lead to higher average returns than θ. We prove this formally below, but the intuition is that if E[r | y_1] > E[r | y_2], then

p(y_1 | θ′) / p(y_2 | θ′) > p(y_1 | θ) / p(y_2 | θ),
so θ′ will put more weight on y_1 than θ does. We therefore pick θ′ so that p(y | θ′) matches the distribution α E[r | y] p(y | θ) as much as possible (using a Kullback-Leibler penalty). Note that this target distribution moves with θ. Matching just β E[r | y], something that animals can be observed to do under some circumstances (Gallistel 1990), does not result in maximizing average rewards (Sabes & Jordan 1995). If the rewards r are stochastic, then our method (and the RPP) does not eliminate the need for repeated sampling to work out the mean return. We assume knowledge of E[r | y]. Defining the distribution in equation 2.2 correctly requires E[r | y] > 0. Since maximizing ρ(θ) and ρ(θ) + ω has the same consequences for θ, we can add arbitrary constants to the rewards so that they are all positive. However, this can affect the rate of convergence. We now show how an improvement in the expected reinforcement can be guaranteed:
log [ρ(θ′) / ρ(θ)] = log Σ_{y∈Y} p(y | θ′) E[r | y] / ρ(θ)
= log Σ_{y∈Y} [p(y | θ) E[r | y] / ρ(θ)] [p(y | θ′) / p(y | θ)]   (2.3)
≥ Σ_{y∈Y} [p(y | θ) E[r | y] / ρ(θ)] log [p(y | θ′) / p(y | θ)],   by Jensen's inequality
= (1/ρ(θ)) [Q(θ, θ′) − Q(θ, θ)],   (2.4)

where

Q(θ, θ′) = Σ_{y∈Y} p(y | θ) E[r | y] log p(y | θ′),

so if Q(θ, θ′) ≥ Q(θ, θ), then ρ(θ′) ≥ ρ(θ). The normalization step in equation 2.3 creates the matching distribution from equation 2.2. Given θ, if θ′ is chosen to maximize Q(θ, θ′), then we are guaranteed that Q(θ, θ′) ≥ Q(θ, θ) and therefore that the average reward is nondecreasing.
In the RPP, the new probability p′_i for choosing a_i = 1 is given by

p′_i = ⟨a_i E[r | y]⟩_{p(y|θ)} / ⟨E[r | y]⟩_{p(y|θ)},   (2.5)

so it is the fraction of the average reward that arrives when a_i = 1. Using equation 1.2,

∂Q(θ, θ′)/∂p′_i = [1 / (p′_i (1 − p′_i))] [ Σ_{y∈Y: a_i=1} p(y | θ) E[r | y] − p′_i Σ_{y∈Y} p(y | θ) E[r | y] ].

So, if

p′_i = Σ_{y∈Y: a_i=1} p(y | θ) E[r | y] / Σ_{y∈Y} p(y | θ) E[r | y],

then ∂Q(θ, θ′)/∂p′_i = 0,
and it is readily seen that Q(θ, θ′) is maximized. But this condition is just that of equation 2.5. Therefore the RPP is monotonic in the average return.

Figure 1 shows the consequence of employing the RPP. Figure 1a shows the case in which n = 2; the two lines and associated points show how p_1 and p_2 change on successive steps using the RPP. The terminal value p_1 = p_2 = 0, reached by the left-hand line, is a local optimum. Note that the RPP always changes the parameters in the direction of the gradient of the expected amount of reinforcement (this is generally true) but by a variable amount. Figure 1b compares the RPP with (a deterministic version of) Williams's (1992) stochastic gradient ascent REINFORCE algorithm for a case with n = 12 and rewards E[r | y] drawn from an exponential distribution. The RPP and REINFORCE were started at the same point; the graph shows the difference between the maximum possible reward and the expected reward after given numbers of iterations. To make a fair comparison between the two algorithms, we chose n small enough that the exact averages in equation 2.1 and (for REINFORCE) the exact gradients,

Δp_i = α ∂⟨E[r | y]⟩_{p(y|θ)} / ∂p_i,

could be calculated. Figure 1b shows the (consequently smooth) course of learning for various values of the learning rate. We observe that for both algorithms, the expected return never decreases (as guaranteed for the RPP but not REINFORCE), that the course of learning is not completely smooth—with a large plateau in the middle—and that both algorithms get stuck in
local minima. This is a best case for the RPP: only for a small learning rate and consequently slow learning does REINFORCE not get stuck in a worse local minimum. In other cases, there are values of α for which REINFORCE beats the RPP. However, there are no free parameters in the RPP, and it performs well across a variety of such problems.

3 Discussion

The analogy to EM can be made quite precise. EM is a maximum likelihood method for probability density estimation for a collection X of observed data where underlying each point x ∈ X there can be a hidden variable y ∈ Y. The density has the form

p(x | θ) = Σ_{y∈Y} p(y | θ) p(x | y, θ),
and we seek to choose θ to maximize

Σ_{x∈X} log [p(x | θ)].
The E phase of EM calculates the posterior responsibilities p(y | x, θ) for each y ∈ Y for each x:

p(y | x, θ) = p(y | θ) p(x | y, θ) / Σ_{z∈Y} p(z | θ) p(x | z, θ).
In our case, there is no x, but the equivalent of this posterior distribution, which comes from equation 2.2, is

P_y(θ) ≡ p(y | θ) E[r | y] / Σ_{z∈Y} p(z | θ) E[r | z].

The M phase of EM chooses parameters θ′ in the light of this posterior distribution to maximize

Σ_{x∈X} Σ_{y∈Y} p(y | x, θ) log [p(x, y | θ′)].
In our case this is exactly equivalent to minimizing the Kullback-Leibler divergence

KL[P_y(θ), p(y | θ′)] = − Σ_{y∈Y} P_y(θ) log [p(y | θ′) / P_y(θ)].

Up to some terms that do not affect θ′, this is −Q(θ, θ′). The Kullback-Leibler divergence between two distributions is a measure of the distance between them, and therefore minimizing it is a form of probability matching.
Figure 1: Performance of the RPP. (a) Adaptation of p_1 and p_2 using the RPP from two different start points on the given ρ(p_1, p_2). The points are successive values; the lines are joined for graphical convenience. (b) Comparison of the RPP with Williams's (1992) REINFORCE for a particular problem with n = 12. See text for comment and details.
Our result is weak: the requirement for sampling from the distribution is rather restrictive, and we have not proved anything about the actual rate of convergence. The algorithm performs best (Sutton, personal communication) if the differences between the rewards are of the same order of magnitude as the rewards themselves (as a result of the normalization in equation 2.3). It uses multiplicative comparison rather than the subtractive comparison of Sutton (1984), Williams (1992), and Dayan (1990).

The link to the EM algorithm suggests that there may be reinforcement learning algorithms other than the RPP that make large changes to the values of the parameters for which similar guarantees about nondecreasing average rewards can be given. The most interesting extension would be to dynamic programming (Bellman 1957), where techniques for choosing (single-component) actions to optimize return in sequential decision tasks include two algorithms that make large changes on each step: value and policy iteration (Howard 1960). As various people have noted, the latter explicitly involves both estimation (of the value of a policy) and maximization (choosing a new policy in the light of the value of each state under the old one), although its theory is not at all described in terms of density modeling.

Acknowledgments

Support came from the Natural Sciences and Engineering Research Council and the Canadian Institute for Advanced Research (CIAR). GEH is the Nesbitt-Burns Fellow of the CIAR. We are grateful to Philip Sabes and Mike Jordan for the spur and to Mike Jordan and the referees for comments on an earlier version of this paper.

References

Barto, A. G., & Anandan, P. (1985). Pattern recognizing stochastic learning automata. IEEE Transactions on Systems, Man and Cybernetics, 15, 360–374.
Baum, L. E., Petrie, E., Soules, G., & Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Stat., 41, 164–171.
Bellman, R. E. (1957). Dynamic programming. Princeton, NJ: Princeton University Press.
Dayan, P. (1990). Reinforcement comparison. In D. S. Touretzky, J. L. Elman, T. J. Sejnowski, & G. E. Hinton (Eds.), Proceedings of the 1990 Connectionist Models Summer School (pp. 45–51). San Mateo, CA: Morgan Kaufmann.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39, 1–38.
Gallistel, C. R. (1990). The organization of learning. Cambridge, MA: MIT Press.
Hinton, G. E. (1989). Connectionist learning procedures. Artificial Intelligence, 40, 185–234.
Howard, R. A. (1960). Dynamic programming and Markov processes. Cambridge, MA: MIT Press.
Narendra, K. S., & Thatachar, M. A. L. (1989). Learning automata: An introduction. Englewood Cliffs, NJ: Prentice-Hall.
Sabes, P. N., & Jordan, M. I. (1995). Reinforcement learning by probability matching. In Advances in Neural Information Processing Systems, 8. Cambridge, MA: MIT Press.
Sutton, R. S. (1984). Temporal credit assignment in reinforcement learning. Unpublished doctoral dissertation, University of Massachusetts, Amherst.
Wasan, M. T. (1969). Stochastic approximation. Cambridge: Cambridge University Press.
Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8, 229–256.
Received October 2, 1995; accepted May 30, 1996.
Fast Sigmoidal Networks via Spiking Neurons

Wolfgang Maass
Institute for Theoretical Computer Science, Technische Universität Graz, Graz, Austria
We show that networks of relatively realistic mathematical models for biological neurons in principle can simulate arbitrary feedforward sigmoidal neural nets in a way that has previously not been considered. This new approach is based on temporal coding by single spikes (respectively, by the timing of synchronous firing in pools of neurons) rather than on the traditional interpretation of analog variables in terms of firing rates. The resulting new simulation is substantially faster and hence more consistent with experimental results about the maximal speed of information processing in cortical neural systems. As a consequence we can show that networks of noisy spiking neurons are "universal approximators" in the sense that they can approximate with regard to temporal coding any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. This new proposal for the possible organization of computations in networks of spiking neurons has some interesting consequences for the type of learning rules that would be needed to explain the self-organization of such networks. Finally, the fast and noise-robust implementation of sigmoidal neural nets by temporal coding points to possible new ways of implementing feedforward and recurrent sigmoidal neural nets with pulse stream VLSI.
1 Introduction

Sigmoidal neural nets are the most powerful and flexible computational model known today. In addition they have the advantage of allowing "self-organization" via a variety of quite successful learning algorithms. Unfortunately the computational units of sigmoidal neural nets differ strongly from biological neurons, and it is particularly dubious whether sigmoidal neural nets provide a useful paradigm for the organization of fast computations in cortical neural systems.

Traditionally one views the firing rate of a neuron as the representation of an analog variable in analog computations with spiking neurons, in particular, in the simulation of sigmoidal neural nets by spiking neurons.
However, with regard to fast cortical computations, this view is inconsistent with experimental data. Perrett et al. (1982) and Thorpe and Imbert (1989) have demonstrated that visual pattern analysis and pattern classification can be carried out by humans in just 100 ms, in spite of the fact that it involves a minimum of 10 synaptic stages from the retina to the temporal lobe. The same speed of visual processing has been measured by Rolls and others in macaque monkeys. Furthermore they have shown that a single cortical area involved in visual processing can complete its computation in just 20 to 70 ms (Rolls 1994; Rolls and Tovee 1994). On the other hand, the firing rates of neurons involved in these computations are usually below 100 Hz, and hence at least 20 to 30 ms would be needed just to sample the current firing rate of a neuron. Thus a coding of analog variables by firing rates is quite dubious in the context of fast cortical computations.

Experimental evidence accumulated during the past few years indicates that many biological neural systems use the timing of single action potentials (or "spikes") to encode information (Abeles et al. 1993; Bialek and Rieke 1992). [...] for a large class of schemes for temporal coding of analog variables. In Section 4 we briefly indicate some new perspectives about the organization of learning in biological neural systems that follow from this approach.

We point out that this is not an article about biology but about computational complexity theory. Its main results (given in Sections 2 and 3) are rigorous theoretical results about the computational power of common mathematical models for networks of spiking neurons. However, some informal comments have been added (after the theorem in Section 2, as well as in Sections 4 and 5) in order to facilitate a discussion of the biological relevance of this mathematical model and its theoretical consequences.

The computational unit of a sigmoidal neural net is a sigmoidal gate (σ-gate) G that assigns to analog input numbers x_1, ..., x_{n−1} ∈ [0, γ] an output of the form σ(Σ_{i=1}^{n−1} r_i · x_i + r_n). The function σ: R → [0, γ] is called the activation function of G; r_1, ..., r_{n−1} are the weights of G, and r_n is the bias of G. These are considered adjustable parameters of G in the context of a learning process. The parameter γ > 0 determines the scale of the analog computations carried out by the neural net. For convenience we assume that each σ-gate G has an additional input x_n with some constant value c ∈ (0, γ] available. Hence after rescaling r_n, the function f_G that is computed by G can be viewed as a restriction of the function

σ(Σ_{i=1}^{n} r_i · x_i)
to arguments with x_n = c. The original choice for the activation function σ in Rumelhart et al. (1986) has been the logistic sigmoid function σ(y) = 1/(1 + e^{−y}). Many years of practical experience with sigmoidal neural nets have shown that the exact form of the activation function σ is not relevant for the computational power and learning capabilities of such neural nets, as long as σ is nondecreasing and almost everywhere differentiable, the limits lim_{y→−∞} σ(y) and lim_{y→∞} σ(y) have finite values, and σ increases approximately linearly in some intermediate range. Gradient descent learning procedures such as backpropagation formally require that σ is differentiable everywhere, but practically one can just as well use the piecewise linear "linear-saturated" activation function π_γ: R → [0, γ] defined by

π_γ(y) = 0 if y < 0,   π_γ(y) = y if 0 ≤ y ≤ γ,   π_γ(y) = γ if y > γ.
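In code, π_γ is simply a clipping operation; this one-line sketch of ours renders the definition above.

def pi_gamma(y, gamma):
    """Linear-saturated activation: 0 below 0, identity on [0, gamma],
    and gamma above gamma."""
    return min(max(y, 0.0), gamma)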
As a model for a spiking neuron we take the common model of a leaky integrate-and-fire neuron with noise, in the formulation of the somewhat more general spike response model of Gerstner and van Hemmen (1994). The only specific assumption needed for the construction in this article is that postsynaptic potentials can be described (or at least approximated) by a linear function during some initial segment. Actually the constructions of this article appear to be of interest even if this assumption is not satisfied, but in that case they are harder to analyze theoretically.

We consider networks that consist of a finite set V of spiking neurons, a set E ⊆ V × V of synapses, a weight w_{u,v} > 0 and a response function ε_{u,v}: R⁺ → R for each synapse (u, v) ∈ E (where R⁺ := {x ∈ R: x ≥ 0}), and a threshold function Θ_v: R⁺ → R⁺ for each neuron v ∈ V. Each response function ε_{u,v} models either an EPSP or an IPSP; the typical shape of EPSPs and IPSPs is indicated in Figure 1. If F_u ⊆ R⁺ is the set of firing times of a neuron u, then the potential at the trigger zone of neuron v at time t is given by

P_v(t) = Σ_{u: (u,v)∈E} Σ_{s∈F_u: s<t} w_{u,v} · ε_{u,v}(t − s).
Furthermore one considers a threshold function Θ_v(t − t′) that quantifies the "reluctance" of v to fire again at time t if its last previous firing was at time t′. Thus Θ_v(x) is extremely large for small x and then approaches Θ_v(0) for larger x. In a noise-free model, a neuron v fires at time t as soon as P_v(t) reaches Θ_v(t − t′). The precise form of this threshold function Θ_v is not important for the constructions in this article, since we consider here only computations that rely on the timing of the first spike in a spike train. Thus it suffices to assume that Θ_v(t − t′) = Θ_v(0) for sufficiently large values of t − t′ and
Figure 1: The typical shape of inhibitory and excitatory postsynaptic potentials at a biological neuron. (We assume that the resting membrane potential has the value 0.)
that inf{Θ_v(x): x ∈ (0, γ]} is larger than the potentials P_v(t) that occur in the construction of Section 2 for t ∈ [T_out − γ, T_out]. The latter condition (which amounts to the assumption of a sufficiently long refractory period) will prevent iterated firing of neuron v during the critical time interval [T_out − γ, T_out].

The construction in Section 2 is robust with respect to several types of noise that make the model of a spiking neuron biologically more realistic. As in the model for a leaky integrate-and-fire neuron with noise, we allow that the potentials P_v(t) and the threshold functions Θ_v(t − t′) are subject to noise, so that v may fail to fire at a time t when P_v(t) − Θ_v(t − t′) ≥ 0, or that v fires "spontaneously" at a time t when P_v(t) − Θ_v(t − t′) < 0. For the subsequent constructions we need only the following assumption about the firing mechanism: For any time interval I of length greater than 0, the probability that v fires during I is arbitrarily close to 1 if P_v(t) − Θ_v(t − t′) is sufficiently large for t ∈ I (up to the time when v fires), and the probability that v fires during I is arbitrarily close to 0 if P_v(t) − Θ_v(t − t′) is sufficiently negative for all t ∈ I.

It turns out that it suffices to assume only the following rather weak properties of the response functions ε_{u,v}: Each response function ε_{u,v}: R⁺ → R is either excitatory or inhibitory. All excitatory response functions ε_{u,v}(t) have the value 0 for t ∈ [0, d_{u,v}] and the value t − d_{u,v} for t ∈ [d_{u,v}, d_{u,v} + Δ], where d_{u,v} ≥ 0 is some fixed delay and Δ > 0 is some other constant. Furthermore we assume that ε_{u,v}(t) ≥ ε_{u,v}(d_{u,v} + Δ) for all t ∈ [d_{u,v} + Δ, d_{u,v} + Δ + γ], where γ with 0 < γ ≤ Δ/2 is another constant. With regard to inhibitory response functions ε_{u,v}(t), we assume that ε_{u,v}(t) = 0 for t ∈ [0, d_{u,v}] and ε_{u,v}(t) = −(t − d_{u,v}) for t ∈ [d_{u,v}, d_{u,v} + Δ]. Furthermore we assume that ε_{u,v}(t) = 0 for all sufficiently large t.

Finally we need a mechanism for increasing the firing threshold Θ := Θ_v(0) of a "rested" neuron v (at least for a short period). One biologically plausible assumption that would account for such an increase is that neuron v receives a large number of IPSPs from randomly firing neurons that arrive on synapses that are far away from the trigger zone of v, so that each of them has barely any effect on the dynamics of the potential at the trigger zone, but together they contribute a rather steady negative summand BN⁻ to the potential at the trigger zone. Other possible explanations for the increase of the firing threshold Θ could be based on the contribution of inhibitory interneurons whose IPSPs arrive close to the soma and are time locked to the onset of the stimulus, or on long-lasting inhibitions such as those mediated by GABA_B receptors. Formally we assume that each neuron v receives some negative (i.e., inhibitory) potential BN⁻ < 0 that can be assumed to be constant during the time intervals that are considered in the following arguments.

In comparison with other models for spiking neurons, this model allows more general noise than the models considered in Gerstner and van Hemmen (1994) and Maass (1995). On the other hand this model is somewhat less general than the one considered in Maass (1996a).

Having defined the formal model, we can now explain the key mechanism of the constructions in more detail. It is well known that incoming
EPSPs and I E P s are able to shiff the firing time of a biological neuron. We explore this effect in the mathematical model of a spiking neuron, showing that in principle it can be used to carry out complex analog computations in temporal coding. Assume that a spiking neuron u receives PSPs from presynaptic neurons a,. .. . , a n , that w, is the weight (efficacy) for the synapse from a, to u, and that d; is the time delay from a, to v. Then there exists a range of values for the parameters where the firing time L of neuron v can be written in terms of the firing times t,, of the presynaptic neurons a, as
Hence in principle a spiking neuron is able to compute in temporal coding of inoutsand oubutsalinear function (where theefficaciesof svnaosesencode , the coeff~cients of the h e a r function, as in rate codnng of analog varmbles) Thecalculations at the beztnntnz of Sectnon 2 show that thrs holds pmrwly are at time t,, all in their initial liniarlv rik if there is no noise and then P S P ~ ing or linearly decreasing phase. However, for a biological interpretation, it is interesting to know that even if the firing times to, (or more precisely their effectivevaluest., + d i ) lie further apart, this mechanism computes a meaningful approximation to a linear function. It employs (through the natural shaoeof BPS) an interesting adaotation of outliersamonp. the t, +d;: Input neuronsa, that fire t w late (relative to the average) losetheir influenceon the determination off,, and input neurons a, that fire extremely early have the same imoact as neuronsa, ihat firesomewhat later (but stilibefori the average) Remark 2 ~nScctnon 2 provtdesa m o r e d e t a h i dtscuss~onof thtsrffecl Thegoal of the next srctton IS to provca ngorous theoret~ralresult about theco&utational oower of formal models foinetworks of soikine neurons. We are not claiming that this construction (which is designed exclusively for that purpose) provides a blueprint for the organization of fast analog comoutations in bioloeical neural svstems. However.. it aorovides the first theoretical model that is able to explain the possibility of fast analog computations with noisy spiking neurons. Some remarks about the possible biological relevance of details of this construction can be found after the theorem in Section 2.
2 The Main Construction
Consider an arbitrary π_γ-gate G, for some γ > 0, which computes a function f_G: [0, γ]^n → [0, γ]. Let r_1, ..., r_n ∈ R be the weights of G. Thus we have

\[ f_G(s_1,\ldots,s_n) \;=\; \begin{cases} 0, & \text{if } \sum_{i=1}^{n} r_i s_i < 0 \\[2pt] \sum_{i=1}^{n} r_i s_i, & \text{if } 0 \le \sum_{i=1}^{n} r_i s_i \le \gamma \\[2pt] \gamma, & \text{if } \sum_{i=1}^{n} r_i s_i > \gamma \end{cases} \]

for arbitrary inputs s_1, ..., s_n ∈ [0, γ].
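A π_γ-gate is thus just a weighted sum clipped to [0, γ], which a few lines of Python make concrete (a sketch; the function name and example values are ours):

```python
import numpy as np

def pi_gamma_gate(s, r, gamma):
    """A pi_gamma-gate: a weighted sum clipped to the interval [0, gamma]."""
    return float(np.clip(np.dot(r, s), 0.0, gamma))

# Example with gamma = 1: the gate behaves linearly inside [0, 1]
# and saturates at the boundaries, like a piecewise-linear sigmoid.
print(pi_gamma_gate(s=[0.2, 0.6], r=[0.5, 0.5], gamma=1.0))   # 0.4
print(pi_gamma_gate(s=[0.9, 0.9], r=[2.0, 2.0], gamma=1.0))   # clipped to 1.0
```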
Figure 2: The simulation of a sigmoidal gate by a spiking neuron v in temporal coding.
For the sake of simplicity we first consider the case of spiking neurons without noise (i.e., σ_v ≡ δ_v ≡ 0, and each neuron v fires whenever P_v(t) crosses Θ_v(t − t′) from below). Then we describe the changes that are needed in this construction for the general case of noisy spiking neurons.

We construct for a given π_γ-gate G and for an arbitrary given parameter ε > 0 with ε < γ a network N_{G,ε} of spiking neurons that approximates f_G with precision ≤ ε; that is, the output N_{G,ε}(s_1, ..., s_n) of N_{G,ε} satisfies |N_{G,ε}(s_1, ..., s_n) − f_G(s_1, ..., s_n)| ≤ ε for all s_1, ..., s_n ∈ [0, γ]. In order to be able to scale the size of weights according to the given gate G, we assume that N_{G,ε} receives an additional input s_0 that is given like the other input variables s_1, ..., s_n in temporal coding. Thus we assume that there are n + 1 input neurons a_0, ..., a_n with the property that a_i fires at time T_in − s_i (where T_in is some constant). We will discuss at the end of this section (in Remarks 5 and 6) biologically more plausible variations of the construction where a_0 and T_in are not needed.

We construct a spiking neuron v in N_{G,ε} that receives n + 1 PSPs h_0(t), ..., h_n(t) from the n + 1 input neurons a_0, ..., a_n, which result from the firing of a_i at time T_in − s_i (see Figure 2). In addition v receives some auxiliary PSPs from other spiking neurons in N_{G,ε} whose timing depends only on T_in.
The firing time t_v of this neuron v will provide the output N_{G,ε}(s_1, ..., s_n) of the network N_{G,ε} in temporal coding; that is, v will fire at time T_out − N_{G,ε}(s_1, ..., s_n) for some T_out that does not depend on s_1, ..., s_n.

Let w_{a_i,v} be the weight of the synapse from input neuron a_i to neuron v, i = 0, ..., n. We assume that the "delay" d_{a_i,v} between a_i and v is the same for all input neurons a_0, ..., a_n, and we write d for this common delay. Thus we can describe for i = 0, ..., n the impact of the firing of a_i at time T_in − s_i on the potential at the trigger zone of neuron v at time t by the EPSP or IPSP h_i(t) = w_{a_i,v} · ε_{a_i,v}(t − (T_in − s_i)), which has on the basis of our assumptions the value

\[ h_i(t) \;=\; \begin{cases} 0, & \text{if } t - (T_{\mathrm{in}} - s_i) < d \\[2pt] w_i \cdot \big(t - (T_{\mathrm{in}} - s_i) - d\big), & \text{if } d \le t - (T_{\mathrm{in}} - s_i) \le d + \Delta \end{cases} \]

where w_i = w_{a_i,v} in the case of an EPSP and w_i = −w_{a_i,v} in the case of an IPSP. We assume that neuron v has not fired for a sufficiently long time, so that its threshold function Θ_v(t − t′) can be assumed to have a constant value Θ when the PSPs h_0(t), ..., h_n(t) arrive at the trigger zone of v. Furthermore we assume for the moment that besides these n + 1 PSPs only BN⁻ influences the potential at the trigger zone of v. This contribution BN⁻ is assumed to have a constant value in the time interval considered here. Then if no noise is present, the time t_v of the next firing of v can be described by the equality

\[ \Theta \;=\; \sum_{i=0}^{n} h_i(t_v) + BN^- \;=\; \sum_{i=0}^{n} w_i \cdot \big(t_v - (T_{\mathrm{in}} - s_i) - d\big) + BN^-, \qquad (2.1) \]

provided that

\[ d \;\le\; t_v - (T_{\mathrm{in}} - s_i) \;\le\; d + \Delta \quad \text{for } i = 0, \ldots, n. \qquad (2.2) \]

We assume from now on that a fixed value s_0 = 0 is chosen for the extra input s_0. Then equation 2.1 is equivalent to

\[ t_v \;=\; T_{\mathrm{in}} + d + \frac{\Theta - BN^-}{\sum_{i=0}^{n} w_i} \;-\; \sum_{i=0}^{n} \frac{w_i}{\sum_{j=0}^{n} w_j}\, s_i. \qquad (2.3) \]

This t_v satisfies equation 2.2 if −s_j ≤ t_v − T_in − d ≤ Δ − s_j for j = 0, ..., n; hence for any s_j ∈ [0, γ] if

\[ 0 \;\le\; t_v - T_{\mathrm{in}} - d \;\le\; \Delta - \gamma. \qquad (2.4) \]
We set w_i := λ · r_i for i = 1, ..., n, where r_1, ..., r_n are the weights of the simulated π_γ-gate G, and λ > 0 is some not-yet-determined factor (that we will later choose sufficiently large in order to make sure that neuron v fires closely to the time t_v given by equation 2.3 even in the presence of noise). We choose w_0 so that Σ_{i=0}^n w_i = λ. This implies that equation 2.3 with T_out := T_in + d + (Θ − BN⁻)/λ is equivalent to

\[ t_v \;=\; T_{\mathrm{out}} - \sum_{i=1}^{n} r_i\, s_i. \qquad (2.5) \]
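The following Python sketch simulates this deterministic construction directly and checks that the threshold crossing occurs at T_out − Σ r_i·s_i, as equation 2.5 predicts. All concrete parameter values here are hypothetical; they are chosen only so that conditions 2.2 and 2.4 hold.

```python
import numpy as np

# Hypothetical parameter values satisfying the conditions of the construction.
gamma, Delta, d, T_in = 1.0, 4.0, 1.0, 10.0
lam, Theta, BN = 5.0, 12.0, -3.0             # lam plays the role of lambda
r = np.array([0.3, 0.4, 0.2])                # weights of the simulated gate
s = np.array([0.5, 0.2, 0.8])                # inputs in temporal coding

w = lam * r
w0 = lam - w.sum()                           # ensures sum_i w_i = lam (s_0 = 0)
weights = np.concatenate(([w0], w))
fire_at = T_in - np.concatenate(([0.0], s))  # input neuron a_i fires at T_in - s_i

def potential(t):
    """Summed PSPs: each rises linearly with slope w_i, starting d after the spike."""
    rise = np.clip(t - fire_at - d, 0.0, Delta)
    return np.sum(weights * rise) + BN

# Scan time finely for the threshold crossing.
ts = np.arange(T_in, T_in + d + Delta, 1e-4)
t_v = ts[np.argmax([potential(t) >= Theta for t in ts])]

T_out = T_in + d + (Theta - BN) / lam
print(t_v, T_out - np.dot(r, s))             # the two values agree (up to the grid step)
```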
It is obvious that all these conditions can be achieved with a sufficient number (and/or weights) of auxiliary IPSPs from inhibitory neurons in N_{G,ε} whose firing time depends only on T_in. This can be satisfied using only the very weak condition that each IPSP is continuous and of value 0 before it eventually vanishes (see the noise condition in Section 1).

We now want to make sure that v fires at the latest by time T_out − γ + ε if Σ_{i=1}^n r_i · s_i ≥ γ − ε. Since all auxiliary IPSPs have vanished by time T_out − γ + ε, we have P_v(T_out − γ + ε) = Σ_{i=0}^n h_i(T_out − γ + ε) + BN⁻. Hence it suffices to show that the latter is ≥ Θ. Consider the set I of those i ∈ {1, ..., n} with r_i > 0, that is, those i where w_{a_i,v} · ε_{a_i,v}(t) represents an EPSP. By choosing suitable values s′_i in the interval [0, s_i] for i ∈ I and by setting s′_i := s_i for i ∈ {1, ..., n} − I, one can achieve that Σ_{i=1}^n r_i · s′_i = γ − ε. According to equation 2.5, the potential P_v reaches the value Θ at time T_out − γ + ε for the input (s′_1, ..., s′_n), and according to equations 2.2, 2.4, 2.6, and 2.7, each PSP w_{a_i,v} · ε_{a_i,v} is at time T_out − γ + ε still within Δ of the beginning of its nonzero phase. If we now change s′_i to s_i for i ∈ I, the EPSP w_{a_i,v} · ε_{a_i,v} will be advanced in time by s_i − s′_i ∈ [0, γ]. Hence each of the EPSPs w_{a_i,v} · ε_{a_i,v} for i ∈ I is for input s at time T_out − γ + ε within Δ + γ of the beginning of its rising phase. Since we have assumed in Section 1 that w_{a_i,v} · ε_{a_i,v}(t) ≥ w_{a_i,v} · ε_{a_i,v}(d + Δ) for all t ∈ [d + Δ, d + Δ + γ], the potential P_v(T_out − γ + ε) has for input (s_1, ..., s_n) a value that is at least as large as for input (s′_1, ..., s′_n), and therefore a value ≥ Θ. This implies that in the case Σ_{i=1}^n r_i · s_i > γ − ε the neuron v will fire within the time interval [T_out − γ, T_out − γ + ε].

In an analogous manner we can achieve with the help of EPSPs from auxiliary excitatory neurons in N_{G,ε} (whose firing time depends only on T_in, not on the input values s_1, ..., s_n) that neuron v fires at the latest at time T_out, even if Σ_{i=1}^n r_i · s_i < 0.

The preceding analysis implies that for any values s_i ∈ [0, γ] and any t ≤ T_out, the absolute value of the PSP w_{a_i,v} · ε_{a_i,v}(t) can be bounded by |w_{a_i,v}| · ρ, where ρ := sup{|ε_{a_i,v}(t)|: i ∈ {0, ..., n} and t ∈ [d, d + Δ + γ]} is the maximum absolute value that any ε_{a_i,v}(t) for i = 0, ..., n can reach during the initial segment of length Δ + γ of its nonzero segment. Thus if the absolute value of these functions ε_{a_i,v}(t) grows during [d + Δ, d + Δ + γ] not faster than during their linear segment [d, d + Δ] (which
Figure 3: The resulting activation function π_γ of the implementation of a sigmoidal gate in temporal coding.
apparently holds for biological PSPs), we can set ρ := Δ + γ. Consequently we can derive for any t ≤ T_out and any values s_1, ..., s_n ∈ [0, γ] the bound |Σ_{i=0}^n h_i(t)| ≤ Σ_{i=0}^n |w_{a_i,v}| · ρ =: W · ρ. Thus it suffices to make sure that EPSPs from auxiliary neurons in N_{G,ε} reach the trigger zone of neuron v shortly after time T_out − ε and that their sum reaches a value ≥ Θ − BN⁻ + W · ρ by time T_out. Then for any values of s_1, ..., s_n ∈ [0, γ] the potential P_v(t) will reach the value Θ by time T_out.

Recall that λ > 0 was some not-yet-determined factor. According to equation 2.3, a change in the value of this parameter λ can result only in a shift of t_v by an amount that is independent of the input variables s_1, ..., s_n. Furthermore if one chooses the contribution BN⁻ of inhibitory background noise as a function of λ so that Θ − BN⁻ = λ · Z for some constant Z, the resulting firing time t_v is completely independent of the choice of λ. On the other hand, the parameter λ occurs as a factor in the nonconstant term Σ_{i=0}^n h_i(t) = λ · Σ_{i=0}^n r_i · ε_{a_i,v}(t − (T_in − s_i)) in the potential P_v(t) = Σ_{i=0}^n h_i(t) + BN⁻ (we ignore the effect of auxiliary PSPs for the moment). Hence by choosing λ sufficiently large, one can make sure that in the deterministic case P_v(t) has an arbitrarily large derivative at the time t_v when it crosses Θ_v(t − t′). Furthermore if we choose γ, Δ, and BN⁻ so that equation 2.7 can be replaced by

\[ \gamma \;\le\; t_v - T_{\mathrm{in}} - d \;\le\; \Delta - \gamma - \varepsilon, \qquad (2.8) \]
then it is guaranteed that the linearly increasing potential P_v(t) (with slope proportional to λ) will rise with the same slope throughout the interval [t_v − γ, t_v + ε].

If we now keep this setting of the parameters but replace the deterministic neuron v by a stochastic neuron v that is subject to the two types of noise that were specified in Section 1, the following can be observed. If |σ(t)| ≤ σ̄ and |δ(t)| ≤ δ̄ for all t, then the time interval around t_v during which P_v(t) is within the interval [Θ − σ̄ − δ̄, Θ + σ̄ + δ̄] becomes arbitrarily small for sufficiently large λ. Furthermore if λ is sufficiently large (and BN⁻ is adjusted along with λ so that (Θ − BN⁻)/λ = Z for some constant Z > 0 that is independent of λ), then during [T_out − γ, t_v − ε] the value P_v(t) + σ̄ + δ̄ − Θ is arbitrarily negative. Thus the probability that v fires during [T_out − γ, t_v − ε] can be brought arbitrarily close to 0. In addition, by making λ sufficiently large, one can achieve that P_v(t) − σ̄ − δ̄ − Θ is arbitrarily large throughout the time interval [t_v + ε/2, t_v + ε]. Hence the probability that v fires by time t_v + ε can be brought arbitrarily close to 1.

If one increases the number or the weights of the previously described auxiliary PSPs in an analogous manner, one can achieve for the noisy neuron model that with probability ≥ 1 − δ the neuron v of network N_{G,ε,δ} fires exactly once during [T_out − γ, T_out], and at a time t_v = T_out − N_{G,ε,δ}(s_1, ..., s_n) with |N_{G,ε,δ}(s_1, ..., s_n) − f_G(s_1, ..., s_n)| ≤ 2ε, for all s_1, ..., s_n ∈ [0, γ].
The previously described construction can easily be adapted to allow shifts in the arrival times of auxiliary PSPs at v of up to ε/2. Hence we can allow that these PSPs also come from noisy spiking neurons.

In order to simulate an arbitrary given feedforward network N of π_γ-gates with precision ε and probability ≥ 1 − δ of correctness, one applies the preceding construction separately to each gate G in N. For any given ε, δ > 0, one determines for each π_γ-gate G in N (starting at the output gates of N) suitable values ε_G, δ_G > 0 so that in order to approximate N within ε with probability ≥ 1 − δ, it suffices that each gate G in N is approximated within ε_G with probability ≥ 1 − δ_G by a network N_{G,ε_G,δ_G} of noisy spiking neurons. In this way, one can achieve that the network N_{N,ε,δ} of noisy spiking neurons that is composed of these networks N_{G,ε_G,δ_G} approximates in temporal coding with probability ≥ 1 − δ the output of N within ε, for any given network input x_1, ..., x_n ∈ [0, γ]. Actually it would be more efficient if the modules N_{G,ε_G,δ_G} are not disjoint but share the neurons that generate the auxiliary EPSPs and IPSPs. Note that the problem of achieving reliable digital and analog computation with noisy neurons has already been considered in von Neumann (1956) for other types of formal neuron models. Thus we have shown:
Theorem. For any given ε, δ > 0 one can simulate any given feedforward sigmoidal neural net N consisting of π_γ-gates (for some sufficiently small γ that depends on the chosen model for a spiking neuron) by a network N_{N,ε,δ} of noisy spiking neurons in temporal coding. More precisely, for any network input x_1, ..., x_n ∈ [0, γ] the output of N_{N,ε,δ} differs with probability ≥ 1 − δ by at most ε from that of N. Furthermore the computation time of N_{N,ε,δ} depends on neither the number of gates in N nor the parameters ε, δ, but only on the number of layers of the sigmoidal neural network N.
Remark 1. One can exploit the concrete shape of PSPs in biological neurons in order to arrive at an alternative approximation of sigmoidal neural nets by spiking neurons in temporal coding without simulating explicitly the activation function π_γ of the sigmoidal neural net. In this alternative approach, one can delete from the previously described construction the auxiliary neurons that prevent a firing of neuron v before time T_out − γ and force it to fire at the latest by time T_out. Furthermore in this interpretation we no longer have to assume that the initial linear segments of all incoming PSPs overlap, and hence less precision in the firing times is needed. In case v fires substantially after time T_out, the resulting PSP at a postsynaptic neuron v′ (which simulates a sigmoidal gate on the next layer of the simulated sigmoidal neural net) still has its initial value 0 at the time t_{v′} when v′ fires. Dually, if v fires before time T_out − γ, the resulting PSP may be at time t_{v′} already near its saturation value (where it increases or decreases more slowly than during its initial "linear phase"). Thus in either case, the concrete functional form of EPSPs and IPSPs in a biological neuron v′ mod-
ulates the input from a presynaptic neuron v in a way that corresponds to the application of a saturating sigmoidal activation function to the output of v in temporal coding. Furthermore, larger differences than γ between the firing times of presynaptic neurons can be tolerated in this context.

This implicit implementation of an activation function is, however, mathematically imprecise. A closer look shows that the amount by which the output of v is adjusted depends on the size of the output of v relative to the size of the outputs of the other presynaptic neurons of v′ (in temporal coding). The output of v is changed by a larger amount if it differs more strongly from the median output of the other presynaptic neurons of v′ (but it is always moved in the direction of the median). This context-dependent modulation of the output of v is hard to exploit for precise theoretical results, but it appears to be useful for practically relevant computations such as pattern recognition. Thus we arrive in this way at a variation of the implementation of a sigmoidal neural net, where two biologically dubious components of our main construction (the auxiliary neurons that force v to fire exactly once during [T_out − γ, T_out], as well as the requirement that the initial linear segments of all relevant PSPs have to overlap) are deleted, but the resulting network of spiking neurons can still carry out complex and practical multilayer computations in temporal coding.
Remark 2. Our construction shows as a special case that a linear function of the form s ↦ w · s for real-valued inputs s = (s_1, ..., s_n) and a stored vector w = (w_1, ..., w_n) can be computed very efficiently (and very quickly) by a spiking neuron v in temporal coding. In this case, no auxiliary neurons are needed. Quick computation of linear functions is relevant in many biological contexts, such as coordinate transformations between different frames of reference or the analysis of a complex stimulus s in terms of many stored patterns w, w′, .... For example, in an olfactory neural system (see, e.g., Hopfield 1991, 1995) the stimulus s may be thought of as a superposition of various stored basic odors w, w′, .... In this case the output y = w · s of neuron v in temporal coding may be interpreted as the amount by which the basic odor w (which is stored in the efficacies of the synapses of v) is present in the stimulus s. Furthermore another neuron ṽ on the next layer might receive as its input y = (y, y′, ...) from several such neurons v, v′, ...; that is, ṽ receives the "mixing proportions" y = w · s, y′ = w′ · s for various stored basic odors w, w′, ... in temporal coding. This neuron ṽ on the second layer can then continue the pattern analysis by computing for its input y the inner product w̃ · y with some stored "higher-order pattern" w̃ (e.g., the composition of basic odors characteristic for an individual animal). Such multilayer pattern analysis is facilitated by the fact that the neurons considered here encode their output in the same way in which their input is
encoded (in contrast to the approach in Hopfield 1995). One also gets in this way a very fast implementation of Linsker's network (Linsker 1988) with spiking neurons in temporal coding.

Remark 3. For biological neurons it is impossible to choose the parameter λ arbitrarily large. On the other hand, the experiments of Bryant and Segundo (1976), as well as the recent experiments of Mainen and Sejnowski (1995), suggest that biological neurons already exhibit very high precision in their firing times if the slope of the membrane potential P_v(t) at the time t when it crosses the firing threshold is moderately large.

Remark 4. Analog computations with the type of temporal coding considered here become impossible if the jitter in the firing times is large relative to the size of the range [0, γ] of analog variables in temporal coding. The following points should be taken into account in this context:
• Even if the jitter is so high that in temporal coding just two different outputs can be distinguished in a reliable manner, the computational power of the constructed network N of spiking neurons is still enormous from the point of view of computational complexity theory. In this case, the network can simulate arbitrary threshold circuits (i.e., multilayer perceptrons whose gates give binary outputs) very fast. Threshold circuits are extremely powerful models for parallel digital computation, which can (in contrast to PRAMs and other models for currently available computer hardware) compute various nontrivial boolean functions that depend on a large number n of input bits with polynomially in n many gates and not more than four layers (Johnson 1990; Roychowdhury et al. 1994; Siu et al. 1995).
• Formally the value of γ in the preceding construction was required to be very small: it was bounded by a fraction of the length Δ of the rising segment of an EPSP (see inequalities 2.7 and 2.8). However, keep in mind that we were forced to resort to such small values for γ only because we wanted to prove a rigorous theoretical result. The considerations in Remark 1 suggest that in a practical context, one can still carry out meaningful computations if the linearly rising segments of incoming EPSPs are spread out over a somewhat larger time interval than Δ, where they need no longer overlap. If one wants to compute specific values for the parameters γ and Δ, one runs into the problem that the value of Δ varies enormously among different biological neural systems, from about 1 to 3 ms for EPSPs resulting from AMPA receptors in cortex to about a second in dark-adapted toad photoreceptors.
• There exists an alternative interpretation of our construction in a biological neural system that focuses on a smaller spatial scale. In this interpretation, a "hot spot" in the dendritic tree of a biological neuron
assumes the role of a spiking neuron in our construction. Hot spots are patches of membrane with voltage-dependent channels that are known to occur at branching points of the dendritic tree (Shepherd 1994; Jaffe et al. 1992; Mel 1993; Softky 1994). They fire a dendritic spike if the membrane potential reaches a certain threshold value. From this point of view, a single biological neuron may be viewed as a network of spiking neurons, which, according to our construction, can simulate in temporal coding a multilayer sigmoidal neural net. The timing precision of such circuits is possibly very high since it does not involve synapses (see also Softky 1994). Furthermore in this interpretation the computation is less affected by possible failures of synapses.

• In another biological interpretation one can replace each neuron v in
N_{N,ε,δ} by a pool P_v of neurons (as in a synfire chain; see Abeles et al. 1993). The firing time t_v of neuron v is then replaced by the mean firing time of neurons in P_v, and an EPSP from v is replaced by a sum of EPSPs from neurons in P_v. This interpretation has the advantage that it is less affected by jitter in the firing times of individual neurons and by stochastic failures of individual synapses. Furthermore, the rising segment of a sum of EPSPs from P_v is likely to be substantially longer than that of an individual EPSP. Hence in this interpretation, the parameter γ can be chosen substantially larger than for single neurons.
Remark 5. The auxiliary neuron a_0 that provides in the construction a reference spike at time T_in is not really necessary. Without such auxiliary neuron a_0, the weights w_i of the inputs s_i are automatically normalized so that they sum up to 1 (see equation 2.3 with Σ_{i=0}^n w_i replaced by Σ_{i=1}^n w_i). Such normalization is disadvantageous for simulating an arbitrary given sigmoidal gate G, but it may be desirable in a biological context.

Remark 6. Our construction was based on a specific temporal coding in terms of reference times T_in and T_out. Some biological systems may provide such reference times that are time locked with the onset of a sensory stimulus. However, our construction can also be adapted to other types of temporal coding that require no reference times. For example, the basic equation at the end of Section 1 (equation 1.1) also yields a mechanism for carrying out analog computations with regard to the scheme of "competitive temporal coding" discussed in Thorpe and Imbert (1989). In this coding scheme the firing time t_{a_i} of neuron a_i encodes the analog variable (t_{a_i} + d_i) − min_{j=1,...,n}(t_{a_j} + d_j), where d_i = d_{a_i,v} is the delay between neuron a_i and v. For T := min_{j=1,...,n}(t_{a_j} + d_j) we have, according to equation 1.1,

\[ t_v - T \;=\; \frac{\Theta}{\sum_{i=1}^{n} w_i} \;+\; \sum_{i=1}^{n} \frac{w_i}{\sum_{j=1}^{n} w_j}\,\big((t_{a_i} + d_i) - T\big). \qquad (2.9) \]
In this coding scheme the firing time t_v encodes the analog value t_v − min_{ṽ∈L}(t_ṽ), where L is the layer of neurons to which v belongs. Thus, equation 2.9 provides the mechanism for computing a segment of a linear function for inputs and outputs in competitive temporal coding. Hence our construction also provides a method for simulating multilayer neural nets with regard to this alternative coding scheme. No reference times T_in or T_out are needed for that.
In order to simulate sigmoidal neural nets by spiking neurons with competitive temporal coding, one can add lateral excitation among all neurons on the same layer L. In this way the neurons in L are forced to fire within a rather small time window (corresponding to the bounded output range of a sigmoidal activation function). In other applications, it appears to be more advantageous to employ instead lateral inhibition. This also has the effect of preventing the value of t_v − min_{ṽ∈L}(t_ṽ) from becoming too large.
Remark 7. For the sake of simplicity we have considered in the preceding construction only feedforward computations on networks of spiking neurons. However, the same simulation method can be used to simulate recurrent sigmoidal neural nets by recurrent networks of spiking neurons. For example, in this way one gets a novel implementation of a Hopfield net (with synchronous updates). At each "round" of the computation, the output of a unit of the Hopfield net is encoded by the firing time of a corresponding spiking neuron relative to that of other neurons in the network. For example, one may assume that a neuron gives the highest possible output value if it is among the first neurons that fire at that round. Each spiking neuron simulates in temporal coding a sigmoidal unit whose inputs are the firing times of the other spiking neurons in the network at the previous round. One can employ here competitive temporal coding (see Remark 6); hence no reference times or external clock are needed. A stable state of the Hopfield net is reached in this implementation (with competitive temporal coding) if and only if all neurons fire "regularly" (i.e., at regular intervals with a common interspike interval) but generally with different phases. These phases encode the output values of the individual neurons, and together these phase differences represent a "recalled" stored pattern of the Hopfield net. Thus, each stored pattern of the Hopfield net is realized by a different assignment of phase differences in some stable oscillation.

This novel implementation of a Hopfield net with spiking neurons in temporal coding has, apart from its high computation speed, another feature that is possibly of biological interest: whereas the input to such a network of spiking neurons can be transient (encoded in the relative timing of the firing of each neuron in the network at the first round), its output is available over
a longer time period, since it is encoded in the phases of the neurons in a stable global oscillation of the network. Hence, this implementation makes it possible that even in the rapidly fluctuating environment of temporal coding with single spikes, the outputs from different neural subsystems (which may operate at different time scales) can be collected and integrated by a larger neural system.
3 Universal Approximation Properties of Networks of Noisy Spiking Neurons in Temporal Coding
One of the most interesting and useful features of sigmoidal neural nets N is the fact that if [0, γ] is the range of their activation functions, they can approximate for any given natural numbers n and k any given continuous function F from [0, γ]^n into [0, γ]^k within any given ε > 0 (with regard to uniform convergence, i.e., the L_∞ norm). Furthermore it suffices to consider for this purpose feedforward nets N with just one hidden layer of neurons and (roughly) any activation function σ that is not a polynomial (Leshno et al. 1993). In addition, many years of experiments with backpropagation and other learning rules for sigmoidal neural nets have shown that for most concrete application problems, sigmoidal neural nets with relatively few hidden units allow a satisfactory approximation of the underlying target function F.

The result from the preceding section allows us to transfer these results to networks of spiking neurons with temporal coding. If some feedforward sigmoidal neural net N approximates an arbitrary given continuous function F: [0, γ]^n → [0, γ]^k within ε (with regard to the L_∞ norm), then with probability ≥ 1 − δ the network N_{N,ε,δ} of spiking neurons that we constructed in Section 2 approximates the same F within 2ε (with regard to the L_∞ norm). Furthermore, if N has only a small number p of layers, the computation time of N_{N,ε,δ} can be bounded (for biologically reasonable choices of the parameters involved) by 10 · p ms.

Thus if one neglects the fact that the fan-in of biological neurons is bounded by some fixed (although rather large) constant, the preceding theoretical results suggest that networks of biological neurons can, in spite of their slowness, approximate arbitrary continuous functions F: [0, γ]^n → [0, γ]^k within any given ε with a computation time of not more than 20 ms.

Finally, we would like to point out that our approximation result holds not only for the particular way of encoding analog inputs and outputs by firing times that was considered in the previous section but basically for any coding that is continuously related to it. More precisely, let n, l, l̃, k be arbitrary natural numbers. Let Q: [0, γ]^n → [0, 1]^l be any continuous function that specifies a method of "decoding" l variables ranging over [0, 1] from the firing times of n neurons during some time window of length γ (whose
end point is marked by the firing time of the first one of these n neurons), and let R: [0, 1]^l̃ → [0, γ]^k be any continuous and invertible function that describes a method for "encoding" output variables ranging over [0, 1] by the firing times of k neurons during some time window of length γ (whose end point is marked by the firing time of the first one of these k neurons). Then for any given continuous function F̃: [0, 1]^l → [0, 1]^l̃, the composition F = R ∘ F̃ ∘ Q of these three functions is a continuous function from [0, γ]^n into [0, γ]^k. According to our preceding argument, there exists a network N_{F,δ} of noisy spiking neurons (with one "hidden layer") such that for any x ∈ [0, γ]^n one has for the output N_{F,δ}(x) of this network that ||F(x) − N_{F,δ}(x)|| ≤ ε with probability ≥ 1 − δ, where || · || can be any common norm. Hence N_{F,δ} approximates for arbitrary inputs the given function F̃: [0, 1]^l → [0, 1]^l̃ for arbitrarily chosen continuous functions R, Q for coding and decoding of analog variables by firing times of spiking neurons with a precision of at least sup{||R⁻¹(y) − R⁻¹(y′)||: y, y′ ∈ [0, γ]^k and ||y − y′|| ≤ ε}. Thus, if the inverse R⁻¹ of the function R is uniformly continuous, one can approximate F̃ with regard to neural coding and decoding described by R and Q with arbitrarily high precision by networks of noisy spiking neurons with just one hidden layer.
"
.
.,.
4 Consequences for Learning
In the traditional interpretation of (unsupervised) Hebbian learning, a synapse is strengthened if both the presynaptic and the postsynaptic neurons are simultaneously "active" (i.e., both give high output values in terms of their current firing rates). In the implementation of a sigmoidal neural net N by a network of spiking neurons N_{N,ε,δ} in Section 2, the "weights" r_i of N are in fact modeled by the "strengths" w_i of corresponding synapses between spiking neurons. However, the information whether both the presynaptic and postsynaptic neurons give high output values in temporal coding can no longer be read off from their "activity" but only from the time difference T_i between their firing times. This observation gives rise to the question of whether there are biological mechanisms known that support a modulation of the efficacy (i.e., "strength") w_i of a synapse as a function of this time difference T_i.

If one works in the linear range of the simulation N_{G,ε} of a π_γ-gate G according to Section 2 (where G computes the function (s_1, ..., s_n) ↦ Σ_{i=1}^n r_i · s_i and h_i(t) describes an EPSP), then for Hebbian learning it would be desirable to increase w_i = λ · r_i if s_i is close to Σ_{j=1}^n r_j · s_j; that is, if the difference in firing times T_i := t_v − (T_in − s_i) = T_out − Σ_{j=1}^n r_j · s_j − T_in + s_i is close to T_out − T_in. On the other hand, one would like to decrease w_i if T_i is substantially smaller
or larger than T_out − T_in. Hence, a Hebbian-style unsupervised learning rule of the form
(for suitable parameters β, ρ > 0) would be meaningful in this context.

Recent results from neurobiology (Stuart and Sakmann 1994) show that action potentials in neocortical pyramidal cells are actively (i.e., supported by voltage-dependent channels) propagated backward from the soma into the dendrites (see also Jaffe et al. 1992). Hence the time difference T_i between the firing of the presynaptic and the postsynaptic neurons is in principle available to each synapse. Furthermore new experimental results (Markram 1995; Markram and Sakmann 1995) show that in vitro, the efficacy of synapses of neocortical pyramidal neurons is in fact modulated as a function of this time difference T_i.

There exists one interesting structural difference between this interpretation of Hebbian learning in temporal coding and its traditional interpretation: the time difference T_i provides a synapse with the full information about the correlation between the output values of the pre- and postsynaptic neurons in temporal coding, no matter whether both neurons give high or low output values. However, in the traditional interpretation of Hebbian learning in terms of firing rates, the efficacy of a synapse is increased only if both neurons give high output values (in frequency coding).

An implementation of Hebbian learning in the temporal domain is also appealing in the context of pulse stream VLSI (i.e., "silicon spiking neurons"). These artificial neural nets are much faster than biological spiking neurons: they can work with interspike intervals in the microsecond range. If for a hardware implementation of a sigmoidal gate with pulse stream VLSI according to the construction of Section 2, a Hebbian learning rule can be applied in the temporal domain after each pulse, such a chip may be able to carry out so many learning steps per second that it could in principle (neglecting input-output constraints) overcome the main impediment of traditional artificial neural nets: their low learning speeds.
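As an illustration only, a toy update of this kind could look as follows in Python. The quadratic window shape and the parameters β and ρ are our assumptions; they stand in for the rule with parameters β, ρ referred to above, whose exact form is not given here.

```python
def hebbian_update(w_i, T_i, T_window, beta=0.1, rho=1.0):
    """Illustrative timing-based Hebbian update (window shape is an assumption).

    T_i is the measured firing-time difference t_v - (T_in - s_i); T_window is
    the target difference T_out - T_in.  The weight grows when T_i is close to
    T_window and shrinks when it is far from it; rho sets the width of the
    window and beta the learning rate.
    """
    closeness = 1.0 - (T_i - T_window) ** 2 / rho ** 2
    return w_i + beta * closeness

# A synapse whose timing matches the target window is strengthened,
# one that misses it by more than rho is weakened.
print(hebbian_update(0.5, T_i=3.1, T_window=3.0))  # strengthened
print(hebbian_update(0.5, T_i=5.0, T_window=3.0))  # weakened
```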
So far we have assumed in the construction of N_{N,ε,δ} in Section 2 that the time delays d_i between the presynaptic neurons a_i and the postsynaptic neuron v (i.e., the time needed until an action potential from a_i can influence the potential at the trigger zone of v) were the same for all neurons a_i. Differences among these delays have the effect of providing an additive constant to the variable s_i that is communicated in temporal coding from a_i to v. Hence, they also have the ability to give different "weights" to different input variables. In a biological context, they appear to be useful for providing to the network a priori information about its computational task, so that Hebbian learning can be viewed as "fine-tuning" on top of this preprogrammed information. If there exist biological mechanisms for modulating such delays (see Hopfield 1995; Kempter et al. 1996), they would provide, in addition to a short-term memory via synaptic modulation, a separate mechanism for storing and adapting long-term memory via differences in the delays.

5 Conclusions
We have shown in this article that there exists a rather simple way to compute linear functions and to simulate feedforward as well as recurrent sigmoidal neural nets in temporal coding by networks of noisy spiking neurons. In contrast to the traditionally considered implementation via frequency coding, the new approach yields a computation speed that is faster by several orders of magnitude. In fact, to the best of our knowledge, it provides the first theoretical model that is able to explain the experimentally observed speed of fast information processing in the cortex on the basis of relatively slow spiking neurons as computational units.

Further experiments will be needed to determine whether this theoretical model is biologically relevant. One problem is that we do not know in which way batch inputs (consisting of many analog variables in parallel) are encoded by biological neural systems. The existing results on neural coding (see Rieke et al. 1996) address only the coding of time series, that is, sequential analog inputs. However, if further experiments showed that the input-dependent firing times in visual cortex, as reported in Bair et al. (1994), vary in a continuous (i.e., piecewise continuous) manner in response to smooth changes of complex inputs, this would provide some support to the style of theoretical models for analog computation with spiking neurons that is considered here. Furthermore, a biological realization of recurrent neural nets ...

... we have

(2.9)

where ε_0^(1)(s) is the analytically derived kernel introduced above. Finally, the kernel η is determined by solving the Hodgkin-Huxley equations numerically with input I(t) = c · δ_{t_0}(t). The amplitude c has been chosen sufficiently large so as to evoke a spike. The exact value of c is not important. We set η_1(t − t^f) = u[c δ_{t_0}](t) − ū for t > t_0, where t^f is the time when u[c δ_{t_0}](t) crosses a formal threshold ϑ and ū is the constant solution with zero input. Once the amplitude c is fixed, the kernel η is uniquely determined as the form of an action potential with, apart from the triggering pulse, zero input current. The only free parameter is the threshold ϑ, which is to be found by an optimization procedure described below.

2.5 Comparison with Wiener Expansions. The approach indicated so far shows that all kernels can be found by a systematic and straightforward procedure. The kernels can be derived either analytically, as described in
the appendix, or numerically by studying the system's response to input pulses on a small set of examples. This is in contrast to the Wiener theory (Wiener, 1958; Palm & Poggio, 1977), which analyzes the response of a system to gaussian white noise. Since Wiener's approach to the description of nonlinear systems is a stochastic one, the determination of the Wiener kernels requires large sets of input and output data (Palm & Poggio, 1978; Korenberg, 1989). Exploiting the deterministic response of the system to well-designed inputs (short pulses) thus simplifies things significantly. The study of impulse response functions is a well-known approach to linear or weakly nonlinear systems. Here we have extended this approach to highly nonlinear systems under the proviso that the response kernels ε are (almost everywhere) continuous functions of their arguments.

It is important to keep in mind that Volterra and Wiener series cannot fully reproduce the threshold behavior of spike generation, even if higher-order terms are taken into account. The reason is that these expansions can only approximate mappings that are smooth, whereas the mapping from input current I(t) to the time course of the membrane voltage has an apparent discontinuity at the spiking threshold (see Figure 2). Of course, this is not a discontinuity in the mathematical sense, but the output is very sensitive to small changes in the input current (Cole, Guttman, & Bezanilla, 1970; Rinzel & Ermentrout, 1989). We have to correct for this by adding the kernel η each time the series expansion indicates that a spike will occur, say, by crossing an appropriate threshold value. In doing so, we no longer expand around the constant solution u(t) = 0, but around u(t) = η(t − t^f), the time course of a standard action potential.

The general framework outlined in this section will now be applied to the Hodgkin-Huxley equations.

3 Application to Hodgkin-Huxley Equations
Applying the theoretical analysis to the Hodgkin-Huxley equations, we begin by specifying the equations and then explain how the reduction to the spike response model is performed.
3.1 Hodgkin-Huxley Spike Trains. According to Hodgkin and Huxley (Hodgkin & Huxley, 1952; Cronin, 1987), the voltage u(t) across a small patch of axonal membrane (e.g., close to the hillock) changes at a rate given by

\[ C\,\frac{du}{dt} \;=\; -g_{\mathrm{Na}}\, m^3 h\,(u - V_{\mathrm{Na}}) \;-\; g_{\mathrm{K}}\, n^4\,(u - V_{\mathrm{K}}) \;-\; g_{\mathrm{L}}\,(u - V_{\mathrm{L}}) \;+\; I(t), \qquad (3.1) \]
where I(t) is a time-dependent input current. The constants V_Na, V_K, and V_L are the equilibrium potentials corresponding to sodium, potassium, and "leakage" currents, and the g's are parameters of the respective ion conduc-
Figure 2: Threshold behavior of the Hodgkin-Huxley equations (see equations 3.1 and 3.2). We show the response of the Hodgkin-Huxley equations to a current pulse of 1 ms duration. A current amplitude of 7.0 μA cm⁻² suffices to evoke an action potential (solid line; the maximum at 100 mV is out of scale), whereas a slightly smaller pulse of 6.9 μA cm⁻² fails to produce a spike (dashed line). The time course of the membrane voltage is completely different in both cases. Therefore, the mapping of the input current onto the time course of the membrane voltage is highly sensitive to changes in the input around the spiking threshold. The bar in the upper left indicates the duration of the input pulse. (Horizontal axis: t in ms.)
tances. The variables m, n, and h change according to a differential equation of the form

\[ \frac{dx}{dt} \;=\; \alpha_x(u)\,(1 - x) \;-\; \beta_x(u)\,x \qquad (3.2) \]
with x ∈ {m, n, h}. The parameters are given in Table 1.

For a biological neuron that is part of a larger network of neurons, the input current is due to spike input from many presynaptic neurons. Hence the input current is not constant but fluctuates. In our simulations, we therefore use an input current generated by the following procedure. Every 2 ms, a random number is drawn from a gaussian distribution with zero mean and standard deviation σ. The discrete current values at intervals of 2 ms are linearly interpolated and define the input I(t). This approach is intended to mimic the effect of spike input into the neuron. The procedure is somewhat arbitrary but easily implementable and leads to a spike train with a broad interval distribution and realistic firing rates depending on σ, as shown in Figure 3. We emphasize that our single-variable model is intended to reproduce the firing times of the spikes generated by the Hodgkin-Huxley equations.
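This input-generation procedure is easy to reproduce. Here is a short Python sketch of it; the function name and the fine grid spacing are ours, and σ is in μA/cm²:

```python
import numpy as np

rng = np.random.default_rng(0)

def fluctuating_current(t_max_ms, sigma, dt=0.01, knot_spacing=2.0):
    """Input current as described in the text: every 2 ms a value is drawn
    from a zero-mean gaussian with standard deviation sigma, and the discrete
    values are linearly interpolated onto a fine integration grid."""
    knots_t = np.arange(0.0, t_max_ms + knot_spacing, knot_spacing)
    knots_I = rng.normal(0.0, sigma, size=knots_t.size)
    t = np.arange(0.0, t_max_ms, dt)
    return t, np.interp(t, knots_t, knots_I)

t, I = fluctuating_current(t_max_ms=1000.0, sigma=3.0)
print(I.mean(), I.std())   # approximately 0 and 3
```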
Table 1: Parameters of the Hodgkin-Huxley Equations (Membrane Capacity, C = 1 μF/cm²)

x     V_x        g_x
Na    115 mV     120 mS/cm²
K     −12 mV     36 mS/cm²
L     10.6 mV    0.3 mS/cm²

x     α_x(u)                                     β_x(u)
n     (0.1 − 0.01u) / [exp(1 − 0.1u) − 1]        0.125 exp(−u/80)
m     (2.5 − 0.1u) / [exp(2.5 − 0.1u) − 1]       4 exp(−u/18)
h     0.07 exp(−u/20)                            1 / [exp(3 − 0.1u) + 1]
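For reference, the entries of Table 1 translate directly into code. The following Python sketch (variable names are ours) also recovers the resting values x̄ = α_x(0)/(α_x(0) + β_x(0)) of the constant solution discussed below:

```python
import numpy as np

# Parameters from Table 1 (potentials in mV, conductances in mS/cm^2).
V = {"Na": 115.0, "K": -12.0, "L": 10.6}
g = {"Na": 120.0, "K": 36.0, "L": 0.3}
C = 1.0  # membrane capacity, uF/cm^2

def alpha(x, u):
    return {"n": (0.1 - 0.01 * u) / (np.exp(1.0 - 0.1 * u) - 1.0),
            "m": (2.5 - 0.1 * u) / (np.exp(2.5 - 0.1 * u) - 1.0),
            "h": 0.07 * np.exp(-u / 20.0)}[x]

def beta(x, u):
    return {"n": 0.125 * np.exp(-u / 80.0),
            "m": 4.0 * np.exp(-u / 18.0),
            "h": 1.0 / (np.exp(3.0 - 0.1 * u) + 1.0)}[x]

# Resting values x_bar = alpha / (alpha + beta) at u = 0 (the constant solution).
for x in "mnh":
    a, b = alpha(x, 0.0), beta(x, 0.0)
    print(x, a / (a + b))
```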
This is a harder problem than fitting the gain function for constant input current.

3.2 Approximation by a Threshold Model. We want to reduce the four Hodgkin-Huxley equations (3.1 and 3.2) to the spike response model (see equation 2.2) in such a way that the model generates a spike train identical with or at least similar to the original one of Figure 3a. The reduction will involve four major steps:

1. Calculate the response functions ε_0^(1), ε_0^(2), and ε_0^(3).
2. Derive the kernel η_1.
3. Analyze the corrections to ε_0^(1) that are caused by the most recent output spike.
4. Determine a threshold criterion for firing.

Before starting, we note that for I(t) = Ī, the Hodgkin-Huxley equations allow a stationary solution with constant values v(t) = v̄, m(t) = m̄, h(t) = h̄, and n(t) = n̄. The constant solution is stable if Ī is smaller than a threshold value Ī_ϑ = 9.47 μA/cm², which is determined by the parameter values given in Table 1.
3.2.1 Standard Volterra Expansion. The kernels ε_0^(1)(s_1) and ε_0^(2)(s_1, s_2) have been derived as indicated in the appendix and are shown in Figures 1b (solid line) and 4. Figure 5 shows a small section of the spike train of Figure 3a with a single spike. Two intervals, one before (lower left) and a second during and immediately after the spike (lower right), are shown on an enlarged scale. Before the spike occurs, the numerical solution of the Hodgkin-Huxley equations (solid) is well approximated by the
Figure 3: Hodgkin-Huxley model. (a) 1000 ms of a simulation of the Hodgkin-Huxley equations stimulated through a fluctuating input current I(t) with zero mean and standard deviation 3 μA/cm². Because of the fluctuating input, the spikes occur at stochastic intervals with a broad distribution. (b) A histogram of the interspike interval (ISI) sampled over a period of 100 s.
first-order Volterra expansion u(t) = ∫ds ε_0^(1)(s) I(t − s) (long dashed line) and even better by the second-order approximation (dotted line). During a spike, however, even an approximation to third order fails completely (see Figure 6). The discrepancy during a spike may be surprising at first sight but is not completely unexpected. During action potential generation, the neuronal dynamics is mainly driven by internal processes of channel opening and closing, much more than by external input. Mathematically, the reason is that the series expansion is valid only as long as I(t) is not too large; see the appendix.
Figure 4: Second-order kernel for the Hodgkin-Huxley equations. The figure exhibits a contour plot for the second-order kernel ε_0^(2)(s, s′). The kernel vanishes identically for negative arguments, as is required by causality. (Axes: s and s′ in ms.)
3.2.2 Expansion Including Action Potentials. In order to account for the remaining differences, we have to add explicitly the shape of the spike and the afterpotential by way of the kernel η_1(s). The function η_1(s) has been determined by the procedure explained in Section 2 and is shown in Figure 1a. If we add η_1(t − t^f) but continue to work with the standard kernels ε_0^(1)(s), ε_0^(2)(s, s′), ..., we find large errors in an interval up to roughly 20 ms after a spike. This is exemplified in the lower right of Figure 5, where the long-dashed line shows the approximation u(t) = η_1(t − t^f) + ∫ds ε_0^(1)(s) I(t − s). From a biological point of view, this is easy to understand. The kernel ε_0^(1)(s) describes a postsynaptic potential in response to a standard input spike. Due to refractoriness, the responsiveness of the membrane is lower after a spike, and for this reason the postsynaptic potential looks different if an input pulse arrives shortly after an output spike. We then have to work with the improved kernels ε_1^(1)(σ; s), ε_1^(2)(σ; s, s′), ... introduced in equation 2.2. The dotted line in the lower-right graph of Figure 5 gives the approximation u(t) := η_1(t − t^f) + ∫ds ε_1^(1)(t − t^f; s) I(t − s), and we see that the fit is nearly perfect.
3.2.3 Threshold Criterion. The trigger time t^f of an action potential is given by a simple threshold crossing process. Although the expansion in terms of the ε kernels will never give the perfect form of the spike, the
Figure 5: Systematic approximation of the Hodgkin-Huxley model by the spike response model (see equation 2.2). The upper graph shows a small section of the Hodgkin-Huxley spike train of Figure 3a. The lower left diagram shows the solution of the Hodgkin-Huxley equations (solid) and the first (dashed) and second (dotted) order approximation with the kernels ε_0^(1) and ε_0^(2) before the spike occurs, on an enlarged scale. The lower right diagram illustrates the behavior of the membrane immediately after an action potential. The solution of the Hodgkin-Huxley equations (solid line) is approximated by u(t) = η_1(t − t^f) + ∫ds ε_0^(1)(s) I(t − s), a dashed line. Using the improved kernel ε_1^(1) instead of ε_0^(1) results in a nearly perfect fit (dotted line).
truncated series expansion does exhibit rather significant peaks at positions where the Hodgkin-Huxley model produces spikes (see Figure 6). We therefore decided to apply a simple voltage threshold criterion instead of using some other derived quantity such as the derivative v̇ or an effective input current (Koch, Bernander, & Douglas, 1995). Whenever the membrane potential in terms of the expansion (see equation 2.3) reaches the threshold ϑ from below, we define a firing time t^f and add a contribution η(t − t^f). The threshold parameter ϑ is considered as a fit parameter and has to be optimized as described below.
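The threshold criterion is straightforward to implement. The sketch below assumes the kernels are given as sampled arrays and, for simplicity, uses a first-order kernel that does not depend on the time since the last spike; all names and the toy kernel shapes are ours:

```python
import numpy as np

def srm_voltage(it, spike_idx, eps1, eta1, I, dt):
    """Spike response model at time step it: the afterpotential eta1 of the
    most recent output spike plus the first-order response
    sum_s eps1(s) * I(t - s) * dt."""
    u = 0.0
    if spike_idx:
        k = it - spike_idx[-1]                     # steps since the last spike
        if 0 <= k < eta1.size:
            u += eta1[k]
    n = min(eps1.size, it + 1)
    u += np.dot(eps1[:n], I[it - n + 1: it + 1][::-1]) * dt
    return u

def run_srm(I, eps1, eta1, theta, dt):
    """Threshold criterion: a firing time is recorded whenever u crosses
    theta from below; the kernel eta1 is then added to the trajectory."""
    spikes, u_prev = [], -np.inf
    for it in range(I.size):
        u = srm_voltage(it, spikes, eps1, eta1, I, dt)
        if u_prev < theta <= u:
            spikes.append(it)
        u_prev = u
    return [it * dt for it in spikes]

dt = 0.1                                            # ms; the kernels below are
eps1 = 0.3 * np.exp(-np.arange(0, 20, dt) / 4.0)    # toy shapes for illustration,
eta1 = np.concatenate(([90.0],                      # not the derived kernels
                       -30.0 * np.exp(-np.arange(0, 30, dt) / 8.0)))
print(run_srm(np.full(1000, 8.0), eps1, eta1, theta=4.7, dt=dt))
```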
Figure 6: Approximation of the Hodgkin-Huxley model by a standard Volterra expansion (see equation 2.1) during a spike. Here we demonstrate the importance of the kernel η_1 as it appears in equation 2.2. First- (long dashed) and second-order approximation (dashed) using the kernels ε_0^(1) and ε_0^(2) produces a significant peak in the membrane potential where the Hodgkin-Huxley equations generate an action potential (solid line), but even the third-order approximation (dotted) fails to reproduce the spike with a peak of nearly 100 mV (far out of scale). The remaining difference is taken care of by the kernel η_1.
4 Simulation Results
We have compared the full Hodgkin-Huxley model and the spike response model (see equation 2.2) in a simulation of spike activity over 100,000 ms (i.e., 100 s). In so doing, we have accepted a spike of the spike response model to be coincident with the corresponding spike of the Hodgkin-Huxley model if it arrives within a temporal precision of ±2 ms.

4.1 Coincidence Measure. As a measure of the rate of coincidence of spikes generated by the Hodgkin-Huxley equations and another model such as equation 2.3, we define an index Γ that is the number of coincidences minus the number of coincidences by chance, relative to the total number of spikes produced by the Hodgkin-Huxley and the other model. More precisely, Γ is the fraction

\[ \Gamma \;=\; \frac{N_{\mathrm{coinc}} - \langle N_{\mathrm{coinc}} \rangle}{\tfrac{1}{2}\,(N_{\mathrm{HH}} + N_{\mathrm{SRM}})} \cdot \frac{1}{\mathcal{N}}. \qquad (4.1) \]
Here N_coinc is the number of coincident spikes of the Hodgkin-Huxley and the other model as counted in a single simulation run, N_HH the number of action potentials generated by the Hodgkin-Huxley equations, and N_SRM the
number of spikes produced by the other model, say, equation 2.2. The normalization factor 𝒩 restricts Γ to unity in the case of a perfect coincidence of the other model's spike train with the Hodgkin-Huxley one (N_coinc = N_SRM = N_HH). Finally, ⟨N_coinc⟩ is the average number of coincidences obtained if the spikes of our model were generated not systematically but randomly by a homogeneous Poisson process. Hence Γ ≈ 0 if equation 2.2 would produce spikes randomly. The definition 4.1 is a modification of an idea of Joeken and Schwegler (1995).

In order to calculate ⟨N_coinc⟩, we perform a gedanken experiment. We are given the numbers N_HH and N_SRM and divide the total simulation time into K bins of 4 ms length each. Due to refractoriness, each bin contains at most one Hodgkin-Huxley spike; we denote the number of these bins by N_HH. So (K − N_HH) bins are empty. We now distribute N_SRM randomly generated spikes among the K bins, each bin receiving at most one spike. A coincidence occurs each time a bin contains a Hodgkin-Huxley spike and a random spike. The probability of encountering N_coinc coincidences is thus hypergeometrically distributed, that is,

\[ p(N_{\mathrm{coinc}}) \;=\; \binom{N_{\mathrm{HH}}}{N_{\mathrm{coinc}}} \binom{K - N_{\mathrm{HH}}}{N_{\mathrm{SRM}} - N_{\mathrm{coinc}}} \Big/ \binom{K}{N_{\mathrm{SRM}}}, \]

with mean ⟨N_coinc⟩ = N_SRM N_HH / K. To see why, we use a simple analogy. Imagine an urn with N_HH black balls and (K − N_HH) white balls, and perform a random sampling of N_SRM balls without replacement. The number of black balls drawn from the urn corresponds to the number of coincidences in the original problem. This setup is governed by the hypergeometric distribution (Prohorov & Rozanov, 1969; Feller, 1970). In passing, we note that dropping the refractory side condition leads to a binomial distribution and, hence, to the same result for ⟨N_coinc⟩.

Using the correct normalization, 𝒩 = 1 − N_HH/K, we thus obtain a suitable measure: Γ has expectation zero for a purely random spike train, yields unity in case of a perfect coincidence of the spike response model's spike train with the Hodgkin-Huxley one, and is bounded by −1 from below. A negative Γ hints at a negative correlation. Furthermore, Γ is linear in the number of coincidences and monotonically decreasing with the number of erroneous spikes. That is, Γ decreases with increasing N_SRM while N_coinc is kept fixed, or with decreasing N_coinc while N_SRM is kept fixed.

While simulating the spike response model, we have found that due to the long-lasting afterpotential, a single misplaced spike causes subsequent spikes to occur with high probability at false times too. Since this is not a numerical artifact but a well-known and even experimentally observed biological effect (Mainen & Sejnowski, 1995), we have adopted the following procedure in order to eliminate this type of problem from the statistics. Every time the spike response model fails to reproduce a spike of the Hodgkin-Huxley spike train, we note an error and add the kernel η at the position where the original Hodgkin-Huxley spike occurred. Analogously, we count an error and omit the afterpotential if the spike response model produces a spike where no spike should occur.
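The coincidence measure is compact enough to state in code. A sketch, following the 4 ms binning of the gedanken experiment (the function name and binning details are ours):

```python
def gamma_index(spikes_hh, spikes_srm, T_total, bin_ms=4.0):
    """Coincidence measure Gamma of equation 4.1: coincidences in excess of
    chance, normalized by the mean spike count and by N = 1 - N_HH / K."""
    K = int(T_total / bin_ms)
    bins_hh = {int(t / bin_ms) for t in spikes_hh}     # at most one spike per bin
    bins_srm = {int(t / bin_ms) for t in spikes_srm}
    n_hh, n_srm = len(bins_hh), len(bins_srm)
    n_coinc = len(bins_hh & bins_srm)
    chance = n_srm * n_hh / K          # mean of the hypergeometric distribution
    norm = 1.0 - n_hh / K
    return (n_coinc - chance) / (0.5 * (n_hh + n_srm) * norm)

# Identical trains give Gamma = 1; disjoint trains give a negative value.
train = [12.0, 55.0, 120.0, 310.0, 480.0]
print(gamma_index(train, train, T_total=1000.0))                       # 1.0
print(gamma_index(train, [t + 8.0 for t in train], T_total=1000.0))    # < 0
```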
In case of a coincidence between the Hodgkin-Huxley spike and a spike of the spike response model, we take no action; we place the η kernel at the position suggested by the threshold criterion. Without correction, Γ would decrease by at most a few percentage points, and the standard deviation would increase by a factor of two.

4.2 Low Firing Rate. The results of the simulations for low mean firing rates are summarized in Table 2. Due to the large interspike intervals, effects from spikes in the past are rather weak. Hence we can ignore the dependence of the ε kernels on the former firing times and work with ε_0^(1)(s) instead of ε_1^(1)(σ; s). We do, however, take into account the kernel η_1(s). Higher-order kernels ε_0^(2), ε_0^(3) give a significant improvement over an approximation with ε_0^(1) only. This reflects the importance of the nonlinear interaction of input pulses. In a third-order approximation, the spike response model reproduces 85 percent of the Hodgkin-Huxley spikes correctly (see Table 2).

4.3 High Firing Rate. At higher firing rates, the influence of the most recent output spike is more important than the nonlinear interaction between input pulses. We therefore use the kernel ε_1^(1) but neglect higher-order terms with ε_1^(2), ε_1^(3), ... in equation 2.2. Figure 7 gives the results of the simulations with different mean firing rates. As for Table 2, the threshold ϑ and the time δ between the formal threshold crossing time t^f and the maximum of the action potential have been optimized in a separate run over 10 s. For this optimization run we have used an input current with σ = 3 μA/cm², corresponding to a mean firing rate of about 33 Hz; the maximum mean firing rate is in the range of 70 Hz. For firing rates above 30 Hz, the single-variable model reproduces about 90 percent of the Hodgkin-Huxley spikes correctly. This is quite remarkable since we have used only the first-order approximation,
\[ u(t) \;=\; \eta_1(t - t^f) \;+\; \int_0^{\infty} ds\; \varepsilon_1^{(1)}(t - t^f;\, s)\, I(t - s), \]
and neglected all higher-order terms. If we neglected the influence of the last spike, we would end up with a coincidence rate of only 73 percent, even in a second-order approximation using kernels ε_0^(1) and ε_0^(2). Closer examination of the simulation results for various mean firing rates reveals a systematic deviation of the mean firing rate of the spike response model from the mean firing rate of the Hodgkin-Huxley equations. More precisely, the spike response model generates additional spikes where no spike should occur, and it misses fewer spikes if the standard deviation of the input current is increased, which is quite natural (Mainen & Sejnowski, 1995).

We have performed the same test with two versions of traditional inte-
Table 2: Simulation Results of the Spike Response Model Using the Volterra Kernels ε_0^(1), ..., ε_0^(3)

Order    ϑ (mV)   δ (ms)   N_SRM     N_coinc   N_wrong   N_missed   Γ
First    4.00     2.85     91 ± 8    71 ± 7    20 ± 5    16 ± 3     0.788 ± 0.030
Second   4.70     2.75     87 ± 11   74 ± 8    13 ± 3    13 ± 2     0.845 ± 0.016
Third    5.80     2.50     88 ± 10   76 ± 8    13 ± 3    11 ± 3     0.858 ± 0.019
Note: The table gives the number of spikes produced by the spike response model (N_SRM), the number of coincidences (N_coinc), the number of spikes produced by the spike response model where no spike should occur (N_wrong), and the number of missing spikes (N_missed). The numbers give the average and standard deviation after 10 runs with different realizations of the input current and a duration of 10 s per run. The numbers should be compared with the number of spikes in the Hodgkin-Huxley model, N_HH = 87 ± 8. The coincidence rate Γ has been defined by equation 4.1. For random gambling we would obtain Γ ≈ 0. The parameters ϑ (threshold) and δ (time from threshold crossing to maximum of the action potential) have been determined by optimizing the results in a separate run over 10 s.
grate-and-fire models: a simple integrate-and-fire model with reset potential u_0 and time constant τ, and an integrate-and-fire model with time-dependent threshold derived from the η kernel found by the methods described in Section 2.4: ϑ(σ) = −η_1(σ) for σ > 5 ms and ϑ(σ) = ∞ for σ < 5 ms. The results are summarized in Figure 8. We have optimized the parameters τ, ϑ_0, and u_0 with a fluctuating input current as already explained and found optimal performance for a time constant of τ = 1 ms. This surprisingly short time constant should be compared to the time constants of the ε_1^(1) kernel. Indeed, if the time since the last spike is shorter than about 10 ms (see Figure 1b), the ε_1^(1) kernel approaches zero within a few milliseconds. The optimized reset value was u_0 = −3 mV and the threshold ϑ_0 = 3.05 mV respectively ϑ_0 = 2.35 mV for the model with respectively without time-dependent threshold.
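For comparison, the simple integrate-and-fire reference model can be sketched in a few lines of Python; the scaling and the constant drive in the example are illustrative only:

```python
import numpy as np

def integrate_and_fire(I, dt=0.01, tau=1.0, u_reset=-3.0, theta0=2.35):
    """Leaky integrate-and-fire model used for comparison:
    tau * du/dt = -u + I(t); on reaching the threshold theta0 the
    potential is reset to u_reset.  (Units are illustrative.)"""
    u, spikes = 0.0, []
    for k in range(I.size):
        u += dt * (-u + I[k]) / tau
        if u >= theta0:
            spikes.append(k * dt)
            u = u_reset
    return spikes

# With the optimized parameters quoted in the text (tau = 1 ms, u_0 = -3 mV),
# a constant suprathreshold drive produces regular firing:
print(integrate_and_fire(np.full(2000, 5.0))[:5])
```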
Figure 7: Simulation results for different mean firing rates using the first-order improved kernel ε_1^(1) only. The black bars indicate the number of spikes (left axis) produced by the Hodgkin-Huxley model; the neighboring bars correspond to the number of spikes of the spike response model. The gray shading reflects the coincident spikes. All numbers are averages of 10 runs over 10 s with different realizations of the input current. The mean firing rate varies between 23 and 40 Hz. The error bars are the corresponding standard deviations. The line in the upper part of the diagram indicates the coincidence rate Γ (right axis) as defined by equation 4.1. The parameters ϑ = 4.7 mV and δ = 2.15 ms have been optimized in a separate run with an input current with standard deviation σ = 3 μA cm⁻². (Horizontal axis: input standard deviation σ in μA cm⁻².)
Suppose we have a constant inhibitory (i.e., negative) input current for time t < 0, which is turned off suddenly at t = 0. We can easily calculate the membrane potential from equation 1.1 if we assume that there are no spikes in the past. The resulting voltage trace exhibits a significant oscillation (see Figure 9). If the current step is large enough, the positive overshooting triggers a single spike after the release of the inhibition. Since equation 1.1 is linear in the input, the amplitude of the oscillatory response of the membrane potential in Figure 9 is proportional to the height of the current step but independent of the absolute current values before or after the step. This is in contrast to the Hodgkin-Huxley equations, where amplitude and frequency of the oscillations depend on both step height and absolute current values (Mauro, Conti, Dodge, & Schor, 1970).
Figure 8: Comparison of various threshold models. The diagram shows the coincidence rate Γ defined in section 4.1 for a simple integrate-and-fire model with time constant τ = 1 ms and reset potential u₀ = −3 mV (i&f), an integrate-and-fire model with time-dependent threshold (i&f*), and the spike response model (SRM). The error bars indicate the standard deviation of 10 simulations over 10 s each with a fluctuating input current with σ = 3 µA cm⁻².
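The threshold criteria compared in Figure 8 can be condensed into a single detection routine. A minimal sketch, assuming a sampled membrane potential on a regular grid; combining a baseline ϑ₀ with the η₁-derived increment and a 5 ms absolute-refractory cutoff follows the description in the text, while all names and values are illustrative.

```python
import numpy as np

def detect_spikes(u, dt, theta0, eta1=None):
    """Threshold detector: fixed threshold theta0, or a time-dependent
    threshold theta0 - eta1(s) for s > 5 ms after the last spike
    (infinite threshold for s < 5 ms, i.e., absolute refractoriness)."""
    spikes, t_last = [], -np.inf
    for i, v in enumerate(u):
        s = i * dt - t_last                 # time since the last spike (s)
        if eta1 is None:
            theta = theta0                  # simple integrate-and-fire
        elif s < 5e-3:
            theta = np.inf
        else:
            theta = theta0 - eta1(s)        # time-dependent threshold
        if v >= theta:
            spikes.append(i * dt)
            t_last = i * dt
    return np.array(spikes)
```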
The limitations of a voltage threshold for spike prediction can be investigated by systematically studying the response of the model to current steps (see Figure 10). For t < 0 we apply a constant current with amplitude I₁. At t = 0 we switch to a stronger input I₂ > I₁. Depending on the values of the final input current I₂ and the step height ΔI = I₂ − I₁, we observe three different modes of behavior in the original Hodgkin-Huxley model, as is well known (Cronin, 1987). For small steps and low final input currents, the membrane potential exhibits a transient oscillation but no spikes (inactive phase I). Larger steps can induce a single spike immediately after the step (single-spike phase S). Increasing the final input current further leads to repetitive firing (R). Note that repetitive firing is possible for I > 6 µA cm⁻², but firing must be triggered by a sufficiently large current step ΔI. Only for currents larger than 9.47 µA cm⁻² is there autonomous firing independent of the current step. We conclude from the step response behavior that there are two different threshold paradigms: a single-spike threshold (dashed line in Figure 10) and a threshold for repetitive firing (solid line). We want to compare these results with the behavior of our threshold model. The same set of step response simulations has been performed with the spike response model. Comparing the threshold lines in the (I₂, ΔI) diagram for the Hodgkin-Huxley model and the spike response model, we can state a qualitative correspondence. The spike response model exhibits the same three modes of response as the Hodgkin-Huxley equations. Let us have a closer look at the two thresholds: the single-spike threshold and the repetitive firing threshold.
Figure 9: Response of the spike response model to a current step. An inhibitory input current that is suddenly turned off produces a characteristic oscillation of the membrane potential (time axis in ms). The overshooting membrane potential can trigger an action potential immediately after the inhibition is turned off. The amplitude of the oscillation is proportional to the height of the current step. Here, the current is switched from −1 µA cm⁻² to 0 at time t = 0.
The slope of the single-spike threshold (dashed line in Figure 10) is far from perfect if we compare the lower and the upper parts of the figure. The slope is determined by the mean ∫ds ε₁⁽¹⁾(s) of the linear response kernel and is therefore fixed as long as we stick to the membrane voltage as the relevant variable used in applying the threshold criterion. We now turn to the threshold of repetitive firing. The position of the vertical branch of the repetitive-firing threshold (solid line in Figure 10) is shifted to lower current values as compared to the Hodgkin-Huxley model. Consequently, the gain function of the spike response model is shifted to lower current values too (see Figure 11). The repetitive-firing threshold is directly related to the (free) threshold parameter ϑ and can be moved to larger current values by increasing ϑ. Using ϑ = 9 mV instead of ϑ = 4.7 mV results in a reasonable fit of the Hodgkin-Huxley gain function (see Figure 11). However, a shift of ϑ would also shift the single-spike threshold of Figure 10 to larger current values, and the triggering of single spikes at low currents would be made practically impossible. The value for the threshold parameter ϑ = 4.7 mV found by optimization with a fluctuating input as described in section 4.3 is therefore a compromise. It follows from Figure 10 that there is not a strict voltage threshold for spiking in the Hodgkin-Huxley model. Nevertheless, our model qualitatively exhibits all relevant phases, and, with the fluctuating input scenario, there is even a fine quantitative agreement.
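Because the subthreshold potential of equation 1.1 is linear in the input, the step responses above amount to convolving the first-order kernel with the step. A minimal sketch; the damped-oscillatory kernel is a toy stand-in for the numerically determined ε⁽¹⁾, and all numbers are illustrative.

```python
import numpy as np

dt = 0.1                                   # ms
s = np.arange(0.0, 40.0, dt)
eps1 = np.exp(-s / 5.0) * np.cos(2 * np.pi * s / 15.0)   # toy kernel, not the real eps^(1)

t = np.arange(-20.0, 60.0, dt)
for step in (0.5, 1.0, 2.0):
    I = np.where(t < 0.0, -step, 0.0)      # inhibition switched off at t = 0
    u = dt * np.convolve(I, eps1)[: len(t)]
    # the rebound amplitude grows linearly with the step, as stated in the text
    print(f"step {step:3.1f}: peak rebound {u.max():.3f}")
```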
Figure 10: Comparison of the response to current steps of the Hodgkin-Huxley equations (upper graph) and the spike response model (lower graph). At time t = 0 the input current is switched from I₁ to I₂, and the behavior of the models is studied in dependence on the final current I₂ and the step height ΔI = I₂ − I₁. The figures in the center indicate in the manner of a phase diagram three different regimes: (I) the neuron remains inactive; (S) a single spike is triggered; (R) repetitive firing is induced. We have chosen four representative pairs of values of I₂ and ΔI (marked by dots in the phase diagram) and plotted the corresponding time course of the membrane potential on the left and the right of the main diagram so that the responses of the Hodgkin-Huxley model and the spike response model can be compared (time in ms, currents in µA cm⁻²). The parameters for the spike response model are the same as those detailed in the caption to Figure 7.
125 -’ 100-
2
75-
1
% 25 0
I
;
1
I
I
i
Figure 11: Comparison of the gain function (output firing rate for constant input current I as a function of I) of the spike response and the Hodgkin-Huxley model. The threshold parameter ϑ = 4.7 mV found by the optimization procedure described in section 4.3 gives a gain function of the spike response model (dashed line) that is shifted toward lower current values as compared to the gain function of the Hodgkin-Huxley model (solid line). Increasing the threshold parameter to ϑ = 9 mV results in a reasonable fit (dotted line).
5 Discussion

In contrast to the complicated dynamics of the Hodgkin-Huxley equations, the spike response model in equation 2.2 has a straightforward and intuitively appealing interpretation. Furthermore, the transparent structure of the model has a great number of advantages. The dynamical behavior of a single neuron can be discussed in simple but biological terms of threshold, postsynaptic potential, refractoriness, and afterpotential.

As a first example, we discuss the refractory period. Refractoriness of the Hodgkin-Huxley model leads to two distinct effects. On the one hand, we have shunting effects (a reduced responsiveness of the membrane to input spikes during and immediately after the spike) that are related to absolute refractoriness and are expressed in our model by the first argument of ε₁⁽¹⁾. In addition, the kernel η₁ induces an afterpotential corresponding to a relative refractory period. In the Hodgkin-Huxley model, the afterpotential corresponds to a hyperpolarization and lasts for about 15 ms, followed by a rather small depolarization. A different form of the afterpotential with a significant depolarizing phase would lead to intrinsic bursting (Gerstner & van Hemmen, 1992). So the neuronal behavior is easily taken care of by the spike response model.

As a second example, let us focus on the temperature dependence of the various kernels.
Temperature can be included in the Hodgkin-Huxley equations by multiplying the right-hand side of equation 3.2 by a temperature-dependent factor ζ (Cronin, 1987),
ζ = exp[ln(3.0) (T − 6.3) / 10.0],
(5.1)
with the temperature T in degrees Celsius. Equation 3.1 remains unmodified. In Figure 12 we have plotted the resulting ε₁⁽¹⁾ and η₁ kernels for different temperatures. Although the temperature correction affects only the equations for m, h, and n, the overall effect can be approximated by a time rescaling, since the form of both kernels is not modified but stretched in time with decreasing temperature.

Apart from the transparent structure, the single-variable model is open to a mathematical analysis that would be inconceivable for the full Hodgkin-Huxley equations. Most important, the collective behavior of a large network of neurons can now be predicted if the form of the kernels η and ε⁽¹⁾ is known (Gerstner & van Hemmen, 1993; Gerstner, 1995; Gerstner, van Hemmen, & Cowan, 1996). In particular, the existence and stability of collective oscillations can be studied by a simple graphical construction using the kernels η and ε⁽¹⁾. It can be shown that a collective oscillation is stable if the sum of the postsynaptic potentials is increasing at the moment of firing. Otherwise it is unstable (Gerstner et al., 1996). Furthermore, it can be shown that in a fully connected and homogeneous network of spike response neurons with a potential described by equation 1.1, the state of asynchronous firing is almost always unstable (Gerstner, 1995). Also in simulations of a network of Hodgkin-Huxley neurons, a spontaneous breakup of the asynchronous firing state and a small oscillatory component in the mean firing activity have been observed (Wang, personal communication, 1995).

In summary, we have used the Hodgkin-Huxley equations as a well-studied reference model of spike dynamics and shown that it can indeed be reduced to a threshold model. Our methods are more general and can also be applied to more elaborate models that involve many ion currents and a complicated spatial structure. Furthermore, an analytic evaluation of some of the kernels (see the appendix for mathematical details) is possible. We have also presented a numerical algorithm to determine the response kernels; in contrast to the Wiener method, the kernels can be determined quickly and easily. In fact, the spike response method we have used in our analysis can be seen as a systematic approach to the reduction of a complex system to a threshold model.
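As a quick illustration, the temperature factor of equation 5.1 can be evaluated directly; a minimal sketch:

```python
import numpy as np

def zeta(T):
    """Temperature-dependent factor of equation 5.1 (T in degrees Celsius)."""
    return np.exp(np.log(3.0) * (T - 6.3) / 10.0)

# the kernels are stretched in time as zeta decreases with temperature
for T in (15.0, 10.0, 6.3):
    print(f"T = {T:4.1f} C  ->  zeta = {zeta(T):.3f}")
```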
Appendix

In this appendix we treat Volterra's theory (Volterra, 1959) in the context of inhomogeneous differential equations, indicate under what conditions and how the kernels of the Volterra series can be computed analytically, and isolate the mathematical essentials of our own approach.
Figure 12: Temperature dependence of the kernels corresponding to the Hodgkin-Huxley equations. Both the η₁ kernel (a) and the ε₁⁽¹⁾ kernel (b) show, apart from a stretching in time, no significant change in form if temperature is decreased from 15.0 (dotted line) and 10.0 (dashed line) to 6.3 (solid line) degrees Celsius (time axes in ms).
A.1 Volterra Series. We start by studying the ordinary differential equation

ẋ(t) = f(x(t)) + j(t),   (A.1)

where j is a given function of time and ẋ denotes differentiation of x(t) with respect to t. We require that for j = 0, x = 0 should be an attractive fixed point with a sufficiently large basin of attraction so that the system returns
to equilibrium if the perturbation j(t) is turned off. Furthermore, we require f and j to be continuous and bounded and f to fulfill a Lipschitz condition. Finally, j ∈ C²(ℝ) should have compact support so that x ∈ C²(ℝ). Under these premises there exists a unique solution x with x(−∞) = 0 for each input j ∈ A ⊂ C²(ℝ). Hence, the time course of the solution x is a function¹ of the time course of the perturbation j, formally,
F : A ⊂ C²(ℝ) → C²(ℝ), j ↦ F[j] = x with ẋ(t) − f(x(t)) = j(t), ∀t ∈ ℝ.  (A.2)

Let us suppose that the function F is analytic in a neighborhood of the constant solution x = 0. For a precise definition of the notion of analyticity and conditions under which it holds, we refer to Thomas (1996). In case of analyticity there is a Taylor series (Dieudonné, 1968; Dunford & Schwartz, 1958; Kolmogorov & Fomin, 1975; Hille & Phillips, 1957) for F,
F[j] = 0 + F′[0][j] + (1/2!) F″[0][j, j] + ⋯
(A.3)
This series is convergent with respect to the ‖·‖₂ norm, that is, F[j](t) = x(t) for almost all t ∈ ℝ. Each derivative F⁽ⁿ⁾[0] in equation A.3 is a continuous n-linear mapping from (C²(ℝ))ⁿ to C²(ℝ). Hence, F⁽ⁿ⁾[0][·](t) : (C²(ℝ))ⁿ → ℝ is a continuous n-linear functional for almost all (fixed) t ∈ ℝ. We know that every such functional has a Volterra-like integral representation (Volterra, 1959; Palm & Poggio, 1977; Palm, 1978),

F⁽ⁿ⁾[0][j, …, j](t) = ∫dt₁ ⋯ ∫dtₙ ε_t⁽ⁿ⁾(t₁, …, tₙ) j(t − t₁) ⋯ j(t − tₙ).  (A.4)
In the most general case the kernels ε_t⁽ⁿ⁾ are distributions. We will see, however, that the kernels describing equation A.2 are (almost everywhere) continuous functions. Combining equations A.3 and A.4 yields the Volterra series of F,

F[j](t) = ∫dt₁ ε⁽¹⁾(t₁) j(t − t₁) + (1/2!) ∫dt₁ dt₂ ε⁽²⁾(t₁, t₂) j(t − t₁) j(t − t₂) + ⋯   (A.5)

¹ The bracket convention is such that (·) denotes the dependence on a real or complex scalar, and [·] denotes the dependence on a function.
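For kernels sampled on a uniform grid, the truncated series of equation A.5 can be evaluated with discretized convolutions. A sketch under stated assumptions: the series is truncated after the second-order term, the kernels are supplied as arrays, and all names are illustrative.

```python
import numpy as np

def volterra_response(j, eps1, eps2, dt):
    """Evaluate the first- and second-order terms of the Volterra series
    (equation A.5, truncated) for input samples j and sampled kernels."""
    n, m = len(j), len(eps1)
    x = np.zeros(n)
    for i in range(n):
        # j(t - t_k) for all kernel lags, zero-padded before the input starts
        past = np.array([j[i - k] if i - k >= 0 else 0.0 for k in range(m)])
        x[i] = dt * eps1 @ past                     # first-order term
        x[i] += 0.5 * dt**2 * past @ eps2 @ past    # second-order term
    return x

# toy kernels for a quick smoke test
s = np.arange(0.0, 1.0, 0.01)
eps1 = np.exp(-s / 0.2)
eps2 = -0.1 * np.outer(eps1, eps1)
x = volterra_response(np.ones(200), eps1, eps2, dt=0.01)
```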
Because we expand around the constant solution x = 0, the ε kernels depend not on the time t but on the difference (t − t′), t′ being the time when we have stated the initial condition. In this subsection, we calculate the solution for the initial condition x(t′ = −∞) = 0, and the ε kernels do not depend on t at all. We have therefore dropped the index t from ε_t⁽ⁿ⁾ as it occurs in equation A.4. As is discussed in section A.3, dropping the index is possible only if no spike occurs in the past. We can easily generalize this formalism to systems of differential equations,

ẋ_k(t) = f_k(x(t)) + j_k(t),   (A.6)
and obtain

x_k(t) = ∫dt₁ ε_k⁽¹⁾(t₁) j(t − t₁) + (1/2!) ∫dt₁ dt₂ ε_k⁽²⁾(t₁, t₂) j(t − t₁) j(t − t₂) + ⋯   (A.7)
As opposed to the lower index t in equation A.4 and the lower index p in equation 2.3, the subscript k in equation A.7 denotes the kth vector component of the system throughout the rest of this appendix.

A.2 Calculation of the Kernels. We want to prove that the kernels ε_k⁽¹⁾, ε_k⁽²⁾, … can be calculated explicitly for fairly arbitrary systems of differential
equations. In particular, it is possible to obtain analytic expressions for the kernels in terms of the eigenvalues of the linearization of the N-dimensional system of differential equations we start with. Suppose f in equation A.6 is analytic in the neighborhood of x = 0 so that we can expand f in a Taylor series around x = 0. In doing so, we use the Einstein summation convention. We substitute the Taylor series into equation A.6,

ẋ_k(t) = (∂f_k/∂x_l)(0) x_l(t) + (1/2!) (∂²f_k/∂x_l ∂x_m)(0) x_l(t) x_m(t) + ⋯ + j_k(t),   (A.8)
realize that the derivatives are evaluated at x = 0, and switch to Fourier transforms,
(A.9)
The Fourier transform of the Volterra series (see equation A.7) is

(A.10)
We substitute this into equation A.9 and obtain a polynomial (in a convolution algebra) in terms of ĵ(ω),

(A.11)
Here we have defined the abbreviations

c_{kl}(ω) = δ_{kl} iω − (∂f_k/∂x_l)(0)   (A.12)

and (A.13)
Equation A.11 holds for every function j. The coefficients of ĵ(ω) therefore have to vanish identically, and we are left with a set of linear equations for the kernels, which can be solved consecutively,
(A.14)
Note that we have to invert only one matrix, c_{kl}(ω), in order to solve equation A.14 for all kernels. The inverse of c_{kl}(ω) is

(c⁻¹)_{kl}(ω) = C_{lk}(ω) / det c(ω),   (A.15)

where C_{lk}(ω) is the adjunct matrix of c_{kl}(ω) defined in equation A.12. Solving equation A.14, we obtain for ε_k⁽ⁿ⁾(ω₁, …, ωₙ) an expression with a denominator that is a product of the characteristic polynomial p(ω) := det c_{kl}(ω), evaluated at different frequencies ω. For instance, solving equation A.14 for ε_k⁽²⁾(ω₁, ω₂) yields a denominator p(ω₁)p(ω₂)p(ω₁ + ω₂), and the denominator in ε_k⁽³⁾(ω₁, ω₂, ω₃) is p(ω₁)p(ω₂)p(ω₃)p(ω₁ + ω₂)p(ω₁ + ω₃)p(ω₂ + ω₃)p(ω₁ + ω₂ + ω₃), and so forth. If we know the roots of p(ω), that is, the eigenvalues of the system of differential equations, we can factorize the denominator and easily determine the residues of ε_k⁽ⁿ⁾(ω₁, …, ωₙ) and thus derive an analytic expression for the inverse Fourier transform of ε_k⁽ⁿ⁾(ω₁, …, ωₙ).
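The role of the eigenvalues is easiest to see at first order: for the linearized system ẋ = Ax + e j(t), the kernel is ε_k⁽¹⁾(s) = [e^{As} e]_k for s > 0, a combination of exponentials in the eigenvalues of A. A sketch with a toy Jacobian (not the Hodgkin-Huxley one); the input direction e is an assumption.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.5,  1.0],
              [-1.0, -0.5]])     # toy Jacobian, eigenvalues -0.5 +/- 1.0i
e = np.array([1.0, 0.0])         # input current enters the first component

s = np.linspace(0.0, 10.0, 200)
eps1 = np.array([expm(A * si) @ e for si in s])   # eps1[:, k] = eps_k^(1)(s)
```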
The coefficients a_{k,l,n} ∈ ℂ are sums and differences of the N eigenvalues λ_k of the N-dimensional system of differential equations.
The computational difficulties are reduced to the calculation of the N roots of a polynomial of degree N, where N is the system dimension; see equation A.6.

A.3 Limitations. In most cases, the function F is not an entire function; in other words, the series expansion around the constant solution x = 0 does not converge for all perturbations j (Thomas, 1996). For the Hodgkin-Huxley equations, we have found numerically that the truncated series expansion gives a fine approximation to the true solution only as long as the input current stays well below the spiking threshold. This is not too surprising, since the solution of the Hodgkin-Huxley equations exhibits an apparent discontinuity at the spiking threshold. Consider a family of input functions given by
j_s : t ↦ j₀ Θ(s − t) Θ(t + s).   (A.18)
These are square pulses with height j₀ and duration 2s. There is a threshold j_ϑ(s) and a small constant δ with δ/j_ϑ(s) ≪ 1 such that for j₀ < j_ϑ(s) − δ no spike and for j₀ > j_ϑ(s) + δ at least one spike is triggered within a given compact interval (see Figure 2). Thus, the two-norm of the derivative, ‖F′[j_s]‖₂ ≈ ‖F[j_s + δ] − F[j_s − δ]‖₂ / (2δ), takes large values as j₀ approaches the value j_ϑ(s), and we do not expect the series expansion to converge beyond that point.

In the body of this article, we introduced the response function η in order to address this problem. The key advantage of η₁ and the "improved" kernels ε₁⁽¹⁾, ε₁⁽²⁾, … is that we no longer expand around the constant solution x = 0 but around a solution of the Hodgkin-Huxley equations that contains a single spike at t = t_f. In the context of equation A.5, we have argued that the ε kernels do not depend on the absolute time. However, since the new zero-order approximation u(t) = η₁(t − t_f) contains a spike, homogeneity of time is destroyed, and the improved ε kernels depend on (t − t_f), where t_f is the latest firing time. Unfortunately, there is no analytic solution of the Hodgkin-Huxley equations with spikes. We therefore have to, and did, resort to numerical methods in order to determine the kernels ε₁⁽ⁿ⁾ in the neighborhood of a spike. In order to construct approximations of solutions with more than one spike, we have to concatenate these single-spike approximations. We can then exploit the fact that the truncated series expansion exhibits rather prominent peaks in the membrane potential at positions where the Hodgkin-Huxley equations would produce a spike. The task is to decide from the truncated series expansion which of the peaks in the approximated membrane potential belong to a spike and which do not. In the main body of the article, we investigated how well this can be done by using a threshold criterion.
Acknowledgments

This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant numbers He 1729/2-2 and 8-1.

References

Abbott, L. F., & Kepler, T. B. (1990). Model neurons: From Hodgkin-Huxley to Hopfield. In L. Garrido (Ed.), Statistical Mechanics of Neural Networks. Berlin: Springer-Verlag.
Abbott, L. F., & Vreeswijk, C. van. (1993). Asynchronous states in a network of pulse-coupled oscillators. Phys. Rev. E 48:1483–1490.
Av-Ron, E., Parnas, H., & Segel, L. A. (1991). A minimal biophysical model for an excitable and oscillatory neuron. Biol. Cybern. 65:487–500.
Cole, K. S., Guttman, R., & Bezanilla, F. (1970). Nerve membrane excitation without threshold. Proc. Natl. Acad. Sci. 65(4):884–891.
Cronin, J. (1987). Mathematical aspects of Hodgkin-Huxley theory. Cambridge: Cambridge University Press.
Dieudonné, J. (1968). Foundations of modern analysis. New York: Academic Press.
Dunford, N., & Schwartz, J. T. (1958). Linear operators, Part I: General theory. New York: Wiley.
Ekeberg, O., Wallen, P., Lansner, A., Traven, H., Brodin, L., & Grillner, S. (1991). A computer based model for realistic simulations of neural networks. Biol. Cybern. 65:81–90.
Feller, W. (1970). An Introduction to Probability Theory and Its Applications (3rd ed., Vol. 1). New York: Wiley.
FitzHugh, R. (1961). Impulses and physiological states in models of nerve membrane. Biophys. J. 1:445–466.
Gerstner, W. (1991). Associative memory in a network of "biological" neurons. In R. P. Lippmann, J. E. Moody, & D. S. Touretzky (Eds.), Advances in Neural Information Processing Systems 3 (pp. 84–90). San Mateo, CA: Morgan Kaufmann.
Gerstner, W. (1995). Time structure of the activity in neural network models. Phys. Rev. E 51:738–758.
Gerstner, W., & Hemmen, J. L. van. (1992). Associative memory in a network of "spiking" neurons. Network 3:139–164.
Gerstner, W., & Hemmen, J. L. van. (1993). Coherence and incoherence in a globally coupled ensemble of pulse emitting units. Phys. Rev. Lett. 71(3):312–315.
Gerstner, W., Hemmen, J. L. van, & Cowan, J. D. (1996). What matters in neuronal locking? Neural Comp. 8:1689–1712.
Hille, E., & Phillips, R. S. (1957). Functional analysis and semi-groups. Providence, RI: American Mathematical Society.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of ion currents and its applications to conduction and excitation in nerve membranes. J. Physiol. (London) 117:500–544.
Hopfield, J. J., & Herz, A. V. M. (1995). Rapid local synchronization of action potentials: Towards computation with coupled integrate-and-fire networks. Proc. Natl. Acad. Sci. USA 92:6655.
Joeken, S., & Schwegler, H. (1995). Predicting spike train responses of neuron models. In M. Verleysen (Ed.), Proc. 3rd European Symposium on Artificial Neural Networks (pp. 93–98). Brussels: D facto.
Kepler, T. B., Abbott, L. F., & Marder, E. (1992). Reduction of conductance-based neuron models. Biol. Cybern. 66:381–387.
Kernell, D., & Sjoholm, H. (1973). Repetitive impulse firing: Comparison between neuron models based on voltage clamp equations and spinal motoneurons. Acta Physiol. Scand. 87:40–56.
Koch, C., Bernander, O., & Douglas, R. J. (1995). Do neurons have a voltage or a current threshold for action potential initiation? J. Comp. Neurosci. 2:63–82.
Kolmogorov, A. N., & Fomin, S. V. (1975). Reelle Funktionen und Funktionalanalysis. Berlin: VEB Deutscher Verlag der Wissenschaften.
Korenberg, M. J. (1989). A robust orthogonal algorithm for system identification and time-series analysis. Biol. Cybern. 60:267–276.
Lapicque, L. (1907). Recherches quantitatives sur l'excitation electrique des nerfs traitee comme une polarisation. J. Physiol. Pathol. Gen. 9:620–635.
Mainen, Z. F., & Sejnowski, T. J. (1995). Reliability of spike timing in neocortical neurons. Science 268:1503–1506.
Mauro, A., Conti, F., Dodge, F., & Schor, R. (1970). Subthreshold behavior and phenomenological impedance of the giant squid axon. J. Gen. Physiol. 55:497–523.
Palm, G. (1978). On representation and approximation of nonlinear systems. Biol. Cybernetics 31:119–124.
Palm, G., & Poggio, T. (1977). The Volterra representation and the Wiener expansion: Validity and pitfalls. SIAM J. Appl. Math. 33(2):195–216.
Palm, G., & Poggio, T. (1978). Stochastic identification methods for nonlinear systems: An extension of the Wiener theory. SIAM J. Appl. Math. 34(3):524–534.
Prohorov, Yu. V., & Rozanov, Yu. A. (1969). Probability. Berlin: Springer-Verlag.
Rinzel, J. (1985). Excitation dynamics: Insights from simplified membrane models. Federation Proc. 44:2944–2946.
Rinzel, J., & Ermentrout, G. B. (1989). Analysis of neuronal excitability and oscillations. In C. Koch & I. Segev (Eds.), Methods in Neuronal Modeling. Cambridge, MA: MIT Press.
Thomas, E. G. F. (1996). Q-analytic solution operators for non-linear differential equations (Preprint). Department of Mathematics, University of Groningen.
Traub, R. D., Wong, R. K. S., Miles, R., & Michelson, H. (1991). A model of a CA3 hippocampal pyramidal neuron incorporating voltage-clamp data on intrinsic conductances. J. Neurophysiol. 66:635–650.
Tsodyks, M., Mitkov, I., & Sompolinsky, H. (1993). Patterns of synchrony in inhomogeneous networks of oscillators with pulse interaction. Phys. Rev. Lett. 72:1281–1283.
Usher, M., Schuster, H. G., & Niebur, E. (1993). Dynamics of populations of integrate-and-fire neurons, partial synchronization and memory. Neural Computation 5:570–586.
Volterra, V. (1959). Theory of Functionals and of Integral and Integro-Differential Equations. New York: Dover.
Wiener, N. (1958). Nonlinear Problems in Random Theory. Cambridge, MA: MIT Press.
Wilson, M. A., Bhalla, U. S., Uhley, J. D., & Bower, J. M. (1989). Genesis: A system for simulating neural networks. In D. Touretzky (Ed.), Advances in Neural Information Processing Systems (pp. 485–492). San Mateo, CA: Morgan Kaufmann.
Yamada, W. M., Koch, C., & Adams, P. R. (1989). Multiple channels and calcium dynamics. In C. Koch & I. Segev (Eds.), Methods in Neuronal Modeling: From Synapses to Networks. Cambridge, MA: MIT Press.

Received April 17, 1996; accepted October 28, 1996.
Communicated by Bruce Knight
Noise Adaptation in Integrate-and-Fire Neurons

Michael E. Rudd
Lawrence G. Brown
Department of Psychology, Johns Hopkins University, Baltimore, Maryland 21218, U.S.A.
The statistical spiking response of an ensemble of identically prepared stochastic integrate-and-fire neurons to a rectangular input current plus gaussian white noise is analyzed. It is shown that, on average, integrate-and-fire neurons adapt to the root-mean-square noise level of their input. This phenomenon is referred to as noise adaptation. Noise adaptation is characterized by a decrease in the average neural firing rate and an accompanying decrease in the average value of the generator potential, both of which can be attributed to noise-induced resets of the generator potential mediated by the integrate-and-fire mechanism. A quantitative theory of noise adaptation in stochastic integrate-and-fire neurons is developed. It is shown that integrate-and-fire neurons, on average, produce transient spiking activity whenever there is an increase in the level of their input noise. This transient noise response is either reduced or eliminated over time, depending on the parameters of the model neuron. Analytical methods are used to prove that nonleaky integrate-and-fire neurons totally adapt to any constant input noise level, in the sense that their asymptotic spiking rates are independent of the magnitude of their input noise. For leaky integrate-and-fire neurons, the long-run noise adaptation is not total, but the response to noise is partially eliminated. Expressions for the probability density function of the generator potential and the first two moments of the potential distribution are derived for the particular case of a nonleaky neuron driven by gaussian white noise of mean zero and constant variance. The functional significance of noise adaptation for the performance of networks comprising integrate-and-fire neurons is discussed.

1 Introduction

In this article, we analyze the statistical spiking behavior of stochastic integrate-and-fire neurons, that is, integrate-and-fire neurons that receive probabilistic input. We consider the rather general case in which the random fluctuations in the neural input are modeled as gaussian white noise. We show that neurons whose range of allowable generator potential states is unbounded in the negative domain exhibit a dynamic and probabilistic
adaptation to the root-mean-square noise level of their input. This suggests a technique by which noise suppression can be carried out in artificial neural networks consisting of such neurons and raises the question of whether a similar noise adaptation mechanism exists in actual biological neural networks. The stochastic integrate-and-fire model of neural spike generation was introduced about thirty years ago (Gerstein, 1962; Gerstein & Mandelbrot, 1964) and has since been the subject of numerous theoretical papers (e.g., Calvin & Stevens, 1965; Capocelli & Ricciardi, 1971; Geisler & Goldberg, 1966; Gluss, 1967; Johannesma, 1968; Stein, 1965, 1967) and reviews (Holden, 1976; Ricciardi, 1977; Tuckwell, 1988, 1989). Because the spike generation model is not a new idea, many users of the model probably assume that its properties are well—if not completely—understood. This is far from the case. Analytical results have generally proved difficult to obtain, and the main results in the literature consist of expressions for the density function and moments of the spike interarrival times of the neuron (Holden, 1976; Ricciardi, 1977; Tuckwell, 1988, 1989). In fact, not even a closed-form solution for the interarrival time density is known for the biologically important case of an integrate-and-fire model that includes a membrane decay (Tuckwell, 1988, 1989). Some behavioral properties of the stochastic integrate-and-fire neuron, such as the firing rate, are most naturally defined in terms of the statistics of an ensemble of identically prepared neurons. For example, in single-cell recording studies of individual neurons, the firing rate dynamics are often illustrated by a response histogram, which is a tabulation of many probabilistic spiking events as a function of the time after stimulus onset. The response histogram represents a statistical sample of spike counts taken from an ensemble of identically prepared neurons. In this article, we present some new analytic results that add considerable insight into the theoretical ensemble spiking statistics of integrate-and-fire neurons. Specifically, we consider the case of an integrate-and-fire neuron whose generator potential is unbounded in the negative domain, and derive an expression for the time-dependent average firing rate of the neuron, assuming an input consisting of a rectangular current plus gaussian white noise. We demonstrate that the neuron has the surprising property of adapting, on average, to the level of the noise in its input. We then analyze the evolution of the probabilistic state of the generator potential to discover the mechanism behind this noise adaptation.

2 The Stochastic Integrate-and-Fire Spike Generation Model

Let U(t) stand for the generator potential of the integrate-and-fire neuron. According to the standard mathematical formalization of integrate-and-fire neurons (Holden, 1976; Ricciardi, 1977; Tuckwell, 1988, 1989), we can write a stochastic differential equation that describes the time evolution
of the generator potential. In each small time step, the potential U(t) is increased by an amount that depends on the mean input rate µ(t) and the duration of the step dt. The potential is also perturbed by gaussian white noise with variance per unit time σ²(t). Finally, a percentage α dt of the generator potential decays away as a result of a membrane leak. Thus, the generator potential obeys the stochastic differential equation dU(t) = (−αU(t) + µ(t)) dt + σ(t) dW(t),
(2.1)
where W(t) is a standard Wiener process with mean zero and unit variance per unit time. As is well known, equation 2.1 describes the evolution of a stochastic diffusion process, known in the mathematical literature as the Ornstein-Uhlenbeck process (OUP) (Uhlenbeck & Ornstein, 1930; Tuckwell, 1988, 1989). The OUP is a type of transformed Brownian motion process. Whenever the generator potential of the integrate-and-fire neuron exceeds the value of a neural firing threshold θ, the neuron emits a single neural pulse, and the generator potential is reset to zero. The neural output signal y(t) thus evolves according to the following formula (Chapeau-Blondeau, Godivier, & Chambet, 1996): If U(t) = θ then y(t) = δ(t0 − t), U(t) = 0; else y(t) = 0.
(2.2)
Equations 2.1 and 2.2 represent the standard formulation of the integrate-and-fire model. Note, however, that the effect of the resets on the probabilistic evolution of the generator potential is not explicitly taken into account in the standard formulation of the problem (see equation 2.1). Technically, equation 2.1 describes only the evolution of the generator potential between successive spikes. To develop a probabilistic model of the generator potential that takes the resets into account and is therefore not limited to time intervals in which no spikes occur, we modify equation 2.1 as follows: dV(t) = (−αV(t) + µ(t)) dt + σ(t) dW(t) − θ dN(t),
(2.3)
where dN(t) is the number of resets in the interval dt; we now denote the potential by a new symbol, V(t), in order to distinguish clearly the behavior of the generator potential with resets (see equation 2.3) from the behavior of the OUP U(t) (see equation 2.1). To complete our formalization of the stochastic integrate-and-fire neuron, we may then write If V(t) = θ then y(t) = δ(t0 − t), V(t) = 0; else y(t) = 0.
(2.4)
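A direct Euler-Maruyama discretization of equations 2.3 and 2.4 makes the reset term explicit. A minimal sketch; the step size and parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=10.0, dt=1e-3, alpha=3.33, mu=150.0, sigma=30.0,
             theta=50.0, V0=0.0):
    """Simulate equations 2.3 and 2.4 and return the spike times."""
    V, spikes = V0, []
    for i in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))            # Wiener increment
        V += (-alpha * V + mu) * dt + sigma * dW     # equation 2.3 between resets
        if V >= theta:                               # threshold crossing ...
            spikes.append(i * dt)
            V = 0.0                                  # ... and reset (equation 2.4)
    return np.array(spikes)

spikes = simulate()
print(f"{len(spikes)} spikes in 10 time units")
```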
Note that explicit consideration of the resets on the generator potential results in the appearance of an inhibitory term in the stochastic equation 2.3, for the time evolution of the potential that is not present in the standard equation for the generator potential (see equation 2.1). In fact, the only difference between equations 2.1 and 2.3 is the presence of an extra inhibitory
term in equation 2.3 that arises from the operation of the reset mechanism of the integrate-and-fire neuron. The inhibition resulting from the neural resets tends to reduce the potential compared to the hypothetical potential that would be produced in the absence of resets. Thus, the resets tend to hyperpolarize the neuron. We will show that this reset-induced hyperpolarization has important implications for the dynamics of the mean spiking rate of the stochastic integrate-and-fire neuron. Informal reasoning suggests that as the potential is progressively decreased by a series of resets while the threshold remains fixed, the neuron will fire less; thus, the reset mechanism will act to decrease the firing rate progressively until the separate influences that act to increase (input current) and decrease (resets) the generator potential exactly counterbalance one another. Once the effects of these opposing influences equilibrate, the firing rate will stabilize. This reasoning is directly checked in the next section by computing the average spiking rate of the neuron as a function of time.

3 Firing Rate Dynamics

In this section, we investigate the spiking response of the neuron to an input of constant mean rate µ and diffusion coefficient σ² that is turned on at t = 0. The neuron is initially at potential V(0). At time zero it is suddenly perturbed by a rectangular input current, to which is added gaussian white noise. For purposes of exposition, we will momentarily assume that the neuron has no membrane leak. In this special case, we can derive an expression for the time-dependent neural firing rate by using the well-known density function for the first passage time of a Brownian motion initially localized at the point (V(0), 0) to cross an arbitrary voltage level Va, which is one of the few analytic expressions that we have in our arsenal for understanding the behavior of the neuron. The density function is
f(t) = ((Va − V(0)) / (σ√(2πt³))) exp(−(Va − V(0) − µt)² / (2σ²t))
(3.1)
(Karlin & Taylor, 1975). From this density we immediately obtain the first firing time density:
f_1(t) = ((θ − V(0)) / (σ√(2πt³))) exp(−(θ − V(0) − µt)² / (2σ²t)).
(3.2)
More generally, it can be shown (Rudd, 1996) that the density for the time of the Nth neural spike is
f_N(t) = ((Nθ − V(0)) / (σ√(2πt³))) exp(−(Nθ − V(0) − µt)² / (2σ²t)).
(3.3)
It is easily seen that the average spiking rate is given by the infinite sum dN(t)/dt = Σ_{N=1}^∞ f_N(t).
(3.4)
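Equation 3.4 can be evaluated by truncating the sum once the remaining terms are negligible. A minimal sketch using the f_N of equation 3.3 with the Figure 1a parameters V(0) = 0, µ = 150, θ = 50; the noise level σ is an illustrative choice.

```python
import numpy as np

def f_N(t, N, theta=50.0, mu=150.0, sigma=20.0, V0=0.0):
    """Density of the Nth spike time, equation 3.3 (nonleaky neuron)."""
    a = N * theta - V0
    return a / (sigma * np.sqrt(2.0 * np.pi * t**3)) * np.exp(
        -(a - mu * t) ** 2 / (2.0 * sigma**2 * t))

def firing_rate(t, n_terms=200, **params):
    """Truncated version of the infinite sum in equation 3.4."""
    return sum(f_N(t, N, **params) for N in range(1, n_terms + 1))

t = np.linspace(1e-3, 2.0, 400)
rate = firing_rate(t)            # tends to mu/theta = 3 for large t
```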
In Figure 1a, we plot the infinite series (3.4) for the parameter set V(0) = 0, µ = 150, θ = 50, and three different values of σ. Note that the firing rate has both a transient noise-dependent component and a sustained noise-independent component. The initial, transient firing rate is larger for larger values of the input noise. However, in the long run (as t → ∞), as indicated on the plot, dN(t)/dt → µ/θ, a value independent of σ. The asymptotic (sustained) firing rate can be derived by using a mathematical theorem (Parzen, 1962, p. 180), which relies on the fact that the number of spikes N(t) generated in the interval [0, t] is a renewal counting process. We first note that the average value of the random time ξ for the potential initialized at zero to cross the neural threshold is ξ̄ = θ/µ (Rudd, 1996) and then apply the theorem: lim_{t→∞} N(t)/t = 1/ξ̄.
(3.5)
Multiplying both sides of equation 3.5 by t and taking the time derivative yields dN(t)/dt → ξ̄⁻¹ as t → ∞. In the specific case of the nonleaky integrate-and-fire model, dN(t)/dt → µ/θ as t → ∞. In what follows, we will refer to equation 3.5 as the renewal counting theorem. When µ = 0, ξ̄ is infinite, and the renewal counting theorem therefore does not apply. However, as µ → 0, dN(t)/dt → 0 as t → ∞, which suggests that the firing rate will tend to zero in the long run when the neuron is driven by gaussian white noise of constant variance and mean zero. The same conclusion is reached by noting that for µ = 0 and as t → ∞, every term in our infinite series expression (3.4) for the neural firing rate goes to zero. We further verify the conclusion by plotting in Figure 1b the spiking rate of a nonleaky integrate-and-fire neuron with µ = 0 driven by gaussian white noise of three different magnitudes. Our results to this point indicate that when an integrate-and-fire neuron is suddenly perturbed by a noisy input, the neuron produces a transient spiking response that is positively related to the noise fluctuation level; in the long run, the response to noise completely dies away, and the mean firing rate depends only on the deterministic component of the input. In the next section we will show that the tendency of the neuron's response to noise to decay over time represents a form of stochastic neural adaptation. The noise response is eliminated by the reset-induced hyperpolarization of the generator potential. We refer to this phenomenon as noise adaptation.
Figure 1: Average firing rate of the integrate-and-fire neuron as a function of time, plotted for three different values of the generator potential noise level σ. (a) Average firing rate of a nonleaky neuron with mean input rate µ = 150, calculated on the basis of equation 3.4. (b) Average firing rate of a nonleaky neuron with mean input rate µ = 0, calculated on the basis of equation 3.4. (c) Simulated firing rate of a leaky integrate-and-fire neuron (α = 3.33) driven at a constant mean input rate (µ = 150). Additional model parameters in (a), (b), and (c): θ = 50; V(0) = 0. Time in arbitrary units.
The word adaptation connotes both the idea of dynamic change and the idea that the change may serve an adaptive function. To the extent that the function of an individual neuron within a network is to perform a calculation based on a weighted sum of its deterministic input currents, the noise response is spurious, and noise adaptation serves the purpose of reducing the number of noise-induced spikes (i.e., reducing the number of false positives).

Before we present a mathematical proof that the dynamic firing rate reduction represents an adaptation to noise, we will briefly return to the problem of investigating the average firing rate dynamics of integrate-and-fire neurons. So far, we have derived an expression only for the average firing rate of an integrate-and-fire neuron that has no membrane leak. Ideally, we would like to find a general expression for the average firing rate that would apply to leaky integrate-and-fire neurons as well as nonleaky ones. Despite considerable effort, however, we have not been able to obtain such an expression. We therefore rely on computer simulation to analyze the properties of the leaky neuron in order to check the generality of our conclusions. In Figure 1c, we plot the firing rate of a simulated leaky integrate-and-fire neuron (α = 3.33) for a constant positive mean input (µ = 150) and three different levels of input noise (indicated on the plot). As in the case of the nonleaky neuron, the leaky neuron produces an initial transient response to noise, the magnitude of which is positively related to the input noise level. The transient response is followed by a decline in the firing rate while the input remains constant. Since the neural response decreases dynamically while the input remains constant, it appears that some form of neural adaptation is taking place. In fact, this state of affairs might be taken as a definition of adaptation.

The renewal counting theorem, equation 3.5, can be used, together with any of several exact or approximate expressions for the mean firing time of the leaky-integrator neuron (e.g., Roy & Smith, 1969; Thomas, 1975; Ricciardi & Sacerdote, 1979; Wan & Tuckwell, 1982; Tuckwell, 1989), to derive an expression for the long-run value of the average firing rate of the neuron. Wan and Tuckwell (1982) obtained several approximate expressions for the mean firing time and noted that noise always tends to decrease the average spike interarrival time of the leaky integrator neuron (regardless of whether the mean input rate is positive or zero). From this fact and equation 3.5, it follows that the sustained firing rate of the neuron with a membrane leak increases with the level of the input noise. The simulated firing rates plotted in Figure 1c are consistent with this conclusion. Thus, we conclude that the response to noise is not completely eliminated over time in the leaky neuron. However, our firing rate simulations indicate that a greater amount of noise-response reduction (transient response minus sustained response) is observed at higher input noise levels. The overall pattern of results indicates that the dynamic reduction in the average neural firing rate acts to reduce the level of the transient noise response in the leaky neuron, with greater compensation occurring at higher
noise levels. This pattern is consistent with the interpretation that leaky integrate-and-fire neurons adapt to reduce their response to noise. The dynamic reduction in the noise response appears to be a general feature of integrate-and-fire neurons, with or without a membrane leak, although the noise dependence of the average firing rate is asymptotically eliminated only in the case of the nonleaky neuron.

4 Generator Potential Dynamics

In order to understand better the mechanism behind the dynamical changes in the neural firing rate, we now return to our analysis of the time-dependent behavior of the generator potential. Setting α = µ(t) = 0 and σ(t) = σ, a constant, in equation 2.3, taking the ensemble averages of the quantities on both sides of the equation, and time-integrating yields V(t) = V(0) − θN(t),
(4.1)
where V(0) is the value of the generator potential at time zero, and N(t) is the average number of resets in the interval [0, t]. Since µ(t) = 0, the neuron is driven by gaussian white noise of mean zero and neural spikes are generated solely on the basis of noise fluctuations. According to equation 4.1, the reset mechanism acts in this special case to decrease progressively the mean value of the generator potential, and since the mean input rate is zero, there is no counteracting tendency for the average potential to be increased by the input. With each additional reset, the average value of the generator potential is further decreased. We anticipate that the reduction in the mean level of the potential will be associated with a concomitant reduction in the neural firing rate. The generator potential will continue to be decreased until the cell eventually stops firing altogether. This anticipated behavior is consistent with our previous result based on the renewal counting theorem, which indicated that a nonleaky neuron driven by noise alone turns itself off in the long run. Without the insight gained from a stochastic analysis of the neuron's behavior, an empirical observation of the neuron's behavior might lead to the mistaken conclusion that the firing rate is subject to negative feedback. Further insight can be gained by averaging both sides of equation 2.3 (with α = µ(t) = 0 and σ(t) = σ, a constant) and dividing by dt to obtain dV(t)/dt = −θ dN(t)/dt.
(4.2)
Since, as proved in the previous section, the average firing rate goes to zero as t → ∞, the time derivative of the average value of the generator potential also must go to zero in the same limit.
This approach is easily generalized to the case of an integrate-and-fire neuron with arbitrary parameters. In the most general case, dV(t)/dt = −αV(t) + µ(t) − θ dN(t)/dt.
(4.3)
In the last section it was shown that the firing rate tends to a constant as t → ∞. In that limit, dV(t)/dt → 0. By making this substitution in equation 4.3, we obtain the asymptotic firing rate dN(∞)/dt = (µ − αV(∞))/θ,
(4.4)
where we have assumed µ(t) to be constant. For α = 0 (nonleaky neuron), dN(t)/dt → µ/θ as t → ∞, which is consistent with our earlier result based on the renewal counting theorem. For arbitrary α, equation 4.4 can be rewritten in the form V(∞) = (θ/α)(µ/θ − dN(∞)/dt).
(4.5)
The asymptotic level of the mean generator potential can be obtained from equation 4.5 by using the renewal counting theorem, equation 3.5, and any of several expressions for the mean firing time (Roy & Smith, 1969; Thomas, 1975; Ricciardi & Sacerdote, 1979; Wan & Tuckwell, 1982; Tuckwell, 1989) to compute the asymptotic average firing rate of the leaky neuron. For a leaky neuron driven by gaussian white noise of mean zero, the asymptotic value of V(t) will be negative and of larger absolute value at higher noise levels, since the asymptotic firing rate is larger for larger values of σ. Again assuming that α = 0 and µ(t) constant, we next substitute equation 3.4 into equation 4.3 to obtain dV(t)/dt = µ − θ Σ_{N=1}^∞ f_N(t)
(4.6)
and V(t) = V(0) + µt − θ ∫₀ᵗ Σ_{N=1}^∞ f_N(s) ds.
(4.7)
We then rearrange equation 4.7 to obtain V(0) − V(t) = θ [ ∫₀ᵗ Σ_{N=1}^∞ f_N(s) ds − (µ/θ) t ].
(4.8)
The time integral on the right-hand side of equation 4.8 is the average number of spikes emitted up to time t. The quantity in parentheses is the average firing rate of an integrate-and-fire neuron with no input noise. Therefore, the quantity in brackets is the difference between the number of spikes emitted in the interval [0, t] with and without input noise. Equation 4.8 indicates that the amount by which the neural potential at time t is decreased, or hyperpolarized, relative to the resting potential is equal to the product of the threshold and the number of "extra" spikes generated when gaussian white noise is added to the deterministic input current. We thus conclude that the generator potential is hyperpolarized, on average, by an amount that depends only on the level of the noise, not on the average input level. This result supports our claim that the adaptation observed in the average spiking rate of the stochastic integrate-and-fire neuron is an adaptation to noise. Further knowledge of the dynamic behavior of V(t) can be gained by reasoning qualitatively from equation 4.6 while simultaneously recalling the dynamic behavior of the infinite sum (i.e., the average firing rate) illustrated in Figure 1a. At time zero, the average firing rate of the neuron is zero, so the rate of change in V(t) must initially be positive, and V(t) itself must initially increase. In order for the time derivative of the mean generator potential to change signs, its value must pass through zero. This occurs when the average firing rate is µ/θ. The average firing rate first increases with time, then decreases asymptotically to the value µ/θ. It attains the value µ/θ at only one finite time, which occurs during the rising portion of the firing rate function. Let us call this time t0. It follows that V(t) first increases from the value V(0) during the interval [0, t0), then decreases during the interval (t0, ∞). We verified these conclusions by plotting equation 4.7 as a function of time in Figure 2. Values of V(t) were computed numerically with V(0) = 0. Figure 2a illustrates the initial behavior of V(t) examined over a brief period. The plot indicates that V(t) first increases and thereafter monotonically decreases, as anticipated by the preceding analysis. We also verified by plotting (not shown) the average firing rate of a nonleaky neuron with the same parameters that the turnaround occurs during the initial rising phase of the neural firing rate, when the firing rate first attains the level µ/θ. Figure 2b illustrates the behavior of V(t) on a larger time scale. The initial rise in V(t) and subsequent turnaround are not obvious at this scale, but the tendency for V(t) to decrease monotonically at large adaptation times is clear. When µ = 0 (nonleaky neuron driven by gaussian white noise alone), V(t) is a monotonically decreasing function of time. In that case, the infinite sum in equation 4.6 goes to zero as t → ∞, so the time derivative of the average potential must also asymptotically go to zero. The integral in equation 4.7 diverges for large t (since it is the sum of an infinite number of density functions and the time integral of each goes to one for t large). Thus, when µ = 0, V(t) → −∞ as t → ∞.
Figure 2: Time dependence of the mean generator potential of a nonleaky integrate-and-fire neuron, computed on the basis of equation 4.7. (a) The initial behavior of V(t), illustrated on a short time scale. V(t) first rises, then monotonically decreases with time. (b) When viewed over a longer adaptation time, including the brief initial period illustrated in (a), a deceleration of the final monotonic decrease in V(t) is apparent. Model parameters: θ = 1; α = 0; µ = 2 × 10−4 ; V(0) = 0. Time in arbitrary units.
Figure 3: Average level of the generator potential of a nonleaky integrate-and-fire neuron driven by gaussian white noise with mean zero and constant variance, as a function of the standard deviation of the time-integrated noise in the Brownian motion process that drives the neuron. Additional model parameters: θ = 50; α = 0; µ = 0; V(0) = 0.
After a long period of adaptation to gaussian white noise of mean zero, the generator potential will thus, on average, be large and negative. An informal argument suggested by a reviewer allows us to use this fact to find the approximate functional dependence of the average potential on the variables σ and t. At large negative values along the potential continuum, the thresholded Brownian motion will appear identical to a regular Brownian motion with a downward-reflecting barrier imposed at V(0). The average value of such a "reflected Brownian motion" is proportional to the standard deviation σ√t of an unrestricted Brownian motion with no absorbing or reflecting barriers. It follows that, for the integrate-and-fire neuron driven by noise alone, V(t) must be approximately proportional to σ√t for sufficiently large values of σ√t. We verified this informal reasoning by evaluating equation 4.7 numerically and plotting the function versus σ√t in Figure 3. As anticipated, the average value of the generator potential is a linearly decreasing function of the time-dependent standard deviation of the original Brownian motion process for t sufficiently large. This result further supports our claim that the integrate-and-fire neuron undergoes noise adaptation and quantifies the dependence of this noise adaptation on σ and t in the particular case in which α = µ(t) = 0. The approximate equivalence between the probabilistic path structure of the generator potential of an integrate-and-fire neuron driven by noise alone and that of a reflected Brownian motion generalizes to the case of the leaky neuron, provided that the noise level is sufficiently large.
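The numerical evaluation of equation 4.7 used for Figures 2 and 3 can be sketched as a cumulative time integral of the series in equation 3.4. The parameters θ = 1, µ = 2 × 10⁻⁴, and V(0) = 0 follow the caption to Figure 2; σ is an assumption, since the caption does not state it.

```python
import numpy as np

def f_N(t, N, theta, mu, sigma, V0=0.0):
    """Nth-spike time density of equation 3.3."""
    a = N * theta - V0
    return a / (sigma * np.sqrt(2.0 * np.pi * t**3)) * np.exp(
        -(a - mu * t) ** 2 / (2.0 * sigma**2 * t))

def mean_potential(t_max, dt=0.01, theta=1.0, mu=2e-4, sigma=0.5,
                   V0=0.0, n_terms=200):
    """Equation 4.7: V(t) = V(0) + mu*t - theta * integral of the rate."""
    t = np.arange(dt, t_max, dt)
    rate = sum(f_N(t, N, theta, mu, sigma, V0) for N in range(1, n_terms + 1))
    return t, V0 + mu * t - theta * np.cumsum(rate) * dt

t, V = mean_potential(t_max=100.0)   # V first rises, then falls monotonically
```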
In this case, the density function of the generator potential is approximately equivalent to that of a reflected OUP at large t. In support of this claim, we note that the equivalence holds for large, negative values of V(t) and that, according to equation 4.5 (with µ = 0), large, negative values will be obtained at large adaptation times, provided that the asymptotic firing rate is sufficiently large. Recall that the asymptotic firing rate is an increasing function of the noise level; therefore, the equivalence must hold for sufficiently large values of σ. It follows that V(t) will be approximately equivalent to the average level of an OUP reflected at the origin when V(t) is large and negative. The average value of the reflected OUP is proportional to the standard deviation of the regular, unrestricted OUP (Tuckwell, 1988, 1989): √(σ²(1 − e^{−2αt})/(2α)). We have not been able to derive an expression for the probability density function of the generator potential in the most general case in which none of the parameters of the model is fixed; however, in the appendix, we derive expressions for the density function and the first two moments of the generator potential in the special case in which α = µ(t) = 0 and σ(t) = σ, a constant. There, the first moment is obtained by a method different from the one that led to equation 4.7.

5 Summary

We have investigated the effect of the reset mechanism of the stochastic integrate-and-fire neuron on the dynamics of both the average neural spiking rate and the statistics of the generator potential. The response of the neuron to a constant input with additive gaussian white noise was investigated in detail, under the assumption that the generator potential is unbounded in the negative range. By a combination of analytic and simulation methods, we showed that both leaky and nonleaky stochastic integrate-and-fire neurons produce a transient spiking response, the magnitude of which is positively related to the root-mean-square noise level of the input. In the specific case of the nonleaky neuron, it was proved that the noise response is dynamically and completely suppressed by a depression of the average level of the generator potential (i.e., hyperpolarization of the neuron) resulting from the neural reset mechanism. In a neuron with a membrane leak, the noise response is not completely suppressed by the reset mechanism, but it is nevertheless dynamically reduced by it. The reduction in the firing rate (measured by the difference between the peak transient and asymptotic spiking rates) is an increasing function of the input noise. We conclude that all integrate-and-fire neurons—leaky or not—for which the generator potential is unbounded in the negative domain adapt to the noise in their input; in the long run the adaptation is complete only in the case of the nonleaky neuron.
We derived a differential equation (see equation 4.3) that relates the rate of change in the mean level of the generator potential to the average neural firing rate. In the case of the nonleaky neuron, the average firing rate can be computed from an infinite series expression, and equation 4.3 can then be used to write an expression (see equation 4.6) for the time derivative of the mean value of the generator potential in terms of this series. We used this fact to study qualitatively the dynamic behavior of the mean generator potential. When the nonleaky neuron is driven by an input having a positive mean rate, the average potential first increases, then decreases. When the neuron is driven by gaussian white noise of mean zero, the average potential decreases over time monotonically, asymptotically tending to minus infinity in the limit of large t. For sufficiently large t, the average generator potential level in a nonleaky neuron driven by gaussian white noise of mean zero is a linearly decreasing function of the standard deviation of the time-integrated noise input. We presented an argument suggesting that this should also be the case in the model that includes a membrane leak. In the specific case of the nonleaky neuron, we showed that the average amount of decrease in the generator potential (hyperpolarization) over the interval [0, t] is directly related to the average number of extra spikes generated during that interval over and above those that would be generated by a deterministic input current of the same intensity. In other words, the generator potential is inhibited on average by an amount that depends only on the number of neural spikes that can be attributed to noise, which increases with increasing noise magnitude. It follows that the dynamic reduction in the neural firing rate similarly depends only on the magnitude of the noise fluctuations in the input. This analysis supports our claim that the dynamic change in the average neural firing rate of an integrate-and-fire neuron in response to a stochastic input with constant parameters represents noise adaptation.

6 General Conclusions

The spike generation mechanism of stochastic integrate-and-fire neurons introduces an important nonlinearity into the operation of any circuit constructed of these units. In general, the consequences of this nonlinearity for the behavior of neural networks are not well understood. We have shown that the threshold nonlinearity produces a probabilistic neural adaptation that counteracts the tendency for an integrate-and-fire neuron to spike in response to noise fluctuations in its generator potential. This property of integrate-and-fire neurons might be useful in both artificial and biological neural networks, where it could function to reduce the number of false-positive responses of individual neurons within the network. We note in particular that noise adaptation could serve to reduce, or even totally eliminate, the effect of any constant internal (thermal) noise level within a network.
Noise Adaptation
1061
We envision that noise adaptation could be used to dynamically adjust the sensitivity of individual neurons within a network as either the internal noise or the input noise level changed on a time scale that is slow compared to the time scale over which modulations in the spiking rates are interpreted as meaningful signals. By increasing the average distance between the generator potential and the neural threshold as a function of the level of voltage noise, noise adaptation would tend dynamically to control the individual gains of the network neurons to compensate for both internal and external noise (Rudd & Brown, 1994, 1995, 1996, in press). Thus, the fact that integrate-and-fire neurons undergo noise adaptation also implies that they exhibit a stochastic gain control, which in a network context could function to keep the spiking rates of individual neurons operating within an optimal range, despite gradually changing neural noise levels. If the spike generation model was made somewhat more realistic by the additional assumption of an absolute refractory period, noise adaptation would also help to protect the neurons from response saturation due to noise-induced spiking activity. We became interested in noise adaptation in integrate-and-fire neurons while carrying out simulation studies in which we examined the effect of the spike generation mechanism of retinal ganglion cells on the behavior of a retinal circuit that receives probabilistic light input (photons) (Rudd & Brown, 1993, 1994, 1995, 1996, in press). Because the number of photons falling on a small area of the retina within a small time interval obeys Poisson statistics, the mean and variance of the photon count are equal. In the context of our integrate-and-fire ganglion cell model, the noise adaptation mechanism described in this article results in a noise adaptation that depends on the level of the random fluctuations in the number of photons falling within the ganglion cell receptive field, and thus also on the mean light level. Noise adaptation in our retinal ganglion cell model results in a type of light adaptation that is inherently stochastic and is therefore distinct from the deterministic retinal light adaptation models that have been previously proposed. Our ganglion cell noise adaptation model accounts for a large body of light adaptation data, both physiological and psychophysical, that cannot be accounted for by deterministic light adaptation models. Furthermore, we have used the model to make predictions about the dependence of the apparent brightness of light flashes on the mean light level (Rudd & Brown, 1996) that have since been experimentally verified (Brown & Rudd, submitted). The ubiquity of spike generation in the central nervous system suggests that similar noise adaptation may in occur in other neural circuits. Because of the large number of neural subsystems to which our results might potentially be relevant, some discussion of the biological plausibility of the model is in order. Clearly, the intracellular potential of actual neurons cannot take on arbitrarily large negative values, as we have assumed here. However, it is probably not critical to assume that the generator potential can take on arbitrarily large, negative values in order to achieve significant amounts of
1062
Michael E. Rudd and Lawrence G. Brown
noise adaptation in integrate-and-fire neurons. The biological plausibility of the noise adaptation hypothesis appears to depend critically on whether reset-induced hyperpolarization in real neurons can build up to a sufficient degree to account for the levels of adaptation observed in vivo. In the context of our light adaptation work, we simulated the response of a neuron in which the maximum hyperpolarization state was limited by a hard saturation. In one such simulation, we imposed a lower bound of −15θ on the range of the generator potential without noticeably affecting the spiking response of the neuron when it was driven by gaussian white noise of mean zero. However, we have not experimented with variants of the hard saturation model in which the range of allowable hyperpolarization states is shallower, nor have we parametrically investigated the effects of varying the lower bound on the degree of noise adaptation. In the theoretical neurobiology literature, an alternative version of the integrate-and-fire model in which the generator potential is restricted to the range 0 ≤ V(t) ≤ θ by a reflecting barrier at zero is often alternatively assumed. This alternative boundary condition is in some sense the opposite of the assumption made in this article that there is no lower bound on the potential. Whether the lower bound on the potential is modeled as a reflecting barrier or a hard saturation, a biologically realistic integrate-and-fire model would allow for some range of negative potential states. Although restricting the negative potential states to a small range would not seem to allow much room for noise adaptation, it is difficult to estimate the amount of adaptation that should be expected from such a model without performing a complete probabilistic analysis, because the effect of hyperpolarization on the firing rate depends not just on the average potential level but also on the distribution of potential states. Perhaps even a small, negative potential range would allow for enough adaptation to account for a significant amount of the adaptation observed in actual neurons. The noise adaptation properties of our integrate-and-fire model have relevance to a large body of contemporary simulation work in which networks are constructed of stochastic integrate-and-fire neurons. In many such networks, the individual neurons within the network receive a large number of combined excitatory and inhibitory inputs. By the central limit theorem, the inputs to the spike generation mechanisms of the network neurons integrated over a sufficiently long time period will be well approximated as gaussian random variables, and the model developed in here will then apply to the individual neurons within the network. Unless lower bounds on the neural activation levels in such a network model are explicitly imposed by the modeler, the network neurons will undergo noise adaptation of the type that we have described. Caution should be exercised in attributing any observed dynamical changes in the average firing rates of individual neurons within a network solely to global network effects. At the same time, we expect the coupling of neurons within such a network to encourage the noise adaptation to spread and be modified, thus producing global noise adap-
Noise Adaptation
1063
tation effects that cannot be easily anticipated on the basis of the analysis presented here. Appendix A.1 Density Function of the Generator Potential. Our goal is to obtain the probability density function of V(t) given the boundary condition V(0). Let p(V(t)) be the density function. The integral of this density with respect to the voltage-level parameter v is the cumulative distribution function of V(t), written P{V(t) ≤ v}. We will solve for P{V(t) ≤ v}, then take the derivative with respect to v to find p(V(t)). Here we assume that α = µ(t) = 0, and σ (t) = σ , a constant. When α = 0, there is no membrane leak, and the spike generation mechanism is driven by a regular Brownian motion process. Recall that equation 2.1 describes the evolution of the generator potential between spikes. With α = 0, this equation describes the time evolution of a Brownian motion process. The probabilistic path of the generator potential thus consists of a sequence of Brownian motion paths strung together, one after another, with discontinuous breaks between the Brownian motion paths occurring at the reset times. Each time that the neuron resets, a new Brownian motion path is initiated at voltage level zero, and that particular Brownian motion path is terminated as soon as it exceeds the neural threshold. The first Brownian motion path in the sequence is initialized at the level V(0), which, in general, may be different from zero. Although every generator potential path consists of a sequence of Brownian motion paths, the overall generator potential path will not be Brownian motion path unless no resets of the potential occur along the path. In general, the path structure of the generator potential includes the effects of resets, whereas that of Brownian motion does not. This difference is reflected in the two different stochastic differential equations that describe the time evolution of Brownian motion (see equation 2.1) and that of the generator potential (see equation 2.3). Because the two processes are subject to different probability laws, we need to distinguish them clearly in what follows. We will use B(t) to denote a regular Brownian motion path (without resets and terminated on reaching threshold) and reserve V(t) to signify the generator potential path, which is reset to zero whenever it exceeds θ . Let B(t) be a Brownian motion path that is initialized (by either a reset or the initial boundary conditions) at an arbitrary level v0 at any time s within the interval [0, t]. The probability that B(t) is below level v at time t is then given by the normal integral φ(v − v0 , t − s) ≡ P{B(t) ≤ v | B(s) = v0 } Z v 1 2 2 = √ e−(x−v0 ) /(2σ (t−s)) dx σ 2π(t − s) −∞
(A.1)
1064
Michael E. Rudd and Lawrence G. Brown
(Karlin & Taylor, 1975). We will use this result to compute the distribution function P{V(t) ≤ v} by noting that, at any arbitrary time t, V(t) will be a point on a Brownian motion path that was initialized at the time of the last reset (or at time zero if no reset has occurred). Therefore, P{V(t) ≤ v} will be a sum of probabilities of the form in equation A.1, each weighted by the conditional probability that V(t) is on a path initialized at (v0 , s). The generator potential paths for which V(t) ≤ v can be partitioned into two subgroups. The first group consists of those paths that exceed the neural threshold θ at least once in the interval [0, t]; the second group consists of those that do not. We may therefore write P{V(t) ≤ v} as the sum of two joint probabilities: P{V(t) ≤ v} = P{V(t) ≤ v, no resets} + P{V(t) ≤ v, at least one reset}.
(A.2)
Define the maximum height during the interval [s, t] of a Brownian motion path initialized at (v0 , s) as M(s, t) ≡ max {B(t) | B(s) = v0 }. s≤u≤t
The first term on the right side of equation A.2 can then be rewritten in terms of the Brownian motion process according to the law of total probability: P{V(t) ≤ v, no resets} = P{M(0, t) < θ, B(t) ≤ v | B(0) = V(0)} = P{B(t) ≤ v | B(0) = V(0)} − P{M(0, t) ≥ θ, B(t) ≤ v | B(0) = V(0)}. (A.3) By applying the well-known reflection principle (Karlin & Taylor, 1975, p. 345), it can be shown that P{M(0, t) ≥ θ, B(t) ≤ v | B(0) = V(0)} = P{B(t) ≥ 2θ −v | B(0) = V(0)} = 1 − φ(2θ − v − V(0), t),
(A.4)
the last step in equation A.4 being an application of equation A.1. Substituting equations A.1 with v0 = V(0) and s = 0) and A.4 directly into equation A.3 then yields P{V(t) ≤ v, no resets} = φ(v − V(0), t) + φ(2θ − v − V(0), t) − 1. (A.5) We next derive an expression for the second probability on the righthand side of equation A.2. Note that for every generator potential path that contributes to this probability, there must be one and only one time in the interval [0, t] at which the potential was last reset. We can therefore assign a probability density to each possible last reset time. Let s ∈ [0, t] be the
Noise Adaptation
1065
time of the last reset. This is equivalent to saying that a reset occurs at s, and no further resets occur in the interval [s, t]. We can therefore write the joint probability for the event {last reset at s, V(t) ≤ v} in the form P{V(t) ≤ v, no resets ∈ [s, t] | reset at s}. The probability that we seek will then be an integral over s of terms of this form, each weighted by the average number of resets dN(s) that occur within a small neighborhood of s. But dN(s) =
dN(s) ds, ds
and, for the particular case of a nonleaky integrate-and-fire neuron considered here, dN(s)/ds can be written in terms of the infinite series (3.4). We can therefore write the second probability on the right-hand side of equation A.2 in the form P{V(t) ≤ v, at least one reset} Z tX ∞ fN (s)P{V(t) ≤ v, no resets ∈ [s, t] | reset at s} ds = 0 N=1
=
Z tX ∞
0 N=1
fN (s)P{V(t) ≤ v, no resets ∈ [s, t] | V(s) = 0} ds. (A.6)
The last step in equation A.6 follows from the Markov property of Brownian motion: the probability of any given path over the interval [s, t] depends only on the initial boundary condition V(s) = 0 and not on how the generator potential got there. Note that paths that “accidentally” satisfy the condition of the final conditional probability (i.e., when no reset actually occurs at s) will not be given any weight by the infinite series weighting term, because all of the weight corresponds to paths that reset at s; so paths that accidentally cross the zero line at s will effectively be given a zero weighting. Furthermore, although there is no limit to the number of resets that can occur in the interval [0, t], all resets other than last resets are automatically discounted by the nature of the joint probability within the integrand of equation A.6. Since every path that leads to V(t) will have one and only one last reset time, each path will be counted once and only once, ensuring that the probability in equation A.6 will be properly normalized. Continuing with our derivation of the generator potential density function, we next note that the following two expressions are equivalent: P{V(t) ≤ v, no resets ∈ [s, t] | V(s) = 0} = P{V(t − s) ≤ v, no resets ∈ [0, t − s] | V(0) = 0},
(A.7)
and that the probability on the right-hand side of equation A.7 is trivially obtained by substituting 0 for V(0) and t − s for s in equation A.5 to get
1066
Michael E. Rudd and Lawrence G. Brown
P{V(t) ≤ v, no resets ∈ [0, t − s] | V(0) = 0} = φ(v, t − s) + φ(2θ − v, t − s) − 1.
(A.8)
Combining equations A.6 through A.8 then yields P{V(t) ≤ v, at least one reset} Z tX ∞ = fN (s)(φ(v, t − s) + φ(2θ − v, t − s) − 1) ds.
(A.9)
0 N=1
Finally, we substitute equations A.5 and A.9 into equation A.2 to obtain P{V(t) ≤ v} = φ(v − V(0), t) + φ(2θ − v − V(0), t) − 1 Z tX ∞ fN (s)(φ(v, t − s) + 0 N=1
+φ(2θ − v, t − s) − 1) ds.
(A.10)
The density function of the generator potential is the derivative with respect to v of equation A.10: ³ ´ 1 2 2 2 2 e−(v−V(0)) /(2σ t) − e−(2θ −v−V(0)) /(2σ t) p[V(t)] = √ σ 2π t Z tX ∞ 1 + fN (s) √ σ 2π(t − s) 0 N=1 ³ 2 ´ 2 2 2 × e−v /(2σ (t−s)) − e−(2θ −v) /(2σ (t−s)) ds. (A.11) Because the density function does not have a simple analytic form, its dependence on the parameters of the model can be understood most easily by numerically evaluating the function and plotting it for different parameter values. Since our overarching goal is to understand the mechanism of noise adaptation in the integrate-and-fire neuron, we will restrict our further analysis of equation A.11 to the task of finding its dependence on the root-mean-square noise magnitude σ and adaptation time t. Note that wherever these two variables appear within the expression, they always appear together as part of the unit σ 2 t, which is the time-integrated variance of the Brownian motion process that drives the neuron. It is therefore unnecessary to examine the dependence of the function on the two variables separately. In Figure 4, we plot√equation A.11 for three different levels of the noise standard deviation σ t (with V(0) = 0 and θ = 50). The plots indicate that the spread in the generator potential distribution increases with the standard deviation of the Brownian motion process that drives the neuron. Because the range of V(t) is bounded on the upper end by θ, large variability in V(t) is associated with a high probability of obtaining large, negative voltage values.
Noise Adaptation
1067
Figure 4: Examples of the probability density function (pdf) of the generator potential of a nonleaky integrate-and-fire neuron driven by gaussian white noise with mean zero and constant variance. The three pdfs correspond to three different standard deviations of the time-integrated noise in the Brownian motion process that drives the neuron (indicated on plot). Additional model parameters: θ = 50; α = 0; µ = 0; V(0) = 0.
A.2 The First and Second Moments of the Generator Potential Distribution. The first and second moments of the generator potential density (see equation A.11) can be calculated in a straightforward manner using integral calculus. The first moment is · µ ¶ θ − V(0) V(t)µ=0 = V(0) − θ Erf c √ σ 2t ¶ # µ Z tX ∞ θ ds , (A.12) fN (s)Erf c + √ σ 2(t − s) 0 N=1 where Erf c denotes the complementary error function: 1 − Erf and the error function Erf is defined in the usual way: Z x 2 2 e−z dz Erf (x) = √ π 0 (Gradshteyn & Ryzhik, 1980, p. 930). Recall that we already have an expression for V(t) (see equation 4.7), which was derived on the basis of the stochastic differential equation approach. Setting µ = 0 in that expression should produce an expression that is equivalent to equation A.12. The two expressions are equal if the following general relation holds: ¶ µ ¶ µ Z t k k (A.13) ds = Erf c √ , R(s)Erf √ t−s t 0
1068
Michael E. Rudd and Lawrence G. Brown
where we have defined R(s) =
∞ X nk − N2 k2 e s . √ πs3 n=1
We have checked equation A.13 numerically using Gauss-Legendre 100point quadrature over several orders of magnitude of t and k and have found excellent agreement; however, we have not been able to prove the equality. To write the second moment, we first define r
2t − (θ −v20 )2 e 2σ t (θ + v0 ) ψ(t, v0 ) ≡ 2θv0 − 2θ + σ π µ ¶ θ − v0 + Erf (v20 + σ 2 t + 2θ 2 − 2θ v0 ), √ σ 2t 2
(A.14)
then write V 2 (t)µ=0 = ψ(t, V(0)) +
Z tX ∞ 0 N=1
fN (s)ψ(t − s, 0) ds.
(A.15)
References Brown, L. G., & Rudd, M. E. Evidence for a noise-gain control mechanism in human vision. Manuscript submitted for publication. Calvin, W. H., & Stevens, C. F. (1965). A Markov process for neuron behavior in the interspike neuron. In Proceedings of the 18th Annual Conference on Engineering in Medicine and Biology (vol. 7, p. 118). Capocelli, R. M., & Ricciardi, L. M. (1971). Diffusion approximation and first passage time problem for a model neuron. Kybernetik 8:214–233. Chapeau-Blondeau, F., Godiver, X., & Chambet, N. (1996). Stochastic resonance in a neuron model that transmits spikes. Physical Review E, 53:1273–1275. Geisler, C. D., & Goldberg, J. M. (1966). A stochastic model of the repetitive activity of neurons. Biophysical Journal 6:53–69. Gerstein, G. L. (1962). Mathematical models for the all-or-none activity of some neurons. Institute of Radio Engineers Transactions on Information Theory 8:137– 143. Gerstein, G. L., & Mandelbrot, B. (1964). Random walk models for the spike activity of a single neuron. Biophysical Journal 4:41–68. Gluss, B. (1967). A model for neuron firing with exponential decay of potential resulting in diffusion equations for probability density. Bulletin of Mathematical Biophysics 29:233–243. Gradshteyn, I. S., & Ryzhik, I. M. (1980). Table of integrals, series, and products. New York: Academic Press. Holden, A. V. (1976). Models of the stochastic activity of neurons. Berlin: SpringerVerlag.
Noise Adaptation
1069
Johannesma, P. I. M. (1968). Diffusion models for the stochastic activity of neurons. In E. R. Caianiello (Ed.), Neural Networks. Berlin: Springer-Verlag. Karlin, S. M., & Taylor, H. M. (1975). A first course in stochastic processes. New York: Academic Press. Parzen, E. (1962). Stochastic processes. San Francisco: Holden-Day. Ricciardi, L. M. (1977). Diffusion processes and related topics in biology. Berlin: Springer-Verlag. Ricciardi, L. M., & Sacerdote, L. (1979). The Ornstein-Uhlenbeck process: A model for neuronal activity. Biological Cybernetics 35:1–9. Roy, B. K., & Smith, D. R. (1969). Analyis of the exponential decay model of the neurons showing frequency threshold effects. Bulletin of Mathematical Biophysics 31:341–357. Rudd, M. E. (1996). A neural timing model of visual threshold. Journal of Mathematical Psychology 40:1–29. Rudd, M. E., & Brown, L. G. (1993). A stochastic neural model of light adaptation. Investigative Ophthalmology and Visual Science 34:1036. Rudd, M. E., & Brown, L. G. (1994). Light adaptation without gain control: A stochastic neural model. Investigative Ophthalmology and Visual Science 35:1837. Rudd, M. E., & Brown, L. G. (1995). Temporal sensitivity and threshold vs intensity behavior or a photon noise-modulated gain control model. Investigative Ophthalmology and Visual Science 35:S16. Rudd, M. E., & Brown, L. G. (1996). Stochastic retinal mechanisms of light adaptation and gain control. Spatial Vision 10:125–148. Rudd, M. E., & Brown, L. G. (In press). A model of Weber and noise gain control in the retina of the toad Bufo marinus. Vision Research. Stein, R. B. (1965). A theoretical analysis of neuronal variability. Biophysical Journal 5:173–194. Stein, R. B. (1967). Some models of neuronal variability. Biophysical Journal 7:37– 68. Thomas, M. U. (1975). Some mean first passage time approximations for the Ornstein-Uhlenbeck process. Journal of Applied Probability 12:600–604. Tuckwell, H. C. (1988). Introduction to theoretical neurobiology: Vol. 2. Nonlinear and stochastic theories. Cambridge: Cambridge University Press. Tuckwell, H. C. (1989). Stochastic processes in the neurosciences. Philadelphia: SIAM. Uhlenbeck, G. E., & Ornstein, L. S. (1930). On the theory of Brownian motion. Physical Review 36:823–841. Wan, F. Y. M., & Tuckwell, H. C. (1982). Neuronal firing and input variability. Journal of Theoretical Neurobiology 1:197–218.
Received January 16, 1996; accepted October 21, 1996.
Communicated by Mikhail Tsodyks
Paradigmatic Working Memory (Attractor) Cell in IT Cortex Daniel J. Amit Stefano Fusi Racah Institute of Physics, Hebrew University, Jerusalem, and INFN, Istituto di Fisica, Universit`a di Roma, Italy
Volodya Yakovlev Department of Neurobiology, Hebrew University, Jerusalem, Israel
We discuss paradigmatic properties of the activity of single cells comprising an attractor—a developed stable delay activity distribution. To demonstrate these properties and a methodology for measuring their values, we present a detailed account of the spike activity recorded from a single cell in the inferotemporal cortex of a monkey performing a delayed match-to-sample (DMS) task of visual images. In particular, we discuss and exemplify (1) the relation between spontaneous activity and activity immediately preceding the first stimulus in each trial during a series of DMS trials, (2) the effect on the visual response (i.e., activity during stimulation) of stimulus degradation (moving in the space of IT afferents), (3) the behavior of the delay activity (i.e., activity following visual stimulation) under stimulus degradation (attractor dynamics and the basin of attraction), and (4) the propagation of information between trials—the vehicle for the formation of (contextual) correlations by learning a fixed stimulus sequence (Miyashita, 1988). In the process of the discussion and demonstration, we expose effective tools for the identification and characterization of attractor dynamics.1 1 Introduction 1.1 Delay Activity in Experiment. A number of cortical areas, such as the inferotemporal cortex (IT) and prefrontal cortex, have been suggested as part of the working memory system (Fuster, 1995; Miyashita & Chang, 1988; Nakamura & Kubota, 1995; Wilson, Scalaidhe, & Goldman-Rakic, 1993). The phenomenon is detected in primates trained to perform a delay match-tosample (DMS) task, or a delay eye-movement (DEM) task, using, in each case, a relatively large set of stimuli. It is observed in single-unit extracellular recordings of spikes following, rather than during, the presentation of a 1 A color version of this article is found on the Web at: http://www.fiz.huji.ac.il/staff/acc/faculty/damita
Neural Computation 9, 1071–1092 (1997)
c 1997 Massachusetts Institute of Technology °
1072
Daniel J. Amit, Stefano Fusi, and Volodya Yakovlev
sensory stimulus. In these tasks, the behaving monkey must remember the identity or location of an initial eliciting stimulus in order to decide on its behavioral response following a second stimulus. This latter test stimulus is, with equal likelihood, identical to or different from the first stimulus in the DMS task and is simply a “go” signal in the DEM task. In these areas of the cortex, neurons are observed, in rather compact regions of the same area (called modules or columns), to have reproducible elevated spike rates during the delay period, after the first, eliciting stimulus has been removed.2 Such elevated rate distributions have been observed to persist for long times (compared to neural time constants)—as long as 30 seconds. The rates observed are in the range of about 10–20 spikes per second (s−1 ), against a background of spontaneous activity of a few s−1 . The subset of neurons that sustain elevated rates, in the absence of any stimulus, is selective of the preceding, first, or sample stimulus. Thus, the distribution of delay activity could act as a neural representation or memory of the identity of the eliciting stimulus, transmitted for later processing in the absence of the stimulus. Moreover, in one experiment in which training was carried out with eliciting stimuli presented in a fixed order, it was observed that though the stimuli were originally generated to be uncorrelated, the resulting learned delay activity distributions (DADs), corresponding to the uncorrelated stimuli, were themselves correlated (Miyashita & Chang, 1988). That is, there was an elevated probability that the same neuron would respond with an elevated delay activity for two stimuli that were close in the sequence. In fact, the magnitude of the correlations expressed the temporal separation of the stimuli in the training sequence: the closer two stimuli were in the training sequence, the higher the correlation of the corresponding two DADs. (For a detailed discussion, see Amit, 1994, 1995; and Amit, Brunel, Tsodyks, 1994.) 1.2 The Attractor Picture. These experimental findings have been interpreted as an expression of local attractor dynamics in the cortical module: A comprehensive picture has been suggested (Amit 1994, 1995; Amit et al., 1994) that connects the DAD to the recall of a memory into a working status and views the DAD as a collective phenomenon sustained by a synaptic matrix formed in the process of learning. The collective aspect is expressed in the mechanism by which a stimulus that had been learned previously has left a synaptic engram of potentiated excitatory synapses connecting the cells driven by the stimulus. When this subset of cells is reactivated, they cooperate to maintain elevated firing rates in the selective set of neurons, via the same set of potentiated synapses, after the stimulus is removed. In this way, these cells can provide each other, that is, each one of the group, with an afferent signal that clearly differentiates the members of the group from
2 A similar phenomenon has been observed in the motor cortex by Georgopoulos (private communication).
Paradigmatic Working Memory Cell
1073
other cells in the same cortical module. Theoretical and simulation studies have demonstrated that this signal may be clear enough to overcome the noise generated by the spontaneous activity of all other cells, provided that not too many stimuli have been encoded into the synaptic system of the module. A large number of “correct” neurons must collaborate to sustain the pattern. A rough estimate is that about 1000–2000 cells would collaborate in a given DAD (Brunel 1994), out of some 100,000 cells in a column of 1 mm2 . There would be considerable overlap between DADs. The collective nature of the DAD is related to its attractor property. Since so many cells collaborate, the DAD is robust to stimulus “errors.” That is, even if a few of the cells belonging to the self-maintaining group are absent from the group initially driven by a particular test stimulus, or if some of them are driven at the “wrong” firing rates, once the stimulus is removed, the synaptic structure will reconstruct the “nearest” distribution of elevated activities of those it had learned to sustain. The reconstruction (the attraction) will succeed provided the deviation from the learned template is not too large. All stimuli leading to the same DAD drive activities in the basin of attraction of that attractor. Each of the stimuli presented repeatedly during training creates an attractor with its own basin of attraction (see Amit & Brunel, 1995). In addition, the same module can have all neurons at spontaneous activity levels, if the stimulus has driven too few of the neurons belonging to any of the stimuli previously learned. 1.3 Attractors and Learning Correlations. According to the attractor picture, attractors are established through the learning process. In experiments in which monkeys performed a DMS task using as many as 100 stochastically generated visual stimuli, selective delay activities were formed for all images repeatedly used in the training phase. This finding supports a dynamic interpretation for the delay activity and confirms the presence of a learning process, since it is rather unlikely that synaptic structures related to these stimuli had been generated prior to training. In fact, no associated delay activities were found for new images—not used during training—but introduced subsequently in the testing stage (Miyashita & Chang, 1988). Moreover, the correlations between DADs for sequentially appearing stimuli in the fixed training order, corresponding to uncorrelated images, are even more directly related to learning, inasmuch as these correlations are dependent on the particular fixed order in which the stimuli appeared in the particular training protocol. Learning these correlations presents a puzzle that is naturally resolved in the attractor picture: How does the information contained in one image (used as the first, eliciting, stimulus in one trial) propagate, in the absence of the image, until the next image in the sequence is presented, several seconds later, as the first stimulus in the next trial? Attractors provide a natural vehicle for the propagation of this information (Griniasty, Tsodyks, & Amit, 1993; Amit et al., 1994; Amit, 1995), as follows: During training, there are initially no delay activity attractors. If
1074
Daniel J. Amit, Stefano Fusi, and Volodya Yakovlev
the stimuli in the training set are uncorrelated, as DADs are first formed they are necessarily uncorrelated because there is no way to communicate information between successive stimuli. Once the uncorrelated DADs have been formed, however, these DADs themselves may be used to propagate information between trials. Recall that in half of the trials, the second stimulus is identical to the first (so as to maintain an unbiased probability of same and different trials). This second stimulus may also leave behind a delay activity distribution, during the intertrial interval, identical to the DAD excited by the first stimulus, during the interstimulus interval of the trial. If this delay activity is not disturbed by the motor response of the animal (and other unmonitored visual or nonvisual events in the intertrial interval), then it may persist through the intertrial interval and will be present in the network when the next image in the sequence is presented as the first stimulus in the successive trial. This delay activity will be present in the sense that there will be a time window in which the cells of the delay activity are still at elevated rates, while the cells corresponding to the new stimulus begin to be driven. Information is then available for learning the correlation of the two (Brunel, 1996).
1.4 Experimental Questions. In this study, we outline an experimental program. We show recordings, and results of their analysis, from a paradigmatic cell—an IT neuron with a clear delay activity, which we were successful in holding for a long, stable recording session, sufficient for a detailed series of tests. Given the very detailed recordings of selective delay activities reported by Miyashita and Chang (1988); Miyashita (1988); Sakai and Miyashita (1991); Wilson et al. (1993); and Nakamura and Kubota (1995), on the one hand, and the detailed picture of attractor networks (Amit, 1994, 1995; Amit et al., 1994) on the other, one naturally asks what would be expected of a cell that participates in an attractor net. We ask the question in this article, and describe in detail the expected answer by demonstrating the results from this particularly rich paradigmatic neuron. Our goal is not to present data that give a definitive answer to this question of whether IT delay activity reflects an attractor network serving stimulus image memory. For that we would have to record and analyze many cells from more than one monkey. We wish instead to raise the issues quantitatively and point out, by considering in detail data from a single example, what experiments need to be carried out and what behavior one must expect from neurons participating in an attractor neural net. The central issue with which we deal is neuronal behavior following presentation of a degraded stimulus. This is because the attractor picture naturally implies basins of attraction for every attractor. That is, it predicts that although the input to IT will be different for stimuli of different degrees of degradation, the attractor, that is, the delay activity distribution within IT, will always be the same, as long as the degraded stimulus is not too far from
Paradigmatic Working Memory Cell
1075
the original. Thus, the attractor theory predicts very different characteristics for the response of IT neurons during stimulation (reflecting the response driven by the input to IT) and the response following stimulation (reflecting the attractor state). Testing empirically the attractor picture raises a set of issues: 1. IT observability. Are the stimuli used in these experiments identifiable at the level of the relevant cortical region, namely, IT? Do these stimuli induce neuronal responses in the recorded area that are clearly higher than spontaneous, as well higher than the delay activities? Are these responses different for the different stimuli? 2. Significance of prestimulus activity. Interstimulus interval (ISI) activity is identified in relation to activity immediately preceding the first stimulus of a trial. Is the latter merely spontaneous activity, or could it be delay activity following the second stimulus in the preceding trial? The question is particularly pertinent when trials are organized in series. 3. Moving in the space of IT-observable stimuli. What is the effect of stimulus degradation, for example, using degraded or noisy stimuli? What is the effect on visual responses (during the stimulus) and on attractor dynamics, as expressed in a given cell? 4. Basin of attraction. Does motion in IT-observable space bring about an abrupt change in delay activity? 5. Information transport between consecutive trials. Can delay activity act as the agent for the generation of correlations between sequentially appearing stimuli, as in Griniasty et al. (1993) and Amit et al. (1994)? 2 Methods 2.1 Preparation, Stimuli, and Task. A rhesus monkey (Macaca mulatta) was trained to perform a DMS task on 30 stochastically generated images, 15 fractals, and 15 Fourier descriptors; much as in Miyashita (1988) and Miyashita, Higuchi, Sakai, and Masui (1991), the two types were randomly intermixed. Three examples of the stimuli are shown in Figure 1: from left to right, two fractals and a Fourier descriptor. The left two images, the fractals, are the ones we used in recording most of the data reported here. The behavioral paradigm was as follows. Following the appearance of a flickering fixation point on the monitor screen in front of the monkey, the monkey was to lower a lever and keep it in the down position for the duration of the trial (see Figure 2). After the lever was lowered and following
1076
Daniel J. Amit, Stefano Fusi, and Volodya Yakovlev
Figure 1: Three of the 30 color images used in the DMS experiments. (Left) Fractal stimulus, which induced the clearest delay activity for the neuron of Figure 3 (number 5 in the figure) and therefore called the best stimulus for this neuron. (Center) Fractal stimulus that induced delay activity indistinguishable from the average prestimulus activity, as demonstrated in Figure 3 (number 17 in the figure), and is therefore called the worst stimulus for this neuron. (Right) A Fourier descriptor type stimulus, which induced response 14 in Figure 3.
a 1000 ms delay, the first (eliciting, or sample) stimulus was presented on the screen for 500 ms. The ISI was 5 seconds. Following this delay, the second (test) stimulus was shown, also for 500 ms. This second test stimulus was the same as the first sample stimulus in half of the trials and different in the other half (chosen randomly from the other 14 stimuli of its type, fractal or Fourier descriptors). After the second stimulus was turned off, the monkey had to shift the bar left or right depending on whether the second stimulus was the same as the first or different, respectively, and then, when the fixation point stopped flickering, to release the bar. Correct responses were rewarded by a squirt of fruit juice (see Figure 2). The monkey had been trained to a performance level not less than 80 percent correct responses when the reported experiment took place. 2.2 Recording. We recorded extracellularly spike activity from single neurons of the inferotemporal cortex. The experiment is still in progress so we cannot give the histological verification of the recorded cell. However, the coordinates of the position of the guide tube were A14 L15 during vertical penetration into the ventral cortex. The experimental procedures and care of the monkey conformed to guidelines established by the National Institutes of Health for the care and use of laboratory animals. After a cell was isolated by a six-channel spike sorter (Multi-Spike Detector; ALPHA OMEGA Engineering, Nazareth, Israel), the experiment proceeded in two stages. During the first stage, all 30 stimuli were presented one to three times. Average PST (peri-stimulus) response histograms were produced online, as demonstrated in Figure 3. From these, the best and worst stimuli were determined by eye. The best stimulus was that which evoked the clearest excess of the average rate in the ISI over the average rate in the period immediately preceding the first stimulus (the prestimu-
Paradigmatic Working Memory Cell
1077
Figure 2: Schematic sequence of events in each phase of DMS task.
lus rate: stimulus 5 in Figure 3, corresponding to Figure 1, left). The worst stimulus was selected as one that produced an overall weak response and an indistinct difference between prestimulus and delay period firing rate (stimulus 17 in Figure 3, corresponding to Figure 1, center). At this stage, the prestimulus firing rate was interpreted naively as the average activity in a 1.0 second interval prior to the presentation of the trial’s first stimulus. For the second stage of the experiment, a new image set was created from the chosen best and worst stimuli. The red-green-blue (RGB) pixel map of each image was degraded by superimposing uniform noise at one of four levels. That is, we generated from each of these two images a set of five images: the pure, original one (referred to as degradation 0) and four degraded images, each at an increasing level of degradation (levels 1–4). The DMS task was then resumed, using as the first (sample) stimulus in the DMS task a randomly selected image from the new set of 10 (original and degraded) images. The second stimulus, the test image, was randomly selected as an undegraded version of either the best or the worst stimulus. In Figure 4 we reproduce the best stimulus and degraded versions of this image at each of the four degradation levels used in this second stage of the experiment. 2.3 Mean Rate Correlation Coefficient. In order to test various hypotheses concerning the structure and origin of persistent activities, we introduce a coefficient measuring the correlation of the trial-by-trial fluctuations of the mean firing rates—or spike counts—in two different intervals. The mean rate correlation (MRC) is defined as follows: xn and yn (n = 1, . . . , Nst —the total number of selected trials) are, respectively, the spike counts in two
1078
Daniel J. Amit, Stefano Fusi, and Volodya Yakovlev
Figure 3: Spike rate histograms (full scale = 50 s−1 ) for a small number of trials of the entire set of 30 color stimuli, used online for the selection of best and worst stimuli. Each window is divided into six temporal intervals; the neuron’s activity is binned and averaged for the different trials with the same stimulus. From left: Prestimulus interval (66 ms per bin for the 1000 ms just prior to the first stimulus of the trial); first, or sample, stimulus period (33 ms per bin for 500 ms); delay (ISI) interval (125 ms per bin for 5 sec); second, or test, stimulus period (showing the response for the one-half of the trials when the stimulus was the same as the first; 33 ms per bin for 500 ms); post–second stimulus interval (showing the activity when the second stimulus was the same as the first; 66 ms per bin for 1000 ms). Finally we show the response to the second stimulus of other trials (when the first stimulus was not that of this window, but the second stimulus was that of this window (so that the trial was a nonmatch trial; 33 ms per bin for 500 ms). The horizontal line over each interval is the average rate in the interval. On the right of every window are stimulus number, number of trials with this image as first stimulus ( f ); number of trials with the same stimulus as second stimulus in the cases of same (s) and different (d) trials. Best is stimulus 5, the one evoking the highest excess of the average rate in the ISI over the average prestimulus rate; worst is stimulus 17, the one having an overall weak response. Vertical ticks are 10 spikes/sec.
Paradigmatic Working Memory Cell
1079
Figure 4: Image degradation (in color) as gradual motion in stimulus space for testing the attractor dynamics of delay activity. From left to right: pure best image (0 noise), then four levels of noise on RGB map.
intervals in the nth trial. Then: h(x − hxi)(y − hyi)i MRC = p (hx2 i − hxi2 )(hy2 i − hyi2 )
(2.1)
where h· · ·i denotes the estimate of the expectation of a random variable performed over all the selected trials, that is, hxi =
Nst 1 X xi . Nst i=1
This coefficient measures the extent of the correlation between the deviations from the mean over the selected set of trials, of the spike counts in each of the two intervals. With this tool, we monitor whether high rates during visual stimulus presentation imply high rates in the delay period and low imply low, whether high (low) counts at the beginning of a delay period imply high (low) counts in later intervals in the delay, and whether high (low) counts following a given second stimulus are related to high (low) counts in the period immediately preceding the first stimulus of the succeeding trial, its prestimulus interval. The value of the MRC is in the interval [−1, 1]. The standard of comparison for the magnitude of this coefficient will be established by considering subclasses of trials in which the average rates in both intervals are both high or both low. For instance, the MRC of the subset of responses to the worst stimulus is calculated by performing the expectation of equation 2.1 over the trials in which the worst stimulus was presented as a sample. In this case, if the fluctuations about the mean spike count are random and hence uncorrelated, the MRC coefficients will set the scale for lack of correlation. As we shall see, large values of MRC can capture two different phenomena: (1) trial-by-trial correlation of the fluctuation of the individual counts about their mean and (2) the existence of subsets of trials with very different mean counts in each, while within each subset, fluctuations about the mean may even be uncorrelated.
1080
Daniel J. Amit, Stefano Fusi, and Volodya Yakovlev
3 Experimental Testing and the Special Cell The issues raised in section 1.4 will now be discussed in the same order in terms of the responses of a single IT cell. The intention is not to claim that the data presented here give a definite answer to these questions. That would require the recording and analysis of many cells and, traditionally, in more than one monkey. The objective, rather, is to raise the questions, the problems, and the potential answers by considering in detail the record of a particularly rich cell that exhibits behavior in relation to which we may pose most of the problems and demonstrate the form answers may take. Following continued recording from numerous neurons in this and another monkey, we will be able to report whether this area of IT may in fact serve this memory function for DMS tasks. Alternatively, we may report that too few IT neurons have the expected significant DAD activity to be able to serve this function under the conditions used in our experiment, and it would be best to look for such DADs elsewhere. Finally, we may find that delay activity is present in IT but does not generally have the properties we expect (and which are reported here for this single cell), so that our understanding of cortical dynamics should best be revised. 3.1 Identification of the Stimulus. Delay activity distributions are likely to be sustained by the synaptic structure, learned or preexisting. The fact that the delay activities are selective to the preceding stimulus implies that the synaptic structure can sustain a variety of distributions in the absence of a stimulus. Which of the stable delay activities is actually propagating depends on which was the last effective stimulus. But different stimuli presented on the monitor screen may not actually lead to different neuronal responses in deep structures (higher cortical areas) such as the IT cortex, since the differences may well be filtered out at afferent levels. It is the representation of the stimulus as it affects the cells of IT that is relevant for which of the DADs is excited or learned. Thus, for example, it may be that differences of scale or color in the external image may result in very similar distributions of afferents when arriving at the IT cortex. In that case, they cannot elicit different DADs. In other words, stimuli and stimulus differences must be IT observable. (For some recent work on observability of stimulus variation in IT, see Kovacs, Vogels, & Orban, 1995.) What can give rise to different DADs are significantly different distributions of synaptic inputs that arrive from previous cortical areas to IT. These distributions, in turn, can be observed by recording the neuronal activity in the attractor network during presentation of the stimulus. In fact, the IT cortex is particularly propitious in this respect because rates observed during the presence of the stimulus in visually responsive cells that participate in a DAD are much higher than those observed for the same cells during interstimulus intervals of the trial. (See, e.g., Miyashita & Chang, (1988), and responses in the underlined intervals in the histograms for best stimu-
Paradigmatic Working Memory Cell
1081
lus in Figure 5.) This allows a very clear identification of the stimulus, for comparison with the attractor. In other cortical areas, such as the prefrontal cortex, this may not be as clear (see, e.g., Wilson et al., 1993; and Williams & Goldman-Rakic, 1995). The range of responses during the stimulus and during the ISI, as in Figures 3 and 5, exhibits IT observability of the stimuli as well as of their differences. Thus, the stimuli used for this experiment, the 30 fractals and Fourier descriptors, are IT observable in that the responses to some of them are much higher than the background spontaneous rates, as well the rates during the delay period ISI. This difference from background will be measured quantitatively below. In addition, it is clear that our neuron also differentiates between stimuli (e.g., between the best and the worst stimuli) during and following the stimulation.
3.2 Delay Activity and Spontaneous Activity. It is quite common to consider the ongoing activity before the first stimulus (the prestimulus activity) in a given cell as spontaneous activity. Delay activity is considered significant if its rate is significantly higher than the prestimulus rate. In this way, one observes in Figure 5, in the first three windows on the left of the best stimulus row, that the average rate (the horizontal line) during the delay (the last, wide interval) is higher than in the prestimulus interval of the current trial (see the numbers in the table below the histograms). They become equal in the two windows on the right (see section 4). In the leftmost window of the best row of histograms, corresponding to the undegraded best stimulus, the ratio ISI/pre-stim rates is 13/8, which may not be considered very convincing as a signal-to-noise ratio. But in a context in which delay activities exist, prestimulus rates are not necessarily spontaneous. For example, if the second stimulus in a trial evokes a DAD when presented as a first stimulus and if the neuron being observed has an elevated rate in this DAD, then one would expect that following the second stimulus, this neuron would have an elevated rate. In fact, this is observed in Figure 6, where we plot histograms of spike rates that follow presentation and removal of the best (top) and the worst (bottom) stimuli as second stimulus. These histograms are averages over all trials at all levels of the first stimulus degradation, since the second stimulus is always undegraded. What one observes is that following the best second stimulus, the level of post–second-stimulus persistent rate is as high as the delay activity for the first three levels (0–2) of degradation in Figure 5 (no degradation and the next two levels of degradation). This is true even when the first stimulus, due to its degradation, does not provoke significant delay activity. It is consistent with the fact that this persistent activity is provoked by the second stimulus that is never degraded. Another expression of the same fact is presented in Figure 7. These are the trial-by-trial distributions of the post–second stimulus average rate for best
1082
Daniel J. Amit, Stefano Fusi, and Volodya Yakovlev
Figure 5: Testing attractor dynamics: spike rate histograms (rate scale 10 s−1 , bins 50 ms) for best (top row) and worst (second row) first stimulus, each original, and four levels of degradation. Each histogram is divided into three intervals. From left: prestimulus of the current trial, first stimulus, delay interval. The horizontal line over each interval is average rate in the interval. The value of the average rates for each interval is reported in the table below the histograms with the mean rates and the standard errors (in s−1 ). The delay interval (ISI) has been divided in five parts of 1 s each. The mean rate of the activity during the entire ISI is in the last column on the right. For both best and worst first stimulus, one observes a variation of the rate during stimulation (the visual response) as a function of the level of degradation. For the first three windows in the “best” row, there is significant invariant delay activity. In the last two levels of degradation, the delay activity drops and becomes indistinguishable from the delay rate for the “worst” stimuli. The rate in the first second of the ISI, following a strong visual response, is significantly higher than the mean rate of the delay interval. This is due to the latency of the external stimulus. (See section 3.5.)
Paradigmatic Working Memory Cell
1083
Figure 6: Testing information transmission across trials: spike rate histograms for best and worst second stimulus and the corresponding table of mean rates (as in Figure 5). The histogram windows are divided in four intervals: second stimulus (underlined), postsecond stimulus, central part of intertrial interval (invisible), and the prestimulus of the successive trial. Note that the prestimulus activity of the trial following best second stimulus is significantly higher than that following the worst second stimulus. It carries the information about the second stimulus of the previous trial (see the text and Figure 7).
and worst stimuli (plotted above and below the axis, respectively, though both are positive valued, of course).3 The distributions differ significantly according to the T-test: P(Tst)< 10−5 . If this elevated rate were to continue beyond the monkey’s behavioral response, it would arrive at the following trial as an elevated rate and appear
3 In calculating the average poststimulus rate, we take an interval starting 250 ms after the second stimulus is turned off, to exclude latency of response to the stimulus. (See section 3.5.)
1084
Daniel J. Amit, Stefano Fusi, and Volodya Yakovlev
0.4 0.2 0 -0.2 -0.4 0
5
10
15
20
25
30
0
5
10
15
20
25
30
Figure 7: Trial-by-trial rate distributions of post–second stimulus (left) in s−1 and pre–first stimulus of the next trial (right) for best (continuous line) and worst (dashed line, for clarity in negative values). The two distributions differ significantly in the second case, carrying the information about the second stimulus of the trial. The poststimulus interval is on average 710 ms, starting 250 ms after the stimulus, to avoid latency effects.
as an elevated prestimulus activity before the next trial. There are previously reported indications that delay activity does in fact survive motor reaction events (Nakamura & Kubota, 1995), though here there is a novelty in that the postreaction activity is not motivated by the behavioral paradigm. As a test of this possibility we separated the average prestimulus rates over the entire set of trials (N = 123 trials) into two sets, according to the preceding second stimulus (see Figure 6). It turns out that the average prestimulus rate in the group of trials following trials in which the second stimulus was the best stimulus (N = 62) is 8.6 s−1 while for those following trials where the second stimulus was the worst stimulus (N = 61), the average prestimulus rate is 5.7 s−1 . (This difference is significant according to the T-test: P(Tst)< 10−5 .) Yet another test of this effect will be discussed in section 3.6. A simple tool suggests itself for improved online monitoring of the relevant prestimulus activity: to reduce (though not fully eliminate) the effect of persistent post–second stimulus high rate, one should consider the average prestimulus activity over all trials, including all stimuli. This will help, provided the number of DADs in which the cell under observation participates is relatively low. That would be the typical case.4 (See also Miyashita & Chang, 1988.) 3.3 Attractor Dynamics. Attractor dynamics (associative memory) as a description of persistent DADs implies that each DAD is excited by a whole class of stimuli. Each of these stimuli raises the rates of a sufficient number of cells belonging to the DAD so that via the learned collateral synaptic ma4
We are indebted to Nicolas Brunel for this observation.
Paradigmatic Working Memory Cell
1085
trix, they can excite the entire subset of neurons characterizing the attractor and maintain them with elevated rates when the stimulus is removed. The class of stimuli leading to the same persistent distribution is the basin of attraction of that particular attractor. As one moves in the space of stimuli, at some point the boundary of the basin of attraction of the attractor is reached. Moving even slightly beyond this boundary, the network will relax either into another learned attractor or to the grand attractor of spontaneous activity. (See Amit & Brunel, 1997; and Amit, 1995.) To test the validity of the attractor picture one has to be able to do the following: 1. Move in the space of stimuli by steps that are not too big, so that there is a choice between staying inside the basin and moving out. Recall that moving in this space does not mean only changing the stimuli, but changing them in a way that is IT observable. 2. Find IT-observable changes in stimuli and have the delay activity unchanged. 3. Arrive at IT-observable changes in stimuli that will go over the edge (i.e., will not evoke the same DAD). A convincing demonstration of these phenomena would require recording a large number of cells, especially since most cells with high rates in the same DAD (corresponding to the same stimulus) are predicted to lose their rates together. The prototypical behavior of such cells expresses itself on our single cell. First, Figure 5 demonstrates the required motion in stimulus space at the level of the IT cortex. In the five windows for the best stimulus as well as in those for the worst, going from left to right, corresponding to increasing the degradation level as presented in Figure 4, there is a clear variation in the visual response: the neuron’s spike rate during stimulation. This implies that the choice of the degradation mode is an IT-observable motion in stimulus space, as required in point 1 above. Second, in the top three windows in the best stimulus row, the delay activity is essentially invariant, as would have been expected if the stimulus were moving within the basin of attraction of that DAD. This corresponds to point 2 above. Third, as the level of degradation increases to reach the right-most two windows for the best stimulus, the delay activity disappears (see Figure 5). The average rate in the delay period becomes that of the reduced prestimulus activity discussed above, in accordance with the expectation in point 3 (see the table in Figure 5). This rate is also the same as that during the delay in all five windows corresponding to the worst stimulus. The discussion of point 1 calls for a comment concerning the fact that as the best image is initially degraded, the visual response increases. Only in the fourth and fifth levels of degradation (3, 4) does the visual response decrease. This may seem contradictory, in that one might naively expect
Figure 8: Average delay activity rate (in s⁻¹, y-axis) versus average stimulus response (in s⁻¹, x-axis) for five levels of degradation of the best stimulus. The degradation level is indicated above each data point (0 for the pure image). Error bars are standard errors. Note that the stimulus response difference between 1 and 0, where delay activity is essentially constant, is larger than between 0 and 3, where delay activity collapses. Recall that 0 implies the pure stimulus.
that the visual response should be maximal for the pure image and decrease monotonically with degradation. But on second thought, it should be clear that there is no reason that a particular cell should be maximally tuned to the pure image rather than to a degraded version of it. In fact, looking at other cells, we find that although there is no strong systematic trend of the visual response as a function of the degradation level, on average the visual response decreases with degradation; nor is there any argument why there should be such a trend (Yakovlev, Fusi, Amit, Hochstein, & Zohary, 1995). What is essential is that there be a set of neurons that have a strong visual response and a correspondingly significant elevated delay rate, such that the delay activity remains invariant when the images change moderately and changes abruptly when they change enough. Our paradigmatic cell would have belonged to such a class had it existed.

This cell gives yet two more strong signals of the attractor property. Looking at the visual response to the degraded best stimulus in the decreasing order of rates suggested above (2–1–0–3–4), one observes in the table in Figure 5 and in Figure 8 that the rate difference from 1 to 0 is 14.5 s⁻¹, with no noticeable change in delay activity. Yet going from 0 to 3, the visual response changes by 13.1 s⁻¹, and the delay activity collapses. In other words, the decrease in the delay activity is precipitous as a function of the visual response, as would befit a watershed at the edge of the basin of attraction.

The second signal is related to the fact that in the images used for testing the attractor property (see Figure 4), as the degradation noise is introduced, a circle appears around the figure (for technical reasons). This might have been perceived by the monkey as a different stimulus; as the figure gets increasingly blurred, it is the circle that is most visible.
Table 1: Correlations of Delay Activity in Subintervals of ISI

First Stimulus Selection   Degradation Levels   Time Intervals   Number of Trials   MRC
B+W                        0-1-2                ISI1-ISI5        74                  0.399
B                          0-1-2-3-4            ISI1-ISI5        61                  0.382
B                          0-1-2                ISI1-ISI5        36                  0.080
B                          3-4                  ISI1-ISI5        25                 −0.123
W                          0-1-2                ISI1-ISI5        38                  0.089

Note: The intervals are the first 1 second following the removal of the stimulus and the last 1 second of the delay interval.
Yet the delay activity for the first three degradation levels (0–2) of the best stimulus remains unchanged despite the appearance of the circle. The delay activity disappears when the degradation crosses the critical level, despite the continued presence of the circle. Moreover, the circle produces no effect for the worst stimulus, despite the fact that it is visible there as well. This may be interpreted as due to the fact that the degradation circle appears only during testing and is not seen often enough to have been learned.

3.4 Delay Interval MRC. To test the attractor interpretation more closely, we calculated the MRC (see section 2.3) between the average rates in the 1 second immediately following the first stimulus and the 1 second immediately preceding the second stimulus. The results are summarized in Table 1. In a subset of trials with best and worst first stimuli and the first three degradation levels (0–2), we find MRC = 0.399. Note that in this subset of 74 trials, there are 36 trials with elevated delay activity and 38 with low delay activity. In other words, the high MRC may be attributed to the presence of two underlying subpopulations with different means. Similarly, in the subset of trials with best first stimuli and all degradation levels (second row in the table), there are 61 trials, of which 36 have elevated delay activity and 25 do not, and MRC = 0.382.

The 61 trials with best first stimulus are then divided into two subsets: one for the first three degradation levels (Nst = 36) and one for the last two (Nst = 25). In the first subset there is elevated delay activity; there is none in the second. For these two subsets we find MRC = 0.080 and −0.123, respectively. We conclude that the average delay rates remain high throughout the ISI in the first subset of trials and low in the second. The deviations from the mean in each subset, at the beginning and the end of the ISI, are uncorrelated. This constitutes an example of the first option described at the end of section 2.3 and provides scales for high and low correlations.
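The MRC computation just described can be sketched in a few lines. The sketch below is a minimal illustration, assuming that the MRC of section 2.3 (not reproduced here) is the Pearson correlation, across trials, of the mean firing rates measured in two 1-second windows; the spike-time arrays, window boundaries, and degradation labels are hypothetical stand-ins for the recorded data.

```python
import numpy as np

def mean_rate(spike_times, t_start, t_stop):
    """Mean firing rate (spikes/s) in the window [t_start, t_stop)."""
    n = np.sum((spike_times >= t_start) & (spike_times < t_stop))
    return n / (t_stop - t_start)

def mrc(trials, win_a, win_b):
    """Pearson correlation, across trials, of the mean rates in two windows
    (assumed definition of the mean rate correlation, MRC)."""
    ra = [mean_rate(t, *win_a) for t in trials]
    rb = [mean_rate(t, *win_b) for t in trials]
    return np.corrcoef(ra, rb)[0, 1]

# Hypothetical usage: `trials` is a list of per-trial spike-time arrays with
# t = 0 at removal of the first stimulus; ISI1 is the first second of the
# delay and ISI5 the last (the delay is assumed here to end at t = 5 s).
# subset = [tr for tr, lvl in zip(trials, levels) if lvl in (0, 1, 2)]
# print(mrc(subset, win_a=(0.0, 1.0), win_b=(4.0, 5.0)))
```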
Table 2: Single Neurons versus Attractor Dynamics: Correlations Between Visual Response and Several 1-Second Intervals of the ISI

First Stimulus Selection   Degradation Levels   Time Intervals   Number of Trials   MRC
B                          0-1-2                ST1-ISI1         36                  0.407
B+W                        0-1-2-3-4            ST1-ISI1         123                 0.784
B                          0-1-2                ST1-ISI4         36                  0.085
B                          0-1-2                ST1-ISI5         36                 −0.098

Note: The first two rows have high MRCs due to the latency of the visual response. There is no correlation in later intervals.
3.5 Single Neurons versus Attractor Dynamics. An alternative scenario to the attractor picture may attribute the enhanced delay activity to some change in the internal state of single cells. Such a scenario would imply that the change in the internal state of the cell is triggered by the level of the visual response; this would be required to make the persistent delay activity stimulus selective. It would further imply a correlation between the rate during the ISI and the rate during the visual response. Table 2 summarizes the MRCs between the visual response to the first stimulus and various 1-second intervals of the ISI.

We find that the rate during the visual response to the first stimulus is correlated with the rate in the first 1-second interval of the ISI (MRC = 0.407), even in a subset of trials with stimuli all leading to elevated delay activity, that is, the 36 trials corresponding to the best stimulus at degradation levels 0–2. This high MRC in a trial set with a narrow distribution of rates indicates that the trial-by-trial fluctuations of the rate in the first 1 second of the ISI are correlated with the fluctuations in the rate during the visual response. In fact, if the effect of splitting the means is added, by computing the MRC of the visual response rates with the rates in the first 1 second of the ISI for all trials (Nst = 123), we find MRC = 0.784. These high correlations we associate with the latency of the visual response; in fact, it appears from Figure 5 that the visual response penetrates about 250 ms into the ISI.

Next we take the same set of 36 trials (best with degradation 0–2), which gave MRC = 0.407 between the rate during the visual response and the rate in the first 1 second of the ISI. The MRCs between the rate of the visual response and the rates in the fourth and fifth 1-second intervals of the ISI are 0.085 and −0.098, respectively (third and fourth rows in the table). We conclude that beyond the first second of the ISI, the fluctuations of the visual response are not correlated with the fluctuations of the delay activity. This makes the single-cell scenario rather implausible.
3.6 Information Transmission Across Trials. The discussion in section 3.2 of the meaning of the prestimulus activity provides a partial confirmation of the scenario of information transport between first stimuli in consecutive trials, which is essential for the formation of the Miyashita correlations (Miyashita, 1988). That scenario requires that information about the coding of one stimulus be able to traverse the time interval between consecutive trials. The scenario proposed (Griniasty et al., 1993; Amit et al., 1994; Amit, 1995) is that DADs, when they first become stable, are uncorrelated. Once stable, they would be provoked as much by a (same) second stimulus in a trial as by the first one. Since the second stimulus in half of the trials is the same as the first (to prevent bias in the DMS paradigm), in half of the training trials the activity in the intertrial interval will be the same as the delay activity elicited by the first stimulus of the preceding trial. If training is done with a fixed sequence of first stimuli, then upon half of the presentations of a given first stimulus, the activity distribution in the IT module during the intertrial interval will be the same as the delay activity corresponding to the immediately preceding first stimulus. The joint presence of the delay activity of the previous stimulus and the activity stimulated by the current first stimulus is hypothesized to be the information source for learning, into the synaptic structure, the correlations between the delay activities corresponding to consecutive images.

We calculated the MRCs between the average rate in the 1 second immediately following the second stimulus and the average rate in the 1 second immediately preceding the next first stimulus (see Table 3). The idea is the following: since (for technical reasons) we do not have recordings covering the entire interval separating two trials, if the activity between trials survives the monkey's response undisturbed, so as to arrive at the next first stimulus, then the MRC of the activities in the two extreme subintervals between trials should be similar to that between the two extreme subintervals of the undisturbed ISI. In fact, MRC = 0.320 for the intertrial interval, computed over all the available trials (Nst = 123). The entire set of trials is composed of two well-balanced subsets with two different intertrial mean rates. This is so because the second stimulus is never degraded: for Nst = 62 trials the second stimulus is the pure best image, leading to high intertrial persistent activity, and for Nst = 61 trials the preceding second stimulus is the pure worst image, leading to low intertrial activity. This value is to be compared with the MRCs of extreme ISI intervals in trial subsets of similar statistics, that is, with balanced subsets of low and high averages; those correspond to the first and second rows of Table 1. In fact, the numbers are close. If the 123 trials are separated into two subsets, one with preceding best and one with preceding worst, we find MRC = 0.038 and MRC = 0.120, respectively (rows 2 and 3 in Table 3). These are naturally compared with the MRCs of subintervals of trials with best and worst first stimulus at degradation levels 0–2 (third and fifth rows of Table 1), which are, respectively, 0.080 (Nst = 36) and 0.089 (Nst = 38).
Table 3: Information Transmission Across Trials: MRCs of 1-Second Post–Second-Stimulus and 1-Second Pre–Next-First-Stimulus Intervals

Second Stimulus of Previous Trial   Time Intervals   Number of Trials   MRC
B+W                                 POST and PRE     123                0.320
B                                   POST and PRE     62                 0.038
W                                   POST and PRE     61                 0.120

Note: For a few trials, the time interval after the second stimulus is less than 1 second; for those trials, the mean rate is calculated over the available time interval. First row: the high MRC is due to the presence of two different means. Last two rows: absence of correlation in unimodal subsets of trials.
4 Discussion

The cell we have discussed presents a rich phenomenology, naturally interpreted in the learning-attractor paradigm. To establish the various characteristics of attractor dynamics and their internal structure, many more well-recorded cells are required. Our underlying intention has been to exploit this cell merely as an example of the kind of features that a cell participating in a presumed attractor scenario may have. Looking back at the wealth of data drawn from this cell, we feel we can say more: this collection of data can hardly be accounted for in other paradigms suggested to date. Most prominent is the propagation of the persistent elevated rates following the test stimulus, after this matching stimulus is turned off, and even after the behavioral reaction is completed. The amount of data collected for this cell and the tools we have sharpened to analyze them leave no doubt that this propagation can indeed take place. In fact, we have found that this persistent activity is as robust when it crosses the intertrial interval (including the behavioral response) as it is when crossing the ISI delay interval between sample and match.

What is surprising is that following the second stimulus and the reaction, there is no apparent need for maintaining the active memory. One might have expected the persistent activity distribution to die down at this point. It does not. This has been observed also by Nakamura and Kubota (1995), but the novelty here is that the delay activity continues to persist even when there seems to be no functional role for it. What seriously modifies the delay activity is a new visual stimulus, either the second stimulus following the ISI or the first in another trial, much as in Miller, Li, and Desimone (1993).

The fact that a DAD can get across the intertrial interval is an essential pillar in the construction of the dynamic learning scenario for the generation of the Miyashita (1988) correlations in the internal representations (working
memory) of stimuli often experienced in contiguity. Our single-cell result suggests that such correlations may find their origin in the arrival of selective persistent activity related to one stimulus at the presentation of the consecutive stimulus in the next trial.

The collective attractor picture would have been fuller had we had a case in which there is no visual response and yet elevated delay activity. In Sakai and Miyashita (1991) such cases are shown. Yet the mean rate correlation analysis on this cell shows that neural activity during the delay period is correlated with the visual response only in the interval immediately following the stimulus, which is strong evidence that what determines the actual spike rate later in the ISI lies beyond the single cell. It is most likely the result of the interaction of this cell with other cells participating in the same DAD, via potentiated synapses.

Finally, there is an additional, behavioral correlate to the activity of this cell. When the performance level (percentage of correct responses) of the monkey is considered versus the level of stimulus degradation, averaged across experiments, it is found that performance is similar for the first three levels of degradation. Then, for the fourth and fifth levels, performance drops abruptly to chance level, just as our cell indicates that the stimuli are no longer in the basin of attraction of the DAD (Yakovlev et al., 1995). This effect calls for much further study, and it opens new vistas toward the identification of a potential functional role of the delay activity. Sakai and Miyashita (1991) turned to the paired-associate paradigm because the monkeys perform the DMS task well even for stimuli that do not generate delay activity ("new stimuli"). Here we see that in the absence of delay activity, the DMS task was not performed successfully if the sample stimulus was very degraded. Our paradigmatic cell suggests that the mechanism that allows matching to an identical new sample may not work when the monkey has to match to a very degraded image. Yet as long as the DAD (the attractor) is effective, it corrects for the degradation by attracting to the prototype delay activity, and the task can be performed.

Acknowledgments

Without the experimental and intellectual contribution of Shaul Hochstein and Ehud Zohary, this article would not have been possible. Had we had our way, both would have figured among the authors. We have benefited from the contribution of Gil Rabinovici to the experiment; this cell was found during his stay in our lab, as a summer project in the course of his premedical studies at Stanford University. We are also indebted to J. Maunsell and R. Shapley for help in the early stages of this experiment and to Misha Tsodyks for detailed comments. This work was partly supported by grants from the Israel Ministry of Science and the Arts and the Israel National Institute of Psychobiology and by Human Capital and Mobility grant ERB-CHRX-CT93-0245 of the EEC.
References

Amit, D. J. (1994). Persistent delay activity in cortex: A Galilean phase in neurophysiology? Network 5:429.
Amit, D. J. (1995). The Hebbian paradigm reintegrated: Local reverberations as internal representations. Behavioral and Brain Sciences 18:617.
Amit, D. J., & Brunel, N. (1995). Learning internal representations in an attractor neural network with analogue neurons. Network 6:39.
Amit, D. J., & Brunel, N. (1997). Global spontaneous activity and local structured (learned) delay activity in cortex. Cerebral Cortex 7(2):237.
Amit, D. J., Brunel, N., & Tsodyks, M. V. (1994). Correlations of cortical Hebbian reverberations: Experiment vs theory. J. Neurosci. 14:6435.
Brunel, N. (1994). Dynamics of an attractor neural network converting temporal into spatial correlations. Network 5:449.
Brunel, N. (1996). Hebbian learning of context in recurrent neural networks. Neural Computation 8:1677.
Fuster, J. M. (1995). Memory in the cerebral cortex. Cambridge, MA: MIT Press.
Griniasty, M., Tsodyks, M. V., & Amit, D. J. (1993). Conversion of temporal correlations between stimuli to spatial correlations between attractors. Neural Computation 5:1.
Kovacs, G., Vogels, R., & Orban, G. A. (1995). Selectivity of macaque inferior temporal neurons for partially occluded shapes. J. Neurosci. 15:1984.
Miller, E. K., Li, L., & Desimone, R. (1993). Activity of neurons in anterior inferior temporal cortex during a short-term memory task. J. Neurosci. 13:1460.
Miyashita, Y. (1988). Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature 335:817.
Miyashita, Y., & Chang, H. S. (1988). Neuronal correlate of pictorial short-term memory in the primate temporal cortex. Nature 331:68.
Miyashita, Y., Higuchi, S., Sakai, K., & Masui, N. (1991). Generation of fractal patterns for probing visual memory. Neurosci. Res. 12:307.
Nakamura, K., & Kubota, K. (1995). Mnemonic firing of neurons in the monkey temporal pole during a visual recognition memory task. J. Neurophysiol. 74:162.
Sakai, K., & Miyashita, Y. (1991). Neural organisation for the long-term memory of paired associates. Nature 354:152.
Williams, G. V., & Goldman-Rakic, P. S. (1995). Modulation of memory fields by dopamine D1 receptors in prefrontal cortex. Nature 376:572.
Wilson, F. A. W., Scalaidhe, S. P. O., & Goldman-Rakic, P. S. (1993). Dissociation of object and spatial processing domains in primate prefrontal cortex. Science 260:1955.
Yakovlev, V., Fusi, S., Amit, D. J., Hochstein, S., & Zohary, E. (1995). An experimental test of attractor network behavior in IT of the performing monkey. Israel J. of Medical Sciences 31:765.
Received June 7, 1996; accepted September 11, 1996.
Communicated by Todd Leen and Christopher Bishop
Noise Injection: Theoretical Prospects

Yves Grandvalet
Stéphane Canu
CNRS UMR 6599 Heudiasyc, Université de Technologie de Compiègne, Compiègne, France

Stéphane Boucheron
CNRS-Université Paris-Sud, 91405 Orsay, France
Noise injection consists of adding noise to the inputs during neural network training. Experimental results suggest that it might improve the generalization ability of the resulting neural network. A justification of this improvement remains elusive: describing analytically the average perturbed cost function is difficult, and controlling the fluctuations of the random perturbed cost function is hard. Hence, recent papers suggest replacing the random perturbed cost by a (deterministic) Taylor approximation of the average perturbed cost function. This article takes a different stance: when the injected noise is gaussian, noise injection is naturally connected to the action of the heat kernel. This provides indications on the relevance domain of traditional Taylor expansions and shows the dependence of the quality of Taylor approximations on global smoothness properties of the neural networks under consideration. The connection between noise injection and the heat kernel also enables controlling the fluctuations of the random perturbed cost function. Under the global smoothness assumption, tools from gaussian analysis provide bounds on the tail behavior of the perturbed cost. This finally suggests that mixing input perturbation with smoothness-based penalization might be profitable.

1 Introduction

Neural network training consists of minimizing a cost functional C(·) on the set of functions F realizable by multilayer perceptrons (MLPs) with fixed architecture. The cost C is usually the averaged squared error,

    C(f) = E_Z [ (f(X) − Y)² ],   (1.1)

where the random variable Z = (X, Y) describing the data is sampled according to a fixed but unknown law. Because C is not computable, an
empirically computable cost is then minimized in applications, using a sample z_ℓ = {z_i}_{i=1}^ℓ, with z_i = (x^i, y^i) ∈ R^d × R, gathered by drawing independent identically distributed data according to the law of Z. An estimate f̂_emp of the regression function f*(x) = arg min_{f ∈ L²} C(f) is given by the minimization of the empirical cost C_emp:

    C_emp(f) = (1/ℓ) Σ_{i=1}^ℓ (f(x^i) − y^i)².   (1.2)
The cost C_emp(·) is a random functional with expectation C(·). In order to justify the minimization of C_emp, the convergence of the empirical cost toward its expectation should be uniform with respect to f ∈ F (Vapnik, 1982; Haussler, 1992). When F is too large, this may not hold against some sampling laws. Hence practitioners have to trade the expressive power of F against the ability to control the fluctuations of C_emp(·). This suggests analyzing modified estimators that possibly restrict the effective search space.

One of these modified training methods consists of applying perturbations to the inputs during training. Experimental results in Sietsma and Dow (1991) show that noise injection (NI) can dramatically improve the generalization ability of MLPs. This is especially attractive because the modified minimization problem can be solved with the initial training algorithm. During NI, the original training sample z_ℓ is distorted by adding some noise η to the inputs x^i while leaving the target values y^i unchanged. During the kth epoch of the backpropagation algorithm, a new distortion η^k is applied to z_ℓ. The distorted sample is then used to compute the error and to derive the weight updates. A stochastic algorithm is thus obtained that eventually minimizes C^m_NIemp, defined as:

    C^m_NIemp(f) = (1/m) Σ_{k=1}^m (1/ℓ) Σ_{i=1}^ℓ (f(x^i + η^{k,i}) − y^i)²,   (1.3)
where the number of replications m is set under user control but finite. The average value of the perturbed cost is:

    C_NI(f) = E_η [ (1/ℓ) Σ_{i=1}^ℓ (f(x^i + η^i) − y^i)² ].   (1.4)
In this article, the noise η is assumed to be a centered gaussian vector with independent coordinates: E[η] = 0 and E[η η^T] = σ²I. The success of NI is intuitively explained by asserting that minimizing C_NI (see equation 1.4) ensures that similar inputs lead to similar outputs. It raises two questions: When should we prefer to minimize C_NI rather than C_emp? How does C^m_NIemp converge toward C_NI?
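As a concrete reading of equation 1.3, the following sketch distorts the inputs with fresh gaussian noise at every epoch while leaving the targets unchanged. It is a minimal illustration, not the authors' setup: a two-parameter tanh model trained by plain gradient descent stands in for an MLP trained by backpropagation, and all numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training sample (x^i, y^i), i = 1..l, in dimension d = 1.
l = 10
x = np.linspace(-0.5, 0.5, l)
y = (x < 0).astype(float)

a, b = 1.0, 0.0          # parameters of the model f(u) = tanh(a*u + b)
lr, sigma = 0.1, 0.2     # learning rate and noise standard deviation

for epoch in range(1000):
    eta = rng.normal(0.0, sigma, size=l)  # new distortion eta^k each epoch
    xt = x + eta                          # perturbed inputs, targets unchanged
    out = np.tanh(a * xt + b)
    # Gradient of the per-epoch cost (1/l) * sum((out - y)^2) w.r.t. (a, b).
    g = 2.0 * (out - y) * (1.0 - out**2) / l
    a -= lr * np.sum(g * xt)
    b -= lr * np.sum(g)
```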
Recently, several authors (Webb, 1994; Bishop, 1995; Leen, 1995; Reed, Marks, & Oh, 1995; An, 1995) have resorted to Taylor expansions to describe the impact of NI and to motivate the minimization of C_NI rather than C_emp. They not only try to provide a formal description of NI but also aim at finding a deterministic alternative to the minimization of C^m_NIemp.

This article takes a different approach: when the injected noise is gaussian, the Taylor expansion approach is connected to the action of the heat kernel, and the dependence of C_NI (see equation 1.4) on the noise variance is shown to obey the heat equation (see section 2.1). This clear connection between partial differential equations and NI provides some indications on the relevance domain of traditional Taylor expansions (see section 2.2). Finally, we analyze the simplified expressions that are assumed to be valid locally around optimal solutions (see section 2.3).

The connection between NI and the action of the heat kernel also enables control of the fluctuations of the random perturbed cost function. Under some natural global smoothness property of the class of MLPs under consideration, tools from gaussian analysis provide exponential bounds on the probability of deviation of the perturbed cost (see section 3.3). This suggests that mixing NI with smoothness-based penalization might be profitable (see section 3.4).

2 Taylor Expansions

2.1 Gaussian Perturbation and Heat Equation. To exhibit the connection between gaussian NI and the heat equation, let us define u as a function on R₊ × R^{ℓ×d} by:
    u(0, x) = (1/ℓ) Σ_{i=1}^ℓ (f(x^i) − y^i)²,   (2.1)

    u(t, x) = E_η [ (1/ℓ) Σ_{i=1}^ℓ (f(x^i + η^i) − y^i)² ].   (2.2)
Obviously, we have u(0, x) = C_emp(f) and u(t, x) = C_NI(f) when t is the noise variance σ². Each value of the noise variance defines a linear operator T_t that maps u(0, ·) onto u(t, ·). Moreover, since the sum of two independent gaussians with variances s and t is gaussian with variance s + t, the family (T_t)_{t≥0} defines a semigroup, the heat semigroup (cf. Ethier & Kurtz, 1986, for an introduction to semigroup operators). The function u obeys the heat equation (cf. Karatzas & Shreve, 1988, chap. 4, secs. 3, 4):

    ∂u/∂t = (1/2) Δ_xx u,   (2.3)
where Δ_xx is the Laplacian with respect to x, and where the initial conditions are defined by equation 2.1. For the sake of self-containment, a derivation of equation 2.3 is given in the appendix for square-integrable initial conditions. Let us denote by C_NI(f, t) the perturbed cost when the noise is gaussian with variance t; equation 2.3 yields:

    C_NI(f, t) = C_emp(f) + (1/2) ∫₀ᵗ Δ_xx C_NI(f, s) ds.   (2.4)
Therefore, C_NI can be investigated in the purely analytical framework of partial differential equations (under the gaussian assumption). The possibility of forgetting about the original probabilistic setting when dealing with neural networks follows from the Tikhonov uniqueness theorem (Karatzas & Shreve, 1988, chap. 4, sec. 4.3).

Observation 2.1. If F is a class of functions definable by some feedforward architecture using sigmoidal, radial basis functions, or piecewise polynomials as activation functions, and if the injected noise follows a gaussian law, then the perturbed cost C_NI is the unique function of the variance t that obeys the heat equation 2.3 with initial conditions defined in equation 2.1.

Any deterministic faithful simulation of NI would have to use numerical analysis software to integrate the heat equation and then run backpropagation software on the result. We do not recommend such a methodology, for efficiency reasons, but insist that stochastic representations of solutions of partial differential equations (PDEs) have proved useful in analysis (Karatzas & Shreve, 1988). Methods reported in the literature (such as finite differences and finite elements; cf. Press, Teukolsky, Vetterling, & Flannery, 1992) assume that the function defining the initial conditions has bounded support. This assumption, which makes sense in physics, is not valid in the neural network setting. Hence Monte-Carlo methods appear to be the ideal technique to solve the PDE problem raised by NI.

2.2 Taylor Expansion Validity Domain. Let C_Taylor(f) be the first-order Taylor expansion of C_NI(f) as a function of t = σ². For various kinds of noise and function classes F, it has been shown in Matsuoka (1992), Webb (1994), Grandvalet and Canu (1995), Bishop (1995), and Reed et al. (1995) that:

    C_Taylor(f) = C_emp(f) + (σ²/2) Δ_xx C_emp(f).   (2.5)
In the context of gaussian NI, this means that the Laplacian is the infinitesimal generator of the heat semigroup. To emphasize the distinction
between equations 2.5 and 2.3, one should stress that the heat equation is a correct description of the impact of gaussian NI not only in the small-variance limit but for any value of the variance. The validity domain of the Taylor approximation is restricted to those functions f such that

    lim_{σ²→0} [C_NI(f) − C_Taylor(f)] / σ² = 0.   (2.6)
Observation 2.2. A sufficient condition for the Taylor approximation to be valid is that C_emp belongs to the domain of the generator of the heat semigroup.

The empirical cost C_emp has to be a licit initial condition for the heat equation, which is always true in the neural network context (cf. conditions in Karatzas & Shreve, 1988, theorems 3.3, 4.2, chap. 4). The preceding statement is purely analytical and does not say much about the relevance of minimizing C_Taylor while training neural networks. This issue may be analyzed along several directions: Is the minimization of C_Taylor equivalent to the minimization of C_NI? Is the minimization of C_Taylor interesting in its own right? The second issue and related developments are addressed in Bishop (1995) and Leen (1995). The first issue cannot be settled in a positive way for arbitrary variances in general, but the principle of minimizing C_NI (and C_Taylor) would have to be definitively ruled out if the minima of C_NI (and C_Taylor) did not converge toward the minima of C_emp as t = σ² → 0. This cannot be deduced directly from equation 2.6, since it describes only simple convergence; uniform convergence over F is required. As we will vary t, let us denote by C_NI(f, t) (respectively, C_Taylor(f, t)) the perturbed cost (respectively, its Taylor approximation) of f when the noise variance is t. We get:

    C_NI(f, t) − C_Taylor(f, t) = (1/2) ∫₀ᵗ [Δ_xx C_NI(f, s) − Δ_xx C_emp(f)] ds.   (2.7)
If some uniform (over F and s ≤ t₀ < ∞) bound on |Δ_xx C_NI(f, s) − Δ_xx C_emp(f)| is available, then lim_{t→0} max_{f∈F} C_NI(f, t) − C_Taylor(f, t) = 0, and small manipulations using the triangular inequality reveal the convergence of the minima. The same argument shows the convergence of the minima of C_NI toward the minima of C_emp. If some upper bound is imposed on the weights of a sigmoidal neural network, those global bounds are automatically enforced.

Imposing bounds on weights is an important requirement to ensure the validity of the Taylor approximation. We intuitively expect the truncation of the Taylor series to be valid in the small-variance limit. But if f(x) = g(wx), where g is a parameterized function and w is a free parameter, then
Figure 1: Costs Cemp (top left), CTaylor (top right), and CNI (bottom left) in the space (a, b). These costs are compared on the bottom-right figures for two values of the parameter a: Cemp is solid, CTaylor is dotted, and CNI is dashed.
f(x + η) = g(wx + wη). The noise η injected in f appears to g as a noise wη, that is, as a noise of variance w²σ². Therefore, if g is nonlinear (i.e., f nonlinear), the Taylor expansion will fail for large w². A simple illustration is given in Figure 1. The sample contains 10 (x, y) pairs, where the x^i are regularly placed on [−0.5, 0.5], and y = 1_{x<0}.

[...] there exists t₀ > 0 such that for any t, 0 ≤ t ≤ t₀, to any critical point of C_emp there correspond a critical point of C_Taylor and a critical point of C_NI; moreover, those critical points are within distance Kt of each other, for some constant K that depends on the sample under consideration.

Proof. The argument extends Leen's (1995) suggestion. In the course of the argument, we will assume that C_NI and C_Taylor have partial derivatives with respect to t at t = 0; this can be enforced by taking a linear continuation for t < 0. Let us assume that F is parameterized by W weights. Let ∇_w C_NI(f, t) and ∇_w C_Taylor(f, t) denote the gradients of C_NI and C_Taylor with respect to the weight assignment for some value of f and σ² = t (note that at t = 0 the two values coincide with ∇_w C_emp(f)). If f• is some nondegenerate critical point of C_emp, then the matrix of partial derivatives of ∇_w C_NI(f, t) and ∇_w C_Taylor(f, t) with respect to weights and time has full rank; thus, by the implicit function theorem in its surjective form (Hirsch, 1976, p. 214), there exists a neighborhood of (f•, 0),
and diffeomorphisms φ and ψ defined in a neighborhood of (0, 0) ∈ R^{W+1}, such that φ(0, 0) = (f•, 0) (respectively, ψ(0, 0) = (f•, 0)) and ∇_w C_NI(φ(u, v)) = u (respectively, ∇_w C_Taylor(ψ(u, v)) = u). The gradients of φ and ψ at (0, 0) are of L² norm less than or equal to the norm of the matrix of partial derivatives of ∇_w C_emp(f•, 0). For sufficiently small values of t, φ(0, t) and ψ(0, t) define continuous curves of critical points of C_NI(·, t) and C_Taylor(·, t). As the number of critical points of C_emp is finite, we may assume that those curves do not intersect and that the norms of the gradients of φ(0, t) and ψ(0, t) with respect to t are upper bounded. The observation follows.

Remark 1. For sufficiently small t, the local minima of C_emp can be injected into the set of local minima of C_Taylor and C_NI. For C_Taylor, the reverse is true. The definition of C_Taylor obeys the same constraints as the definition of C_emp: it is defined using solely +, ×, constants, and exponentiation; hence, by Sontag's bound, for any t, except on a set of measure 0 of target values y, the number of critical points of C_Taylor is finite, and the argument that was used in the proof works in the other direction. For sufficiently small t, C_Taylor does not introduce new local minima. This argument cannot be adapted to C_NI, whose definition also requires an integration. Nevertheless, experimental results suggest that NI tends to suppress spurious local minima (Grandvalet, 1995).

Remark 2. The validity of the observation relies on the choice of activation functions. If F were constituted by the class of functions parameterized by α ≥ 0 mapping x to sin(αx), one could manufacture a sample such that C_emp and C_Taylor both have countably many nondegenerate minima, which are in one-to-one correspondence, and such that the convergence of the minima of C_NI toward the minima of C_emp as t tends toward 0 is not uniform.
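The weight-scaling argument and the toy example of Figure 1 can be reproduced numerically. The sketch below, under stated assumptions (10 points regularly placed on [−0.5, 0.5], targets y = 1_{x<0}, model f(x) = tanh(ax + b)), estimates C_NI by Monte-Carlo sampling and C_Taylor with a finite-difference Laplacian; for moderate |a| the two agree, while for large |a| the Taylor approximation degrades, as the text predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-0.5, 0.5, 10)
y = (x < 0).astype(float)

def c_emp_shift(a, b, h=0.0):
    # Empirical cost with all inputs shifted by a common offset h.
    return np.mean((np.tanh(a * (x + h) + b) - y) ** 2)

def c_taylor(a, b, t, h=1e-3):
    # The second central difference in a common shift equals Delta_xx C_emp
    # here, because each summand depends on a single input coordinate.
    lap = (c_emp_shift(a, b, h) - 2 * c_emp_shift(a, b)
           + c_emp_shift(a, b, -h)) / h**2
    return c_emp_shift(a, b) + 0.5 * t * lap

def c_ni(a, b, t, n=200000):
    eta = rng.normal(0.0, np.sqrt(t), size=(n, x.size))
    return np.mean((np.tanh(a * (x + eta) + b) - y) ** 2)

t = 0.01
for a in (-5.0, -100.0):   # moderate versus large weight
    print(a, c_emp_shift(a, 0.0), c_taylor(a, 0.0, t), c_ni(a, 0.0, t))
```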
3 Noise Injection and Generalization

The alleged improvement in generalization provided by NI remains to be analytically confirmed and explained. Most attempts to provide an explanation resort to the concepts of penalization and regularization. Usually penalization consists of adding a positive functional Ω(·) to the empirical risk. It is called a regularization if the sets {f : f ∈ F, Ω(f) ≤ α} are compact for the topology on F. Penalization and regularization are standard ways of improving regression and estimation techniques (e.g., Grenander, 1981; Vapnik, 1982; Barron, Birgé, & Massart, 1995). When F is the set of linear functions, NI has been recognized as a regularizer in the Tikhonov sense (Tikhonov & Arsenin, 1977). This is the only reported case, but it is enough to consider F as constituted by univariate degree-2 polynomials to realize that NI cannot generally be regarded as a
penalization procedure (Barron et al., 1995): C_NI − C_emp is not always positive. Thus, the improvement of generalization ability attributed to NI still requires some explanation.

3.1 Noise Injection and Kernel Density Estimation. An appealing interpretation connects NI with kernel estimation techniques (Comon, 1992; Holmström & Koistinen, 1992; Webb, 1994). Minimization of the empirical risk might be a poor or inefficient heuristic because the minima (if there are any) of C_emp could be far away from those of C. Recall that when trying to perform regression with sigmoidal neural networks, we have no guarantee that C_NI has a single global minimum, or even that the infimum of C_NI is realized (Auer, Herbster, & Warmuth, 1996). Hence the safest (and, to our knowledge, only) way to warrant the consistency of the NI technique is to get a global control on the fluctuations of C_NI(·) with respect to C(·), that is, on sup_F |C_NI(f) − C(f)|. The poor performance of minimization of the empirical risk could be due to the slow convergence of the empirical measure p̂_Z = (1/ℓ) Σ_i δ_{x^i, y^i} toward the sampling probability in the pseudometric induced by F (d_F(p̂, p̂′) = sup_{f∈F} |E_{p̂}(f(X) − Y)² − E_{p̂′}(f(X) − Y)²|). But minimizing C_NI is equivalent (up to an irrelevant constant factor) to minimizing

    (1/ℓ) Σ_{i=1}^ℓ E_{η,η′} ( f(x^i + η^i) − (y^i + η′) )²,   (3.1)

where η′ is a scalar gaussian independent of η. Minimizing C_NI consists of minimizing the empirical risk against a smoothed version of the empirical measure: the Parzen-Rosenblatt estimator of the density (for details on the latter, see Devroye, 1987). The connection with gaussian kernel density estimation and regularization can thus be established in the perspective described in Grenander (1981). Because the gaussian kernel is the fundamental solution of the heat equation, the smoothed density that defines equation 3.1 is obtained by running the heat equation using the empirical measure as an initial condition (this is called the Weierstrass transform in Grenander, 1981). It is then tempting to explain the improvement in generalization provided by NI using upper bounds on the rate of convergence of the Parzen-Rosenblatt estimator. Though conceptually appealing, this approach might be disappointing; it actually suggests solving a density estimation problem as a subproblem of a regression problem, and the former is much harder than the latter (Vapnik, 1982; Devroye, 1987).

3.2 Consistency of NI. To assess the consistency of minimizing C_NI, we would like to control C_NI(·) − C(·). A reasonable way of analyzing the fluctuations of C_NI(·) − C(·) consists of splitting it in two summands and
bounding each summand separately:

    |C(f) − C_NI(f)| ≤ |C(f) − E_{p_Z} C_NI(f)| + |C_NI(f) − E_{p_Z} C_NI(f)|.   (3.2)
The first summand bounds the bias induced by taking the smoothed version of the squared error. It is not a stochastic quantity; it should be analyzed using tools from approximation theory. This first summand is likely to grow with t. On the contrary, the second term captures the random deviation of C_NI with respect to its expectation. Taking advantage of the fact that C_NI depends smoothly on the sample, it may be analyzed using tools from empirical process theory (cf. Ledoux & Talagrand, 1991, chap. 14). This second summand is expected to decrease with t.

3.3 Bounding Deviations of Perturbed Cost. As practitioners are likely to minimize C^m_NIemp rather than C_NI, another difficulty has to be faced: controlling the fluctuations of C^m_NIemp with respect to C_NI. For a fixed sample, C^m_NIemp is a sum of independent random functions with expectation C_NI; in principle, it could also be analyzed using empirical process techniques. In the case of sigmoidal neural networks, the boundedness of the summed random variables ensures that for each individual f, C^m_NIemp(f) converges almost surely toward C_NI(f) as m → ∞ and that √m (C^m_NIemp(f) − C_NI(f)) converges in distribution toward a gaussian random variable. But using empirical processes would not pay tribute to the fact that the sampling process defined by NI is under user control, and in our case gaussian. Sufficiently smooth functions of gaussian vectors actually obey nice concentration properties, as illustrated in the following theorem:

Theorem 3.1 (Tsirelson, 1976; cf. Ledoux & Talagrand, 1991). Let X be a standard gaussian vector on R^d. Let f be a Lipschitz function on R^d with Lipschitz constant smaller than L; then:

    P{ |f(X) − E f(X)| > r } ≤ 2 exp(−r²/2L²).

3.3.1 Pointwise Control of the Fluctuations of C^m_NIemp. If F is constituted by a class of sigmoidal neural networks, the differentiability assumption for the square loss as a function of the inputs is automatically fulfilled.
Assumption 1. In the sequel, we will assume that all weights defining functions in F are bounded, so that F is uniformly bounded by some constant M₀ and the gradient of f ∈ F is smaller than some constant L. Let M be a constant greater than M₀ + max_i y^i.

Then if (1/(ℓm)) Σ_{k=1}^m Σ_{i=1}^ℓ (f(x^i + η^{k,i}) − y^i)² is regarded as a function on R^{ℓmd} provided with the Euclidean norm, its Lipschitz constant is upper bounded by 2LM/√(ℓm).
Applying the preceding theorem to C^m_NIemp(·) implies the following observation:

Observation 3.1. If F satisfies assumption 1, then:

    P_η{ |C^m_NIemp(f) − C_NI(f)| > r } ≤ 2 e^{−mℓr²/(8tL²M²)}.
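Observation 3.1 can be probed by simulation for a single fixed network. The sketch below, with an arbitrary smooth stand-in for f and illustrative values of t, m, and the sample, estimates the tail probabilities P_η{|C^m_NIemp(f) − C_NI(f)| > r} from repeated independent noise draws; they decay quickly in r, in qualitative agreement with the bound.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-0.5, 0.5, 10)   # hypothetical sample, as before
y = (x < 0).astype(float)
t, m = 0.05, 50                  # noise variance and number of replications

def f(u):                        # a fixed smooth stand-in for the network
    return np.tanh(3.0 * u)

def c_ni_emp(reps):
    eta = rng.normal(0.0, np.sqrt(t), size=(reps, x.size))
    return np.mean((f(x + eta) - y) ** 2)

c_ni = c_ni_emp(500000)          # reference value of C_NI (large Monte-Carlo)

draws = np.array([c_ni_emp(m) for _ in range(5000)])
for r in (0.005, 0.01, 0.02):
    print(r, np.mean(np.abs(draws - c_ni) > r))
```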
Remark. The dependence of the upper bound on the Lipschitz constant of C_NI with respect to x cannot be improved, since theorem 3.1 is tight for linear functions, but the combined dependence on M and L has to be assessed for sigmoidal neural networks. For those networks, large inputs tend to generate small gradients; hence the upper bound provided here may not be tight.

3.3.2 Global Control on C^m_NIemp − C_NI. Pointwise control of C^m_NIemp is insufficient, since we need to compare the minimization of C^m_NIemp with respect to the minimization of C_NI. We may first notice that inf_{f∈F} C^m_NIemp(f) is a biased estimator of inf_{f∈F} C_NI(f):

    E_η inf_{f∈F} C^m_NIemp(f) ≤ inf_{f∈F} C_NI(f).   (3.3)

Second, we may notice that inf_{f∈F} C^m_NIemp(f) is a concave function of the empirical measure defined by η; hence, it is a backward supermartingale (up to some measurability conditions that are enforced for fixed-architecture neural networks) and thus converges almost surely toward a random variable as m → ∞. If C^m_NIemp is to converge toward C_NI, inf_{f∈F} C^m_NIemp(f) is due to converge toward inf C_NI. To go beyond this qualitative picture, we need to get a global control on the fluctuations of sup_{f∈F} |C^m_NIemp(f) − C_NI(f)|. Let g denote the function:

    η ↦ sup_{f∈F} | (1/(mℓ)) Σ_{i,k} (f(x^i + η^{k,i}) − y^i)² − C_NI(f) |.

If F satisfies assumption 1, then g is also Lipschitz, with a coefficient less than 2LM/√(mℓ). Let us denote E_η sup_{f∈F} |C^m_NIemp(f) − C_NI(f)| by E. E may be finite or infinite; it is a function of t, ℓ, and m.
Assumption 2. E is finite.

Theorem 3.1 also applies to g, and we get the following concentration result:

Observation 3.2. If F is a class of bounded Lipschitz functions satisfying assumptions 1 and 2, then

    P_η{ sup_{f∈F} |C^m_NIemp(f) − C_NI(f)| > E + r } ≤ 2 e^{−mℓr²/(8tL²M²)}.   (3.4)
This result is only partial in the absence of a good upper bound on E. It is possible to provide explicit upper bounds on E using metric entropy techniques and the subgaussian behavior of the C^m_NIemp process described by observation 3.1.

Using observation 3.2, we may partially control the deviations of approximate minima of C^m_NIemp with respect to the infimum of C_NI:

Observation 3.3. If F satisfies assumptions 1 and 2, and if for any sample f• satisfies C^m_NIemp(f•) < inf_{f∈F} C_NI(f) + ε, then

    E C_NI(f•) < inf_{f∈F} C_NI(f) + 2E + ε,

and

    P_η{ C_NI(f•) ≥ inf_{f∈F} C_NI(f) + ε + 2(E + r) } ≤ 2 e^{−mℓr²/(8tL²M²)}.

If ε and E can be taken arbitrarily close to 0 by proper tuning of m and t, the corollary shows that minimizing C^m_NIemp is a consistent way of approximating the minimum of C_NI.

3.4 Combining Global Smoothness Prior and Input Perturbation. The derivative-based regularizers proposed in Bishop (1995) and Leen (1995) are based on sums of terms evaluated at a finite set of data points. It has been argued that they are not global smoothing priors. At first sight, it might seem that NI escapes this difficulty, but the previous analysis, and particularly observation 3.1, rough as it may be, shows that it may be quite hard to control NI in the absence of any smoothness assumption. Thus it seems cautious to supplement NI or data-dependent derivative-based regularizers with global smoothness constraints.

For sigmoidal neural networks, weight decay can be combined with NI. The sum M of the absolute values of the weights provides an upper bound on the output values. Let L be the product over the different layers of the
sum of the absolute values of the weights in those layers; L is an upper bound on the Lipschitz constant of the network. Penalizing by L + M would restrict the search space so that C_NI is well behaved. Such combinations of penalizers have been reported in the literature to be successful (Plaut, Nowlan, & Hinton, 1986; Sietsma & Dow, 1991). However, it seems quite difficult to compare analytically the respective advantages of penalization alone and penalization supplemented with NI. This would require establishing lower rates of convergence for one of the techniques; such rates are not available to our knowledge.

4 Conclusion and Perspective

Under a gaussian distribution assumption, NI can be cast as a stochastic alternative to a regularization of the empirical cost by a partial differential equation. This is more likely to facilitate a rigorous analysis of the NI heuristic than to be a source of efficient algorithms. It allows the assignment of a precise meaning to the concept of validity of Taylor approximations of the perturbed cost function. For sigmoidal neural networks, and more generally for classes of functions that can be defined using exponentials and polynomials, those Taylor approximations are valid in the sense that minimizing C_Taylor and minimizing C_NI turn out to be equivalent in the infinitesimal-variance limit. The practical relevance of this equivalence remains questionable, since practical uses of NI require finite noise variance. The relationship between NI and the heat equation enables establishing a simple bridge between regularization and the transformation of C_emp into C_NI: the contractivity of the heat semigroup ensures that the regularized version of the effective cost is easier to estimate than the original effective cost. Finally, because practical applications are more likely to minimize empirical versions C^m_NIemp of C_NI than C_NI itself, we resort to results from the theory of stochastic calculus to provide bounds on the tail probability of deviations of C^m_NIemp.

Despite the clarification provided by the heat connection, this is far from being the last word. A lot of quantitative work needs to be done if theory is to meet practice. The global bounds provided in section 3.3 need to be complemented by a local analysis focused on the neighborhood of the critical points of C_emp. Ultimately this should provide rates of convergence for specific F and classes of target dependencies E(Y|x). Because practitioners often use stochastic versions of backpropagation, it will be interesting to see whether the approach advocated here can refine the results presented in An (1995).

NI is a naive and rather conservative way of trying to enforce translation invariance while training neural networks. Other, and possibly more interesting, forms of invariance under transformation groups deserve to be examined in the NI perspective, as proposed by Leen (1995). Transformation groups can often be provided with a Riemannian manifold structure on which heat kernels may be defined; thus it is appealing to check
whether the generalizations of the tools presented here (the theory of diffusion on manifolds; cf. Ledoux & Talagrand, 1991, for some results) can facilitate the analysis of NI versions of tangent-prop–like algorithms.

Appendix: From Heat Equation to Noise Injection

This appendix rederives the relation between the heat equation and NI using Fourier analysis techniques. To put the idea in the simplest setting, the problem is treated in one dimension. The heat equation under concern is:

    ∂u/∂t = (1/2) Δ_xx u,
    u(0, x) = (1/ℓ) Σ_{i=1}^ℓ (f(x^i) − y^i)².

The Fourier transform of the heat equation is:

    ∂û(t, ξ)/∂t = −(1/2) ξ² û(t, ξ).   (A.1)

This is an ordinary differential equation whose solution is:

    û(t, ξ) = K(ξ) exp(−ξ²t/2),   (A.2)

where the integration constant K(ξ) is the Fourier transform of the initial condition; thus:

    û(t, ξ) = û(0, ξ) exp(−ξ²t/2).   (A.3)

The inverse Fourier transform maps the product into a convolution; this entails:

    u(t, x) = u(0, x) ∗ F⁻¹(exp(−ξ²t/2)).   (A.4)

Thus, we recover the definition of C_NI:

    C_NI(f, t) = C_emp(f) ∗ N(t),   (A.5)

where N(t) is the density of a centered normal distribution with variance t.
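Equation A.5 can be checked numerically: convolving C_emp, viewed as a function of a common input shift (which suffices here, because each summand involves a single input), with the heat kernel N(t) should reproduce the Monte-Carlo noise-injected cost. A minimal sketch under illustrative choices of f, the sample, and t, none of which come from the text:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-0.5, 0.5, 10)   # hypothetical sample
y = (x < 0).astype(float)
t = 0.05                         # noise variance

def phi(u):                      # per-point squared error, vectorized
    return (np.tanh(3.0 * u) - y) ** 2

# Left side of (A.5): Monte-Carlo noise injection with variance t.
eta = rng.normal(0.0, np.sqrt(t), size=(200000, x.size))
c_ni_mc = np.mean(phi(x + eta))

# Right side: convolution of C_emp with the heat kernel N(t) by quadrature.
s = np.linspace(-5 * np.sqrt(t), 5 * np.sqrt(t), 2001)
kernel = np.exp(-s**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
vals = np.array([np.mean(phi(x + si)) for si in s])
c_ni_conv = np.sum(vals * kernel) * (s[1] - s[0])

print(c_ni_mc, c_ni_conv)        # the two estimates agree closely
```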
Acknowledgments

We thank two anonymous referees for helpful comments on a draft of this article, particularly for suggesting that NI or derivative-based regularizers should be combined with global smoothers. Part of this work was done while Stéphane Boucheron was visiting the Institute for Scientific Interchange in Torino. Boucheron has been partially supported by the German-French PROCOPE program 96052.

References

An, G. (1995). The effects of adding noise during backpropagation training on generalization performance. Neural Computation 8:643–674.
Auer, P., Herbster, M., & Warmuth, M. K. (1996). Exponentially many local minima for single neurons. Advances in Neural Information Processing Systems 8:316–322.
Barron, A., Birgé, L., & Massart, P. (1995). Risk bounds for model selection via penalization (Tech. Report). Université Paris-Sud. http://www.math.upsud.fr/stats/preprints.html.
Bishop, C. (1995). Training with noise is equivalent to Tikhonov regularization. Neural Computation 7(1):108–116.
Comon, P. (1992). Classification supervisée par réseaux multicouches. Traitement du Signal 8(6):387–407.
Devroye, L. (1987). A Course in Density Estimation. Basel: Birkhäuser.
Ethier, S., & Kurtz, T. (1986). Markov Processes. New York: Wiley.
Grandvalet, Y. (1995). Effets de l'injection de bruit sur les perceptrons multicouches. Unpublished doctoral dissertation, Université de Technologie de Compiègne, Compiègne, France.
Grandvalet, Y., & Canu, S. (1995). A comment on noise injection into inputs in back-propagation learning. IEEE Transactions on Systems, Man, and Cybernetics 25(4):678–681.
Grenander, U. (1981). Abstract Inference. New York: Wiley.
Haussler, D. (1992). Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation 100:78–150.
Hirsch, M. (1976). Differential Topology. New York: Springer-Verlag.
Holmström, L., & Koistinen, P. (1992). Using additive noise in back-propagation training. IEEE Transactions on Neural Networks 3(1):24–38.
Karatzas, I., & Shreve, S. (1988). Brownian Motion and Stochastic Calculus (2nd ed.). New York: Springer-Verlag.
Ledoux, M., & Talagrand, M. (1991). Probability in Banach Spaces. Berlin: Springer-Verlag.
Leen, T. K. (1995). Data distributions and regularization. Neural Computation 7:974–981.
Matsuoka, K. (1992). Noise injection into inputs in backpropagation learning. IEEE Transactions on Systems, Man, and Cybernetics 22(3):436–440.
Plaut, D., Nowlan, S., & Hinton, G. (1986). Experiments on learning by back propagation (Tech. Rep. CMU-CS-86-126). Pittsburgh, PA: Carnegie Mellon University, Department of Computer Science.
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (1992). Numerical Recipes in C. Cambridge: Cambridge University Press.
Reed, R., Marks II, R., & Oh, S. (1995). Similarities of error regularization, sigmoid gain scaling, target smoothing and training with jitter. IEEE Transactions on Neural Networks 6(3):529–538.
Sietsma, J., & Dow, R. (1991). Creating artificial neural networks that generalize. Neural Networks 4(1):67–79.
Sontag, E. D. (1996). Critical points for least-squares problems involving certain analytic functions, with applications to sigmoidal neural networks. Advances in Computational Mathematics 5:245–268.
Tikhonov, A. N., & Arsenin, V. Y. (1977). Solution of Ill-Posed Problems. Washington, DC: W. H. Wilson.
Vapnik, V. (1982). Estimation of Dependences Based on Empirical Data. New York: Springer-Verlag.
Webb, A. (1994). Functional approximation by feed-forward networks: A least-squares approach to generalization. IEEE Transactions on Neural Networks 5(3):363–371.
Received February 1, 1996; accepted September 4, 1996.
Communicated by Marwan Jabri
The Faulty Behavior of Feedforward Neural Networks with Hard-Limiting Activation Function

Zhiyu Tian
Department of Automation, Tsinghua University, Beijing 100084, People's Republic of China
Ting-Ting Y. Lin
Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, California 92093-0407, U.S.A.
Shiyuan Yang
Shibai Tong
Department of Automation, Tsinghua University, Beijing 100084, People's Republic of China
With the progress in hardware implementation of artificial neural networks, the ability to analyze their faulty behavior has become increasingly important to their diagnosis, repair, reconfiguration, and reliable application. The behavior of feedforward neural networks with hard-limiting activation function under stuck-at faults is studied in this article. It is shown that stuck-at-M faults have a larger effect on the network's performance than mixed stuck-at faults, which in turn have a larger effect than stuck-at-0 faults. Furthermore, the fault-tolerant ability of the network decreases as its size increases, for the same percentage of faulty interconnections. The results of our analysis are validated by Monte-Carlo simulations.

1 Introduction

Artificial neural networks (ANNs) are generally claimed to be intrinsically reliable and fault tolerant in the neural network literature (Hecht-Nielsen, 1988; Lisboa, 1992). With advancing research on their hardware implementation, the analysis of ANNs' faulty behavior becomes increasingly important for their diagnosis, repair, and reconfiguration. Faulty characteristics of feedback ANNs were discussed in Nijhuis and Spaanenburg (1989); Belfore and Johnson (1991); Chung and Krile (1992); and Protzel, Palumbo, and Arras (1993). The effect of weight disturbances on the performance degradation of feedforward ANNs has been examined by Stevenson, Winter, and Widrow (1990); Choi and Choi (1992); and Piche (1992). However, none of the feedforward analyses adopted the stuck-at fault
model, which has been a popular model in the analysis of feedback network interconnections. The use of the stuck-at fault model in characterizing feedforward ANNs is the focus here.

In this article, the modeled ANN has L layers, with the first layer being the input layer, the Lth layer being the output layer, and the L − 2 layers in between being the hidden layers. Each node in the output layer and the hidden layers receives the outputs of all the nodes in the preceding layer as its inputs and produces an output as follows:

    y_j = Sgn( Σ_{i=1}^{n_{ℓ−1}} w_i x_i + θ_j ),   j = 1, . . . , n_ℓ,   (1.1)
where n_ℓ is the number of nodes in the ℓth layer (ℓ = 2, . . . , L); x_i (i = 1, . . . , n_{ℓ−1}) is the ith input of the node, which takes only two values, +1 and −1; w_i (i = 1, . . . , n_{ℓ−1}) is the weight of the ith input connection, with values in the range [−M, +M]; and θ is the bias, with values in the same range as the w_i. When n_{ℓ−1} is large, the effect of θ can often be neglected. Three types of stuck-at faults are considered: stuck-at-M (or stuck-at-(−M)), stuck-at-0, and a mixture of s-a-M (s-a-(−M)) and s-a-0. Specifically, interconnection faults are represented by either M (−M) or 0 replacing the corresponding w_i x_i. Our work focuses on the general effect of faults at the output; the inputs and weights are random variables assumed to be independent identically distributed (iid), with the means of the inputs and weights equal to 0. These assumptions reflect the fact that these neural networks can be used in various applications, so assuming any specific set of values for inputs and weights would not be appropriate. The network is evaluated using, as its performance measure, the probability that the output is correct when faults occur: the higher the probability, the more fault tolerant the network.

Section 2 presents the effect of interconnection faults on the performance of an individual neuron. Section 3 shows the effect of erroneous inputs on the performance of an individual neuron. Section 4 combines the effects of both erroneous inputs and faulty interconnections on the performance of an individual neuron. The behavior of faulty feedforward networks with a hard-limiting activation function is presented in section 5, and the performance of the network when s-a-M, s-a-(−M), and s-a-0 faults coexist is analyzed in section 6. Sections 7 and 8 conclude the article.

2 The Effect of Interconnection Faults on the Performance of an Individual Neuron

We begin with the fault-free case, where the output of a neuron, neglecting the bias θ, is:

    y = { 1 if Σ_{i=1}^n w_i x_i ≥ 0;  −1 if Σ_{i=1}^n w_i x_i < 0 },   (2.1)
Faulty Behavior of Feedforward Neural Networks
1111
where n is the number of inputs. In case of n f faults, let F be the set of faulty interconnections and S be the set of normal ones; the output becomes:
y0 =
1 −1
P i∈S P
wi xi + n f M ≥ 0, (2.2)
wi xi + n f M < 0.
i∈S
The output error, 1y, defined as the difference of the respective outputs is:
0
1y = y − y =
2 −2 0
P
wi xi + n f M ≥ 0,
i∈S
P
wi xi + n f M < 0,
n P i=1 n P
wi xi < 0, wi xi ≥ 0,
(2.3)
i=1
i∈S
otherwise.
P Pn Let event P A be i=1 wi xi i∈F wi xi , we have i=1 wi xi < i∈S wi xi + n f M, which means that B implies A (i.e., B ⊂ A). We therefore have Pr {AB} = Pr {B}
(2.4)
Pr {1y = 2} = Pr {AB} = Pr {A} − Pr {AB} = Pr {A} − Pr {B}
(2.5)
Pr {1y = −2} = Pr {A B} = Pr {B} − Pr {AB} = 0,
(2.6)
where Pr {•} is the occurrence probability of Event •. Plugging in event B, we obtain )
( Pr {B} = Pr
X
wi xi < −n f M .
(2.7)
i∈S
We assume that wi and xi (i = 1, . . . , n) are iid random variables; the mean and the variance of wi are 0 and σw2 , respectively. When the means of the weights and inputs are not zero, the static offset will not affect the result of our analysis. If xi is either +1 or −1, E[x2i ] = 1 (i = 1, . . . , n). Furthermore, 2 is: the mean of wi xi would be 0, and its variance σwx 2 σwx = E[(wi xi )2 ] − {E[wi xi ]}2 = E[w2i ]E[x2i ]
= σw2 + (E[wi ])2 = σw2
(2.8)
To solve for Pr {B}, we use theP central limit theorem, which implies that when p n − n f is sufficiently large, ( i∈S wi xi )/( n − n f σwx ) approximates normal
1112
Zhiyu Tian, Ting-Ting Y. Lin, Shiyuan Yang, and Shibai Tong
distribution N(0, 1). Equation 2.7 can now be written as: P wi xi M n f i∈S i and Mi will exactly model the network’s behavior. Proof. Suppose Mi is deterministic for some i and consider the input string 6 = σ1 , . . . , σn , with σj ∈ {0, 1} for 1 ≤ j ≤ n. We must show that 6 is accepted by Mi if and only if it is accepted by the network. Recall that x0 denotes the initial point, and let xj = wσj (xj−1 ), for 1 ≤ j ≤ n. Recall that the states of Mi are indexed by the elements Uk of subdivision Si , and for 0 ≤ j ≤ n let U (j) be that subset Uk that contains xj . Then in particular U (0)
Analysis of Dynamical Recognizers
1131
contains x0 and is therefore the initial state of Mi . Since xj = wσj (xj−1 ) we have (j) U (j−1) ∩ w−1 6= ∅, σj U
so U (j−1) is connected to U (j) with an edge labeled by σj . As Mi is deterministic, this must be the only such edge emanating from U (j−1) . By induction on j, it follows that Mi must be put into state U (j) when string σ1 , . . . , σj is input. In particular, once the whole string has been input, Mi will be in state U (n) , so 6 is accepted by Mi
⇔ ⇔ ⇔
U (n) ⊆ Uacc xn ∈ Uacc 6 is accepted by the network.
To show that Mi+1 = Mi and no further subdivisions are made, suppose to the contrary that there are two points x1 and x2 that lie in the same element Uj −1 of the partition Si , but in different elements of Si+1 = Si ∨w−1 0 (Si )∨w1 (Si ). Then it must be for σ = either 0 or 1 that wσ (x1 ) lies in some Uk ∈ Si while wσ (x2 ) lies in Ul with Uk 6= Ul . But then Uj would be connected to both Uk and Ul with edges labeled by σ , contradicting the assumption that Mi was deterministic. If the induced language is not regular, the machines Mi in theory will continue to grow indefinitely in complexity. Of course in practice we cannot implement the above procedure exactly for our trained networks but must instead use a discrete approximation. Following the analysis of Giles et al. (1992) we can approximate A0 computationally along the lines of equation 3.1 as follows. First, “discretize” the system at resolution r by dividing the state space into r d points in a regular lattice and approximating w0 and ˆ is the w1 with functions wˆ 0 and wˆ 1 that act on these points such that wˆ σ (x) ˆ Each Yi will then be represented by a finite set nearest lattice point to wσ (x). Yˆ i , and the condition Yˆ i+1 ⊇ Yˆ i guarantees that the procedure will terminate to produce a discrete approximation Aˆ 0 for A0 . In discrete form, the above procedure may be equated with the Hopcroft minimization algorithm (Hopcroft and Ullman, 1979), or the method of εmachines (Crutchfield & Young, 1990; Crutchfield, 1994), and was first used in the present context by Giles et al. (1992). Using small values of r, their aim was to extract an FSA that might not model the network’s behavior exactly but would model it closely enough to classify the training data faithfully. We had in mind a different purpose: to test empirically whether the network is regular (modeling an FSA) and to analyze its behavior in more fine-grained detail. The minimization procedure described above is bound to terminate in finite time due to the finite resolution of the representation.
1132
Alan D. Blair and Jordan B. Pollack
Table 1: Epochs to Learning (L) and Regularity (R) for the Seven Tomita Languages. Network
N1 L/R
N2 N3 L
L
N4 L
N5 R
L/R
N6 L
N7 R
L
Epochs to learning 200 600 400 200 — 800 200 — 600 Epochs to regularity 200 — — — 400 800 — 600 — 200 × 200 FSA size 2 341 272 2052 4 21 7123 3 264 500 × 500 FSA size 2 881 808 7248 4 7 39200 3 806 Tomita’s FSA size 2 3 5 — 4 4 — 3 5
However, as the resolution is scaled up, the size of the largest FSA generated should in principle stabilize in the case of regular networks and grow rapidly in the case of nonregular ones. Therefore, as an empirical test of regularity, we perform these computations at two different resolutions (200 × 200 and 500 × 500) and report (in rows 4 and 5 of Table 1) the size (number of states) of the largest FSA generated. If the largest FSAs generated at the different resolutions are the same, we take this as an indication that the network is regular; if they are vastly different, that it is not regular. This allows us to probe the nature of the network’s behavior as the training progresses. In some cases the network undergoes a phase transition to regularity at some point in its training, and we are able to measure with considerable precision the time at which this phase transition occurs. It should be stressed, however, that these discretized computations do not constitute a formal proof but merely provide strong empirical evidence as to the regularity or otherwise of the network and its induced language. More rigorous results may be found in Casey (1996, 1993), who described the general dynamical properties that a neural network or any dynamical system has to have in order to model an FSA robustly. Tino, ˇ Horne, Giles, & Collingwood (1995) also made a detailed analysis of networks with a two-dimensional state space under certain assumptions about the sign and magnitude of the weights. It is important also to note that the regularity we are testing for is a property of the trained network and not an intrinsic property of the input strings, since the (necessarily finite) training data may be generalized in an infinite number of ways, each producing an equally valid induced language that may be regular or nonregular. Good symbolic algorithms already exist for finding a regular language compatible with given input data (Trakhtenbrot & Barzdin’, 1973; Lang, 1992). Our purpose is rather to analyze the kinds of languages that a dynamical system such as a neural network is likely to come up with when trained on those data. We do not claim that our methods are efficient when scaled up to higher dimensions. Rather, we hope that a detailed analysis of networks in low dimensions will lead to a better understanding of the phenomenon of regular versus nonregular network behavior in general.
Analysis of Dynamical Recognizers
1133
Finally, we remark that the functions w0 and w1 map X continuously into itself and as such define an iterated function system (IFS) (Barnsley, 1988) as noted in Kolen (1994). The attractor A of this IFS may be defined as follows: 1. Z0 = X . 2. For i ≥ 0, Zi+1 = w0 (Zi ) ∪ w1 (Zi ) T 3. A = i≥0 Zi .
[by induction Zi+1 ⊆ Zi ].
As with A0 , we can find a discrete approximation Aˆ for A at any given resolution. We have found empirically that the convergence to the attractor was very rapid for our trained networks so that Aˆ 0 was equal to a subset of Aˆ plus a very small number of “transient” points. For this reason we call A0 a subattractor of the IFS. If w0 and w1 were contractive maps, A0 would contain the whole of A (Barnsley, 1988), but in our case they are generally not contractive, so Aˆ 0 may contain only part of Aˆ . The close relationship between Aˆ 0 and Aˆ should make it possible to analyze the general family of languages accepted by the network as x0 varies, though we do not pursue this line of inquiry here. 4 Results Networks with the architecture described in section 2 were trained to recognize formal languages using backpropagation through time (Williams & Zipser, 1989; Rumelhart, Hinton, & Williams, 1986), with a modification similar to Quickprop (Fahlman, 1989)1 and a learning rate of 0.03. The weights were updated in batch mode, and the perceptron weights Pj were Pd constrained by rescaling to the region where j=1 Pj 2 ≤ 1. In contrast to Pollack (1991) where the backpropagation was truncated, we backpropagated through all levels of recurrence, as in Williams and Zipser (1989). In addition, we allowed the initial point x0 to vary as part of the backpropagation, a modification that was also developed in independent work by Forcada and Carrasco (1995). Our seven groups of training strings (see appendix) were copied exactly from Tomita (1982) except that we did not include the empty string in our training sets. Rows 4 and 5 of Table 1 show the size (number of states) of the largest FSA generated from the trained networks at two different resolutions 1
Specifically, the cost function we used was
³
1+z 1 E = − (1 + s)2 log 2 1+s
´
−
³
1−z 1 (1 − s)2 log 2 1−s
´
+ s(z − s)
(where z is the actual output and s the desired output), which leads to a delta rule of δ = (1 − sz)(s − z).
1134
Alan D. Blair and Jordan B. Pollack
Figure 2: Subattractor, weights, and equivalent FSA for network N1.
by the methods described in section 3. For comparison, row 6 shows the size of the minimal FSAs that Tomita found by exhaustive search. A network can be said to have learned all its training data once the output is positive for all accept strings and negative for all reject strings (a max error of 1.0), but it makes sense to aim for a lower maximum error in order to provide a safety margin. For network N5, the maximum error reached its lowest value of 0.46 after 800 epochs. The other networks were trained until they achieved a maximum error of less than 0.4, the number of epochs required being shown in row 2 (epochs to learning). Since the test for regularity is computationally intensive, we did not test our networks at every epoch but only at intervals of 200 epochs. For network N1, the derived FSA is the same for both resolutions, providing evidence that the induced language is regular. For networks N2, N3, N4, N6, and N7 on the other hand, the maximum FSA grows dramatically as we increase the resolution, suggesting that the induced language is not regular. We continued to train these networks to see if they would later become regular and found that networks N4 and N6 became regular after 400 and 600 epochs, respectively, as indicated in row 3 (epochs to regularity), while networks N2, N3, and N7 remained nonregular even after 10,000 epochs. For N5, the size of the maximum FSA actually decreased with higher resolution from 21 to 7, suggesting that the induced language is regular but that high resolution is required to detect its regularity. As an illustration, Figure 2 shows the subattractor for N1, which learned its training data after 200 epochs. The axes measure the activations of hidden nodes x1 and x2 , respectively. The initial point x0 is indicated by a cross (upper-right corner). Also shown is the dividing line between accept points and reject points. Roughly speaking, w0 squashes the entire state space into
Analysis of Dynamical Recognizers
1135
Figure 3: The phase transition of N4.
the lower-left corner, while w1 flattens it out onto the line x1 = x2 . The dividing line separates A0 into two pieces: Uacc in the upper-right corner and Urej in the lower left. w1 maps each piece back into itself, while w0 maps both pieces into Urej . The ε-machine method thus produces the two-state FSA shown at the right. (Final states are indicated by a double circle, the initial state by an open arrow.) In this simple case the FSA can be shown to model the network’s behavior exactly. Of course, N1 was trained on an extremely easy task that could have been learned with even fewer weights. Figure 3 shows network N4 captured at its phase transition. After 371 epochs (left) it has learned the training set, but the induced language is not regular. At the next epoch (right), it has become regular. Although the weights have been shifted by only 0.004 units in euclidean space, the dynamics of the two networks are quite different. Applied to the left network at 500 × 500 resolution, our analysis produced a series of FSAs of maximum size 2564. Applied to the right network, it terminated after three steps to produce the same four-state FSA that Tomita (1982) found by exhaustive search. The states of the FSA correspond to the following four subsets of A0 : U1 in the upper-left corner, U2 around (−0.6, −0.9), U3 barely visible at (0.4, −1.0), and U4 in the lower-right corner. The details of the successive subdivisions are outlined in Figure 4.
1136
Figure 4: The analysis of N4.
Alan D. Blair and Jordan B. Pollack
Analysis of Dynamical Recognizers
1137
Figure 5: N2, which is not regular, and N5, which is regular but with no obvious clusters.
The above examples would also be amenable to previous, clusteringbased approaches because of the way they “partition [their] state space into fairly well-separated, distinct regions or clusters” as hypothesized by Giles et al. (1992). Those shown in Figure 5 seem to be trickier. The subattractor for N5 (right) appears to be a bunch of scattered points with no obvious clustering, yet our fine-grained analysis was able to extract from it a seven-node FSA—a little larger than the minimal FSA of four nodes Tomita found. Network N2 (left) seems to have induced a nonregular language. Figure 6 shows the first four iterations of analysis applied to it. Note that each FSA refines the previous one, bringing to the light more details. Much can be learned about the induced language by examining these finite state approximations. For example, we can infer from M2 that the network rejects all strings ending in 1 and from M3 that it accepts all nonempty strings of the form (10)∗ but rejects any string ending in 110.
1138
Alan D. Blair and Jordan B. Pollack
0,1
M1
0,1 0
0,1
M2
1 0,1
0
0,1
1
0 1
M3
0,1
1
1
0
0
0,1
0,1
0
0,1
M4
0 1
1 0,1
1
0
1
1
1
0
0
0
1
1
0
0,1
1
0,1 0
1
1
0,1
1 0
0
Figure 6: The first four steps of analysis applied to N2.
5 Conclusion By allowing the decision boundary and the initial point to vary, our networks with two hidden nodes were able to induce languages from all the data sets of Tomita (1982) within a few hundred epochs. Many researchers implicitly regard an extracted FSA as superior to the trained network from which it was extracted (Omlin and Giles, 1996) with regard to predictability, compactness of description, and the particular way each generalizes to classify new, unseen input strings. For this reason, earlier work in the field focused on extracting an FSA that approximates the
Analysis of Dynamical Recognizers
1139
behavior of the network. However, that approach is imprecise if the network has induced a nonregular language and does not exactly model an FSA. We have provided a fine-grained analysis for a number of trained networks, both regular and nonregular, using an approach similar to the method of ε-machines that Crutchfield and Young (1990) used to analyze certain handcrafted dynamical systems. In particular, we were able to measure empirically whether the induced language was regular. The fact that several of the networks induced nonregular languages suggests a discrepancy between languages that are “simple” for dynamical recognizers and those that are “simple” from the point of view of automata theory (the regular languages). It is easier for these learning systems to induce a nonregular language to fit the sparse data of the Tomita training sets rather than the expected minimal regular language. The use of comprehensive training sets, or intentional heuristics, might constrain networks away from these interesting dynamics. It could be argued that the network and FSA ought to be seen on a more equal footing, since the 17 parameters of the network provide a compactness of description comparable to that of the FSA, and the language induced by the network is in principle on a par with that of the FSA in the sense that both generalize the same training data. We hope that further work in this direction may lead to a better understanding of network dynamics and help to clarify, compare, and contrast the relative merits of symbolic and dynamical systems.
Appendix: Tomita’s Data Sets N1 Accept
N1 Reject
N2 Accept
N2 Reject
1 11 111
0 10 01
10 1010 101010
1 0 11
1111 11111 111111 1111111 11111111
00 011 110 11111110 10111111
10101010 10101010101010
00 01 101 100 1001010 10110 110101010
N3 Accept
N3 Reject
N4 Accept
N4 Reject
1 0
10 101
1 0
000 11000
01 11 00 100 110 111 000 100100 110000011100001
010 1010 1110 1011 10001 111010 1001000 11111000 0111001101
10 01 00 100100 001111110100 0100100100 11100 0010
0001 000000000 11111000011 1101010000010111 1010010001 0000 00000
111101100010011100
11011100110
1140
Alan D. Blair and Jordan B. Pollack
N5 Accept
N5 Reject
N6 Accept
N6 Reject
11 00 1001 0101 1010 1000111101
0 111 010 000000000 1000 01
10 01 1100 101010 111 000000
1 0 11 00 101 011
1001100001111010 111111 0000
10 1110010100 010111111110 0001 011
10111 0111101111 100100100
11001 1111 00000000 010111 101111011111 1001001001
N7 Accept
N7 Reject
1 0 10 01
1010 00110011000 0101010101 1011010
11111 000 00110011 0101 0000100001111
10101 010100 101001 100100110101
00100 011111011111 00
Acknowledgments This research was funded by a Krasnow Foundation Postdoctoral Fellowship, by ONR grant N00014-95-0173, and by NSF grant IRI-95-29298. We are indebted to David Wittenberg for helping to improve its presentation and to Mike Casey for stimulating discussions. References Barnsley, M. F. (1988). Fractals everywhere. San Diego, CA: Academic Press. Casey, M. (1993). Computation dynamics in discrete-time recurrent neural networks. Proceedings of the Annual Research Symposium of UCSD Institute for Neural Computation, 78–95. Casey, M. (1996). The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8(6). Cleeremans, A., Servan-Schreiber, D., & McClelland, J. (1989). Finite state automata and simple recurrent networks. Neural Computation, 1(3), 372–381. Crutchfield, J. P. (1994). The calculi of emergence: Computation, dynamics and induction. Physica D, 75, 11–54. Crutchfield, J. P., & Young, K. (1990). Computation at the onset of chaos. In W. H. Zurek (Ed.), Complexity, Entropy and the Physics of Information. Reading, MA: Addison-Wesley. Das, S., & Mozer, M. C. (1994). A unified gradient-descent/clustering architecture for finite state machine induction. Neural Information Processing Systems, 6, 19–26.
Analysis of Dynamical Recognizers
1141
Fahlman, S. E. (1989). Fast-learning variations on back-propagation: An empirical study. In D. Touretzky, G. Hinton, & T. Sejnowski (Eds.), Proceedings of the 1988 Connectionist Models Summer School (pp. 38–51). San Mateo, CA: Morgan Kaufmann. Forcada, M. L., & Carrasco, R. C. (1995). Learning the initial state of a secondorder recurrent neural network during regular-language inference. Neural Computation, 7(5), 923–930. Frasconi, P., Gori, M., & Soda, G. (1995). Recurrent neural networks and prior knowledge for sequence processing: A constrained nondeterministic approach. Knowledge Based Systems, 8(6), 313–332. Giles, C. L., Miller, C. B., Chen, D., Chen, H. H., Sun, G. Z., & Lee, Y. C. (1992). Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4(3), 393–405. Hopcroft, J. E., Ullman, J. D. (1979). Introduction to automata theory, languages, and computation. Reading, MA: Addison-Wesley. Jordan, M. I. (1986). Attractor dynamics and parallelism in a connectionist sequential machine. Proceedings of the Eighth Conference of the Cognitive Science Society. Amherst, MA, 531–546. Kolen, J. F. (1993). Fool’s gold: Extracting finite state machines from recurrent network dynamics. Neural Information Processing Systems, 6, 501–508. Kolen, J. F. (1994). Exploring the computational capabilities of recurrent neural networks. Unpublished doctoral dissertation, Ohio State University. Lang, K. J. (1992). Random DFA’s can be approximately learned from sparse uniform examples. Proc. Fifth ACM Workshop on Computational Learning Theory, 45–52. Manolios, P., & Fanelli, R. (1994). First order recurrent neural networks and deterministic finite state automata. Neural Computation, 6(6), 1155–1173. Omlin, C. W., & Giles, C. L. (1996). Extraction of rules from discrete-time recurrent neural networks. Neural Networks, 9(1), 41. Pollack, J. B. (1987). Cascaded back propagation on dynamic connectionist networks. Proceedings of the Ninth Annual Conference of the Cognitive Science Society, Seattle, WA, 391–404. Pollack, J. B. (1991). The induction of dynamical recognizers. Machine Learning, 7, 227–252. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533–536. Siegelmann, H. T., & Sontag, E. D. (1992). On the computational power of neural networks. Proceedings of the Fifth ACM Workshop on Computational Learning Theory, Pittsburgh, PA. Tino, ˇ P., Horne, B. G., Giles, C. L., & Collingwood, P. C. (1995). Finite state machines and recurrent neural networks—Automata and dynamical systems approaches (Tech. Rep. No. UMIACS-TR-95-1). College Park, MD: Institute for Advanced Computer Studies, University of Maryland. Tino, ˇ P., & Sajda, J. (1995). Learning and extracting initial mealy automata with a modular neural network model. Neural Computation, 7(4), 822. Tomita, M. (1982). Dynamic construction of finite-state automata from examples
1142
Alan D. Blair and Jordan B. Pollack
using hill-climbing. Proceedings of the Fourth Annual Cognitive Science Conference, Ann Arbor, MI, 105–108. Trakhtenbrot, B. A., & Barzdin’, Ya. M. (1973). Finite automata: Behavior and synthesis. Amsterdam: North-Holland. Watrous, R. L., & Kuhn, G. M. (1992). Induction of finite state languages using second-order recurrent networks. Neural Computation, 4(3), 406–414. Williams, R. J., & Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2), 270. Zeng, Z., Goodman, R. M., & Smyth, P. (1994). Learning finite state machines with self-clustering recurrent networks. Neural Computation, 5(6), 976–990.
Received September 19, 1995; accepted July 30, 1996.
Communicated by David Haussler
A Bound on the Error of Cross Validation Using the Approximation and Estimation Rates, with Consequences for the Training-Test Split Michael Kearns AT&T Research Labs, Murray Hill, NJ 07974, U.S.A.
We give a theoretical and experimental analysis of the generalization error of cross validation using two natural measures of the problem under consideration. The approximation rate measures the accuracy to which the target function can be ideally approximated as a function of the number of parameters, and thus captures the complexity of the target function with respect to the hypothesis model. The estimation rate measures the deviation between the training and generalization errors as a function of the number of parameters, and thus captures the extent to which the hypothesis model suffers from overfitting. Using these two measures, we give a rigorous and general bound on the error of the simplest form of cross validation. The bound clearly shows the dangers of making γ —the fraction of data saved for testing—too large or too small. By optimizing the bound with respect to γ , we then argue that the following qualitative properties of cross-validation behavior should be quite robust to significant changes in the underlying model selection problem: • When the target function complexity is small compared to the sample size, the performance of cross validation is relatively insensitive to the choice of γ . • The importance of choosing γ optimally increases, and the optimal value for γ decreases, as the target function becomes more complex relative to the sample size. • There is nevertheless a single fixed value for γ that works nearly optimally for a wide range of target function complexity. 1 Introduction In this article, we analyze the performance of cross validation in the context of model selection and complexity regularization.1 We work in a setting 1 Perhaps in conflict with accepted usage in statistics, where the method is often referred to as keeping a “hold-out” set, here we use the term cross validation to mean the simple method of saving out an independent test set to perform model selection. Precise definitions will be stated shortly.
Neural Computation 9, 1143–1161 (1997)
c 1997 Massachusetts Institute of Technology °
1144
Michael Kearns
in which we must choose the right number of parameters for a hypothesis function in response to a finite training sample, with the goal of minimizing the resulting generalization error. There is a large and interesting literature on cross validation methods, which often emphasizes asymptotic statistical properties, or the exact calculation of the generalization error for simple models. (The literature is too large to survey here; foundational papers include Stone, 1974, 1977.) Our approach here is somewhat different and is primarily inspired by two sources. The first is the work of Barron and Cover (1991), who introduced the idea of bounding the error of a model selection method (in their case, the minimum description length principle) in terms of a quantity known as the index of resolvability. The second is the work of Vapnik (1982), who provided extremely powerful and general tools for uniformly bounding the deviation between the training and generalization error. (For related methods and problems, we refer the reader to Devroye (1988).) We combine these methods to give a new and general analysis of crossvalidation performance. In the first and more formal part of the article, we give a rigorous bound on the error of cross validation in terms of two parameters of the underlying model selection problem: the approximation rate and the estimation rate. Taken together, these two problem parameters determine our analog of the index of resolvability, and the work of Vapnik and others yields estimation rates applicable to many natural problems. In the second and more experimental part of the article, we investigate the implications of our bound for choosing γ , the fraction of data withheld for testing in cross validation. The most interesting aspect of this analysis is the identification of several qualitative properties of the optimal γ that appear to be invariant over a wide class of model selection problems. 2 The Formalism We consider model selection as a two-part problem: choosing the appropriate number of parameters for the hypothesis function and tuning these parameters. The training sample is used in both steps of this process. In many settings, the tuning of the parameters is determined by a fixed learning algorithm such as backpropagation, and then model selection reduces to the problem of choosing the architecture (which determines the number of weights, the connectivity pattern, and so on). For concreteness, we adopt an idealized version of this division of labor. We assume a nested sequence of function classes H1 ⊂ · · · ⊂ Hd · · ·, called the structure (Vapnik, 1982), where Hd is a class of boolean functions of d parameters, each function being a mapping from some input space X into {0, 1}. For simplicity, in this article we assume that the Vapnik-Chervonenkis (VC) dimension (Vapnik & Chervonenkis, 1971; Vapnik, 1982) of the class Hd is O(d). To remove this assumption, one simply replaces all occurrences of d in our bounds by the VC dimension of Hd . We assume that we have in our possession a learning
Bound on the Error of Cross Validation
1145
algorithm L that on input any training sample S and any value d will output a hypothesis function hd ∈ Hd that minimizes the training error over Hd — that is, ²t (hd ) = minh∈Hd {²t (h)}, where ²t (h) is the fraction of the examples in S on which h disagrees with the given label. In many situations, training error minimization is known to be computationally intractable, leading researchers to investigate heuristics such as backpropagation. The extent to which the theory presented here applies to such heuristics will depend in part on the extent to which they approximate training error minimization for the problem under consideration. Model selection is thus the problem of choosing the best value of d. More precisely, we assume an arbitrary target function f (which may or may not reside in one of the function classes in the structure H1 ⊂ · · · ⊂ Hd · · ·), and an input distribution P; f and P together define the generalization error function ²g (h) = Prx∈P [h(x) 6= f (x)]. We are given a training sample S of f , consisting of m random examples drawn according to P and labeled by f (with the labels possibly corrupted by a noise process that randomly complements each label independently with probability η < 1/2). In many model selection methods, such as Rissanen’s minimum description length (MDL) principle (Rissanen, 1989) and Vapnik’s guaranteed risk minimization (Vapnik, 1982), for each value of d = 1, 2, 3, . . . we give the entire sample S and d to the learning algorithm L to obtain the function hd minimizing the training error in Hd . A “penalty” for complexity (a function of d and m) is then added to the training errors ²t (hd ) to choose among the sequence h1 , h2 , h3 , . . .. Whatever the method, we interpret the goal to be that of minimizing the generalization error of the hypothesis selected. In this article, we will make the rather mild but very useful assumption that the structure has the property that for any sample size m, there is a value dmax (m) such that ²t (hdmax (m) ) = 0 for any labeled sample S of m examples.2 We call the function dmax (m) the fitting number of the structure. The fitting number formalizes the simple notion that with enough parameters, we can always fit the training data perfectly, a property held by most sufficiently powerful function classes (including multilayer neural networks). We typically expect the fitting number to be a linear function of m or at worst a polynomial in m. The significance of the fitting number for us is that no reasonable model selection method should choose hd for d ≥ dmax (m), since doing so simply adds complexity without reducing the training error. In this article we concentrate on the simplest version of cross validation. Unlike the methods mentioned above, which use the entire sample for training the hd , in cross validation we choose a parameter γ ∈ [0, 1], which determines the split between training and test data. Given the input sample S of m examples, let S0 be the subsample consisting of the first (1 − γ )m examples in S, and S00 the subsample consisting of the last γ m examples. In
2
Weaker definitions for dmax (m) also suffice, but this is the simplest.
1146
Michael Kearns
cross validation, rather than giving the entire sample S to L, we give only the smaller sample S0 , resulting in the sequence h1 , . . . , hdmax ((1−γ )m) of increasingly complex hypotheses. Each hypothesis is now obtained by training on only (1 − γ )m examples, which implies that we will only consider values of d smaller than the corresponding fitting number dmax ((1 − γ )m); let us γ introduce the shorthand dmax for dmax ((1 − γ )m). Cross validation chooses the hd satisfying hd = mini∈{1,...,dγmax } {²t00 (hi )} where ²t00 (hi ) is the error of hi on the subsample S00 . Notice that we are not considering multifold cross validation, or other variants that make more efficient use of the sample, because our analyses will require the independence of the test set. However, we believe that many of the themes that emerge here may apply to these more sophisticated variants as well. We use ²cv (m) to denote the generalization error ²g (hd ) of the hypothesis hd chosen by cross validation when given as input a sample S of m random examples of the target function. Obviously, ²cv (m) depends on S, the structure, f , P, and the noise rate. When bounding ²cv (m), we will use the expression with high probability to mean with probability 1 − δ over the sample S, for some small fixed constant δ > 0. All of our results can also be stated with δ as a parameter at the cost of a log(1/δ) factor in the bounds, or in terms of the expected value of ²cv (m). 3 The Approximation Rate It is apparent that any nontrivial bound on ²cv (m) must take account of some measure of the complexity of the unknown target function f . The correct measure of this complexity is less obvious. Following the example of Barron and Cover’s (1991) analysis of MDL performance in the context of density estimation, we propose the approximation rate as a natural measure of the complexity of f and P in relation to the chosen structure H1 ⊂ · · · ⊂ Hd · · ·. Thus we define the approximation rate function ²g (d) to be ²g (d) = minh∈Hd {²g (h)}. The function ²g (d) tells us the best generalization error that can be achieved in the class Hd , and it is a nonincreasing function of d. If ²g (s) = 0 for some sufficiently large s, this means that the target function f , at least with respect to the input distribution, is realizable in the class Hs , and thus s is a coarse measure of how complex f is. More generally, even if ²g (d) > 0 for all d, the rate of decay of ²g (d) still gives a nice indication of how much representational power we gain with respect to f and P by increasing the complexity of our models. Still missing, of course, is some means of determining the extent to which this representational power can be realized by training on a finite sample of a given size, but this will be added shortly. First we give examples of the approximation rate that we will examine in some detail following the general bound on ²cv (m). 3.1 The Intervals Problem. In this problem, the input space X is the real interval [0, 1], and the class Hd of the structure consists of all boolean step
Bound on the Error of Cross Validation
1147
Three Approximation Rates 1
0.8
0.6
y
0.4
0.2
0
50
100
150 d
200
250
300
Figure 1: Plots of three approximation rates. For the intervals problem with target complexity s = 250 intervals (linear plot intersecting d-axis at 250), for the Perceptron problem with target complexity s = 150 nonzero weights (nonlinear plot intersecting d-axis at 150), and for power law decay asymptoting at ²min = 0.05.
functions over [0, 1] of at most d steps; thus, each function partitions the interval [0, 1] into at most d disjoint segments (not necessarily of equal width) and assigns alternating positive and negative labels to these segments. The input space is one-dimensional, but the structure contains arbitrarily complex functions over [0, 1]. It is easily verified that our assumption that the VC dimension of Hd is O(d) holds here, and that the fitting number obeys dmax (m) ≤ m. Now suppose that the input density P is uniform, and suppose that the target function f is the function of s alternating segments of equal width 1/s, for some s (thus, f lies in the class Hs ). We will refer to these settings as the intervals problem. Then the approximation rate is ²g (d) = (1/2)(1 − d/s) for 1 ≤ d < s and ²g (d) = 0 for d ≥ s (see Figure 1). Thus, as long as d < s, increasing the complexity d gives linear payoff in terms of decreasing the optimal generalization error. For d ≥ s, there is no payoff for increasing d, since we can already realize the target function. The reader can easily verify that if f lies in Hs but does not have equal width intervals, ²g (d) is still piecewise linear, but for d < s is concave up: the gain in approximation obtained by incrementally increasing d diminishes as d becomes larger. Although the intervals problem is rather simple and artifi-
1148
Michael Kearns
cial, a precise analysis of cross-validation behavior can be given for it, and we will argue that this behavior is representative of much broader and more realistic settings. 3.2 The Perceptron Problem. In this problem, the input space X is 0); it will be more important to recognize and model the cases in which power law behavior is grossly violated. Note that this universal estimation rate bound holds only under the assumption that the training sample is noise free, but straightforward generalizations exist. For instance, if the training datapare corrupted by random label noise at rate 0 ≤ η < 1/2, then ρ(d, m) = O( (d/(1 − 2η)2 m) log(m/d)) is again a universal estimation rate bound. It is important to realize that once we have an estimation rate bound, it is a straightforward matter to derive a natural approach to the problem of model selection. Namely, since we have ²g (hd ) ≤ ²t (hd ) + ρ(d, m)
(4.2)
with high probability, if we assume equality in equation 4.2, we might be led to choose the value of d minimizing the quantity ²t (hd ) + ρ(d, m). This is 4 In the later part of the article, we will often assume that specific forms of ρ(d, m) are not merely upper bounds on this deviation but accurate approximations to it. 5 The results of Vapnik actually show the stronger result that |² (h) − ² (h)| = t g
p
O( (d/m) log(m/d)) for all h ∈ Hd , not only for the training error minimizer hd .
1150
Michael Kearns
exactly the approach taken in Vapnik’s guaranteed risk minimization (Vapnik, 1982), and it leads to very general and powerful bounds on generalization error for the approach just described. Our motivation here, however, is slightly different; we are interested in providing similar bounds for cross validation, primarily because it is an approach in wide experimental use. Thus, rather than assuming an estimation rate bound and examining the model selection method suggested by the minimization of equation 4.2, we are instead examining what can be said about cross validation if we assume certain approximation and estimation rates. After giving a general bound on ²cv (m) in which the approximation and estimation rate functions are parameters, we investigate the behavior of ²cv (m) (and more specifically, of the parameter γ ) for specific choices of these parameters. 5 The Bound Theorem 1. Let H1 ⊂ · · · ⊂ Hd · · · be any structure, where the VC dimension of Hd is O(d). Let f and P be any target function and input distribution, let ²g (d) be the approximation rate function for the structure with respect to f and P, and let ρ(d, m) be an estimation rate bound for the structure with respect to f and P. Then for any m, with high probability
²cv (m) ≤
minγ
γ log(d ) max , (5.1) ²g (d) + ρ(d, (1 − γ )m) + O γm ª
©
1≤d≤dmax
s
γ
where γ is the fraction of the training sample used for testing, and dmax is the fitting number dmax ((1−γ )m). Using the universal estimation bound rate of equation 4.1 and the rather weak assumption that dmax (m) is polynomial in m, we obtain that with high probability Ãs
( ²cv (m) ≤
minγ
1≤d≤dmax
Ãs
+O
²g (d) + O
!)
³m´ d log (1 − γ )m d !
(5.2)
log((1 − γ )m) . γm
Straightforward generalizations of these bounds for the case where the data are corrupted by classification noise can be obtained using the modified estimation rate bound given in section 4.6 6 The main effect of classification noise at rate η is the replacement of occurrences in the bound of the sample size m by the smaller effective sample size (1 − η)2 m.
Bound on the Error of Cross Validation
1151
Proof Sketch. We have space only to highlight the main ideas of the proof. γ For each d from 1 to dmax , fix a function fd ∈ Hd satisfying ²g ( fd ) = ²g (d); thus, fd is the best possible approximation to the target f within the class it can be shown that with high Hd . By a standard Chernoff bound argument q γ
γ
probability we have |²t ( fd ) − ²g ( fd )| ≤ log(dmax )/m for all 1 ≤ d ≤ dmax . This means q that within each Hd , the minimum training error ²t (hd ) is at most ²g (d) +
γ
log(dmax )/m. Since ρ(d, m) is an estimation rate bound, we have γ
that with high probability, for all 1 ≤ d ≤ dmax , ²g (hd ) ≤ ²t (hd ) + ρ(d, m) q γ ≤ ²g (d) + log(dmax )/m + ρ(d, m).
(5.3) (5.4) γ
Thus we have bounded the generalization error of the dmax hypotheses hd , only one of which will be selected. If we knew the actual values of these generalization errors (equivalent to having an infinite test sample in cross γ validation), we could bound our error by the minimum over all 1 ≤ d ≤ dmax of the expression 5.4. However, in cross validation we do not know the exact values of these generalization errors but must instead use the γ m testing examples to estimate them. Again q by standard Chernoff bound arguments, γ
this introduces an additional log(dmax )/(γ m) error term, resulting in our final bound. This concludes the proof sketch. In the bounds given by equations 5.1 and 5.2, the min{·} expression is analogous to Barron and Cover’s (1991) index of resolvability. The final term in the bounds represents the error introduced by the testing phase of cross validation. These bounds exhibit trade-off behavior with respect to the parameter γ . As we let γ approach 0, we are devoting more of the sample to training the hd , and the estimation q rate term ρ(d, (1 − γ )m) is decreasing. γ
However, the test error term O( log(dmax )/(γ m)) is increasing, since we have fewer data to estimate the ²g (hd ) accurately. The reverse phenomenon occurs as we let γ approach 1. While we believe theorem 1 to be enlightening and potentially useful in its own right, we would now like to take its interpretation a step further. More precisely, suppose we assume that the bound is an approximation to the actual behavior of ²cv (m). Then in principle we can optimize the bound to obtain the best value for γ . Of course, in addition to the assumptions involved—the main one being that ρ(d, m) is a good approximation to the training generalization error deviations of the hd —this analysis can only be carried out given information that we should not expect to have in practice (at least in exact form)—in particular, the approximation rate function ²g (d), which depends on f and P. However, we argue in the coming sections that several interesting qualitative phenomena regarding the choice of γ are largely invariant to a wide range of natural behaviors for ²g (d).
1152
Michael Kearns
6 A Case Study: The Intervals Problem We begin by performing the suggested optimization of γ for the intervals problem. Recall that the approximation rate here is ²g (d) = (1/2)(1−d/s) for d < s and ²g (d) = 0 for d ≥ s, where s is the complexity of the target function. Here we analyze the behavior obtainedpby assuming that the estimation rate ρ(d, m) actually behaves as ρ(d, m) = d/(1 − γ )m (so we are omitting the log factor from the universal bound),7 and to simplify the formal analysis apbit (but without changing the qualitative behavior) we replace the term p log((1 − γ )m)/(γ m) by the weaker log(m)/m. Thus, if we define the function q p (6.1) F(d, m, γ ) = ²g (d) + d/(1 − γ )m + log(m)/(γ m) then following equation 5.1, we are approximating ²cv (m) by ²cv (m) ≈ min1≤d≤dγmax {F(d, m, γ )}.8 The first step of the analysis is to fix a value for γ and differentiate F(d, m, γ ) with respect to d to discover the minimizing value of d. This differentiation must have two regimes due to the discontinuitypat d = s in ²g (d). It is easily verified that the derivative is −(1/2s) + 1/(2 d(1 − γ )m) p for d < s and 1/(2 d(1 − γ )m) for d ≥ s. It can be shown that provided that (1 − γ )m ≥ 4s then d = s is a global minimum of F(d, m, γ ), and if this condition is violated, then the value we obtain for ²cv (m) is vacuously large anyway (meaning that this fixed choice of γ cannot end up being the optimal one, or, if m < 4s, that our analysis claims we simply do not have enough data for nontrivial generalization, regardless of how we split it between training and p p testing). Plugging in d = s yields ²cv (m) ≈ F(s, m, γ ) = s/(1 − γ )m + log(m)/(γ m) for this fixed choice of γ . Now by differentiating F(s, m, γ ) with respect to γ , it can be shown that the optimal choice of γ under the assumptions is γopt =
(log(m)/s)1/3 . 1 + (log(m)/s)1/3
(6.2)
It is important to remember at this point that despite the fact that we have derived a precise expression for γopt , due to the assumptions and approxima7 It can be argued that this power law estimation rate is actually a rather accurate approximation for the true behavior of the training generalization deviations of the hd for this particular problem at moderate classification noise levels. For very small noise rates, a better approximation would replace the two square root terms by linear terms. 8 Note that although there are hidden constants in the O(·) notation of theorem 1, it is the relative weights of the estimation and test error terms that are important when we optimize equation 5.1 for γ . In the definition of F(d, m, γ ) we have chosen both terms to have constant multipliers equal to 1, which is reasonable because both terms have the same Chernoff bound origins.
Bound on the Error of Cross Validation
1153
cv error bound, intervals, d = s = 1000 slice, m=10000 0.5
0.4
0.3
y
0.2
0.1
00
0.2
0.4
0.6
0.8
1
trainfrac
Figure 2: Plot of the predicted generalization error of cross validation for the intervals model selection problem, as a function of the fraction 1 − γ of data used for training. (In the plot, the fraction of training data is 0 on the left (γ = 1) and 1 on the right (γ = 0).) The fixed sample size m = 10, 000 was used, and the six plots show the error predicted by the theory for target function complexity values s = 10 (bottom plot), 50, 100, 250, 500, and 1000 (top plot).
tions we have made in the various constants, any quantitative interpretation of this expression is meaningless. However, we can reasonably expect that this expression captures the qualitative way in which the optimal γ changes as the amount of data m changes in relation to the target function complexity s. On this score, the situation initially appears rather bleak, as the function (log(m)/s)1/3 /(1 + (log(m)/s)1/3 ) is quite sensitive to the ratio log(m)/s. For example, for m =10,000, if s = 10 we obtain γopt = 0.524 · · ·, if s = 100 we obtain γopt = 0.338 · · ·, and if s = 1000 we obtain γopt = 0.191 · · ·. Thus γopt is becoming smaller as log(m)/s becomes small, and the analysis suggests vastly different choices for γopt depending on the target function complexity, which is something we do not expect to have the luxury of knowing in practice. However, it is both fortunate and interesting that γopt does not tell the entire story. In Figure 2, we plot the function F(s, m, γ )9 as a function of γ 9
In the plots, we now use the more accurate test penalty term since we are no longer concerned with simplifying the calculation.
p
log((1 − γ )m)/(γ m)
1154
Michael Kearns
for m = 10,000 and for several different values of s (note that for consistency with the later experimental plots, the x-axis of the plot is actually the training fraction 1 − γ ). Here we can observe four important qualitative phenomena, which we list in order of increasing subtlety: 1. When s is small compared to m, the predicted error is relatively insensitive to the choice of γ . As a function of γ , F(s, m, γ ) has a wide, flat bowl, indicating a wide range of γ yielding essentially the same near-optimal error. 2. As s becomes larger in comparison to the fixed sample size m, the relative superiority of γopt over other values for γ becomes more pronounced. In particular, large values for γ become progressively worse as s increases. For example, the plots indicate that for s = 10 (again, m = 10,000), even though γopt = 0.524 · · · the choice γ = 0.75 will result in error quite near that achieved using γopt . However, for s = 500, γ = 0.75 is predicted to yield greatly suboptimal error. Note that for very large s, the bound predicts vacuously large error for all values of γ , so that the choice of γ again becomes irrelevant. 3. Because of the insensitivity to γ for when s is small compared to m, there is a fixed value of γ that seems to yield reasonably good performance for a wide range of values for s. This value is essentially the value of γopt for the case where s is large but nontrivial generalization is still possible, since choosing the best value for γ is more important there than for the small s case. 4. The value of γopt is decreasing as s increases. This is slightly difficult to confirm from the plot, but can be seen clearly from the precise expression for γopt . Despite the fact that the analysis so far has been rather specialized (addressing the behavior for a fixed structure, target function, and input distribution), it is our belief that the four phenomena just listed hold for many other model selection problems. For instance, we shall shortly demonstrate that the theory again predicts that these phenomena hold for the Perceptron problem and for the case of power law decay of ²g (d) described earlier. First we give an experimental demonstration that at least the predicted properties 1, 2, and 3 truly do hold for the intervals problem. In Figure 3, we plot the results of experiments in which labeled random samples of size m = 5000 were generated for a target function of s equal width intervals, for s =10,100 and 500. The samples were corrupted by random label noise at rate η = 0.3. For each value of γ and each value of d, (1 − γ )m of the sample was given to a program performing training error minimization within
Bound on the Error of Cross Validation
1155
err vs train set size, s=10,100,500, 30% noise, m=5000 0.5
0.4
0.3
0.2
0.1
00
1000
2000
3000
4000
5000
trainsize
Figure 3: Experimental plots of cross-validation generalization error in the intervals problem as a function of training set size (1 − γ )m. Experiments with the three target complexity values s =10,100, and 500 (bottom plot to top plot) are shown. Each point represents performance averaged over 10 trials.
Hd .10 The remaining γ m examples were used to select the best hd according to cross validation. The plots show the true generalization error of the hd selected by cross validation as a function of γ (the generalization error can be computed exactly for this problem). Each point in the plots represents an average over 10 trials. While there are obvious and significant quantitative differences between these experimental plots and the theoretical predictions of Figure 2 (indeed, even apparently systematic differences), the properties 1, 2, and 3 are rather clearly borne out by the data: 1. In Figure 3, when s is small compared to m, there is a wide range of acceptable γ ; it appears that any choice of γ between 0.10 and 0.50 yields nearly optimal generalization error. 2. By the time s = 100, the sensitivity to γ is considerably more pronounced. For example, the choice γ = 0.50 now results in clearly 10 A nice feature of the intervals problem is that training error minimization can be performed in almost linear time using a dynamic programming approach (Kearns, Mansour, Ng, & Ron, 1995).
1156
Michael Kearns
suboptimal performance, and it is more important to have γ close to 0.10. 3. Despite these complexities, there does indeed appear to be a single value of γ —approximately 0.10—that performs nearly optimally for the entire range of s examined. The fourth property—that the optimal γ decreases as the target function complexity is increased relative to a fixed m—is certainly not refuted by the experimental results, but any such effect is simply too small to be verified. It would be interesting to verify this prediction experimentally, perhaps on a different problem where the predicted effect is more pronounced.
7 Power Law Decay and the Perceptron Problem For the cases where the approximation rate ²g (d) obeys either power law decay or is that derived for the Perceptron problem discussed in section 3, the behavior of ²cv (m) as a function of γ predicted by our theory is largely the same. For example, if ²g (d) = (c/d), and we use the standard estimation p rate ρ(d, m) = d/(1 − γ )m, then an analysis similar to that performed for the intervals problem reveals that for fixed γ , the minimizing choice of d is d = (4c2 (1 − γ )m)1/3 ; plugging this value of d back into the bound on ²cv (m) and plotting for various values of c yields Figure 4. Similar to Figure 2 for the intervals problem, Figure 4 shows the predicted behavior of ²cv (m) as a function of γ for a fixed m and several different choices for the complexity parameter c. We again see that properties 1 through 4 hold strongly despite the change in ²g (d) from the intervals problem, although quantitative aspects of the prediction, which already must be taken lightly for reasons previously stated, have obviously changed, such as the interesting values of the ratio of sample size to target function complexity (that is, the values of this ratio for which the generalization error is most sensitive to the choice of γ ). Through a combination of formal analysis and plotting, it is possible to demonstrate that the properties 1 through 4 are robust to wide variations in the parameters α and ²min in the parametric form ²g (d) = (c/d)α + ²min , as well as wide variations in the form of the estimation rate ρ(d, m). For example, if ²g (d) = (c/d)2 (faster than the approximation rate examined above) and ρ = d/m (faster than the estimation rated examined above), then for the interesting ratios of m to c (that is, where the generalization error predicted is bounded away from 0 and the trivial value of 1/2), a figure quite similar to Figure 4 is obtained. Similar predictions can be derived for the Perceptron problem using the universal estimation rate bound or any similar power law form.
Bound on the Error of Cross Validation
1157
cv bound, (c/d) for c from 1.0 to 150.0, m=25000 0.5
0.4
0.3
y
0.2
0.1
00
0.2
0.4
0.6
0.8
1
trainfrac
Figure 4: Plot of the predicted generalization error of cross validation for the power law case ²g (d) = (c/d), as a function of the fraction 1 − γ of data used for training. The fixed sample size m =25,000 was used, and the six plots show the error predicted by the theory for target function complexity values c = 1 (bottom plot), 25, 50, 75, 100, and 150 (top plot).
8 Backpropagation Experiments As a final test of the predictions made by the theory presented here, we describe some experiments using the backpropagation learning algorithm for neural networks (see e.g., Rumelhart, Hinton, & Williams, 1986). In the first set of experiments (see Figure 5), backpropagation was used to train a neural network with two input units, eight hidden units, and a single output unit. The data were generated according to a nearest-neighbor function over the unit square in the real plane. More precisely, the target function was defined by s randomly chosen points (exemplars) in the unit square, along with randomly chosen labels for these exemplars. Any input in the unit square is then assigned the label of the nearest exemplar (with respect to Euclidean distance). Thus s is a measure of the complexity of the target function analogous to those for the intervals and Perceptron problems. For each such target function, 100 examples were generated and corrupted by random classification noise at rate η = 0.15. These examples were then divided into a training set and a testing set, which in this case was used to determine the apparent best number of backpropagation training epochs
1158
Michael Kearns err vs train fraction, backprop with increasing s 0.5
0.4
0.3
0.2
0.1
00
0.2
0.4
0.6
0.8
1
trainfrac
Figure 5: Experimental plots of cross-validation generalization error as a function of training set fraction (1 − γ ) for backpropagation, where m = 100 and the test set is used to determine the number of training epochs. Experiments with many different nearest-neighbor target functions are shown, with the number of exemplars s in the target function increasing from the bottom plot (s = 1, plot lies along horizontal axis) to the top plot (s = 128). Each point represents performance averaged over 300 trials.
to perform. For the purposes of the experiment, the generalization error of the chosen network was then estimated using an additional set of 8000 examples. Note that we are deviating here from the formal framework in which we began. Rather than explicitly searching for a good value of complexity in a nested sequence of increasingly complex hypothesis classes, here we are training within a single, fixed architecture. Intuitively, we are assuming that increased training time within this fixed architecture leads to increased effective complexity, which must be regulated by cross validation. This intuition, while still in need of rigorous theoretical and experimental examination, appears to have the status of a folk result within the neural network research community. Despite the deviation from our initial framework, the results shown in Figure 5 seem to confirm properties 1 through 3: sensitivity to the training-test split is relatively low for small s but increases as s becomes larger (until, as usual, s is sufficiently large to preclude nontrivial generalization for any split). Nevertheless, a fixed value for the split—in this case, somewhere
Figure 6: Experimental plots of cross-validation generalization error as a function of training set fraction (1 − γ ) for backpropagation, where m = 100 and the test set is used to determine the number of training epochs. Experiments with a fixed nearest-neighbor target function with s = 2 exemplars are shown, but with the noise rate increasing from the bottom plot (η = 0.0) to the top plot (η = 0.45). Each point represents performance averaged over 300 trials.
between 20 and 30 percent devoted to testing—seems to give good performance for all s. Similar results held in the experiments depicted by Figure 6. Here the nearest-neighbor target function was fixed to have s = 2 exemplars (with different labels), but we varied the classification noise rate η.$^{11}$ Again the cross-validation test set was used to determine the number of backpropagation training epochs, and in Figure 6 we plot the resulting generalization error as a function of the training-test split for many different values of η. Thus, in this case we are increasing the complexity of the data by adding noise. Despite the philosophical differences between adding deterministic complexity and adding noise, we see that the outcome is similar and again provides coarse confirmation of the theory.

$^{11}$ Note that for the s = 2 case, the target function is actually linearly separable.
9 Conclusions

In summary, our theory predicts that although significant quantitative differences in the behavior of cross validation may arise for different model selection problems, the properties 1 through 4 should be present in a wide range of problems. At the very least, the behavior of our bounds exhibits these properties for a wide range of problems. It would be interesting to try to identify natural problems for which one or more of these properties is strongly violated; a potential source for such problems may be those for which the underlying learning curve deviates from classical power law behavior (Seung et al., 1992; Haussler et al., 1994).

Perhaps the greatest weakness of the results presented here is in our failure to treat multifold cross validation. The difficulty lies in our apparent need for independence between the training and test sets to apply our methods. We leave the extension to multifold approaches for future research.
Acknowledgments

I give warm thanks to Yishay Mansour, Andrew Ng, and Dana Ron for many enlightening conversations on cross validation and model selection. Additional thanks to Andrew Ng for his help in conducting the experiments.
References

Barron, A. (1990). Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory 19:930–944.
Barron, A. R., & Cover, T. M. (1991). Minimum complexity density estimation. IEEE Transactions on Information Theory 37:1034–1054.
Devroye, L. (1988). Automatic pattern recognition: A study of the probability of error. IEEE Transactions on Pattern Analysis and Machine Intelligence 10(4):530–543.
Haussler, D., Kearns, M., Seung, H. S., & Tishby, N. (1994). Rigorous learning curve bounds from statistical mechanics. In Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory (pp. 76–87).
Kearns, M., Mansour, Y., Ng, A., & Ron, D. (1995). An experimental and theoretical comparison of model selection methods. In Proceedings of the Eighth Annual ACM Conference on Computational Learning Theory.
Rissanen, J. (1989). Stochastic complexity in statistical inquiry. Singapore: World Scientific.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature 323:533–536.
Seung, H. S., Sompolinsky, H., & Tishby, N. (1992). Statistical mechanics of learning from examples. Physical Review A 45:6056–6091.
Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society B 36:111–147.
Stone, M. (1977). Asymptotics for and against cross-validation. Biometrika 64(1):29–35.
Vapnik, V. N. (1982). Estimation of dependences based on empirical data. New York: Springer-Verlag.
Vapnik, V. N., & Chervonenkis, A. Y. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications 16(2):264–280.
Received January 4, 1996; accepted October 1, 1996.
Communicated by Leo Breiman
Averaging Regularized Estimators

Michiaki Taniguchi
Volker Tresp
Siemens AG, Central Research, 81730 Munich, Germany
We compare the performance of averaged regularized estimators. We show that the improvement in performance that can be achieved by averaging depends critically on the degree of regularization which is used in training the individual estimators. We compare four different averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. In any of the averaging methods, the greatest degree of improvement—if compared to the individual estimators—is achieved if no or only a small degree of regularization is used. Here, variance-based weighting and variance-based bagging are superior to simple averaging or bagging. Our experiments indicate that better performance for both individual estimators and for averaging is achieved in combination with regularization. With increasing degrees of regularization, the two bagging-based approaches (bagging and variance-based bagging) outperform the individual estimators, simple averaging, and variance-based weighting. Bagging and variance-based bagging seem to be the overall best combining methods over a wide range of degrees of regularization.

1 Introduction

Several authors have noted the advantages of averaging estimators that were trained on either identical training data (Perrone, 1993) or bootstrap samples of the training data (a procedure termed bagging predictors by Breiman, 1994). Both theory and experiments show that averaging helps most if the errors in the individual estimators are not positively correlated and if the estimators have only small bias. On the other hand, it is well known from theory and experiment that the best performance of a single predictor is typically achieved if some form of regularization (weight decay), early stopping, or pruning is used. All three methods tend to decrease variance and increase bias of the estimator. Therefore, we expect that the optimal degrees of regularization for a single estimator and for averaging would not necessarily be the same. In this article, we investigate the effect of regularization on averaging. In addition to simple averaging and bagging, we also perform experiments using combining principles where the weighting functions are dependent on the input. The weighting functions
can be derived by estimating the variance of each estimator for a given input (variance-based weighting, Tresp & Taniguchi, 1995; variance-based bagging, Taniguchi & Tresp, 1995). In the next section we derive some fundamental equations for averaging biased and unbiased estimators. In section 3 we show how the theory can be applied to regression problems, and we introduce the different averaging methods used in the experiments described in section 4. In section 5 we discuss the results, and in section 6 we present conclusions.

2 A Theory of Combining Biased Estimators

2.1 Optimal Weights for Combining Biased Estimators. We would like to estimate the unknown variable $t$ based on the realizations of a set of $M$ random variables $\{f_i\}_{i=1}^{M}$. The expected squared error between $f_i$ and $t$,

$$E(f_i - t)^2 = E(f_i - m_i + m_i - t)^2 = E(f_i - m_i)^2 + E(m_i - t)^2 + 2E((f_i - m_i)(m_i - t)) = \mathrm{var}_i + b_i^2,$$
decomposes into the variance $\mathrm{var}_i = E(f_i - m_i)^2$ and the square of the bias $b_i = m_i - t$, with $m_i = E(f_i)$. $E(\cdot)$ stands for the expected value. Note that $E[(f_i - m_i)(m_i - t)] = (m_i - t)E(f_i - m_i) = 0$. In the following we are interested in estimating $t$ by forming a linear combination of the $f_i$,

$$\hat{t} = \sum_{i=1}^{M} g_i f_i = g' f,$$
where $f = (f_1, \ldots, f_M)'$ and the weighting vector $g = (g_1, \ldots, g_M)'$. The expected error of the combined system is (Meir, 1995)

$$E(\hat{t} - t)^2 = E(g'f - E(g'f))^2 + E(E(g'f) - t)^2 = E(g'(f - E(f)))^2 + E(g'm - t)^2 = g'\Omega g + (g'm - t)^2, \tag{2.1}$$
where $\Omega$ is an $M \times M$ covariance matrix with $\Omega_{ij} = E[(f_i - m_i)(f_j - m_j)]$ and with $m = (m_1, \ldots, m_M)'$. The expected error of the combined system is minimized for$^1$

$$g^\ast = (mm' + \Omega)^{-1}\, t\, m.$$
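As a quick check of this expression, here is a sketch of the standard unconstrained minimization, using only quantities defined above: setting the gradient of equation 2.1 with respect to $g$ to zero gives

$$\nabla_g \left[ g'\Omega g + (g'm - t)^2 \right] = 2\Omega g + 2(g'm - t)\,m = 0 \;\Longrightarrow\; (\Omega + mm')\,g = t\,m,$$

and solving for $g$ yields $g^\ast = (mm' + \Omega)^{-1}\, t\, m$, since $\Omega + mm'$ is symmetric and (for nondegenerate estimators) invertible.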
2.2 Constraints. A commonly used constraint, which we also use in our experiments, is that

$$\sum_{i=1}^{M} g_i = 1, \qquad g_i \geq 0, \quad i = 1, \ldots, M.$$
In the following, $g$ can be written as

$$g = (u'h)^{-1}\, h, \tag{2.2}$$

where $u = (1, \ldots, 1)'$ is an $M$-dimensional vector of ones, $h = (h_1, \ldots, h_M)'$, and $h_i > 0$, $\forall i = 1, \ldots, M$. The constraint can be enforced in minimizing equation 2.1 by using the Lagrangian function $L = g'\Omega g + (g'm - t)^2 + \mu(g'u - 1)$, with Lagrange multiplier $\mu$. The optimum is achieved if we set (Tresp & Taniguchi, 1995)

$$h^\ast = [\Omega + (m - tu)(m - tu)']^{-1}\, u.$$

Now the individual biases $(m_i - t)$ appear explicitly in the weights. For the optimal weights,

$$E(\hat{t} - t)^2 = \frac{1}{u'(\Omega + (m - tu)(m - tu)')^{-1}\, u}.$$

Note that by using the constraint in equation 2.2, the combined estimator is unbiased if the individual estimators are unbiased, which is the main reason for employing the constraint. With unbiased estimators we obtain $h^\ast = \Omega^{-1} u$, and for the optimal weights, $E(\hat{t} - t)^2 = (u'\Omega^{-1} u)^{-1}$. If in addition the individual estimators are uncorrelated, we obtain

$$h_i^\ast = \frac{1}{\mathrm{var}_i}.$$
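To make the uncorrelated, unbiased case concrete, here is a small worked example (the numbers are illustrative, not taken from the article's experiments). Take $M = 2$ estimators with $\mathrm{var}_1 = 1$ and $\mathrm{var}_2 = 4$. Then $h^\ast = (1, 1/4)'$, and normalizing via equation 2.2 gives $g = (4/5, 1/5)'$. The error of the combination is

$$E(\hat{t} - t)^2 = (u'\Omega^{-1} u)^{-1} = \left(\frac{1}{1} + \frac{1}{4}\right)^{-1} = 0.8,$$

which is smaller than the error of the better individual estimator ($\mathrm{var}_1 = 1$), illustrating why inverse-variance weighting pays off.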
$^1$ Interestingly, even if the estimators are unbiased, that is, $m_i = t$ $\forall i = 1, \ldots, M$, the minimum error estimator is biased, which confirms that a biased estimator can have a smaller expected error than an unbiased estimator. For example, consider the case that $M = 1$ and $m_1 = t$ (no bias). Then $g^\ast = t^2/(t^2 + \mathrm{var}_1)$. This term is smaller than one if $t \neq 0$ and $\mathrm{var}_1 > 0$. Then, $E(\hat{t}) = E(g^\ast f_1) < m_1$; that is, the minimum expected error estimator is biased.
3 Averaging Regularized Estimators

3.1 Training. The previous theory can be applied to the problem of function estimation. Let us assume we have a training data set $L = \{(x_k, y_k)\}_{k=1}^{K}$, $x_k \in \ldots$
For a motivation of the preprocessing and further details, see Dichtl (1995).

Acknowledgments

This research was partially supported by the Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie, grant number 01 IN 505 A.
References

Breiman, L. (1994). Bagging predictors (Tech. Rep. No. 421). Department of Statistics, University of California at Berkeley.
Dichtl, H. (1995). Zur Prognose des Deutschen Aktienindex DAX mit Hilfe von Neuro-Fuzzy-Systemen. Beiträge zur Theorie der Finanzmärkte, 12, Institut für Kapitalmarktforschung, J. W. Goethe-Universität, Frankfurt.
Efron, B., & Tibshirani, R. (1993). An introduction to the bootstrap. London: Chapman and Hall.
Jacobs, R. A. (1995). Methods for combining experts' probability assessments. Neural Computation, 7, 867–888.
Krogh, A., & Vedelsby, J. (1995). Neural network ensembles, cross validation, and active learning. In G. Tesauro, D. S. Touretzky, & T. K. Leen (Eds.), Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press.
Meir, R. (1995). Bias, variance and the combination of least squares estimators. In G. Tesauro, D. S. Touretzky, & T. K. Leen (Eds.), Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press.
Perrone, M. P. (1993). Improving regression estimates: Averaging methods for variance reduction with extensions to general convex measure optimization. Doctoral dissertation, Brown University, Providence, RI.
Taniguchi, M., & Tresp, V. (1995). Variance-based combination of estimators trained by bootstrap replicates. In Proc. International Symposium on Artificial Neural Networks. Hsinchu, Taiwan.
Tibshirani, R. (1994). A comparison of some error estimates for neural network models (Tech. Rep.). Toronto: Department of Statistics, University of Toronto.
Tresp, V., & Taniguchi, M. (1995). Combining estimators using non-constant weighting functions. In G. Tesauro, D. S. Touretzky, & T. K. Leen (Eds.), Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press.
Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5, 241–259.
Received March 27, 1996; accepted August 8, 1996.
REVIEW
Communicated by Dan Johnston
The NEURON Simulation Environment

M. L. Hines
Department of Computer Science and Neuroengineering and Neuroscience Center, Yale University, New Haven, CT 06520, U.S.A.
N. T. Carnevale
Psychology Department, Yale University, New Haven, CT 06520, U.S.A.
The moment-to-moment processing of information by the nervous system involves the propagation and interaction of electrical and chemical signals that are distributed in space and time. Biologically realistic modeling is needed to test hypotheses about the mechanisms that govern these signals and how nervous system function emerges from the operation of these mechanisms. The NEURON simulation program provides a powerful and flexible environment for implementing such models of individual neurons and small networks of neurons. It is particularly useful when membrane potential is nonuniform and membrane currents are complex. We present the basic ideas that would help informed users make the most efficient use of NEURON.

1 Introduction

NEURON (Hines, 1984, 1989, 1993, 1994) provides a powerful and flexible environment for implementing biologically realistic models of electrical and chemical signaling in neurons and networks of neurons. This article describes the concepts and strategies that have guided the design and implementation of this simulator, with emphasis on those features that are particularly relevant to its most efficient use.
1.1 The Problem Domain. Information processing in the brain results from the spread and interaction of electrical and chemical signals within and among neurons. This involves nonlinear mechanisms that span a wide range of spatial and temporal scales (Carnevale & Rosenthal, 1992) and are constrained to operate within the intricate anatomy of neurons and their interconnections. Consequently the equations that describe brain mechanisms generally do not have analytical solutions, and intuition is not a reliable guide to understanding the working of the cells and circuits of the brain. Furthermore, these nonlinearities and spatiotemporal complexities are quite unlike those that are encountered in most nonbiological systems,
so the utility of many quantitative and qualitative modeling tools that were developed without taking these features into consideration is severely limited. NEURON is designed to address these problems by enabling both the convenient creation of biologically realistic quantitative models of brain mechanisms and the efficient simulation of the operation of these mechanisms. In this context the term biological realism does not mean "infinitely detailed." Instead, it means that the choice of which details to include in the model and which to omit are at the discretion of the investigator who constructs the model, and not forced by the simulation program.

To the experimentalist, NEURON offers a tool for cross-validating data, estimating experimentally inaccessible parameters, and deciding whether known facts account for experimental observations. To the theoretician, it is a means for testing hypotheses and determining the smallest subset of anatomical and biophysical properties that is necessary and sufficient to account for particular phenomena. To the student in a laboratory course, it provides a vehicle for illustrating and exploring the operation of brain mechanisms in a simplified form that is more robust than the typical "wet lab" experiment. For the experimentalist, theoretician, and student alike, a powerful simulation tool such as NEURON can be an indispensable aid to developing the insight and intuition that is needed if one is to discover the order hidden within the intricacy of biological phenomena, the order that transcends the complexity of accident and evolution.

1.2 Experimental Advances and Quantitative Modeling. Experimental advances drive and support quantitative modeling. Over the past two decades, the field of neuroscience has seen striking developments in experimental techniques, among them the following:

• High-quality electrical recording from neurons in vitro and in vivo using patch clamp.
• Multiple impalements of visually identified cells.
• Simultaneous intracellular recording from paired pre- and postsynaptic neurons.
• Simultaneous measurement of electrical and chemical signals.
• Multisite electrical and optical recording.
• Quantitative analysis of anatomical and biophysical properties from the same neuron.
• Photolesioning of cells.
• Photorelease of caged compounds for spatially precise chemical stimulation.
• New drugs such as channel blockers and receptor agonists and antagonists.
• Genetic engineering of ion channels and receptors.
• Analysis of messenger RNA and biophysical properties from the same neuron.
• "Knockout" mutations.

These and other advances are responsible for impressive progress in the definition of the molecular biology and biophysics of receptors and channels; the construction of libraries of identified neurons and neuronal classes that have been characterized anatomically, pharmacologically, and biophysically; and the analysis of neuronal circuits involved in perception, learning, and sensorimotor integration. The result is a data avalanche that catalyzes the formulation of new hypotheses of brain function, while at the same time serving as the empirical basis for the biologically realistic quantitative models that must be used to test these hypotheses. Following are some examples from the large list of topics that have been investigated through the use of such models:

• The cellular mechanisms that generate and regulate chemical and electrical signals (Destexhe, Contreras, Steriade, Sejnowski, & Huguenard, 1996; Jaffe et al., 1994).
• Drug effects on neuronal function (Lytton & Sejnowski, 1992).
• Presynaptic (Lindgren & Moore, 1989) and postsynaptic (Destexhe & Sejnowski, 1995; Traynelis, Silver, & Cull-Candy, 1993) mechanisms underlying communication between neurons.
• Integration of synaptic inputs (Bernander, Douglas, Martin, & Koch, 1991; Cauller & Connors, 1992).
• Action potential initiation and conduction (Häusser, Stuart, Racca, & Sakmann, 1995; Hines & Shrager, 1991; Mainen, Joerges, Huguenard, & Sejnowski, 1995).
• Cellular mechanisms of learning (Brown, Zador, Mainen, & Claiborne, 1992; Tsai, Carnevale, & Brown, 1994a).
• Cellular oscillations (Destexhe, Babloyantz, & Sejnowski, 1993a; Lytton, Destexhe, & Sejnowski, 1996).
• Thalamic networks (Destexhe, McCormick, & Sejnowski, 1993b; Destexhe, Contreras, Sejnowski, & Steriade, 1994).
• Neural information encoding (Hsu et al., 1993; Mainen & Sejnowski, 1995; Softky, 1994).
2 Overview of NEURON

NEURON is intended to be a flexible framework for handling problems in which membrane properties are spatially inhomogeneous and where membrane currents are complex. Since it was designed specifically to simulate the equations that describe nerve cells, NEURON has three important advantages over general-purpose simulation programs. First, the user is not required to translate the problem into another domain, but instead is able to deal directly with concepts that are familiar at the neuroscience level. Second, NEURON contains functions that are tailored specifically for controlling the simulation and graphing the results of real neurophysiological problems. Third, its computational engine is particularly efficient because of the use of special methods and tricks that take advantage of the structure of nerve equations (Hines, 1984; Mascagni, 1989).

The general domain of nerve simulation, however, is still too large for any single program to deal optimally with every problem. In practice, each program has its origin in a focused attempt to solve a restricted class of problems. Both speed of simulation and the ability of the user to maintain conceptual control degrade when any program is applied to problems outside the class for which it is best suited.

NEURON is computationally most efficient for problems that range from parts of single cells to small numbers of cells in which cable properties play a crucial role. In terms of conceptual control, it is best suited to tree-shaped structures in which the membrane channel parameters are approximated by piecewise linear functions of position. Two classes of problems for which it is particularly useful are those in which it is important to calculate ionic concentrations and those where one needs to compute the extracellular potential just next to the nerve membrane. It is especially capable for investigating new kinds of membrane channels since they are described using a high-level neuron model description language (NMODL) (Moore & Hines, 1996), which allows the expression of models in terms of kinetic schemes or sets of simultaneous differential and algebraic equations. To maintain efficiency, user-defined mechanisms in NMODL are automatically translated into C, compiled, and linked into the rest of NEURON.

The flexibility of NEURON comes from a built-in object-oriented interpreter, which is used to define the morphology and membrane properties of neurons, control the simulation, and establish the appearance of a graphical interface. The default graphical interface is suitable for exploratory simulations involving the setting of parameters, control of voltage and current stimuli, and graphing variables as a function of time and position.

Simulation speed is excellent since membrane voltage is computed by an implicit integration method optimized for branched structures (Hines, 1984). The performance of NEURON degrades very slowly with increased complexity of morphology and membrane mechanisms, and it has been applied to very large network models: $10^4$ cells with six compartments each,
for a total of $10^6$ synapses in the net (T. Sejnowski, personal communication, March 29, 1996).

3 Mathematical Basis

Strategies for numerical solution of the equations that describe chemical and electrical signaling in neurons have been discussed in many places. Elsewhere we have briefly presented an intuitive rationale for the most commonly used methods (Hines & Carnevale, 1995). Here we start from this base and proceed to address those aspects that are most pertinent to the design and application of NEURON.

3.1 The Cable Equation. The application of cable theory to the study of electrical signaling in neurons has a long history, which is briefly summarized elsewhere (Rall, 1989). The basic computational task is to solve numerically the cable equation

$$\frac{\partial V}{\partial t} + I(V, t) = \frac{\partial^2 V}{\partial x^2}, \tag{3.1}$$
which describes the relationship between current and voltage in a one-dimensional cable. The branched architecture typical of most neurons is incorporated by combining equations of this form with appropriate boundary conditions.

Spatial discretization of this partial differential equation is equivalent to reducing the spatially distributed neuron to a set of connected compartments. The earliest example of a multicompartmental approach to the analysis of dendritic electrotonus was provided by Rall (1964). Spatial discretization produces a family of ordinary differential equations of the form

$$c_j \frac{dv_j}{dt} + i_{\mathrm{ion}_j} = \sum_k \frac{v_k - v_j}{r_{jk}}. \tag{3.2}$$
Equation 3.2 is a statement of Kirchhoff's current law, which asserts that net transmembrane current leaving the jth compartment must equal the sum of axial currents entering this compartment from all sources (see Figure 1). The left-hand side of equation 3.2 is the total membrane current, which is the sum of capacitive and ionic components. The capacitive component is $c_j\, dv_j/dt$, where $c_j$ is the membrane capacitance of the compartment. The ionic component $i_{\mathrm{ion}_j}$ includes all currents through ionic channel conductances. The right-hand side of equation 3.2 is the sum of axial currents that enter this compartment from its adjacent neighbors. Currents injected through a microelectrode would be added to the right-hand side. The sign conventions for current are as follows: outward transmembrane current is positive; axial
Figure 1: The net current entering a region must equal zero.
current flow into a region is positive; positive injected current drives $v_j$ in a positive direction.

Equation 3.2 involves two approximations. First, axial current is specified in terms of the voltage drop between the centers of adjacent compartments. The second approximation is that spatially varying membrane current is represented by its value at the center of each compartment. This is much less drastic than the often-heard statement that a compartment is assumed to be "isopotential." It is far better to picture the approximation in terms of voltage varying linearly between the centers of adjacent compartments. Indeed, the linear variation in voltage is implicit in the usual description of a cable in terms of discrete electrical equivalent circuits.

If the compartments are of equal size, it is easy to use Taylor's series to show that both of these approximations have errors proportional to the square of compartment length. Thus, replacing the second partial derivative by its central difference approximation introduces errors proportional to $\Delta x^2$, and doubling the number of compartments reduces the error by a factor of four.

It is often not convenient for the size of all compartments to be equal. Unequal compartment size might be expected to yield simulations that are only first-order accurate. However, comparison of simulations in which unequal compartments are halved or quartered in size generally reveals a
second-order reduction of error. A rough rule of thumb is that simulation error is proportional to the square of the size of the largest compartment.

The first of two special cases of equation 3.2 that we wish to discuss allows us to recover the usual parabolic differential form of the cable equation. Consider the interior of an unbranched cable with constant diameter. The axial current consists of two terms involving compartments with the natural indices j − 1 and j + 1, that is,

$$c_j \frac{dv_j}{dt} + i_{\mathrm{ion}_j} = \frac{v_{j-1} - v_j}{r_{j-1,j}} + \frac{v_{j+1} - v_j}{r_{j,j+1}}.$$
If the compartments have the same length $\Delta x$ and diameter $d$, then the capacitance of a compartment is $C_m \pi d \Delta x$, and the axial resistance is $R_a \Delta x / \pi (d/2)^2$. $C_m$ is called the specific capacitance of the membrane, which is generally taken to be 1 µF/cm². $R_a$ is the axial resistivity, which has different reported values for different cell classes (e.g., 35.4 Ω cm for squid axon). Equation 3.2 then becomes

$$C_m \frac{dv_j}{dt} + i_j = \frac{d}{4 R_a} \cdot \frac{v_{j+1} - 2v_j + v_{j-1}}{\Delta x^2},$$
where we have replaced the total ionic current $i_{\mathrm{ion}_j}$ with the current density $i_j$. The right-hand term, as $\Delta x \to 0$, is just $\partial^2 v / \partial x^2$ at the location of the now-infinitesimal compartment j.

The second special case of equation 3.2 allows us to recover the boundary condition. This is an important exercise since naive discretizations at the ends of the cable have destroyed the second-order accuracy of many simulations. Nerve boundary conditions are that no axial current flows at the end of the cable; the end is sealed. This is implicit in equation 3.2, where the right-hand side consists of only the single term $(v_{j-1} - v_j)/r_{j-1,j}$ when compartment j lies at the end of an unbranched cable.

3.2 Spatial Discretization in a Biological Context: Sections and Segments. Every nerve simulation program solves for the longitudinal spread of voltage and current by approximating the cable equation as a series of compartments connected by resistors (see Figure 4 and equation 3.2). The sum of all the compartment areas is the total membrane area of the whole nerve. Unfortunately, it is usually not clear at the outset how many compartments should be used. Both the accuracy of the approximation and the computation time increase as the number of compartments used to represent the cable increases. When the cable is "short," a single compartment can be made to represent the entire cable adequately. For long cables or highly branched structures, it may be necessary to use a large number of compartments.

This raises the question of how best to manage all the parameters that exist within these compartments. Consider membrane capacitance, which
has a different value in each compartment. Rather than specify the capacitance of each compartment individually, it is better to deal in terms of a single specific membrane capacitance that is constant over the entire cell and have the program compute the values of the individual capacitances from the areas of the compartments. Other parameters such as diameter or channel density may vary widely over short distances, so the graininess of their representation may have little to do with numerically adequate compartmentalization.

Although NEURON is a compartmental modeling program, the specification of biological properties (neuron shape and physiology) has been separated from the numerical issue of compartment size. What makes this possible is the notion of a section, which is a continuous length of unbranched cable. Although each section is ultimately discretized into compartments, values that can vary with position along the length of a section are specified in terms of a continuous parameter that ranges from 0 to 1 (normalized distance). In this way, section properties are discussed without regard to the number of segments used to represent the section. This makes it easy to trade off between accuracy and speed, and enables convenient verification of the numerical correctness of simulations.

Sections are connected together to form any kind of branched tree structure. Figure 2 illustrates how sections are used to represent biologically significant anatomical features. The top of this figure is a cartoon of a neuron with a soma that gives rise to a branched dendritic tree and an axon hillock connected to a myelinated axon. Each biologically significant component of this cell has its counterpart in one of the sections of the NEURON model, as shown in the bottom of Figure 2: the cell body (Soma), axon hillock (AH), myelinated internodes ($I_i$), nodes of Ranvier ($N_i$), and dendrites ($D_i$). Sections allow this kind of functional and anatomical parcellation of the cell to remain foremost in the mind of the person who constructs and uses a NEURON model.

To accommodate requirements for numerical accuracy, NEURON represents each section by one or more segments of equal length (see Figures 3 and 4). The number of segments is specified by the parameter nseg, which can have a different value for each section. At the center of each segment is a node, the location where the internal voltage of the segment is defined. The transmembrane currents over the entire surface area of a segment are associated with its node. The nodes of adjacent segments are connected by resistors.

It is crucial to realize that the location of the second-order correct voltage is not at the edge of a segment but rather at its center, that is, at its node. This is the discretization method employed by NEURON. To allow branching and injection of current at the precise ends of a section while maintaining second-order correctness, extra voltage nodes that represent compartments with zero area are defined at the section ends. It is possible to achieve second-order accuracy with sections whose end nodes have
Figure 2: (Top) Cartoon of a neuron indicating the approximate boundaries between biologically significant structures. The left-hand side of the cell body (Soma) is attached to an axon hillock (AH) that drives a myelinated axon (myelinated internodes $I_i$ alternating with nodes of Ranvier $N_i$). From the right-hand side of the cell body originates a branched dendritic tree ($D_i$). (Bottom) How sections would be employed in a NEURON model to represent these structures.
nonzero area compartments. However, the areas of these terminal compartments would have to be exactly half that of the internal compartments, and extra complexity would be imposed on administration of channel density at branch points. Based on the position of the nodes, NEURON calculates the values of internal model parameters such as the average diameter, axial resistance, and compartment area that are assigned to each segment. Figures 3 and 4 show how an unbranched portion of a neuron, called a neurite (see Figure 3A), is represented by a section with one or more segments. Morphometric analysis generates a series of diameter measurements whose centers lie on the midline of the neurite (the thin axial line in Figure 3B). These measurements and the path lengths between their centers are the dimensions of the section, which can be regarded as a chain of truncated cones or frusta (see Figure 3C). Distance along the length of a section is discussed in terms of the normalized position parameter x. That is, one end of the section corresponds to
Figure 3: (A) Cartoon of an unbranched neurite (thick lines) that is to be represented by a section in a NEURON model. Computer-assisted morphometry generates a file that stores successive measurements of diameter (thin circles) centered at x, y, z coordinates (thin crosses). (B) Each adjacent pair of diameter measurements (thick circles) becomes the parallel faces of a truncated cone or frustum, the height of which is the distance between the measurement locations. The outline of each frustum is shown with thin lines, and a thin centerline passes through the central axis of the chain of solids. (C) The centerline has been straightened so the faces of adjacent frusta are flush with each other. The scale underneath the figure shows the distance along the midline of the section in terms of the normalized position parameter x. The vertical dashed line at x = 0.5 divides the section into two halves of equal length. (D) Electrical equivalent circuit of the section as represented by a single segment (nseg = 1). The open rectangle includes all mechanisms for ionic (noncapacitive) transmembrane currents.
x = 0 and the other end to x = 1. In Figure 3C these locations are depicted as being on the left- and right-hand ends of the section. The locations of the nodes and the boundaries between segments are conveniently specified
Figure 4: How the neurite of Figure 3 would be represented by a section with two segments (nseg = 2). Now the electrical equivalent circuit (bottom) has two nodes. The membrane properties attached to the first and second nodes are based on neurite dimensions and biophysical parameters over the x intervals [0, 0.5] and [0.5, 1], respectively. The three axial resistances are computed from the cytoplasmic resistivity and neurite dimensions over the x intervals [0, 0.25], [0.25, 0.75], and [0.75, 1].
in terms of this normalized position parameter. In general, a section has nseg segments that are demarcated by evenly spaced boundaries at intervals of 1/nseg. The nodes at the centers of these segments are located at $x = (2i - 1)/(2\,\mathrm{nseg})$ where $i$ is an integer in the range [1, nseg]. As we shall see later, x is also used in specifying model parameters or retrieving state variables that are a function of position along a section (see section 4.4).

The special importance of x and nseg lies in the fact that they free the user from having to keep track of the correspondence between segment number and position on the nerve. In early versions of NEURON, all nerve properties were stored in vector variables where the vector index was the segment number. Changing the number of segments was an error-prone and laborious process that demanded a remapping of the relationship between the user's mental image of the biologically important features of the model, on the one hand, and the implementation of this model in a digital computer, on the other. The use of x and nseg insulates the user from the most inconvenient aspects of such low-level details.
When nseg = 1 the entire section is lumped into a single compartment. This compartment has only one node, which is located midway along its length, at x = 0.5 (see Figures 3C and D). The integral of the surface area over the entire length of the section (0 ≤ x ≤ 1) is used to calculate the membrane properties associated with this node. The values of the axial resistors are determined by integrating the cytoplasmic resistivity along the paths from the ends of the section to its midpoint (dashed line in Figure 3C). The left- and right-hand axial resistances of Figure 3D are evaluated over the x intervals [0, 0.5] and [0.5, 1], respectively.

Figure 4 shows what happens when nseg = 2. Now NEURON breaks the section into two segments of equal length that correspond to x intervals [0, 0.5] and [0.5, 1]. The membrane properties over these intervals are attached to the nodes at 0.25 and 0.75, respectively. The three axial resistors $R_{i1}$, $R_{i2}$, and $R_{i3}$ are determined by integrating the path resistance over the x intervals [0, 0.25], [0.25, 0.75], and [0.75, 1].

3.3 Integration Methods. Spatial discretization reduced the cable equation, a partial differential equation with derivatives in space and time, to a set of ordinary differential equations with first-order derivatives in time. Selection of an integration method to solve these equations is guided by concerns of stability, accuracy, and efficiency (Hines & Carnevale, 1995).

NEURON offers the user a choice of two stable implicit integration methods: backward Euler and a variant of Crank-Nicholson (C-N). Backward Euler is the default because of its robust numerical stability properties. Backward Euler can be used with extremely large time steps in order to find the steady-state solution for a linear ("passive") system. It produces good qualitative results even with large time steps, and it works even if some or all of the equations are strictly algebraic relations among states. A more accurate method for small time steps is available by setting the global parameter secondorder to 2. NEURON then uses a variant of the C-N method, in which numerical error is proportional to $\Delta t^2$.

Both of these are implicit integration methods, in which all current balance equations must be solved simultaneously. The backward Euler algorithm does not resort to iteration to deal with nonlinearities, since its numerical error is proportional to $\Delta t$ anyway. The special feature of the C-N variant is its use of a staggered time step algorithm to avoid iteration of nonlinear equations (see section 3.3.1). This converts the current balance part of the problem to one that requires only the solution of simultaneous linear equations.

Although the C-N method is formally stable, it is sometimes plagued by spurious large-amplitude oscillations (see Hines and Carnevale, 1995, Figure 7). This occurs when $\Delta t$ is too large, as may occur in models that involve fast voltage clamps or have compartments coupled by very small resistances. However, C-N is safe in most situations, and it can be much more efficient than backward Euler for a given accuracy.
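The stability contrast between the two methods can be seen on the scalar test equation $dv/dt = -v/\tau$ (a textbook illustration, not specific to NEURON). One step of size $\Delta t$ gives

$$v_{n+1} = \frac{v_n}{1 + \Delta t/\tau} \;\; \text{(backward Euler)}, \qquad v_{n+1} = \frac{1 - \Delta t/2\tau}{1 + \Delta t/2\tau}\, v_n \;\; \text{(C-N)}.$$

The backward Euler factor lies in $(0, 1)$ for every $\Delta t > 0$, so the numerical solution decays monotonically no matter how large the step. The C-N factor also has magnitude less than 1 (hence the formal stability), but it approaches $-1$ as $\Delta t/\tau$ grows, so states with time constants much smaller than $\Delta t$ alternate in sign and decay only slowly: the spurious oscillations described above.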
These two methods are almost identical in terms of computational cost per time step (see section 3.3.1). Since the current balance equations have the structure of a tree (there are no current loops), direct gaussian elimination is optimal for their solution (Hines, 1984). This takes exactly the same number of computer operations as would be required for an unbranched cable with the same number of compartments.

The best way to determine the method of choice for any particular problem is to compare both methods with several values of $\Delta t$ to see which allows the largest $\Delta t$ consistent with the desired accuracy. In performing such trials, one must remember that the stability properties of a simulation depend on the entire system that is being modeled. Because of interactions between the "biological" components and any "nonbiological" elements, such as stimulators or voltage clamps, the time constants of the entire system may be different from those of the biological components alone. A current source (perfect current clamp) does not affect stability because it does not change the time constants. Any other signal source imposes a load on the compartment to which it is attached, changing the time constants and potentially requiring use of a smaller time step to avoid numerical oscillations in the C-N method. The more closely a signal source approximates a voltage source (perfect voltage clamp), the greater this effect will be.
3.3.1 Efficiency. Nonlinear equations generally need to be solved iteratively to maintain second-order correctness. However, voltage-dependent membrane properties, which are typically formulated in analogy to Hodgkin-Huxley (HH) type channels, allow the cable equation to be cast in a linear form, still second order correct, that can be solved without iterations. A direct solution of the voltage equations at each time step $t \to t + \Delta t$ using the linearized membrane current $I(V, t) = G \cdot (V - E)$ is sufficient as long as the slope conductance $G$ and the effective reversal potential $E$ are known to second order at time $t + 0.5\Delta t$. HH type channels are easy to solve at $t + 0.5\Delta t$ since the conductance is a function of state variables, which can be computed using a separate time step that is offset by $0.5\Delta t$ with respect to the voltage equation time step. That is, to integrate a state from $t - 0.5\Delta t$ to $t + 0.5\Delta t$, we require only a second-order correct value for the voltage-dependent rates at the midpoint time t.

Figure 5 contrasts this approach with the common technique of replacing nonlinear coefficients by their values at the beginning of a time step. For HH equations in a single compartment, the staggered time grid approach converts four simultaneous nonlinear equations at each time step to four independent linear equations that have the same order of accuracy at each time step. Since the voltage-dependent rates use the voltage at the midpoint of the integration step, integration of channel states can be done analytically in just a single addition and multiplication operation and two table-lookup operations.
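To make the operation count concrete: for a single gating state $m$ obeying $dm/dt = (m_\infty(v) - m)/\tau(v)$, with $v$ held at its second-order correct midpoint value over the state step, the exact update is (a standard identity, written here with hypothetical table names $C$ and $D$)

$$m(t + \Delta t) = m_\infty + (m(t) - m_\infty)\, e^{-\Delta t/\tau} = C(v)\, m(t) + D(v),$$

where $C(v) = e^{-\Delta t/\tau(v)}$ and $D(v) = \left(1 - e^{-\Delta t/\tau(v)}\right) m_\infty(v)$ are tabulated as functions of voltage: two table lookups, one multiplication, and one addition per state per step.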
Figure 5: The equations shown at the top of the figure are computed using the Crank-Nicholson method. (Top) $x(t + \Delta t)$ and $y(t + \Delta t)$ are determined using their values at time t. (Bottom) Staggered time steps yield decoupled linear equations. $y(t + \Delta t/2)$ is determined using $x(t)$, after which $x(t + \Delta t)$ is determined using $y(t + \Delta t/2)$.
Although this efficient scheme achieves second-order accuracy, the trade-off is that the tables depend on the value of the time step and must be recomputed whenever the time step changes.

Neuronal architecture can also be exploited to increase computational efficiency. Since neurons generally have a branched tree structure with no loops, the number of arithmetic operations required to solve the cable equation by gaussian elimination is exactly the same as for an unbranched cable with the same number of compartments. That is, we need only $O(N)$ arithmetic operations
for the equations that describe N compartments connected in the form of a tree, even though standard gaussian elimination generally takes $O(N^3)$ operations to solve N equations in N unknowns. The tremendous efficiency increase results from the fact that in a tree, one can always find a leaf compartment i that is connected to only one other compartment j, so that the equation for compartment i (see equation 3.3a) involves only the voltages in compartments i and j, and the voltage in leaf compartment i is involved only in the equations for compartments i and j (see equations 3.3a and 3.3b):

$$a_{ii} V_i + a_{ij} V_j = b_i \tag{3.3a}$$
$$a_{ji} V_i + a_{jj} V_j + [\text{terms from other compartments}] = b_j. \tag{3.3b}$$

Using equation 3.3a to eliminate the $V_i$ term from equation 3.3b, which requires O(1) (instead of N) operations, gives equation 3.4 and leaves N − 1 equations in N − 1 unknowns:

$$a'_{jj} V_j + [\text{terms from other compartments}] = b'_j, \tag{3.4}$$

where $a'_{jj} = a_{jj} - (a_{ij} a_{ji} / a_{ii})$ and $b'_j = b_j - (b_i a_{ji} / a_{ii})$. This strategy can be applied until there is only one equation in one unknown. Assume that we know the solution to these N − 1 equations, and in particular that we know $V_j$. Then we can find $V_i$ from equation 3.3a in O(1) steps. Therefore, the effort to solve these N equations is O(1) plus the effort needed to solve N − 1 equations. The number of operations required is independent of the branching structure, so a tree of N compartments uses exactly the same number of arithmetic operations as a one-dimensional cable of N compartments.

Efficient gaussian elimination requires an ordering that can be found by a simple algorithm: choose the equation with the current minimum number of terms as the equation to use in the elimination step. This minimum degree ordering algorithm is commonly employed in standard sparse matrix solver packages. For example, NEURON's Matrix class uses the matrix library written by Stewart and Leyk (1994). This and many other sparse matrix packages are freely available on the Internet at http://www.netlib.org.
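To see the $O(N)$ bookkeeping concretely, here is a minimal worked instance (an unbranched three-compartment chain with symbolic coefficients; the same steps apply to any tree):

$$a_{11} V_1 + a_{12} V_2 = b_1, \qquad a_{21} V_1 + a_{22} V_2 + a_{23} V_3 = b_2, \qquad a_{32} V_2 + a_{33} V_3 = b_3.$$

Eliminating the leaf $V_1$ updates only the middle equation, $a'_{22} = a_{22} - a_{12}a_{21}/a_{11}$ and $b'_2 = b_2 - b_1 a_{21}/a_{11}$; eliminating the leaf $V_3$ likewise gives $a''_{22} = a'_{22} - a_{23}a_{32}/a_{33}$ and $b''_2 = b'_2 - b_3 a_{32}/a_{33}$. Then $V_2 = b''_2 / a''_{22}$, and back-substitution recovers $V_1 = (b_1 - a_{12}V_2)/a_{11}$ and $V_3 = (b_3 - a_{32}V_2)/a_{33}$: a constant number of operations per compartment, hence $O(N)$ overall.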
4 The NEURON Simulation Environment

No matter how powerful and robust its computational engine may be, the real utility of any software tool depends largely on its ease of use. Therefore a great deal of effort has been invested in the design of the simulation environment provided by NEURON. In this section we first briefly consider general aspects of the high-level language used for writing NEURON programs. Then we turn to an example of a model of a nerve cell to introduce specific aspects of the user environment, after which we cover these features more thoroughly.

4.1 The hoc Interpreter. NEURON incorporates a programming language based on hoc, a floating-point calculator with C-like syntax described by Kernighan and Pike (1984). This interpreter has been extended by the addition of object-oriented syntax (not including polymorphism or inheritance) that can be used to implement abstract data types and data encapsulation. Other extensions include functions that are specific to the domain of neural simulations and functions that implement a graphical user interface (see below). With hoc one can quickly write short programs that meet most problem-specific needs. The interpreter is used to execute simulations, customize the user interface, optimize parameters, analyze experimental data, calculate new variables such as impulse propagation velocity, and so forth. NEURON simulations are not subject to the performance penalty often associated with interpreted (as opposed to compiled) languages because computationally intensive tasks are carried out by highly efficient precompiled code. Some of these tasks are related to integration of the cable equation, and others are involved in the emulation of biological mechanisms that generate and regulate chemical and electrical signals.

NEURON provides a built-in implementation of the microemacs text editor. Since the choice of a programming editor is highly personal, NEURON will also accept hoc code in the form of straight ASCII files created with any other editor.

4.2 A Specific Example. In the following example we show how NEURON might be used to model the cell in the top of Figure 6. Comments in the hoc code are preceded by double slashes (//), and code blocks are enclosed in curly brackets ({}).

4.2.1 First Step: Establish Model Topology. One very important feature of NEURON is that it allows the user to think about models in terms that are familiar to the neurophysiologist, keeping numerical issues (e.g., number of spatial segments) entirely separate from the specification of morphology and biophysical properties. This separation is achieved through the use of one-dimensional cable "sections" as the basic building block from which model cells are constructed. These sections can be connected together to
Figure 6: (Top) Cartoon of a neuron with a soma, three dendrites, and an unmyelinated axon (not to scale). The diameter of the spherical soma is 50 µm. Each dendrite is 200 µm long and tapers uniformly along its length from 10 µm diameter at its site of origin on the soma, to 3 µm at its distal end. The unmyelinated cylindrical axon is 1000 µm long and has a diameter of 1 µm. An electrode (not shown) is inserted into the soma for intracellular injection of a stimulating current. (Bottom) Topology of a NEURON model that represents this cell (see text for details).
form any kind of branched cable and endowed with properties that may vary with position along their length.

The idealized neuron in Figure 6 has several anatomical features whose existence and spatial relationships we want the model to include: a cell body (soma), three dendrites, and an unmyelinated axon. The following hoc code sets up the basic topology of the model:

create soma, axon, dendrite[3]
connect axon(0), soma(0)
for i=0,2 {
    connect dendrite[i](0), soma(1)
}
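At this point the branching pattern can already be checked from the interpreter. A minimal sketch (topology() is a NEURON built-in that prints an ASCII schematic of the section tree; the exact appearance of the printout may vary between versions):

// print a schematic of how the sections are connected
topology()

Each section should appear once, with the three dendrites attached to the 1 end of soma and the axon to its 0 end.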
The program starts by creating named sections that correspond to the important anatomical features of the cell. These sections are attached to each other using connect statements. As noted previously, each section has a normalized position parameter x, which ranges from 0 at one end to 1 at the other. Because the axon and dendrites arise from opposite sides of the cell body, they are connected to the 0 and 1 ends of the soma section (see the
bottom of Figure 6). A child section can be attached to any location on the parent, but attachment at locations other than 0 or 1 is generally employed only in special cases such as spines on dendrites.

4.2.2 Second Step: Assign Anatomical and Biophysical Properties. Next we set the anatomical and biophysical properties of each section. Each section has its own segmentation, length, and diameter parameters, so it is necessary to indicate which section is being referenced. There are several ways to declare which is the currently accessed section, but here the most convenient is to precede blocks of statements with the appropriate section name:

// specify anatomical and biophysical properties
soma {
    nseg = 1              // compartmentalization parameter
    L = 50                // [µm] length
    diam = 50             // [µm] diameter
    insert hh             // standard Hodgkin-Huxley currents
    gnabar_hh = 0.5*0.120 // max HH sodium conductance
}
axon {
    nseg = 20
    L = 1000
    diam = 1
    insert hh
}
for i=0,2 dendrite[i] {
    nseg = 5
    L = 200
    diam(0:1) = 10:3      // dendritic diameter tapers along its length
    insert pas            // standard passive current
    e_pas = -65           // [mV] equilibrium potential for passive current
    g_pas = 0.001         // [siemens/cm2] conductance for passive current
}
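To confirm what was inserted where, it can help to dump every section's properties. A minimal sketch (forall iterates over all sections, and psection() is a NEURON built-in that prints the geometry and mechanisms of the currently accessed section):

// print nseg, L, diam, and inserted mechanisms for every section
forall psection()

The printout should show hh in soma and axon, pas in each dendrite[i], and the nseg, L, and diam values assigned above.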
The fineness of the spatial grid is determined by the compartmentalization parameter nseg. Here the soma is lumped into a single compartment (nseg = 1), while the axon and each of the dendrites are broken into several subcompartments (nseg = 20 and 5, respectively). In this example, we specify the geometry of each section by assigning values directly to section length and diameter. This creates a stylized model. Alternatively, one can use the 3D method, in which NEURON computes
section length and diameter from a list of (x, y, z, diam) measurements (see section 4.5).

Since the axon is a cylinder, the corresponding section has a fixed diameter along its entire length. The spherical soma is represented by a cylinder with the same surface area as the sphere. The dimensions and electrical properties of the soma are such that its membrane will be nearly isopotential, so the cylinder approximation is not a significant source of error. If chemical signals such as intracellular ion concentrations were important in this model, it would be necessary to approximate not only the surface area but also the volume of the soma.

Unlike the axon, the dendrites become progressively narrower with distance from the soma. Furthermore, unlike the soma, they are too long to be lumped into a single compartment with constant diameter. The taper of the dendrites is accommodated by assigning a sequence of decreasing diameters to their segments. This is done through the use of range variables, which are discussed in section 4.4.

In this model the soma and axon contain HH sodium, potassium, and leak channels (Hodgkin & Huxley, 1952), while the dendrites have constant, linear ("passive") ionic conductances. The insert statement assigns the biophysical mechanisms that govern electrical signals in each section. Particular values are set for the density of sodium channels on the soma (gnabar_hh) and for the ionic conductance and equilibrium potential of the passive current in the dendrites (g_pas and e_pas). More information about membrane mechanisms is presented in section 4.6.

4.2.3 Third Step: Attach Stimulating Electrodes. This code emulates the use of an electrode to inject a stimulating current into the soma by placing a current pulse stimulus in the middle of the soma section. The stimulus starts at t = 1 ms, lasts for 0.1 ms, and has an amplitude of 60 nA.

objref stim
soma stim = new IClamp(0.5) // put it in middle of soma
stim.del = 1                // [ms] delay
stim.dur = 0.1              // [ms] duration
stim.amp = 60               // [nA] amplitude
The stimulating electrode is an example of a point process. Point processes are discussed in more detail in section 4.6.

4.2.4 Fourth Step: Control Simulation Time Course. At this point all model parameters have been specified. All that remains is to define the simulation parameters, which govern the time course of the simulation, and write some code that executes the simulation. This is generally done in two procedures. The first procedure initializes the membrane potential and the states of the inserted mechanisms (channel states, ionic concentrations, extracellular potential next to the membrane).
The second procedure repeatedly calls the built-in single-step integration function fadvance() and saves, plots, or computes functions of the desired output variables at each step. In this procedure it is possible to change the values of model parameters during a run.

dt = 0.05          // [ms] integration time step
tstop = 5          // [ms]
finitialize(-65)   // initialize membrane potential, state variables, and time

proc integrate() {
    // show starting time and initial somatic membrane potential
    print t, soma.v(0.5)
    while (t < tstop) {
        fadvance() // advance solution by dt
        // function calls to save or plot results would go here
        // show present time and somatic membrane potential
        print t, soma.v(0.5)
        // statements that change model parameters would go here
    }
}
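Once these definitions have been read by the interpreter, a run is launched by hand. A minimal usage sketch (the second run simply assumes one halves dt and reinitializes before integrating again):

integrate()       // run 5 ms with dt = 0.05 ms
dt = 0.025        // try a finer time step
finitialize(-65)  // reset t, membrane potential, and channel states
integrate()       // rerun; comparing the printouts gauges accuracy

Comparing runs at two values of dt is the same accuracy check recommended for choosing an integration method in section 3.3.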
The built-in function finitialize() initializes time t to 0, membrane potential v to −65 mV throughout the model, and the HH state variables m, n, and h to their steady-state values at v = −65 mV. Initialization can also be performed with a user-written routine if there are special requirements that finitialize() cannot accommodate, such as nonuniform membrane potential.

Both the integration time step dt and the solution time t are global variables. For this example dt = 50 µs. The while(){} statement repeatedly calls fadvance(), which integrates the model equations over the interval dt and increments t by dt on each call. For this example, the time and somatic membrane potential are displayed at each step. This loop exits when t ≥ tstop.

When this program is first processed by the NEURON interpreter, the model is set up and initialized, but the integrate() procedure is not executed. When the user enters an integrate() statement in the NEURON interpreter window, the simulation advances for 5 ms using 50 µs time steps.

4.3 Section Variables. Three parameters apply to the section as a whole: cytoplasmic resistivity Ra (Ω cm), the section length L, and the compartmentalization parameter nseg. The first two are ordinary in the sense that they
do not affect the structure of the equations that describe the model. Note that the hoc code specifies values for L but not for Ra. This is because each section in a model is likely to have a different length, whereas the cytoplasm (and therefore Ra) is usually assumed to be uniform throughout the cell. The default value of Ra is 35.4 Ω cm, which is appropriate for invertebrate neurons. Like L it can be assigned a new value in any or all sections (e.g., ~200 Ω cm for mammalian neurons).

The user can change the compartmentalization parameter nseg without having to modify any of the statements that set anatomical or biophysical properties. However, if parameters vary with position in a section, care must be taken to ensure that the model incorporates the spatial detail inherent in the parameter description.

4.4 Range Variables. Like dendritic diameter in our example, most cellular properties are functions of the position parameter x. NEURON has special provisions for dealing with these properties, which are called range variables. Other examples of range variables are the membrane potential v and ionic conductance parameters such as the maximum HH sodium conductance gnabar_hh (siemens/cm2). Range variables enable the user to separate property specification from segment number.

A range variable is assigned a value in one of two ways. The simpler and more common is as a constant. For example, the statement axon.diam = 10 asserts that the diameter of the axon is uniform over its entire length.

The syntax for a property that changes along a length of a section is rangevar(xmin:xmax) = e1:e2. The four italicized symbols are expressions, with e1 and e2 being the values of the property at xmin and xmax, respectively. The position expressions must meet the constraint 0 ≤ xmin ≤ xmax ≤ 1. Linear interpolation is used to assign the values of the property at the segment centers that lie in the position range [xmin, xmax]. In this manner a continuously varying property can be approximated by a piecewise linear function. If the range variable is diameter, neither e1 nor e2 should be 0, or the corresponding axial resistance will be infinite. In our model neuron, the simple dendritic taper is specified by diam(0:1) = 10:3 and nseg = 5. This results in five segments that have centers at x = 0.1, 0.3, 0.5, 0.7, and 0.9 and diameters of 9.3, 7.9, 6.5, 5.1, and 3.7, respectively.

The value of a range variable at the center of a segment can appear in any expression using the syntax rangevar(x) in which 0 ≤ x ≤ 1. The value returned is the value at the center of the segment containing x, not the linear interpolation of the values stored at the centers of adjacent segments. If the parentheses are omitted, the position defaults to a value of 0.5 (middle of the section).

A special form of the for statement is available: for (var) stmt. For each value of the normalized position parameter x that defines the center
of each segment in the selected section (along with positions 0 and 1), this statement assigns var that value and executes the stmt. This hoc code would print the membrane potential as a function of physical position (in µm) along the axon:

    axon for (x) print x*L, v(x)
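As an illustration, a hypothetical snippet (not from the example program) combines the taper syntax with this for statement to inspect the dendritic diameters discussed above; at the five interior segment centers it would print 9.3, 7.9, 6.5, 5.1, and 3.7:

    dendrite[0] {
        nseg = 5
        diam(0:1) = 10:3         // linear taper from 10 um down to 3 um
        for (x) print x, diam(x) // x = 0 and x = 1 report the nearest
                                 // segment center's value, not an endpoint
    }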
4.5 Specifying Geometry: Stylized versus 3D. There are two ways to specify section geometry. Our example uses the stylized method, which simply assigns values to section length and diameter. This is most appropriate when cable length and diameter are authoritative and 3D shape is irrelevant.

If the model is based on anatomical reconstruction data (quantitative morphometry) or if 3D visualization is paramount, it is better to use the 3D method. This approach keeps the anatomical data in a list of (x, y, z, diam) "points." The first point is associated with the end of the section that is connected to the parent (this is not necessarily the zero end), and the last point is associated with the opposite end. There must be at least two points per section, and they should be ordered in terms of monotonically increasing arc length. This pt3d list, which is the authoritative definition of the shape of the section, automatically determines the length and diameter of the section.

When the pt3d list is nonempty, the shape model used for a section is a sequence of frusta. The pt3d points define the locations and diameters of the ends of these frusta. The effective area, diameter, and resistance of each segment are computed from this sequence of points by trapezoidal integration along the segment length. This takes into account the extra area introduced by diameter changes; even degenerate cones of 0 length can be specified (i.e., two points with the same coordinates but different diameters), which add area but not length to the section. No attempt is made to deal with the effects of centroid curvature on surface area. The number of 3D points used to describe a shape has nothing to do with nseg and does not affect simulation speed. (A hypothetical sketch of this method appears just below.)

4.6 Density Mechanisms and Point Processes. The insert statement assigns biophysical mechanisms, which govern electrical and (if present) chemical signals, to a section. Many sources of electrical and chemical signals are distributed over the membrane of the cell. These density mechanisms are described in terms of current per unit area and conductance per unit area; examples include voltage-gated ion channels such as the HH currents. However, density mechanisms are not the most appropriate representation of all signal sources. Synapses and electrodes are best described in terms of localized absolute current in nanoamperes and conductance in microsiemens. These are called point processes.
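Returning to the 3D method of section 4.5, a minimal hypothetical sketch (the coordinates and diameters here are invented for illustration) might specify the axon's shape as follows:

    axon {
        pt3dclear()            // start a fresh pt3d list for this section
        pt3dadd(0, 0, 0, 1)    // (x, y, z, diam); first point is the end
                               // connected to the parent
        pt3dadd(50, 10, 0, 1)  // an intermediate measured point
        pt3dadd(100, 20, 0, 1) // last point is the opposite end; L and diam
                               // now follow from these points
    }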
An object syntax is used to manage the creation, insertion, attributes, and destruction of point processes. For example, a current clamp (electrode for injecting a current) is created by declaring an object variable and assigning it a new instance of the IClamp object class. When a point process is no longer referenced by any object variable, the point process is removed from the section and destroyed. In our example, redeclaring stim with the statement objref stim would destroy the pulse stimulus, since no other object variable is referencing it. The location of a point process can be changed with no effect on its other attributes. In our example, the statement dendrite[2] stim.loc(1) would move the current stimulus to the distal end of the third dendrite.

Many user-defined density mechanisms and point processes can be simultaneously present in each compartment of a neuron. One important difference between density mechanisms and point processes is that any number of the same kind of point process can exist at the same location.

User-defined density mechanisms and point processes can be linked into NEURON using the model description language NMODL. This lets the user focus on specifying the equations for a channel or ionic process without regard to its interactions with other mechanisms. The NMODL translator then constructs the appropriate C program, which is compiled and becomes available for use in NEURON. This program properly and efficiently computes the total current of each ionic species used, as well as the effect of that current on ionic concentration, reversal potential, and membrane potential. An extensive discussion of NMODL is beyond the scope of this article, but its major advantages can be listed succinctly.

1. Interface details to NEURON are handled automatically, and there are a great many such details. NEURON needs to know that model states are range variables and which model parameters can be assigned values and evaluated from the interpreter. Point processes need to be accessible via the interpreter object syntax, and density mechanisms need to be added to a section when the "insert" statement is executed. If two or more channels use the same ion at the same place, the individual current contributions need to be added together to calculate a total ionic current.

2. Consistency of units is ensured.

3. Mechanisms described by kinetic schemes are written with a syntax in which the reactions are clearly apparent. The translator provides tremendous leverage by generating a large block of C code that calculates the analytic Jacobian and the state fluxes.

4. There is often a great increase in clarity since statements are at the model level instead of the C programming level and are independent of the numerical method. For example, sets of differential and
nonlinear simultaneous equations are written using an expression syntax such as

    x' = f(x, y, t)
    ~ g(x, y) = h(x, y)
where the prime refers to the derivative with respect to time (multiple primes such as x'' refer to higher derivatives), and the tilde introduces an algebraic equation. The algebraic portion of such systems of equations is solved by Newton's method, and a variety of methods are available for solving the differential equations, such as Runge-Kutta or backward Euler.

5. Function tables can be generated automatically for efficient computation of complicated expressions.

6. Default initialization behavior of a channel can be specified.

4.7 Graphical Interface. The user is not limited to operating within the traditional code-based command-mode environment. Among its many extensions to hoc, NEURON includes functions for implementing a fully graphical, windowed interface. Through this interface, and without having to write any code at all, the user can effortlessly create and arrange displays of menus, parameter value editors, graphs of parameters and state variables, and views of the model neuron. Anatomical views, called space plots, can be explored, revealing what mechanisms and point processes are present and where they are located.

The purpose of NEURON's graphical interface is to promote a match between what the user thinks is inside the computer and what is actually there. These visualization enhancements are a major aid to maintaining conceptual control over the simulation because they provide immediate answers to questions about what is being represented in the computer.

The interface has no provision for constructing neuronal topology, a conscious design choice based on the strong likelihood that a graphical toolbox for building neuronal topologies would find little use. Small models with simple topology are so easily created in hoc that a graphical topology editor is unnecessary. More complex models are too cumbersome to deal with using a graphical editor. It is best to express the topological specifications of complex stereotyped models through algorithms, written in hoc, that generate the topology automatically. Biologically realistic models often involve hundreds or thousands of sections, whose dimensions and interconnections are contained in large data tables generated by hours of painstaking quantitative morphometry. These tables are commonly read by hoc procedures that in turn create and connect the required sections without operator intervention.

The basic features of the graphical interface and how to use it to monitor and control simulations are discussed elsewhere (Moore & Hines, 1996).
However, several sophisticated analysis and simulation tools that have special utility for nerve simulation are worthy of mention:

• The Function Fitter optimizes a parameterized mathematical expression to minimize the least squared difference between the expression and data.

• The Run Fitter allows one to optimize several parameters of a complete neuron model to experimental data. This is most useful in the context of voltage clamp data that are contaminated by incomplete space clamp or models that cannot be expressed in closed form, such as kinetic schemes for channel conductance.

• The Electrotonic Workbench plots small signal input and transfer impedance and voltage attenuation as functions of space and frequency (Carnevale, Tsai, & Hines, 1996). These plots include the neuromorphic (Carnevale, Tsai, Claiborne, & Brown, 1995) and L versus x (O'Boyle, Carnevale, Claiborne, & Brown, 1996) renderings of the electrotonic transformation (Brown et al., 1992; Tsai, Carnevale, Claiborne, & Brown, 1994; Zador, Agmon-Snir, & Segev, 1995). By revealing the effectiveness of signal transfer, the Workbench quickly provides insight into the "functional shape" of a neuron.

All interactions with these and other tools take place in the graphical interface, and no interpreter programming is needed to use them. However, they are constructed entirely within the interpreter and can be modified when special needs require.

4.8 Object-Oriented Syntax.

4.8.1 Neurons. It is often convenient to deal with groups of sections that are related. Therefore NEURON provides a data class called a SectionList that can be used to identify subsets of sections. Section lists fit nicely with the "regular expression" method of selecting sections, used in earlier implementations of NEURON, in that the section list is easily constructed by using regular expressions to add and delete sections. After the list is constructed it is available for reuse, and it is much more efficient to loop over the sections in a section list than to pick out the sections accepted by a combination of regular expressions. This code

    objref alldend
    alldend = new SectionList()
    forsec "dend" alldend.append()
    forsec alldend print secname()
forms a list of all the sections whose names contain the string “dend” and then iterates over the list, printing the name of each section in it. For the example program presented here, this would generate the following output
in the NEURON interpreter window:

    dendrite[0]
    dendrite[1]
    dendrite[2]
although in this very simple example it would clearly have been easy enough to loop over the array of dendrites directly, for example:

    for i = 0,2 {
        dendrite[i] print secname()
    }
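A section list built this way can then be reused wherever a set of sections is needed. A hypothetical sketch (the values are invented for illustration) might assign passive membrane properties to every dendritic section in the list:

    forsec alldend {
        insert pas    // add the built-in passive mechanism
        g_pas = 0.001 // [S/cm2] hypothetical conductance density
        e_pas = -65   // [mV] hypothetical reversal potential
    }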
4.8.2 Networks. To help the user manage very large simulations, the interpreter syntax has been extended to facilitate the construction of hierarchical objects. This is illustrated by the following code fragment, which specifies a pattern for a simple stylized neuron consisting of three dendrites connected to one end of a soma and an axon connected to the other end:

    begintemplate Cell1
        public soma, dendrite, axon
        create soma, dendrite[3], axon
        proc init() {
            for i=0,2 connect dendrite[i](0), soma(0)
            connect axon(0), soma(1)
            axon insert hh
        }
    endtemplate Cell1
Whenever a new instance of this pattern is created, the init() procedure automatically connects the soma, dendrite, and axon sections together. A complete pattern would also specify default membrane properties, as well as the number of segments for each section. Names that can be referenced outside the pattern are listed in the public statement. In this case, since init is not in the list, the user could not reinitialize by calling the init() procedure. Public names are referenced through a dot notation.

The particular benefit of using templates ("classes" in standard object-oriented terminology) is the fact that they can be employed to create any number of instances of a pattern. For example,

    objref cell[10][10]
    for i=0,9 for j=0,9 cell[i][j] = new Cell1()
creates an array of one hundred objects of type Cell1 that can be referenced individually by the object variable cell. In this example, cell[4][5].axon.gnabar_hh(0.5) is the value of the maximum HH sodium conductance in the middle of the axon of cell[4][5].
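The same dot notation also supports assignment across the network; for instance, a hypothetical one-liner (the conductance value is invented) that sets a uniform axonal sodium channel density in every cell:

    // hypothetical: same axonal sodium density throughout the array
    for i=0,9 for j=0,9 cell[i][j].axon.gnabar_hh = 0.1 // [S/cm2]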
As this example implies, templates offer a natural syntax for the creation of networks. However, it is entirely up to the user to organize the templates logically so that they appropriately reflect the structure of the problem. Generally any given structural organization can be viewed as a hierarchy of container classes, such as cells, microcircuits, layers, or networks. The important issue is how much effort is required for the concrete network representation to support a range of logical views of the same abstract network. A logical view that organizes the cells differently may not be easy to compute if the network is built as an elaborate hierarchy. This kind of pressure tends to encourage relatively flat organizations that make it easier to implement functions that search for specific information. The bottom line is that network simulation design remains an ad hoc process that requires careful programming judgment.

One very important class of logical views that are not generally organizable as a hierarchy are those of synaptic organization. In connecting cells with synapses, one is often driven to deal with general graphs, which is to say no structure at all. In addition to the notions of classes and objects (a synapse is an object with a pre- and a postsynaptic logical connection), the interpreter offers one other fundamental language feature that can be useful in dealing with objects that are collections of other objects: the notion of iterators, taken from the Sather programming language (Murer, Omohundro, Stoutamire, & Szyperski, 1996). This is a separation of the process of iteration from that of "what is to be done for each item." If a programmer implements one or more iterators in a collection class, the user of the class does not need to know how the class indexes its items. Instead the class will return each item in turn for execution in the context of the loop body. This allows the user to write

    for layer1.synapses(syn, type) {
        // statements that manipulate the object
        // reference named "syn" (the iterator
        // causes "syn" to refer, in turn, to
        // each synapse of a certain type in
        // the layer1 object)
    }
without being aware of the possibly complicated process of picking out these synapses from the layer (that is the responsibility of the author of the class of which layer1 is an instance). It must be emphasized, regrettably, that these kinds of language features, though very useful, do not impose any policy with regard to the design decisions that users must make in building their networks. Different programmers express very different designs on the same language base, with the consequence that it is more often than not infeasible to reconcile slightly different representations of even very similar concepts.
An example of a useful way to deal uniformly with the issue of synaptic connectivity is the policy implemented in NEURON by Lytton (1996). This implementation uses the normal NMODL methodology to define a synaptic conductance model and enclose it within a framework that manages network connectivity.

5 Summary

The recent striking expansion in the use of simulation tools in the field of neuroscience has been encouraged by the rapid growth of quantitative observations that both stimulate and constrain the formulation of new hypotheses of neuronal function, and it has been enabled by the availability of ever-increasing computational power at low cost. These factors have motivated the design and implementation of NEURON, the goal of which is to provide a powerful and flexible environment for simulations of individual neurons and networks of neurons. NEURON has special features that accommodate the complex geometry and nonlinearities of biologically realistic models, without interfering with its ability to handle more speculative models that involve a high degree of abstraction.

As we note in this article, one particularly advantageous feature is that the user can specify the physical properties of a cell without regard for the strictly computational concern of how many compartments are employed to represent each of the cable sections. In a future publication, we will examine how the NMODL translator is used to define new membrane channels and calculate ionic concentration changes. Another will describe the Vector class. In addition to providing very efficient implementations of frequently needed operations on lists of numbers, the Vector class offers a great deal of programming leverage, especially in the management of network models.

NEURON source code, executables, and documents are available at http://neuron.duke.edu and http://www.neuron.yale.edu, and by ftp from ftp.neuron.yale.edu.

Acknowledgments

We thank John Moore, Zach Mainen, Bill Lytton, David Jaffe, and the many other users of NEURON for their encouragement, helpful suggestions, and other contributions. This work was supported by NIH grant NS 11613 (Computer Methods for Physiological Problems) to M.L.H. and by the Yale Neuroengineering and Neuroscience Center.

References

Bernander, O., Douglas, R. J., Martin, K. A. C., & Koch, C. (1991). Synaptic background activity influences spatiotemporal integration in single pyramidal cells. Proc. Nat. Acad. Sci., 88, 11569–11573.
Brown, T. H., Zador, A. M., Mainen, Z. F., & Claiborne, B. J. (1992). Hebbian computations in hippocampal dendrites and spines. In T. McKenna, J. Davis, & S. F. Zornetzer (Eds.), Single Neuron Computation (pp. 81–116). San Diego: Academic Press.

Carnevale, N. T., & Rosenthal, S. (1992). Kinetics of diffusion in a spherical cell: I. No solute buffering. J. Neurosci. Meth., 41, 205–216.

Carnevale, N. T., Tsai, K. Y., Claiborne, B. J., & Brown, T. H. (1995). The electrotonic transformation: A tool for relating neuronal form to function. In G. Tesauro, D. S. Touretzky, & T. K. Leen (Eds.), Advances in Neural Information Processing Systems (vol. 7, pp. 69–76). Cambridge, MA: MIT Press.

Carnevale, N. T., Tsai, K. Y., & Hines, M. L. (1996). The Electrotonic Workbench. Society for Neuroscience Abstracts, 22, 1741.

Cauller, L. J., & Connors, B. W. (1992). Functions of very distal dendrites: Experimental and computational studies of layer I synapses on neocortical pyramidal cells. In T. McKenna, J. Davis, & S. F. Zornetzer (Eds.), Single Neuron Computation (pp. 199–229). San Diego: Academic Press.

Destexhe, A., Babloyantz, A., & Sejnowski, T. J. (1993). Ionic mechanisms for intrinsic slow oscillations in thalamic relay neurons. Biophys. J., 65, 1538–1552.

Destexhe, A., Contreras, D., Sejnowski, T. J., & Steriade, M. (1994). A model of spindle rhythmicity in the isolated thalamic reticular nucleus. J. Neurophysiol., 72, 803–818.

Destexhe, A., Contreras, D., Steriade, M., Sejnowski, T. J., & Huguenard, J. R. (1996). In vivo, in vitro and computational analysis of dendritic calcium currents in thalamic reticular neurons. J. Neurosci., 16, 169–185.

Destexhe, A., McCormick, D. A., & Sejnowski, T. J. (1993). A model for 8–10 Hz spindling in interconnected thalamic relay and reticularis neurons. Biophys. J., 65, 2474–2478.

Destexhe, A., & Sejnowski, T. J. (1995). G-protein activation kinetics and spillover of GABA may account for differences between inhibitory responses in the hippocampus and thalamus. Proc. Nat. Acad. Sci., 92, 9515–9519.

Häusser, M., Stuart, G., Racca, C., & Sakmann, B. (1995). Axonal initiation and active dendritic propagation of action potentials in substantia nigra neurons. Neuron, 15, 637–647.

Hines, M. (1984). Efficient computation of branched nerve equations. Int. J. Bio-Med. Comput., 15, 69–76.

Hines, M. (1989). A program for simulation of nerve equations with branching geometries. Int. J. Bio-Med. Comput., 24, 55–68.

Hines, M. (1993). NEURON—a program for simulation of nerve equations. In F. Eeckman (Ed.), Neural Systems: Analysis and Modeling (pp. 127–136). Norwell, MA: Kluwer.

Hines, M. (1994). The NEURON simulation program. In J. Skrzypek (Ed.), Neural Network Simulation Environments (pp. 147–163). Norwell, MA: Kluwer.

Hines, M., & Carnevale, N. T. (1995). Computer modeling methods for neurons. In M. A. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks (pp. 226–230). Cambridge, MA: MIT Press.
Hines, M., & Shrager, P. (1991). A computational test of the requirements for conduction in demyelinated axons. J. Restor. Neurol. Neurosci., 3, 81–93.

Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117, 500–544.

Hsu, H., Huang, E., Yang, X.-C., Karschin, A., Labarca, C., Figl, A., Ho, B., Davidson, N., & Lester, H. A. (1993). Slow and incomplete inactivations of voltage-gated channels dominate encoding in synthetic neurons. Biophys. J., 65, 1196–1206.

Jaffe, D. B., Ross, W. N., Lisman, J. E., Miyakawa, H., Lasser-Ross, N., & Johnston, D. (1994). A model of dendritic Ca2+ accumulation in hippocampal pyramidal neurons based on fluorescence imaging experiments. J. Neurophysiol., 71, 1065–1077.

Kernighan, B. W., & Pike, R. (1984). Appendix 2: Hoc manual. In B. W. Kernighan & R. Pike (Eds.), The UNIX Programming Environment (pp. 329–333). Englewood Cliffs, NJ: Prentice Hall.

Lindgren, C. A., & Moore, J. W. (1989). Identification of ionic currents at presynaptic nerve endings of the lizard. J. Physiol., 414, 210–222.

Lytton, W. W. (1996). Optimizing synaptic conductance calculation for network simulations. Neural Computation, 8, 501–509.

Lytton, W. W., Destexhe, A., & Sejnowski, T. J. (1996). Control of slow oscillations in the thalamocortical neuron: A computer model. Neurosci., 70, 673–684.

Lytton, W. W., & Sejnowski, T. J. (1992). Computer model of ethosuximide's effect on a thalamic neuron. Ann. Neurol., 32, 131–139.

Mainen, Z. F., Joerges, J., Huguenard, J., & Sejnowski, T. J. (1995). A model of spike initiation in neocortical pyramidal neurons. Neuron, 15, 1427–1439.

Mainen, Z. F., & Sejnowski, T. J. (1995). Reliability of spike timing in neocortical neurons. Science, 268, 1503–1506.

Mascagni, M. V. (1989). Numerical methods for neuronal modeling. In C. Koch & I. Segev (Eds.), Methods in Neuronal Modeling (pp. 439–484). Cambridge, MA: MIT Press.

Moore, J. W., & Hines, M. (1996). Simulations with NEURON 3.1, on-line documentation in HTML format, available at http://neuron.duke.edu.

Murer, S., Omohundro, S. M., Stoutamire, D., & Szyperski, C. (1996). Iteration abstraction in Sather. ACM Transactions on Programming Languages and Systems, 18, 1–15.

O'Boyle, M. P., Carnevale, N. T., Claiborne, B. J., & Brown, T. H. (1996). A new graphical approach for visualizing the relationship between anatomical and electrotonic structure. In J. M. Bower (Ed.), Computational Neuroscience: Trends in Research 1995. San Diego: Academic Press.

Rall, W. (1964). Theoretical significance of dendritic tree for input-output relation. In R. F. Reiss (Ed.), Neural Theory and Modeling (pp. 73–97). Stanford: Stanford University Press.

Rall, W. (1989). Cable theory for dendritic neurons. In C. Koch & I. Segev (Eds.), Methods in Neuronal Modeling (pp. 8–62). Cambridge, MA: MIT Press.

Softky, W. (1994). Sub-millisecond coincidence detection in active dendritic trees. Neurosci., 58, 13–41.
Stewart, D., & Leyk, Z. (1994). Meschach: Matrix computations in C. In Proceedings of the Centre for Mathematics and Its Applications (Vol. 32). Canberra, Australia: School of Mathematical Sciences, Australian National University.

Traynelis, S. F., Silver, R. A., & Cull-Candy, S. G. (1993). Estimated conductance of glutamate receptor channels activated during EPSCs at the cerebellar mossy fiber-granule cell synapse. Neuron, 11, 279–289.

Tsai, K. Y., Carnevale, N. T., & Brown, T. H. (1994). Hebbian learning is jointly controlled by electrotonic and input structure. Network, 5, 1–19.

Tsai, K. Y., Carnevale, N. T., Claiborne, B. J., & Brown, T. H. (1994). Efficient mapping from neuroanatomical to electrotonic space. Network, 5, 21–46.

Zador, A. M., Agmon-Snir, H., & Segev, I. (1995). The morphoelectrotonic transform: A graphical approach to dendritic function. J. Neurosci., 15, 1669–1682.
Received September 26, 1996; accepted January 10, 1997.
Communicated by Jürgen Schmidhuber
On Bias Plus Variance

David H. Wolpert
IBM Almaden Research Center, San Jose, CA 95120, U.S.A.
This article presents several additive corrections to the conventional quadratic loss bias-plus-variance formula. One of these corrections is appropriate when both the target is not fixed (as in Bayesian analysis) and training sets are averaged over (as in the conventional bias-plus-variance formula). Another additive correction casts conventional fixed-training-set Bayesian analysis directly in terms of bias plus variance. Another correction is appropriate for measuring full generalization error over a test set rather than (as with conventional bias plus variance) error at a single point. Yet another correction can help explain the recent counterintuitive bias-variance decomposition of Friedman for zero-one loss. After presenting these corrections, this article discusses some other loss-function-specific aspects of supervised learning. In particular, there is a discussion of the fact that if the loss function is a metric (e.g., zero-one loss), then there is a bound on the change in generalization error accompanying changing the algorithm's guess from h1 to h2, a bound that depends only on h1 and h2 and not on the target. This article ends by presenting versions of the bias-plus-variance formula appropriate for logarithmic and quadratic scoring, and then all the additive corrections appropriate to those formulas. All the correction terms presented are a covariance, between the learning algorithm and the posterior distribution over targets. Accordingly, in the (very common) contexts in which those terms apply, there is not a "bias-variance trade-off" or a "bias-variance dilemma," as one often hears. Rather there is a bias-variance-covariance trade-off.
1 Introduction

The bias-plus-variance formula (Geman et al., 1992) is an extremely powerful tool for analyzing supervised learning scenarios that have quadratic loss functions, fixed targets (i.e., "truths"), and averages over the training sets. Indeed, there is little doubt that it is the most frequently cited formula in the statistical literature for analyzing such scenarios.

Despite this breadth of utility in such scenarios, the bias-plus-variance formula has never been extended to other learning scenarios. In this article an additive correction to the formula is presented, appropriate for learning scenarios where the target is not fixed. The associated
formula for expected loss constitutes a midway point between Bayesian analysis and conventional bias-plus-variance analysis, in that both targets and training sets are averaged over.

After presenting this correction, other correction terms are presented, appropriate for when other sets of random variables are averaged over. In particular, the correction term to the bias-plus-variance formula for when the test set point is not fixed—as it is not in almost all of computational learning theory as well as most other investigations of "generalization error"—is presented. (The conventional bias plus variance formula has the test point fixed.) In addition, it is shown how to cast expected loss in conventional Bayesian analysis (where the training set is fixed but targets are averaged over) directly in terms of bias plus variance. All of this emphasizes that the conventional bias-plus-variance decomposition is only a very specialized case of a much more general and important phenomenon.

Next is a brief discussion of some other loss-function-specific properties of supervised learning. In particular, it is shown how with quadratic loss there is a scheme that assuredly, independent of the target, improves the performance of any learning algorithm with a random component. On the other hand, using the same scheme for concave loss functions results in assured degradation of performance. It is also shown that, without any concern for the target, one can bound the change in zero-one loss generalization error associated with making some guess h1 rather than a different guess h2. (This is not possible for quadratic loss.)

All of the extensions mentioned above to the conventional version of the bias-plus-variance formula use the same quadratic loss function occurring in the conventional formula itself. That loss function is often appropriate when the output space is numeric. Kong and Dietterich (1995) recently proposed an extension of the conventional (fixed-target, training set–averaged) formula to the zero-one loss function. (See also Wolpert & Kohavi, 1997 and Kohavi & Wolpert, 1996.) That loss function is often appropriate when the output space is categorical rather than numeric. For such categorical output spaces, sometimes one's algorithm produces a guessed probability distribution over the output space rather than a single guessed output value. In those scenarios "scoring rules" are usually a more appropriate form of measuring generalization performance than are loss functions. This article ends by presenting extensions of the fixed-target version of the bias-plus-variance formula to the logarithmic and quadratic scoring rules, and then presents the associated additive corrections to those formulas. ("Scoring rules" as explored in this article are similar to what are called "score functions" in the statistics literature; Bernardo & Smith, 1994.)

All of the correction terms presented are a covariance, between the learning algorithm and the posterior distribution over targets. Accordingly, in the (very common) contexts in which they apply, there is not a "bias-variance trade-off" or a "bias-variance dilemma," as one often hears. Rather there is a bias-variance-covariance trade-off.
Section 2 presents the formalism that will be used in the rest of the article. Section 3 uses this formalism to recapitulate the traditional bias-plus-variance formula. Certain desiderata that the terms in the bias-plus-variance decomposition should meet are also presented there. Sections 4 and 5 present the corrections to this formula appropriate for quadratic loss. Section 6 discusses some other loss-function-specific aspects of supervised learning.

Recently Friedman (1996) drew attention to an important aspect of expected error for zero-one loss. His analysis appeared to indicate that under certain circumstances, when (what he identified as) the variance increased, it could result in decreased generalization error. In section 7, it is shown how to perform a bias-variance decomposition for Friedman's scenario where commonsense characteristics of bias and variance (like error being an increasing function of each) are preserved. This discussion serves as a useful illustration of the utility of covariance terms, since it is the presence of that term that explains Friedman's apparently counterintuitive results.

Section 8 begins by presenting the extensions of the bias-plus-variance formula for scoring-rule-based rather than loss-function-based error. That section then investigates logarithmic scoring. Section 9 investigates quadratic scoring. Finally, section 10 discusses future work.

2 Nomenclature

This article uses the extended Bayesian formalism (EBF) (Wolpert, 1994, 1995, 1996a, 1996b). In the current context, the EBF is just conventional probability theory, applied to the case where one has a different random variable for the hypothesis output by the learning algorithm and for the target relationship. It is this crucial extension that separates the EBF from conventional Bayesian analysis and allows the EBF (unlike conventional Bayesian analysis) to subsume all other major mathematical treatments of supervised learning, like computational learning theory and sampling theory statistics. (See Wolpert, 1994.)

This section presents a synopsis of the EBF. A quick reference for this synopsis can be found in Table 1. Readers unsure of any aspects of this synopsis, and in particular unsure of any of the formal basis of the EBF or justifications for any of its assumptions, are directed to the detailed exposition of the EBF in Appendix A of Wolpert (1996a).

2.1 Overview. The input and output spaces are X and Y, respectively. For simplicity, they are taken to be finite. (This imposes no restrictions on the real-world utility of the results in this article, since in the real world data are always analyzed on a finite digital computer and are based on the output of instruments having a finite number of possible readings.) The two spaces contain n and r elements, respectively. A generic element of X is indicated by x, and a generic element of Y is indicated by y.
Table 1: Summary of the Terms in the EBF.

    The sets X and Y, of sizes n and r          The input and output space, respectively
    The set d, of m X-Y pairs                   The training set
    The X-conditioned distribution over Y, f    The target, used to generate test sets
    The X-conditioned distribution over Y, h    The hypothesis, used to guess for test sets
    The real number c                           The cost
    The X-value q                               The test set point
    The Y-value yF                              The sample of the target f at point q
    The Y-value yH                              The sample of the hypothesis h at point q
    P(h | d)                                    The learning algorithm
    P(f | d)                                    The posterior
    P(d | f)                                    The likelihood
    P(f)                                        The prior

Note: If c = L(yF, yH), L(·, ·) is the "loss function." Otherwise c is given by a "scoring rule."
Sometimes (e.g., when requiring a Bayes-optimal algorithm to guess an expected Y value) it will implicitly be assumed that Y is a large set of real numbers that are very close to one another, so that there is no significant difference between the element in Y closest to some real number ψ and ψ itself. The lowercase Greek delta (δ) indicates either the Kronecker or Dirac delta function, as appropriate.

Random variables are indicated using capital letters. Associated instantiations of a random variable are indicated using lowercase letters. Note, though, that some quantities (e.g., the space X) are neither random variables nor instantiations of random variables, and therefore their written case carries no significance. Only rarely will it be necessary to refer to a random variable rather than an instantiation of it. In particular, whenever possible, the argument of a probability distribution will be taken to indicate the associated random variable (e.g., whenever possible, P(a) will be written rather than P_A(a)). In accord with standard statistics notation, E(A | b) will be used to mean the expectation value of A given B = b, that is, to mean ∫da a P(a | b). (Sums replace integrals if appropriate.)

The primary random variables are the hypothesis X-Y relationship output by the learning algorithm (indicated by H), the target (i.e., "true") X-Y relationship (F), the training set (D), and the real-world cost (C). These variables are related to one another through other random variables representing the (test set) input space value (Q), and the associated target and hypothesis Y-values, YF and YH, respectively (with instantiations yF and yH, respectively).
This completes the list of random variables. Formal definitions of them appear below.

As an example of the relationship between these random variables and supervised learning, f, a particular instantiation of a target, could refer to a "teacher" neural net together with superimposed noise. This noise-corrupted neural net generates the training set d. The hypothesis h, on the other hand, could be the neural net made by one's "student" algorithm after training on d. Then q would be an input element of the test set, yF and yH associated samples of the outputs of the two neural nets for that element (the sampling of yF including the effects of the superimposed noise), and c the resultant "cost" (e.g., c could be (yF − yH)²).

2.2 Training Sets and Targets. m is the number of elements in the (ordered) training set d. {dX(i), dY(i)} is the set of m input and output values in d. m′ is the number of distinct values in dX.

Targets f are always assumed to be of the form of X-conditioned distributions over Y, indicated by the real-valued function f(x ∈ X, y ∈ Y) (i.e., P(yF | f, q) = f(q, yF)). Equivalently, where Sr is defined as the r-dimensional unit simplex, targets can be viewed as mappings f: X → Sr. Note that any such target is a finite set of real numbers indexed by an X value and a Y value. Any restrictions on f are imposed by the full joint distribution P(f, h, d, c), and in particular by its marginalization, P(f).

Note that any output noise process is automatically reflected in P(yF | f, q). Note also that the equality P(yF | f, q) = f(q, yF) only directly refers to the generation of test set elements; in general, training set elements can be generated from targets in a different manner (for example, if training and testing have different noise levels).

The "likelihood" is P(d | f). It says how d was generated from f. As an example, the conventional independent identically distributed (IID) likelihood is P(d | f) = ∏_{i=1}^{m} π(dX(i)) f(dX(i), dY(i)) (where π(x) is the "sampling distribution"). In other words, under this likelihood, d is created by repeatedly and independently choosing an input value dX(i) by sampling π(·), and then choosing an associated output value by sampling f(dX(i), ·), the same distribution used to generate test set outputs. (None of the results in this article depend on the choice of the likelihood.)

The term "posterior" usually means P(f | d), and the term "prior" usually means P(f).

2.3 The Learning Algorithm. Hypotheses h are always assumed to be of the form of X-conditioned distributions over Y, indicated by the real-valued function h(x ∈ X, y ∈ Y) (i.e., P(yH | h, q) = h(q, yH)). Equivalently, where Sr is defined as the r-dimensional unit simplex, hypotheses can be viewed as mappings h: X → Sr. Note that any such hypothesis is a finite set of real numbers.
Any restrictions on h are imposed by P(f, h, d, c). Here and throughout, a "single-valued" distribution is one that, for a given x, is a delta function about some y. Such a distribution is a single-valued function from X to Y. As an example, if one is using a neural net as one's regression through the training set, usually the (neural net) h is single valued. On the other hand, when one is performing probabilistic classification (as in softmax), h is not single valued.

The generalization behavior of any learning algorithm (aka "generalizer") is completely specified by P(h | d), although writing down a learning algorithm's P(h | d) explicitly is often quite difficult. A learning algorithm is "deterministic" if the same d always gives the same h. Backpropagation with a random initial weight is not deterministic. Nearest neighbor is.

The learning algorithm sees only the training set d, and in particular does not directly see the target. So P(h | f, d) = P(h | d), which means that P(h, f | d) = P(h | d) × P(f | d), and therefore P(f | h, d) = P(h, f | d)/P(h | d) = P(f | d).

By definition of f, in supervised learning, YF and YH are conditionally independent given f and q: P(yF, yH | f, q) = P(yF | yH, f, q) P(yH | f, q) = P(yF | f, q) P(yH | f, q). Similarly, YF and YH are conditionally independent given d and q.

Proof. P(yF, yH | d, q) = P(yF | d, q) P(yH | d, q, yF) = P(yF | d, q) ∫dh P(yH | h, d, q, yF) P(h | d, q, yF) = P(yF | d, q) ∫dh P(yH | h, q) P(h | d) = P(yF | d, q) ∫dh P(yH | h, d, q) P(h | d, q) = P(yF | d, q) P(yH | d, q).

2.4 The Cost and "Generalization Error." Given values of F, H, and a test set point q ∈ X, the associated cost or error is indicated by the random variable C. Often C is a loss function and can be expressed in terms of a mapping L taking Y × Y to a real number. Formally, in these cases the probability that C takes on the value c, conditioned on given values h, f, and q, is P(c | f, h, q) = Σ_{yH,yF} P(c | yH, yF) P(yH, yF | f, h, q) = Σ_{yH,yF} δ{c − L(yH, yF)} h(q, yH) f(q, yF).¹ As an example, quadratic loss has L(yH, yF) = (yH − yF)², so E(C | f, h, q) = Σ_{yH,yF} f(q, yF) h(q, yH) (yH − yF)².

Generically, when the distribution of c given f, h, and q cannot be reduced in this way to a loss function from Y × Y to R, it will be referred to as a scoring rule. Scoring rules are often appropriate when you are trying to guess a
distribution over Y, and loss functions are usually appropriate when you are trying to guess a particular value in Y. As an example, for "logarithmic scoring," P(c | f, h, q) = δ{c − Σ_y f(q, y) ln[h(q, y)]}. This cost is the logarithm of the probability one would assign to an infinite data set generated according to the target f, if one had assumed (erroneously) that it was actually generated according to the hypothesis h.

The "generalization error function" used in much of supervised learning is given by c0 ≡ E(C | f, h, d). It is the average over all q of the cost c, for a given target f, hypothesis h, and training set d.

¹ Note that if L(·, ·) is a symmetric function of its arguments, this expression for P(C | f, h, q) is a non-Euclidean inner product between the (Y-indexed) vectors f(q, ·) and h(q, ·). Such inner products generically arise in response to conditional independencies among the random variables. In the case here, it arises due to the fact that P(yH | yF, h, f, q) = P(yH | h, q). Similarly, in Wolpert (1995) it is shown that because P(h | d, f) = P(h | d), E(C | d) is a non-Euclidean inner product between the (H-indexed) vector P(h | d) and the (F-indexed) vector P(f | d); your expected cost is set by how "aligned" your learning algorithm P(h | d) is with the actual posterior, P(f | d).

2.5 Miscellaneous. Note the implicit rule of probability theory that any random variable not conditioned on is marginalized over. For example (using the conditional independencies in conventional supervised learning), expected cost given the target, training set size, and test set point, is given by

    E(C | f, m, q) = ∫dh Σ_d E(C | h, d, f, q) P(h | d, f, q, m) P(d | f, q, m)
                   = ∫dh Σ_d E(C | f, h, q) P(h | d) P(d | f, q, m)
                   = ∫dh E(C | f, h, q) × {Σ_d P(h | d) P(dY | f, dX) P(dX | f, m, q)}.

(I do not equate P(dX | f, q, m) with P(dX | m), as is conventionally (though implicitly) done in most theoretical supervised learning, because in general the test set point may be coupled to dX and even f. See Wolpert, 1995.)

3 Bias Plus Variance for Quadratic Loss

3.1 The Bias-Plus-Variance Formula. This section reviews the conventional bias-plus-variance formula for quadratic loss, with a fixed target and averages over training sets. Write

    E(YF | f, q) = Σ_y y f(q, y)
    E(YF² | f, q) = Σ_y y² f(q, y)
    E(YH | f, q, m) = ∫dh Σ_d P(d | f, q, m) P(h | d) Σ_y y h(q, y)
    E(YH² | f, q, m) = ∫dh Σ_d P(d | f, q, m) P(h | d) Σ_y y² h(q, y),

where for succinctness the m-conditioning in the expectation values is not indicated if the expression is independent of m. These are, in order, the
average Y and Y² values of the target (at q) and the average of the hypotheses made in response to training sets generated from the target (again, evaluated at q). Note that these averages need not exist in Y in general. For example, this is almost always the case if Y is binary.

Now write C = (YH − YF)². Then simple algebra (use the conditional independence of YH and YF) verifies the following formula:

    E(C | f, m, q) = σ²_{f,m,q} + (bias_{f,m,q})² + variance_{f,m,q},     (3.1)

where

    σ²_{f,m,q} ≡ E(YF² | f, q) − [E(YF | f, q)]²,
    bias_{f,m,q} ≡ E(YF | f, q) − E(YH | f, q, m),
    variance_{f,m,q} ≡ E(YH² | f, q, m) − [E(YH | f, q, m)]².
On Bias Plus Variance
1219
formula, define z ≡ { f, m, q}, the set of values of random variables we are conditioning on in equation 3.1. Then intuitively, in equation 3.1, for the point q: 1. σz2 measures the intrinsic error due to the target f , independent of the learning algorithm. For the current choice of z it is given by E(C | h, z)/2 for h = f , that is, it equals half the expected loss of f at “guessing itself” at the point q. 2. The bias measures the difference between the average YH and the average YF (where YF is formed by sampling f and YH is formed by sampling h’s created from d’s that are in turn created from f ). 2 plus the squared bias measures the expected loss 3. Alternatively, σ f,m,q
between YF and the average YH , E((YF − [E(YH | z)])2 | z). 4. The variance reflects the variability of the guessed yH about the average yH as one varies over training sets (generated according to the given fixed value of z). If the learning algorithm always guesses the same h for the same d, and that h is always a single-valued function from X to Y, then the variance is given directly by the variability of the learning algorithm’s guess as d is varied. 5. It is worth pointing out one special property for when z = { f, m, q}. For that case the variance does not depend on f directly, but only indirectly through the induced distribution over training sets, P(d | f, m, q). So, for example, consider having the target changed in such a way that the resultant distribution P(d | f, m, q) over training sets does not change. (For instance, this would be the case if there were negligibly small probability that the q at hand exists in the training set, perhaps because n À m.) Then the variance term does not change. This is desirable because we wish the intrinsic noise term to reflect the target alone, the bias to reflect the target’s relation with the learning algorithm, and the variance term to reflect the learning algorithm alone. In particular, for the usual intuitive characterization of variance to hold, we want it to reflect how sensitive the algorithm is to changes in the data set, regardless of the target. Note that although σz2 appears to be identical to variancez if one simply replaces YF with YH , the two quantities have different kinds of relations with the other random variables. For example, for z = { f, m, q}, variancez depends on the target as well as the learning algorithm, whereas σz2 depends only on the target. So expected quadratic loss reflects noise in the target, plus the difference between the target and the average guess, plus the variability in the guess-
In particular, we have the following properties, which can be viewed as desiderata for our three terms:

a. If f is a delta function in Y for the q at hand (i.e., if at the point X = q, f is a single-valued function from X to Y), the intrinsic noise term equals 0. In addition, the intrinsic noise term is independent of the learning algorithm. Finally, the intrinsic noise term is a lower bound on the error; for no learning algorithm can E(C | z) be lower than the intrinsic noise term. (In fact, for the decomposition in equation 3.1, the intrinsic noise term is the greatest lower bound on that error.)

b. If the average hypothesis-determined guess equals the average target-determined "guess," then bias_z = 0. Bias_z is large if the difference between those averages is large.

c. Variance is nonnegative, equals 0 if the guessed h is always the same single-valued function (independent of d), and is large when the guessed h varies greatly in response to changes in d.

d. The variance does not depend on z directly, but only indirectly through the induced distribution P(h | z). For any z, the associated variance is set by the h dependence of P(h | z). Now P(h | z) = Σ_d P(h | d, z) P(d | z). Moreover, it is often the case that P(h | d, z) = P(h | d). (For example, this holds for z = {f, m, q}.) In such a case, presuming one knows the learning algorithm (i.e., one knows P(h | d)), then the h dependence of P(h | z) is set by the d dependence of P(d | z). This means that changes to z that do not affect the induced distribution over d do not affect the variance. As mentioned above, this is needed if the variance term is only to reflect how sensitive the algorithm is to changes in the training set.

e. All the terms in the decomposition are continuous functions of the target. This is particularly desirable when one wishes to estimate those terms from limited information concerning the target (e.g., from a finite data set). If there were discontinuities and the target were near such a discontinuity, the resultant estimates would often be poor. More generally, it would be difficult to ascribe the usual intuitive meanings to the terms in the decomposition if they were discontinuous functions of the target.

Desiderata a through e are somewhat more general than conditions 1 through 4, in that (for example) they are meaningful even if Y is a nonnumeric space, so that expressions like "E(YF | z)" are not defined. Accordingly, I will rely on them more than on conditions 1 through 4 in the extensions of the bias-plus-variance formula presented below.

In the final analysis though, both the conditions 1 through 4 and the desiderata a through e are not God-given principles that any bias-plus-variance decomposition must obey. Rather, they are useful aspects of the
decomposition that facilitate that decomposition's "intuitive and easy" interpretation. There is nothing that precludes one's using slight variants of these conditions, or perhaps even replacing them altogether.

The next two sections show how to generalize the bias-plus-variance formula to other conditioning events besides z = {f, m, q} while still obeying (almost all of) conditions 1 through 4 and desiderata a through c. First, in section 4 the generalization to z = {m, q} is presented. Then in section 5, the generalization to arbitrary z is explored.

4 The Midway Point Between Bayesian Analysis and Bias Plus Variance

4.1 Bias Plus Variance for Averaging over Targets: The Covariance Correction. It is important to realize that illustrative as it is, the bias-plus-variance formula examines the wrong quantity. In the real world, it is almost never E(C | f, m) that is directly of interest, but rather E(C | d). (We know d and therefore can fix its value in the conditioning event. We do not know f.) Analyzing E(C | d) is the purview of Bayesian analysis (Buntine & Weigend, 1991; Bernardo & Smith, 1994). Generically, it says that for quadratic loss, one should guess the posterior average y (Wolpert, 1995). As conventionally discussed, E(C | d) does not bear any connection to the bias-plus-variance formula.

However there is a midway point between Bayesian analysis and the kind of analysis that results in the bias-plus-variance formula. In this middle approach, rather than fix f as in bias plus variance, one averages over it, as in the Bayesian approach. In this way one circumvents the annoying fact that (common lore to the contrary) there need not always be a bias-variance trade-off, in that there exists an algorithm with both zero bias_{f,m,q} and zero variance_{f,m,q}: the algorithm that always "by luck" guesses yH = E(YF | f, q), independent of d. (Indeed, Breiman's (1994) bagging scheme is usually justified as a way to try to estimate that "lucky" algorithm.) Since in this middle approach f is not fixed, such a lucky guess is undefined. In addition, in this middle approach, rather than fix d as in the Bayesian approach, one averages over d, as in bias plus variance. In this way one maintains the illustrative power of the bias-plus-variance formula.

The result of adopting this middle approach is the following Bayesian correction to the quadratic loss bias-plus-variance formula (Wolpert, 1995):

    E(C | m, q) = σ²_{m,q} + (bias_{m,q})² + variance_{m,q} − 2 cov_{m,q},     (4.1)
where 2 ≡ E(Y2 | q) − [E(Y | q)]2 , σm,q F F
biasm,q ≡ E(YF | q) − E(YH | m, q),
(4.1)
1222
David H. Wolpert 2 |m, q) − [E(Y | m, q)]2 , and variancem,q ≡ E(YH H
covm,q ≡ 6yF ,yH P(yH , yF | m, q) × [yH − E(YH | m, q)] × [yF − E(YF | q)]. 2 | q, m) In this equation, the terms E(YF | q), E(YF2 | q), E(YH | q, m), and E(YH are given by the R formulas just before equation 3.1, provided one adds an outer integral df P( f ),R to average out f . To evaluate the covariance term, use P(yH , yF | m, q) = dh df 6d P(yH , yF , h, d, f, |m, q). Then use the simple identity
P(yH , yF , h, d, f, | m, q) = f (q, yF )h(q, yH )P(h | d)P(d | f, q, m)P( f ). Formally, the reason that the covariance term exists in equation 4.1 when there was none in equation 3.1 is that yH and yF are conditionally independent if one is given f and q (as in equation 3.1), but not only given q (as in equation 4.1). To illustrate the latter point, note that knowing yF , for example, tells you something about f you do not already know (assuming f is not fixed, as it is in equation 3.1). This in turn tells you something about d, and therefore something about h and yH . In this way yH and yF are statistically coupled if f is not fixed. Intuitively, the covariance term simply says that one would like the learning algorithm’s guess to track the (posterior) most likely targets, as one varies training sets. Without such tracking, simply having low biasm,q and low variancem,q does not imply good generalization. This is intuitively reasonable. Indeed, the importance of such tracking between the learning algorithm P(h | d) and the posterior P( f | d) is to be expected, given that E(C | m, q) can be written as a non-Euclidean inner product between P( f | d) and P(h | d). (This is true for any loss function; see Wolpert, 1995.) 4.2 Discussion. The terms biasm,q , variancem,q , and σm,q play the same roles as bias f,m,q , variance f,m,q , and σ f,m,q do in equation 3.1. The major difference is that here they involve averages over f according to P( f ), since the target f is not fixed. In particular, desiderata b and c are obeyed exactly by biasm,q and variancem,q . Similarly the first part of desideratum a is obeyed exactly, if the reference to “ f ” there is taken to mean all f for which P( f ) is nonzero, and if the delta functions referred to are implicitly restricted to be 2 is independent of the learning identical (at q) for all such f . In addition σm,q algorithm, in agreement with the second part of desideratum a. Now that we have the covariance term though, the third part of desideratum a is no longer obeyed. Indeed, by using P(d, yF | m, q) = P(yF | d, q)P(d | 2 as the sum of the expected cost of the best possible m, q) we can rewrite σm,q (Bayes-optimal) learning algorithm for quadratic loss (that is, the loss of the learning algorithm that obeys P(yH | d, q) = δ(yH , E(YF | d, q))), plus another term. Here and throughout, any expression of the form “δ(·, ·)” indicates the Kronecker delta function.
That second term is called the data worth of the problem, since it sets how much of an improvement in error can be had from paying attention to the data:

σ²_{m,q} = Σ_{d,y_F} P(d, y_F | m, q) [y_F − E(Y_F | d, q)]²   (the Bayes-optimal algorithm's cost)
+ Σ_d P(d | m, q) ([E(Y_F | d, q)]² − [E(Y_F | m, q)]²)   (the data worth).   (4.2)

One might wonder why all of this does not also apply to the conventional bias-variance decomposition for z = { f, m, q}. The reason is that for f-conditioned probabilities, the best possible algorithm does not guess E(Y_F | d, q) but rather E(Y_F | f, q), so that the equivalent of the data worth term vanishes. This is why the decomposition in equation 4.2 does not also apply to σ²_{f,m,q}.

Note that for the Bayes-optimal learning algorithm, cov_{m,q} exactly equals the data worth. This is to be expected, since for that learning algorithm, bias_{m,q} equals 0 and variance_{m,q} equals cov_{m,q}.²

² The latter point follows from the following identities: For the optimal algorithm, whose guessing is governed by a delta function in Y, the variance is given by

Σ_d P(d | m, q) [E²(Y_H | d, q) − E²(Y_H | q, m)] = Σ_d P(d | m, q) [E²(Y_F | d, q) − E²(Y_F | q, m)].

In addition, the covariance is given by

Σ_{d,y_F,y_H} P(y_H | d, q) P(y_F | d, q) P(d | m, q) × [y_H − E(Y_H | q, m)] × [y_F − E(Y_F | q, m)]
= Σ_d P(d | m, q) × [E(Y_F | d, q) − E(Y_F | q)] × [E(Y_F | d, q) − E(Y_F | q)].

To see why the data worth measures how much paying attention to the data can help you to guess f, note that the expected cost of the best possible data-independent learning algorithm equals σ²_{m,q}. So the data worth is the difference between the expected cost of this algorithm and that of the Bayes-optimal algorithm. Note the nice property that when the variance (as one varies training sets d) of the Bayes-optimal algorithm is large, so is the data worth. So, reasonably enough, when the Bayes-optimal algorithm's variance is large, there is a large potential gain in paying attention to the data. Conversely, if the variance for the Bayes-optimal algorithm is small, then not much can be gained by using it rather than the optimal data-independent learning algorithm.

As it must, E(C | m, q) reduces to the expression in equation 3.1 for E(C | f = f*, m, q) for the prior P( f ) = δ( f − f*). The special case of equation 3.1 where there is no noise, and the learning algorithm always
guesses the same single-valued input-output function for the same training set, is given in Wolpert (1995).

One can argue that E(C | m, q) is usually of more direct interest than E(C | f, m, q), since one can rarely specify the target in the real world but must instead be content to characterize it with a probability distribution. Insofar as this is true, by equation 4.1 there is not a bias-variance trade-off, as is conventionally stated. Rather there is a bias-variance-covariance trade-off.

More generally, one can argue that one should always analyze the distribution P(C | z) where z is chosen to reflect directly the statistical scenario with which one is confronted. (See the discussion of the honesty principle at the end of Wolpert, 1995.) In the real world, this usually entails having z = {d}. In toy experiments with a fixed target, it usually means having z = { f, m}. For other kinds of experiments, it means other kinds of z's. The only justification for not setting z this way is calculational intractability of the resultant analysis, or difficulty in determining the distributions needed to perform that resultant analysis, or both. (This latter issue is why Bayesian analysis does not automatically meet our needs in the real world. With such analysis there is often the issue of how to determine P( f ), the prior.) However at the level of abstraction of this article, neither difficulty arises. So for current purposes, the applicability of the covariance terms introduced is determined solely by the statistical scenario with which one is confronted.

5 Other Corrections to Quadratic Loss Bias-Plus-Variance

5.1 The General Quadratic Loss Bias-Plus-Variance-Plus-Covariance Decomposition. Often in supervised learning, one is interested in generalization error, the average error between f and h over all q. For fixed f and m, the expectation of this error is E(C | f, m). This quantity is ubiquitous in computational learning theory (COLT), as well as several other popular theoretical approaches to supervised learning (Wolpert, 1995). Nor is interest in it restricted to the theoretical (as opposed to applied) communities; it seems fair to say that it is far more commonly investigated than is E(C | f, m, q) in both the machine learning and neural net literatures.

To address generalization error, note that for any set of random variables Z, taking (set) values z,

E(C | z) = σ²_z + (bias_z)² + variance_z − 2 cov_z,   (5.1)

where

σ²_z ≡ E(Y²_F | z) − [E(Y_F | z)]²,
bias_z ≡ E(Y_F | z) − E(Y_H | z),
variance_z ≡ E(Y²_H | z) − [E(Y_H | z)]²,

and

cov_z ≡ Σ_{y_F, y_H} P(y_H, y_F | z) × [y_H − E(Y_H | z)] × [y_F − E(Y_F | z)].
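To make equation 5.1 concrete, here is a minimal Monte Carlo sketch (the correlated pair of distributions below is invented purely for illustration): given paired draws of (y_F, y_H) conditioned on some z, each of the four terms can be estimated directly, and they recombine exactly into the expected quadratic loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired samples (y_F, y_H) ~ P(y_F, y_H | z) for some fixed z.
# They are deliberately correlated so that cov_z is nonzero.
y_f = rng.normal(0.0, 1.0, size=100_000)
y_h = 0.5 * y_f + rng.normal(0.2, 0.5, size=y_f.size)

sigma2   = np.var(y_f)                    # E(Y_F^2 | z) - [E(Y_F | z)]^2
bias     = np.mean(y_f) - np.mean(y_h)    # E(Y_F | z) - E(Y_H | z)
variance = np.var(y_h)                    # E(Y_H^2 | z) - [E(Y_H | z)]^2
cov      = np.mean((y_h - y_h.mean()) * (y_f - y_f.mean()))

lhs = np.mean((y_h - y_f) ** 2)              # E(C | z) for quadratic loss
rhs = sigma2 + bias**2 + variance - 2 * cov  # equation 5.1
assert abs(lhs - rhs) < 1e-8
```

Because all four terms are computed from the same sample moments, the identity holds exactly up to floating-point rounding; the same code specializes to equation 4.1 when the samples are drawn conditioned on z = {m, q}.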
The terms in this formula can usually be interpreted just as the terms in the conventional (z = { f, m, q}) decomposition can. So, for example, variance_z measures the variability of the learning algorithm's guess as one varies over those random variables not specified in z. Equation 4.1 is a special case of this formula where z = {m, q}, and equation 3.1 is a special case where z = { f, m, q} (and consequently the covariance term vanishes). However, both of these equations have z contain q, when, by equation 5.1, we could just as easily have z not contain q. By doing that, we get a correction to the bias-plus-variance formula for generalization error. (Note that this correction is in no sense a "Bayesian" correction.) To be more precise, the following is an immediate corollary of equation 5.1:

E(C | f, m) = σ²_{f,m} + (bias_{f,m})² + variance_{f,m} − 2 cov_{f,m}   (5.2)
(with definitions of the terms given in equation 5.1). Note that corrections similar to that of equation 5.2 hold for E(C | f, d) and E(C | m). In all three of these cases, σ²_z, bias_z, and variance_z play the same role as do σ²_{f,m,q}, bias_{f,m,q}, and variance_{f,m,q} in equation 3.1. The only difference is that different quantities are averaged over. (So, for example, in E(C | f, d), variance partially reflects variability in the learning algorithm's guess as one varies q.) In particular, most of our desiderata for these quantities are met. In addition, though, for all three of these cases, P(y_H, y_F | z) ≠ P(y_H | z) P(y_F | z) in general, and therefore the covariance correction term is nonzero in general.

So for these three cases, as well as for the one presented in the previous subsection, there is not a bias-variance dilemma. Rather there is a bias-variance-covariance dilemma. It is only for a rather specialized kind of analysis, where both q and f are fixed, that one can ignore the covariance term. For almost all other analyses—in particular, the popular generalization error analyses—the huge body of lore explaining various aspects of supervised learning in terms of a bias-variance dilemma is less than the whole story.

5.2 Averaging over a Random Variable versus Having It Be in z. Before leaving the topic of equations 5.1 and 5.2, it should be pointed out that one could just as easily use equation 3.1 and write E(C | f, m) = Σ_q P(q | f, m)[σ²_{f,m,q} + bias²_{f,m,q} + variance_{f,m,q}] rather than use equation 5.2. Under commonly made assumptions (see Wolpert, 1994), P(q | f, m) just equals P(q), which is often called the sampling distribution. In such cases, this q average of equation 3.1 is often straightforward to evaluate.

It is instructive to compare such a q average to the expansion in equation 5.2. First, such a q average is often less informative than equation 5.2. As
an example, consider the simple case where f is single valued (i.e., a single-valued function from X to Y), h is single valued, and the same h is guessed for all training sets sampled from f. Then Σ_q P(q | f, m) σ²_{f,m,q} = Σ_q P(q | f, m) variance_{f,m,q} = 0; the q-averaged intrinsic noise and variance terms provide no useful information, and the expected error is given solely by the bias term. On the other hand, σ²_{f,m} tells us the amount that f (q) varies about its average (over q) value, and variance_{f,m} is given by a similar term. In addition bias_{f,m} tells us the difference between those q averages of f and h. And, finally, the covariance term tells us how much our h tracks f as one moves across X. So all the terms in equation 5.2 provide helpful information, whereas the q average of equation 3.1 reduces to the tautology "Σ_q P(q | f, m) E(C | f, m, q) = Σ_q P(q | f, m) E(C | f, m, q)."

In addition to this difficulty, in certain respects the individual terms in the q average of equation 3.1 do not meet all of our desiderata. In particular, desideratum b is not obeyed in general by that decomposition, if one tries to identify "bias²" as Σ_q P(q | f, m) bias²_{f,m,q} (note that here, the z referred to in desideratum b is { f, m}). On the other hand, the terms in equation 5.2 do meet all of our desiderata, including the third part of a. (The best possible algorithm if one is given f and m is the algorithm that always guesses h(q, y) = δ(y, E(Y_F | f )), regardless of the data.)

There are also aesthetic advantages to using equation 5.1 and its corollaries (like equation 5.2) rather than averages of equation 3.1. For example, equation 5.1 does not treat z = { f, m, q} as special in any sense; all z's are treated equally. This contrasts with formulas based on equation 3.1, in which one writes E(C | f, m) as a q average of E(C | f, m, q), E(C | m, q) as an f average of E(C | f, m, q), and so forth. Moreover, equation 5.1 holds even when z is not a subset of { f, m, q}. So, for example, E(C | h, q) can be expressed directly in terms of bias plus variance by using equation 5.2. However, there is no simple way to express the same quantity in terms of equation 3.1. Indeed, equation 5.1 even allows us to write the purely Bayesian quantity E(C | d) in bias-plus-variance terms, something we cannot do using equation 3.1:

E(C | d) = σ²_d + (bias_d)² + variance_d − 2 cov_d,   (5.3)

where

σ²_d ≡ E(Y²_F | d) − [E(Y_F | d)]²,
bias_d ≡ E(Y_F | d) − E(Y_H | d),
variance_d ≡ E(Y²_H | d) − [E(Y_H | d)]²,

and

cov_d ≡ Σ_{y_F, y_H} P(y_H, y_F | d) × [y_H − E(Y_H | d)] × [y_F − E(Y_F | d)].

By using this formula one does not even have to go to a midway point
between Bayesian analysis and conventional bias-plus-variance analysis to relate the two. Rather equation 5.3 directly provides a fully Bayesian bias-plus-variance decomposition. So, for example, as long as one is aware that different variables are being averaged over than is the case for the conventional bias-variance decomposition (namely, q and f rather than d), equation 5.3 allows us to say directly that for quadratic loss, in the statistical scenario of interest in Bayesian analysis, the Bayes-optimal learning algorithm has zero bias. Traditionally, the very notion of "bias" has had no meaning in Bayesian analysis.

None of this means that one should not use the appropriate average of equation 3.1 (in those cases where there is such an average) rather than equation 5.1 and its corollaries. There are scenarios in which that average provides a helpful perspective on the learning problem that equation 5.1 does not. For example, if Y contains very few values (e.g., two), then in many scenarios bias_{f,m}, which equals Σ_q P(q | f, m)[E(Y_H | f, m, q) − E(Y_F | f, q)], is close to zero, regardless of the learning algorithm. In the same scenarios though, the associated q average of the equation 3.1 term, Σ_q P(q | f, m) bias²_{f,m,q} = Σ_q P(q | f, m)[E(Y_H | f, m, q) − E(Y_F | f, q)]², is often far from zero. In such cases the bias term in equation 5.2 is not very informative whereas the associated q-average term is. In the end, which kind of decomposition one uses, just like the choice of what variables z represents, depends on what one is trying to understand about one's learning problem.

6 Other Characteristics Associated with the Loss

6.1 General Properties of Convex and/or Concave Loss Functions. There are a number of other special properties of quadratic loss besides the equations presented thus far. For example, for quadratic loss, for any f, E(C | f, m, q, algorithm A) ≤ E(C | f, m, q, algorithm B) so long as A's guess is the average of B's (formally, so long as we have P(y_H | d, q, A) = δ(y_H, E(Y_H | d, q, B)) = δ(y_H, Σ_y y ∫dh h(q, y) P(h | d, B))). So without any concerns for priors, one can always construct an algorithm that is assuredly superior to an algorithm with a stochastic nature: simply guess the stochastic algorithm's average. (This is a result of Jensen's inequality; see Wolpert, 1995, 1996a, 1996b; and Perrone, 1993.) This is true whether the stochasticity is due to non-single-valued h or (as with backpropagation with a random initial weight) due to the learning algorithm's being nondeterministic.

Now the EBF is symmetric under h ↔ f. Accordingly, this kind of result can immediately be "turned around." In such a form, it says, loosely, that a prior that is the average of another prior assuredly results in lower expected cost, regardless of the learning algorithm. In this particular sense, for quadratic loss, one can place an algorithm-independent ordering over some priors. (Of course, one can also order them in an algorithm-dependent
manner if one wishes, for example, by looking at the expected generalization error of the Bayes-optimal learning algorithm for the prior in question.)

The exact opposite behavior holds for loss functions that are concave rather than convex. For such functions, guessing randomly is assuredly superior to guessing the average, regardless of the target. (There is a caveat to this: one cannot have a loss function that is both concave everywhere across an infinite Y and nowhere negative, so formally, this statement holds only if we know that y_F and y_H are both in a region of concave loss.)

6.2 General Properties of Metric Loss Functions. Finally, there are other special properties that some loss functions possess but that quadratic loss does not. For example, if the loss can be written as a function L(·, ·) that is a metric (e.g., absolute value loss, zero-one loss), then for any f,

|E(C | f, h_1, m, q) − E(C | f, h_2, m, q)| ≤ Σ_{y,y′} L(y, y′) h_1(q, y) h_2(q, y′).   (6.1)

For such loss functions, you can bound how much replacing h_1 by h_2 can improve or hurt generalization by looking only at h_1 and h_2. That bound holds without any concern for the prior over f. It is simply the expected loss between h_1 and h_2. Unfortunately, quadratic loss is not a metric, and therefore one cannot employ this bound for quadratic loss.

7 An Alternative Analysis of the Friedman Effect

Kong and Dietterich (1995) have raised the issue of what the appropriate bias-plus-variance decomposition is for zero-one (misclassification) loss, L(y_H, y_F) = 1 − δ(y_H, y_F). They raised the issue in the context of the classical conditioning event: the target, the training set size, and the test set question. The decomposition they suggested (i.e., their suggested definitions of bias and variance for zero-one loss) has several shortcomings. Not least of these is that decomposition's allowing negative variance. Several subsequent papers (Kohavi & Wolpert, 1996; Wolpert & Kohavi, 1997; Tibshirani, 1996; Breiman, 1996) have offered alternative decompositions, with different strengths and weaknesses, as discussed in Kohavi and Wolpert (1996) and Wolpert and Kohavi (1997).

Recently Friedman (1996) contributed another zero-one loss decomposition to the discussion. Friedman's decomposition applies only to learning algorithms that perform their classification by first predicting the probabilities η_y of all the possible output classes and then picking the class argmax_i[η_i]. In other words, he considers cases where h is single valued, but the value h(q) ∈ Y is determined by finding the maximum over the components of a Euclidean vector random variable η, dependent on q and d, whose components always sum to 1 and are all nonnegative. Intuitively, the η_y are the probabilities of the various possible Y_F values for q, as guessed by the learning algorithm in response to the training set.

The restriction of Friedman's analysis to such algorithms is not lacking in consequence. For example, it rules out perhaps the simplest possible learning algorithm, one that is of great interest in the computational learning community: from a set of candidate hypothesis input-output functions, pick the one that best fits the training set. This restriction makes Friedman's analysis less general than the other zero-one loss decompositions that have been suggested.

There are several other peculiar aspects to Friedman's decomposition. Oddly, it is multiplicative rather than additive in its suggested definitions of bias and variance. More important, it assumes that as one varies training sets d while keeping f, m, and q constant, the induced distribution over η is gaussian. It then further assumes that the truncation on integrals over such a gaussian distribution imposed by the limits on the range of each of the η_y (each η_y ∈ [0, 1]) is irrelevant, so that erf functions can be replaced by integrals from −∞ to +∞. (Note that even if the gaussian in question is fairly peaked about a value well within [0, 1], that part of the dependence of the integral on certain quantities occurring in the limits on the integration may be significant, in comparison to the other ways that that integral depends on those quantities.) In addition, as Friedman defines it, bias can be negative. Furthermore, his bias does not reduce to zero for the Bayes classifier (and in fact the approximations invoked in his analysis are singular for that classifier). And perhaps most important, his definition of variance depends directly on the underlying target distribution f, rather than indirectly through the f-induced distribution over training sets P(d | f, m, q). This last means that the variance term does not simply reflect how sensitive the learning algorithm is to (target-governed) variability in the training set; the variance also changes even if one makes a change to the target that has no effect on the induced probability distribution over training sets.

These oddities should not obscure the fact that Friedman's analysis is a major contribution. Perhaps the most important aspect of Friedman's analysis is its drawing attention to the following phenomenon. Consider the case where we are interested in expected zero-one loss conditioned on a fixed target, test set question, and training set size. Presume further that we are doing binary classification (r = 2), so the class label probabilities guessed by the learning algorithm reduce to a single real number η_1 giving the guessed probability of class 1 (i.e., η = (η_1, 1 − η_1)). Examine the case where, for the test set point at hand, the average (over training sets generated from f ) value of η_1 is greater than one-half. So the guess corresponding to that average η_1 is class 1. However let us say that the truly optimal prediction (as determined by the target) is class 2. Now modify the scenario so that the variability (over training sets) of the guess η_1 grows, while its average stays the same (i.e., the width of the distribution over η_1 grows). Assume that this extra variability results in having η_1 < 1/2 for more training sets.
So for more training sets, the η_1 produced by the learning algorithm corresponds to the (correct) guess of class 2. Therefore increasing the variability of η_1 while keeping the average the same has reduced overall generalization error. Note that this effect arises only when the average of η_1 results in a non-optimal prediction, i.e., only when the average is "wrong." (This point will be returned to shortly.)

Now view this variability as, intuitively, akin to a variance. (This definition differs only a little from the formal definition of variance Friedman advocates.) Similarly, view the average of η_1 as giving a bias. Then we have the peculiar result that increasing variance while keeping bias fixed can reduce overall expected generalization error. I will refer to such behavior as the Friedman effect. (See also Breiman, 1996—in particular, the discussion in the first appendix.)

Variability can be identified with the width of the distribution over η_1 and in that sense can indeed be taken to be a variance. The question is whether it makes sense to view it as a variance in the restricted, desiderata-biased sense appropriate to bias-variance decompositions. In particular, note that whether the Friedman effect obtains—whether increasing the variability increases or decreases expected error—depends directly not only on the learning algorithm and the distribution over training sets, but also on the target (the average η_1 must result in a guessed class label that differs from the optimal prediction as determined by the target for the Friedman effect to hold).

This direct dependence of the Friedman effect on the target is an important clue. Recall in particular that our desiderata preclude having a variance with such a direct dependence. However, such a dependence of the "variability" term could be allowed if the variability that is being increased is identified with a variance combined with another quantity. Given the general form of the bias-variance decomposition, the obvious choice for that other quantity is a term reflecting a covariance. By having that covariance involve varying over y_F values as well as training sets, we get the direct dependence on the target arising in the Friedman effect.

In addition, viewing the variability as involving a covariance term along with a variance term could potentially explain away the peculiarity of the Friedman effect's having error shrink as variability increases. This would be the case, for example, if holding the covariance part of the variability fixed while increasing the variance part always did increase expected generalization error. The idea would be that the way that variability is increased in the Friedman effect involves changes to the covariance term as well as the variance term, and it is the changes to the covariance term that are ultimately responsible for the reduction in expected generalization error. Increasing the variance term, by itself, can only increase expected error, exactly as in the conventional bias-plus-variance decomposition.

As it turns out, this hypothesized behavior is exactly what lies behind the Friedman effect. This can best be seen by using a different decomposition from Friedman's. In exploring that alternative decomposition, we will see
that there is nothing inherently unusual about how variance is related to generalization error for zero-one loss; in this alternative decomposition, the decrease in generalization error associated with increasing variability is due to the covariance term rather than the variance term. Common intuition is salvaged. This alternative decomposition has the additional advantage that it is valid even if the assumptions that Friedman's analysis requires do not hold. Moreover this alternative decomposition is additive rather than multiplicative, holds for all algorithms that employ η's, and in general avoids the other oddities of Friedman's analysis.

Unfortunately, to clarify all this, it is easiest to work with somewhat generalized notation. (The reader's forbearance is requested.) The reason for this is that since it can take on only two values, the zero-one loss can hide a lot of phenomena via "accidental" cancellation of terms and the like. To have all such phenomena manifest, we need the generalized notation.

First write the cost in terms of an abstraction of the expected zero-one loss:

C = C( f, η, q) = Σ_y [1 − R(y, η)] f (q, y) = 1 − Σ_y R(y, η) f (q, y).

It is required that R(y, η) be unchanged if one relabels both y and the components of η simultaneously and in the same manner. As an example of an R, for the precise case Friedman considers, there are two possible values of y, and R(y, η) = 1 for y = argmax_i(η_i), 0 for all other y. Note that when expressed this way, for fixed q the cost is given by a dot product between two vectors indexed by Y values—namely, R and f. Moreover, for many R (e.g., the zero-one R), that dot product is between two probability distributions over y.

Note also that whereas h and f are on an equal footing in the EBF (cf. section 2, especially its end), the same is not true for η and f. Indeed, whereas h and f arise in a symmetric manner in the zero-one loss cost (C( f, h, q) = Σ_{y_H, y_F} [1 − δ(y_H, y_F)] h(q, y_H) f (q, y_F)), the same is manifestly not true for the variables η and f.

For fixed η and q, respectively, for both the (Y-indexed) vector R(y, η) and the vector f (q, y), there are only r − 1 free components, since in both cases the components must sum to 1. Accordingly, as in Friedman's analysis, here it makes sense to reduce the number of free variables to 2r − 2 by expressing one of the R components in terms of the others and similarly for one of the f components. A priori, it is not clear which such components should be reexpressed this way. Here I will choose to reexpress the component y*(z) ≡ argmax_y R(y, E[η | z]) this way for both R and f. For Friedman's R(·, ·), this y*(z) is equivalent to the y maximizing E[η_y | z]. (So in the example above of the Friedman
effect, y*(z) = class 1.) Accordingly, from now on I will replace R(y*(z), η) with 1 − Σ_{y≠y*(z)} R(y, η_y), and I will replace f (q, y*(z)) with 1 − Σ_{y≠y*(z)} f (q, y) wherever those terms appear. (From now on, when the context makes z clear, I will write y* rather than y*(z).)

Having made these replacements, we can write

C = Σ_{y≠y*} R(y, η) + Σ_{y≠y*} f (q, y) − {Σ_{y≠y*} R(y, η)} × {Σ_{y≠y*} f (q, y)} − Σ_{y≠y*} R(y, η) f (q, y),

which we can rewrite as

C = Σ_{y≠y*} R(y, η) + Σ_{y≠y*} f (q, y) − S_{y≠y*}(R(y, η_y), f (q, y)),

if we use the shorthand S_{i}(g(i), h(i)) ≡ {Σ_i g(i)} × {Σ_i h(i)} + Σ_i g(i) h(i).

Now we must determine what noise plus bias is. Since η and f do not live in the same space, not all of the desiderata listed previously are well defined. Nonetheless, we can satisfy their spirit by taking noise plus bias to be E[C(·, E[η | z], q) | z], that is, by taking it to be the expected value of C when one guesses using the expected output of the learning algorithm. In particular, this definition makes sense in light of condition 3. Writing it out, by using the multilinearity of S(·, ·) we see that for this definition noise plus bias is given by

Σ_{y≠y*} R(y, E[η | z]) + Σ_{y≠y*} E[F(Q, y) | z] − S_{y≠y*}(R(y, E[η | z]), E[F(Q, y) | z]).

(As a point of notation, E[F(Q, y) | z] means ∫df Σ_q f (q, y) P( f, q | z); it is the y component of the z-conditioned average target for average q.)

As an example, for the zero-one loss R(·, ·) Friedman considers, R(y, E[η | z]) = 0 for all y ≠ y*. Therefore noise plus bias reduces to Σ_{y≠y*} E[F(Q, y) | z]. For his z, { f, m, q}, this is just 1 − f (q, y*). So for Friedman's r = 2 scenario, if the class corresponding to the learning algorithm's average η_1 is the same as the target's average class y*, noise plus bias is min_y f (q, y). Otherwise it is max_y f (q, y). If we identify min_y f (q, y) with the noise term, this means that the bias equals either zero or | f (q, y = 1) − f (q, y = 2)|, depending on whether the class corresponding to the learning algorithm's average η_1 is the same as the target's average class.

Continuing with our more general analysis, the difference between E(C | z) and noise plus bias is

Σ_{y≠y*} {E[R(y, η) | z] − R(y, E[η | z])}
− {E[S_{y≠y*}(R(y, η), F(Q, y)) | z] − S_{y≠y*}(R(y, E[η | z]), E[F(Q, y) | z])}.   (7.1)
For many R the expression on the first line of equation 7.1 is nonnegative. For example, due to the definition of y*, this is the case for the zero-one R. In addition, that expression does not involve f directly, equals zero when the learning algorithm's guess never changes, and so forth. Accordingly, that expression (often) meets the usual desiderata for a variance and will here be identified as a variance. If everything else is kept constant while this variance is increased, then expected error also increases; there is none of the peculiar behavior Friedman found if one identifies variance with this expression from equation 7.1. For the zero-one R, this variance term is the probability that the y maximizing η_y is not the one that maximizes the expected η_y.

The remaining terms in equation 7.1 collectively act like the negative of a covariance. Indeed, for the case Friedman considers where r = 2, if we define ∼y* as the element of Y that differs from y*, we can write (the negative of) those terms as

2{E[R(∼y*, η) × F(Q, ∼y*) | z] − R(∼y*, E[η | z]) × E[F(Q, ∼y*) | z]}.

Note the formal parallel between this expression for the remaining terms in equation 7.1 and the functional forms of the covariance terms in the bias-variance decompositions for other costs that were presented above. Of course, the parallel is not exact. In particular, this expression is not exactly a covariance; a covariance would have an E[R | z] × E[ f | z] type term rather than an R(…, E[… | z]) × E[F | z] term. The presence of the R(·, ·) functional is also somewhat peculiar. Indeed, the simple fact that the covariance term is nonzero for z = { f, m, q} is unusual (see the quadratic loss decomposition, for example). Ultimately, all these effects follow from the fact that the decomposition considered here does not treat targets and hypotheses the same way; targets are represented by f's, whereas hypotheses are represented by η's rather than h's. (Recall that for Friedman's scenario's R, h is determined by the maximal component of η.) If instead hypotheses were represented by h's, then the zero-one loss decomposition would involve a proper covariance, without any R(·, ·) functional (Kohavi & Wolpert, 1996).

It is the covariance term, not the variance term, that results in the possibility that an increase in variability reduces expected generalization error. To see this, along with Friedman, take z to equal { f, m, q}. Then our covariance term becomes

2{E[R(∼y*, η) | f, m, q] × f (q, ∼y*) − R(∼y*, E[η | f, m, q]) × f (q, ∼y*)}.

(Note that for the zero-one R(·, ·), R(∼y*, E[η | f, m, q]) = 0, since by definition ∼y* is not the Y value that maximizes E[η | f, m, q].)
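A small simulation makes this covariance-versus-variance account tangible. In the sketch below every number (the target's class probabilities and the mean and spread of η_1 across training sets) is invented for illustration; the average guess is the wrong class, and widening the distribution of η_1 lowers the expected zero-one error, which is the Friedman effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented binary problem: at the test point q, f(q, class1) = 0.3 and
# f(q, class2) = 0.7, so the optimal guess is class 2. The (hypothetical)
# learning algorithm's eta_1 averages 0.6 over training sets, so its
# *average* guess is the wrong class, class 1.
f1, f2 = 0.3, 0.7

def expected_zero_one_error(spread, n_sets=1_000_000):
    # eta_1 across training sets: mean 0.6, width 'spread', kept inside [0, 1].
    eta1 = np.clip(rng.normal(0.6, spread, n_sets), 0.0, 1.0)
    guess_class1 = eta1 > 0.5
    # Expected zero-one loss of guessing class y is 1 - f(q, y).
    return np.mean(np.where(guess_class1, 1.0 - f1, 1.0 - f2))

print(expected_zero_one_error(0.05))  # narrow eta_1: almost always guesses class 1 (~0.69)
print(expected_zero_one_error(0.30))  # wide eta_1: error drops, the Friedman effect
```

Here f (q, ∼y*) = 0.7 > 1/2, which is exactly the condition, discussed next, under which the covariance term's decrease dominates the variance term's increase.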
Again following along with Friedman, have E[η | f, m, q] fixed while increasing variability. Next presume that that increase in variability increases E[R(∼y*, η) | z], as Friedman does. (E[R(∼y*, η) | z] gives the amount of "spill" of the learning algorithm's guess η_1 into ∼y*, the output label other than the one corresponding to the algorithm's average η_1.) Doing all this will always decrease the contribution to the expected error arising from the covariance term. It will also always increase the variance term's contribution to the expected error. So long as f (q, ∼y*) > 1/2, the former phenomenon will dominate the latter, and overall, expected error will decrease. This condition on f (q, ∼y*) is exactly the one that corresponds to Friedman's "peculiar behavior"; it means that the target is weighted toward the opposite Y value from the one given by the expected guess of the learning algorithm.

So whether the peculiar behavior holds when one increases variability depends on whether the associated increase in variance manages to offset the associated increase in covariance (this latter increase resulting in a decrease in contribution to expected error). As in so much else having to do with the bias-variance decomposition, we have a classical trade-off between two competing phenomena, both arising in response to the same underlying modification to the learning problem. There is nothing peculiar in this, but in fact only classical bias-variance phenomenology.

Nonetheless, it should be noted that for zero-one R, the covariance just equals 2 f (q, ∼y*) times the variance. Moreover, for zero-one R and z = { f, m, q}, noise plus bias = 1 − f (q, y*). So if noise and bias are held constant, it is impossible to change the variance while everything else is held constant; the covariance will necessarily change as well, assuming the bias and noise do not. This behavior should not be too surprising. Since the zero-one loss can take on only two values, one might expect that it is not possible for all four of noise, bias, variance, and covariance to be independent.

The reason for the precise dependency among those terms encountered here can be traced to two factors: choosing z to equal { f, m, q}, and performing the analysis in terms of η rather than h. For decompositions involving h (rather than η), the covariance term relates variability in h to variability in f, and therefore must vanish for the fixed f induced by having z equal { f, m, q}. However, in this section we are doing the analysis in terms of η, and η does not have the formal equal footing with f that h does. So rather than a proper covariance between η and f, we instead have here an unconventional "covariance-like" term. And this term does not disappear even for fixed f. As a sort of substitute, the covariance-like term instead is uniquely determined by the values of the noise, bias, and variance, if f is fixed.

8 Bias Plus Variance for Logarithmic Scoring

8.1 The { f, m, q}-Conditioned Decomposition. To begin the analysis of bias plus variance for logarithmic scoring, consider the case where z =
{ f, m, q}. The logarithmic scoring rule is related to Kullback-Leibler distance and is given by E(C | f, h, q) = −Σ_y f (q, y) ln[h(q, y)], so

E(C | f, m, q) = −Σ_y f (q, y) Σ_d P(d | f, m) ∫dh P(h | d) ln[h(q, y)].
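As a minimal numeric illustration (the distributions below are invented), the inner quantity E(C | f, h, q) is just the cross-entropy between the target and hypothesis distributions over Y at the point q, and it is minimized by guessing h = f:

```python
import numpy as np

def log_score(f_q, h_q):
    # Logarithmic-scoring cost E(C | f, h, q) = -sum over y of f(q, y) ln h(q, y).
    f_q, h_q = np.asarray(f_q, float), np.asarray(h_q, float)
    return -np.sum(f_q * np.log(h_q))

# Guessing h = f attains the minimum, the Shannon entropy of f (equation 8.1):
f_q = np.array([0.7, 0.2, 0.1])
assert log_score(f_q, f_q) <= log_score(f_q, np.array([0.5, 0.3, 0.2]))
```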
Unlike quadratic loss, logarithmic scoring is not symmetric under f ↔ h. This scoring rule (sometimes instead referred to as the "log loss function," and proportional to the value of the "logarithmic score function" for an infinite test set) can be appropriate when the output of the learning algorithm h is meant to be a guess for the entire target distribution f (Bernardo & Smith, 1994). This is especially true when Y is a categorical rather than a numeric space. To understand that, consider the case where you guess h and have some test set T generated from f that you wish to use to score h. How to do that? One obvious way is to score h as the log likelihood of T given h. If we now have T be infinite, take the geometric mean of the likelihood to keep it from vanishing, and average over all T generated from f, we get logarithmic scoring. Note that logarithmic scoring can be used even when there is no metric structure on Y (as there must be for quadratic loss to be used).

In creating an analogy of the bias-plus-variance formula for cases where C is not given by quadratic loss, one would like to meet conditions 1 through 4 and desiderata a through c presented above. However, often there is no such thing as E(Y_F | f, q) when logarithmic scoring is used (i.e., often Y is not a metric space when logarithmic scoring is used). So we cannot measure intrinsic noise relative to E(Y_F | f, q), as in the quadratic loss bias-plus-variance formula. This means that meeting conditions 1 through 4 will be impossible; the best we can do is come up with a formula whose terms meet desiderata a through c and have behavior in some sense analogous to conditions 1 through 4.

One natural way to measure intrinsic noise for logarithmic scoring is as the Shannon entropy of f,

σ_{ls; f,m,q} ≡ −Σ_y f (q, y) ln[ f (q, y)].   (8.1)
Note that this definition meets all three parts of desideratum a. It is also the expected error of f at guessing itself, in close analogy to the intrinsic noise term for quadratic loss (see condition 1).

To form an expression for logarithmic scoring that is analogous to the bias² term for quadratic loss, we cannot directly start with the terms involved in quadratic loss because they need not be properly defined (e.g., E(Y_F | z) is not defined for categorical spaces Y, even though logarithmic scoring is perfectly well defined for that case). To circumvent this difficulty,
first note that for quadratic loss, expected Y values are the modes of optimal hypotheses (recall that hypotheses are distributions, and note that for quadratic loss, minimizing expected loss means taking a mean Y value in general). In addition, squares of differences between Y values are expected costs. Keeping this in mind, the bias² term for quadratic loss can be rewritten as the following sum:

Σ_{y,y′} L(y, y′) δ(y, E(Y_F | f, q, m)) δ(y′, E(Y_H | f, q, m)).

This is the expected quadratic loss between two X-conditioned distributions over Y given by the two delta functions.

The first of those distributions is what the optimal hypothesis would be if the target were given by E(F | z) (for z = { f, m, q}, E(F | z) = f ). More formally, the first of the two distributions is δ(y, E(Y_F | z)) = argmin_h E(C | f = E(F | z), m, q, h). I will indicate this by defining Opt(u) ≡ argmin_h E(C | f = u, m, q, h), where u is any distribution over Y. So this first of our two distributions is Opt(E(F | z)).

For z = { f, m, q}, P(y_F | f = E(H | z), q, m) = f (q, y_F) evaluated for f = ∫dh P(h | f, m, q) h; that is, it equals the vector ∫dh h P(h | f, m, q) evaluated for indices q and y_F. By the usual properties of vector spaces, this can be rewritten as ∫dh h(q, y_F) P(h | f, m, q). However, this is just P(y_H | f, q, m) evaluated for y_H = y_F. Accordingly, E(Y_H | f, q, m) = E(Y_F | f = E(H | z), q, m). So the second of the two distributions in our expression for bias² is what the optimal hypothesis would be if the target were given by E(H | z): argmin_h E(C | f = E(H | z), m, q, h). We can indicate this second optimal hypothesis by Opt(E(H | z)).

Combining, we can now rewrite the bias² for quadratic loss as

E(C | f = Opt(E(F | z)), h = Opt(E(H | z)), q, m).

Unlike the original expression for bias² for quadratic loss, this new expression can be evaluated even for logarithmic scoring. The Opt(·) function is different for logarithmic scoring and quadratic loss. For logarithmic scoring, by Jensen's inequality, Opt(u) = u. (Scoring rules obeying this property are sometimes said to be proper scoring rules.) Accordingly, our bias term can be written as E(C | f = E(F | z), h = E(H | z), q). For z = { f, m, q}, this reduces to E(C | f, h = E(H | f, m, q), q). As an aside, note that for quadratic loss, this same expression would instead be identified with noise plus bias. This different way of interpreting the same expression reflects the difference between measuring cost by comparing y_H and y_F versus doing it by comparing h and f. Writing it out, the bias term for logarithmic scoring is

−Σ_y f (q, y) ln{Σ_d P(d | f, m) ∫dh P(h | d) h(q, y)}.³

³ Note that E(C | f, h = E(H | f, m, q), q) for quadratic loss is Σ_{y_H, y_F} L(y_H, y_F) f (q, y_F) ∫dh Σ_d P(h | d) P(d | f, m) h(q, y_H). However, this just equals E(C | f, m, q), rather than (as for logarithmic scoring) E(C | f, m, q) minus a variance term. This difference between the two cases reflects the fact that whereas expected error for loss functions is linear in h in general, expected error for scoring rules is not.
One difficulty with this expression is that its minimal value over all learning algorithms is greater than zero. To take care of that, I will subtract from this expression the additive constant of its minimal value. That minimal value is given by the learning algorithm that always guesses h = f, independent of the training set. Note that this minimal value is just our noise term. So another way to arrive at our choice for the bias term is to simply identify E(C | f, h = E(H | f, m, q), q) with noise plus bias, its value for quadratic loss.

Accordingly, our final measure for the "bias" for logarithmic scoring is the Kullback-Leibler distance between the distribution f (q, ·) and the average h(q, ·). With some abuse of notation, this can be written as follows:

bias_{ls; f,m,q} ≡ −Σ_y f (q, y) ln[E(H(q, y) | f, m) / f (q, y)]
= −Σ_y f (q, y) ln[{Σ_d P(d | f, m) ∫dh P(h | d) h(q, y)} / f (q, y)].   (8.2)
This definition of bias_{ls; f,m,q} meets desideratum b.

Given these definitions, the variance for logarithmic scoring when conditioning on f, m, and q (so there should not be a covariance term), variance_{ls; f,m,q}, is fixed, and given by

variance_{ls; f,m,q} ≡ −Σ_y f (q, y) {E(ln[H(q, y)] | f, m) − ln[E(H(q, y) | f, m)]}
= −Σ_y f (q, y) {Σ_d P(d | f, m) ∫dh P(h | d) ln[h(q, y)] − ln[Σ_d P(d | f, m) ∫dh P(h | d) h(q, y)]}.   (8.3)

Combining, for logarithmic scoring,

E(C | f, m, q) = σ_{ls; f,m,q} + bias_{ls; f,m,q} + variance_{ls; f,m,q}.   (8.4)
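Equation 8.4 is easy to verify numerically. The following toy check, in which the target, the two equally likely training sets, and the deterministic guesses are all invented for illustration, confirms that the three terms sum to the expected cost:

```python
import numpy as np

f   = np.array([0.6, 0.4])        # invented target f(q, .) over two Y values
p_d = np.array([0.5, 0.5])        # two equally likely training sets
h   = np.array([[0.8, 0.2],       # deterministic guess h_d(q, .) for d = 1
                [0.5, 0.5]])      # and for d = 2

h_bar = p_d @ h                                 # average hypothesis E(H(q, .) | f, m)
expected_cost = -np.sum(f * (p_d @ np.log(h)))  # E(C | f, m, q)

noise    = -np.sum(f * np.log(f))                          # equation 8.1
bias     = -np.sum(f * np.log(h_bar / f))                  # equation 8.2
variance = -np.sum(f * (p_d @ np.log(h) - np.log(h_bar)))  # equation 8.3

assert abs(expected_cost - (noise + bias + variance)) < 1e-12
```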
It is straightforward to establish that variance_{ls; f,m,q} meets the requirements in desideratum c. First, consider the case where P(h | d) = δ(h − h_0) for some h_0 (this delta function is the multidimensional Dirac delta function). In this case the term inside the curly brackets in equation 8.3 just equals
ln[h_0(q, y)] − ln[h_0(q, y)] = 0. So variance_ls does equal zero when the guess h is independent of the training set d. (In fact, it equals zero even if h is not single valued, such single-valued h being the precise case desideratum c refers to.) Next, since the log is a concave function, we know that the term inside the curly brackets is never greater than zero. Since f (q, y) ≥ 0 for all q and y, this means that variance_{ls; f,m,q} ≥ 0 always.

Finally, we can examine the P(h | d) that make variance_ls large. Any h is an |X|-fold Cartesian product of vectors living on |Y|-dimensional unit simplices. Accordingly, for any d, P(h | d) is a probability density function in a Euclidean space. To simplify matters further, assume that P(h | d) is deterministic, so it specifies a single unique distribution h for each d, indicated by h_d. Then the term inside the curly brackets in equation 8.3 equals Σ_d P(d | f, m) ln[h_d(q, y)] − ln[Σ_d P(d | f, m) h_d(q, y)]. This is the difference between an average of a function and the function evaluated at the average. Since the function in question is concave, this difference grows if the points going into the average are far apart. That is, to have large variance_{ls; f,m,q}, the h_d(q, y) should differ markedly as d varies. This establishes the final part of desideratum c.

8.2 Alternative { f, m, q}-Conditioned Decompositions. The approach taken here to deriving a bias-plus-variance formula for logarithmic scoring is not perfect. For example, the formula for variance_{ls; f,m,q} is not identical to the formula for σ_{ls; f,m,q} under the interchange of F with H (as is the case for the variance and intrinsic noise terms for quadratic loss). In addition, variance_{ls; f,m,q} can be made infinite by having h_d(q, y) = 0 for one d and one y, assuming both f (q, y) and P(d | f ) are nowhere zero. Although not surprising given that we are interested in logarithmic scoring, this is not necessarily desirable behavior in a variance-like quantity.

Other approaches tend to have even more major problems, however. For example, as an alternative to the approach taken here, one could imagine trying to define a variance first, and then define bias by requiring that the bias plus the variance plus the noise gives the expected error. It is not clear how to follow this approach, however. In particular, one natural definition of variance would be the average cost between h and the average h,

−∫dh P(h | f, m, q) Σ_y E(H(q, y) | f, m, q) ln[h(q, y)].

(Cf. the formula at the beginning of this section giving E(C | f, m, q) for logarithmic scoring.) This can be rewritten as

−Σ_y {Σ_{d′} P(d′ | f, m) ∫dh P(h | d′) h(q, y)} Σ_d P(d | f, m) ∫dh′ P(h′ | d) ln[h′(q, y)].
However, consider the case where P(h | d) = δ(h − f ) for all d for which P(d | f, m) ≠ 0. With this alternative definition of variance, in such a situation we would have the variance equaling −Σ_y f (q, y) ln[ f (q, y)] = σ_{ls; f,m,q}, not zero. (Indeed, just having P(h | d) = δ(h − h_0) for some fixed h_0 for all d for which P(d | f, m) ≠ 0 suffices to violate our desiderata, since this in general will not result in zero variance.) Moreover, in this scenario, the variance would also equal E(C | f, m, q). So for this scenario, bias = E(C | f, m, q) − variance − σ_{ls; f,m,q} would equal −σ_{ls; f,m,q}, not zero. This violates our desideratum concerning bias.

Yet another possible formulation of the variance would be (in analogy to the formula presented above for logarithmic scoring intrinsic noise) the Shannon entropy of the average H, E(H | z). But again, this formulation of variance would violate our desiderata. In particular, for this definition of variance, having h be independent of d need not result in zero variance.

8.3 Corrections to the Decomposition When z ≠ { f, m, q}. Finally, just as there is an additive Bayesian correction to the { f, m, q}-conditioned quadratic loss bias-plus-variance formula, there is also one for the logarithmic scoring formula. As useful shorthand, write

E(F(·, y) | z) ≡ Σ_q ∫df P( f, q | z) × f (q, y)
= Σ_q ∫df P(q | z) P( f | q, z) f (q, y),

E(H(·, y) | z) ≡ Σ_q ∫dh P(h, q | z) × h(q, y)
= Σ_q ∫dh Σ_d ∫df P(q | z) P( f | q, z) × P(d | f, q, z) P(h | d, q, z) h(q, y),

E(ln[H(·, y)] | z) ≡ Σ_q ∫dh P(h, q | z) × ln[h(q, y)]
= Σ_q ∫dh Σ_d ∫df P(q | z) P( f | q, z) P(d | f, q, z) × P(h | d, q, z) ln[h(q, y)], and

E(F(·, y) ln[H(·, y)] | z) ≡ Σ_q ∫df dh P(h, f, q | z) × f (q, y) ln[h(q, y)]
= Σ_q ∫df dh Σ_d P(q | z) P( f | q, z) P(d | f, q, z) P(h | d, q, z) f (q, y) ln[h(q, y)].
Then simple algebra verifies the following:

E(C | z) = σ_{ls;z} + bias_{ls;z} + variance_{ls;z} + cov_{ls;z},   (8.5)

where

σ_{ls;z} ≡ −Σ_y E(F(·, y) | z) ln[E(F(·, y) | z)],
bias_{ls;z} ≡ −Σ_y E(F(·, y) | z) ln[E(H(·, y) | z) / E(F(·, y) | z)],
variance_{ls;z} ≡ −Σ_y E(F(·, y) | z) {E(ln[H(·, y)] | z) − ln[E(H(·, y) | z)]}, and
cov_{ls;z} ≡ −Σ_y {E(F(·, y) ln[H(·, y)] | z) − E(F(·, y) | z) × E(ln[H(·, y)] | z)}.

Note that in equation 8.5 we add the covariance term rather than subtract it (as in equation 3.1). Intuitively, this reflects the fact that −ln(·) is a monotonically decreasing function of its argument, in contrast to (·)². However even with the sign backward, the covariance term as it occurs in equation 8.5 still means that if the learning algorithm tracks the posterior—if when f (q, y) rises, so does h(q, y)—then the expected cost is smaller than it would be otherwise.

9 Bias Plus Variance for Quadratic Scoring

9.1 The { f, m, q}-Conditioned Decomposition. In quadratic scoring, C( f, h, q) = Σ_y [ f (q, y) − h(q, y)]². This is not to be confused with the quadratic score function, which has C(y_F, h, q) = 1 − Σ_y [h(q, y) − δ(y, y_F)]². This score function can be used only when one has a finite test set, which is not the case for quadratic scoring. (See Bernardo & Smith, 1994.) Analysis of bias-plus-variance decompositions for the quadratic score function is the subject of future work.

For quadratic scoring, for z = { f, m, q}, the lower bound on an algorithm's error is zero: guess h to equal f always, regardless of d. Accordingly (see desideratum a), the intrinsic noise term must equal zero. To determine the bias term for quadratic scoring, employ the same trick used for logarithmic scoring to write that term as E(C | f = Opt(E(F | z)), h = Opt(E(H | z)), q). Again as with logarithmic scoring, for any X-conditioned distribution over Y, u, Opt(u) = u. Accordingly, for quadratic scoring, our bias term is E(C | f = E(F | z), h = E(H | z), q). For z = { f, m, q}, this reduces to E(C | f, h = E(H | f, m, q), q) = Σ_y [ f (q, y) − E(H(q, y) | f, m, q)]²:

bias_{qs; f,m,q} = Σ_y [P(Y_F = y | f, m, q) − P(Y_H = y | f, m, q)]².

As with logarithmic scoring, we then set variance to be the difference between E(C | f, m, q) and the sum of the intrinsic noise and bias terms. So
for quadratic scoring,

E(C | f, m, q) = σ_{qs; f,m,q} + bias_{qs; f,m,q} + variance_{qs; f,m,q},

where

σ_{qs; f,m,q} ≡ 0,
bias_{qs; f,m,q} ≡ Σ_y [P(Y_H = y | f, m, q) − P(Y_F = y | f, m, q)]², and
variance_{qs; f,m,q} ≡ Σ_y ∫dh P(h | f, m, q) [h(q, y)]² − Σ_y [∫dh P(h | f, m, q) h(q, y)]².
As usual, these are in agreement with desiderata a through c. Interestingly, bias_{qs; f,m,q} is the same as the bias² for zero-one loss for z = { f, m, q} (see Kohavi & Wolpert, 1996, and Wolpert & Kohavi, 1997).

9.2 The Arbitrary z Decomposition. To present the general-z case, recall the definitions of E(F(·, y) | z) and E(H(·, y) | z) made just before equation 8.5. In addition, define

E(F²(·, y) | z) ≡ Σ_q ∫df P( f, q | z) × f²(q, y),
E(H²(·, y) | z) ≡ Σ_q ∫dh P(h, q | z) × h²(q, y), and
E(F(·, y)H(·, y) | z) ≡ Σ_q ∫df dh P(h, f, q | z) × f (q, y) h(q, y).
Then we can write

E(C | z) = σ_{qs;z} + bias_{qs;z} + variance_{qs;z} − 2 cov_{qs;z},   (9.1)

where

σ_{qs;z} ≡ Σ_y {E(F²(·, y) | z) − [E(F(·, y) | z)]²},
bias_{qs;z} ≡ Σ_y [P(Y_H = y | z) − P(Y_F = y | z)]²,
variance_{qs;z} ≡ Σ_y {E(H²(·, y) | z) − [E(H(·, y) | z)]²}, and
cov_{qs;z} ≡ Σ_y {E(F(·, y)H(·, y) | z) − E(F(·, y) | z) × E(H(·, y) | z)}.

As usual, these decompositions meet essentially all of desiderata a through c.
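As with the other decompositions, equation 9.1 can be checked by Monte Carlo. In the sketch below, the prior over targets, the hypothetical learning rule that partially tracks the target, and all constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw a target distribution f(q, .) over three Y values, then a correlated
# hypothesis distribution h(q, .); together they stand in for P(f, h | z).
n = 200_000
f = rng.dirichlet([2.0, 2.0, 2.0], size=n)
h = 0.7 * f + 0.3 * rng.dirichlet([1.0, 1.0, 1.0], size=n)  # h tracks f, so cov > 0

expected_cost = np.mean(np.sum((f - h) ** 2, axis=1))  # E(C | z), quadratic scoring

sigma2   = np.sum(np.var(f, axis=0))
bias2    = np.sum((np.mean(h, axis=0) - np.mean(f, axis=0)) ** 2)
variance = np.sum(np.var(h, axis=0))
cov      = np.sum(np.mean((f - f.mean(0)) * (h - h.mean(0)), axis=0))

assert abs(expected_cost - (sigma2 + bias2 + variance - 2 * cov)) < 1e-8
```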
10 Future Work

Future work consists of the following projects:

1. Investigate the real-world manifestations of the Bayesian correction to bias plus variance for quadratic loss. (For example, it seems plausible that whereas the bias-variance trade-off involves things like the number of parameters involved in the learning algorithm, the covariance term may involve things like model misspecification in the learning algorithm.)

2. Investigate the real-world manifestations of the bias-variance trade-off for the logarithmic and quadratic scoring definitions of bias and variance used here.

3. See if there are alternative definitions of bias and variance for logarithmic and quadratic scoring that meet our desiderata. More generally, investigate how unique bias-plus-variance decompositions are.

4. Investigate what aspects of the relationship between C and the other random variables (like Y_H and Y_F) are necessary for there to be a bias-plus-variance decomposition for E(C | f, m, q) that meets conditions 1 through 4 and desiderata a through c. Investigate how those aspects change as one modifies z and/or the conditions one wishes the decomposition to meet.

5. The EBF provides several natural z-dependent ways to determine how close one learning algorithm is to another. For example, one could define the distance between learning algorithms A and B, Δ(A, B), as the mutual information between the distributions P(c | z, A) and P(c | z, B). One could then explore the relationship between Δ(A, B) and how similar the terms in the associated bias-plus-variance decompositions are.

6. Investigate the "data worth" for other z's besides {m, q} and/or other C's besides the quadratic loss function. In particular, for what C's will all our desiderata be met and yet the intrinsic noise be given exactly by the error associated with the Bayes-optimal learning algorithm?

7. Instead of going from a C(F, H, Q) to definitions for the associated bias, variance, and so on, do things in reverse; that is, investigate the conditions under which one can back out to find an associated C(F, H, Q), given arbitrary definitions of bias, intrinsic noise, variance, and covariance (arbitrary within some class of "reasonable" such definitions).

Acknowledgments

I thank Ronny Kohavi, Tom Dietterich, and Rob Schapire for getting me interested in the problem of bias plus variance for nonquadratic loss func-
tions, and Ronny Kohavi and David Rosen for helpful comments on the manuscript. This work was supported in part by the Santa Fe Institute and in part by TXN Inc.

References

Bernardo, J., & Smith, A. (1994). Bayesian theory. New York: Wiley and Sons.
Breiman, L. (1994). Bagging predictors (Tech. Rep. 421). Berkeley: Department of Statistics, University of California.
Breiman, L. (1996). Bias, variance, and arcing classifiers. Unpublished manuscript.
Buntine, W., & Weigend, A. (1991). Bayesian back-propagation. Complex Systems 5: 603–643.
Friedman, J. (1996). On bias, variance, 0/1-loss, and the curse of dimensionality. Unpublished manuscript.
Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation 4: 1–58.
Kohavi, R., & Wolpert, D. (1996). Bias plus variance for zero-one loss functions. In Proceedings of the 13th International Machine Learning Conference. San Mateo, CA: Morgan Kaufmann.
Kong, E. B., & Dietterich, T. G. (1995). Error-correcting output coding corrects bias and variance. In Proceedings of the 12th International Conference on Machine Learning (pp. 314–321). San Mateo, CA: Morgan Kaufmann.
Perrone, M. (1993). Improving regression estimation: Averaging methods for variance reduction with extensions to general convex measure optimization. Unpublished doctoral dissertation, Brown University, Providence, RI.
Tibshirani, R. (1996). Bias, variance, and prediction error for classification rules. Unpublished manuscript.
Wolpert, D. (1994). Filter likelihoods and exhaustive learning. In S. J. Hanson et al. (Eds.), Computational Learning Theory and Natural Learning Systems II (pp. 29–50). Cambridge, MA: MIT Press.
Wolpert, D. (1995). The relationship between PAC, the statistical physics framework, the Bayesian framework, and the VC framework. In D. Wolpert (Ed.), The Mathematics of Generalization. Reading, MA: Addison-Wesley.
Wolpert, D. (1996a). The lack of a priori distinctions between learning algorithms. Neural Computation 8: 1341.
Wolpert, D. (1996b). The existence of a priori distinctions between learning algorithms. Neural Computation 8: 1391.
Wolpert, D., & Kohavi, R. (1997). The mathematics of the bias-plus-variance decomposition for zero-one loss functions. In preparation.
Received September 1, 1995; accepted September 5, 1996.
NOTE
Communicated by David Wolpert
Note on Free Lunches and Cross-Validation

Cyril Goutte
Department of Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark
The “no-free-lunch” theorems (Wolpert & Macready, 1995) have sparked heated debate in the computational learning community. A recent communication (Zhu & Rohwer, 1996) attempts to demonstrate the inefficiency of cross-validation on a simple problem. We elaborate on this result by considering a broader class of cross-validation methods. When used more strictly, cross-validation can yield the expected results on simple examples.

1 Introduction

The no-free-lunch (NFL) theorems (Wolpert & Macready, 1995), a recent contribution to computational learning and to neural networks in particular, give surprising insight into computational learning schemes. One implication of the NFL is that no learning algorithm performs better than random guessing over all possible learning situations. This means in particular that the widely used cross-validation (CV) methods, if successful in some cases, should fail on others. Considering the popularity of these schemes, it is of considerable interest to exhibit a problem where the use of CV leads to a decrease in performance. Zhu and Rohwer (1996) propose a simple setting in which a “cross-validation” method yields worse results than the maximum likelihood estimator it is based on. In the following, we extend these results to a stricter definition of cross-validation and provide an analysis and discussion of the results.

2 Experiments

The experimental setting is the following: a gaussian variable x has mean µ and unit variance. The mean should be estimated from a sample of n realizations of x. Three estimators are compared:

A(n): The mean of the n-point sample. It is the maximum likelihood and least mean squares estimator, as well as the optimal unbiased estimator.

B(n): The maximum of the n-point sample.
C(n): The estimator obtained by a cross-validation choice between A(n) and B(n).

The original “cross-validation” setting (Zhu & Rohwer, 1996) will be denoted Z(n + 1); it samples one additional point and chooses the estimator A or B that is closer to this point. However, the widely used concept of cross-validation (Ripley, 1996; FAQ, 1996) corresponds to resampling and averaging estimators rather than sampling additional points. In this context, the estimator proposed in Zhu and Rohwer (1996) is closer to the split-sample, or hold-out, method. This method is known to be noisy, especially when the validation set is small, which is the case here. On the other hand, a thorough cross-validation scheme would use several validation sets resampled from the data and average over them before choosing estimator A or B. In the leave-one-out (LOO) flavor, the CV score is calculated as the average distance between each point and the estimator obtained on the rest. Note that in this setting, estimator Z operates with more information (one supplementary data point) than the LOO estimator.

The result of an experiment done with n = 16 and 10^6 samples gives the mean squared errors:

A(16): 0.0624    B(16): 3.4141    C_LOO(16): 0.0624    Z(16 + 1): 0.5755
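This comparison is straightforward to reproduce. The following minimal sketch is ours, not the author's code: the function names are invented, the trial count is reduced for speed, and it assumes the setting exactly as described (unit-variance gaussian, n = 16).

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_score(sample, estimator):
    """Leave-one-out CV score: average squared error between each
    point and the estimator computed on the other n - 1 points."""
    n = len(sample)
    return np.mean([(estimator(np.delete(sample, k)) - sample[k]) ** 2
                    for k in range(n)])

def run(n=16, trials=10_000, mu=0.0):
    sq_err = {"A": 0.0, "B": 0.0, "C_LOO": 0.0, "Z": 0.0}
    for _ in range(trials):
        x = rng.normal(mu, 1.0, n)
        a, b = x.mean(), x.max()
        sq_err["A"] += (a - mu) ** 2
        sq_err["B"] += (b - mu) ** 2
        # Z(n+1): one extra point; keep whichever estimate is closer to it.
        extra = rng.normal(mu, 1.0)
        z = a if abs(a - extra) <= abs(b - extra) else b
        sq_err["Z"] += (z - mu) ** 2
        # C_LOO(n): keep whichever estimator has the lower LOO score.
        c = a if loo_score(x, np.mean) <= loo_score(x, np.max) else b
        sq_err["C_LOO"] += (c - mu) ** 2
    return {k: v / trials for k, v in sq_err.items()}

print(run())  # roughly: A ~ 0.0625, B ~ 3.4, C_LOO ~ A, Z much worse than A
```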
In this case, it seems that the proper cross-validation procedure always picks estimator A, whose theoretical mean squared error is 1/16 = 0.0625.

3 Short Analysis

The simple setting used in these experiments allows for a full analysis of the behavior of C_LOO. Consider n data points x_i. The LOO cross-validation estimate is computed by averaging the squared error between each point and the average of the rest. Let us denote by x̄ the average of all x_i and by x̄_k the average of all x_i excluding example x_k:

$$\bar{x}_k = \frac{n \bar{x} - x_k}{n - 1}.$$

Accordingly,

$$\bar{x}_k - x_k = \frac{n}{n - 1} (\bar{x} - x_k),$$

hence the final cross-validation score for estimator A:

$$CV = \left( \frac{n}{n - 1} \right)^2 S(\bar{x}), \qquad (3.1)$$

where $S(w) = \frac{1}{n} \sum_{i=1}^{n} (w - x_i)^2$ is the mean squared error between estimator w and the data. Let us now note $x^* = \max\{x_i\}$ the maximum of the
(augmented) sample, and $x^{**} = \max(\{x_i\} \setminus \{x^*\})$ the second largest element. The cross-validation score for estimator B is:

$$CV^* = S(x^*) + \frac{1}{n} (x^* - x^{**})^2. \qquad (3.2)$$
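Both closed forms can be checked numerically against brute-force leave-one-out scores on an n-point sample; a minimal sketch (helper naming ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = rng.normal(0.0, 1.0, n)

def loo_score(sample, estimator):
    """Brute-force LOO score of an estimator on a sample."""
    return np.mean([(estimator(np.delete(sample, k)) - sample[k]) ** 2
                    for k in range(len(sample))])

S = lambda w: np.mean((w - x) ** 2)  # MSE of the estimate w on the data

# Equation 3.1: closed form of the LOO score of the mean.
assert np.isclose((n / (n - 1)) ** 2 * S(x.mean()), loo_score(x, np.mean))

# Equation 3.2: closed form of the LOO score of the maximum.
x_star, x_star2 = np.sort(x)[-1], np.sort(x)[-2]
assert np.isclose(S(x_star) + (x_star - x_star2) ** 2 / n,
                  loo_score(x, np.max))
```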
In order to realize how the cross-validation estimator behaves, let us first recall that x̄ is the least mean squared error estimator. As such, it minimizes S(w). Furthermore, from Huygens's formula, $S(x^*) = S(\bar{x}) + (\bar{x} - x^*)^2$. Accordingly, we can rewrite

$$CV^* = S(\bar{x}) + (\bar{x} - x^*)^2 + \frac{1}{n+1} (x^* - x^{**})^2.$$

These observations show that for equation 3.2 to be lower than equation 3.1 requires an extremely unlikely situation: $x^*$ should be quite close to both x̄ and $x^{**}$. One such situation could arise in the presence of a negative outlier. Because the mean is not robust, the outlier would produce a severe downward bias in estimator A. Thus, the maximum could very well be a better estimator than the mean in such a case. However, no such happenstance was observed in 5 × 10^6 experiments.

4 Discussion

1. In the experiment of section 2, the LOO estimator does not use the additional data point allotted to C. When using this data point, the performance is identical. CV does not extract any information from the extra sample, but at least manages to keep the information in the original sample.

2. The cross-validation estimator does not perform better than A(16) and yields worse performance than A(17), for which the theoretical MSE is 1/17 ≈ 0.0588. However, the setting imposes a choice between A and B. The optimal choice over 10^5 samples leads to an MSE of just 0.0617. This is a lower bound for estimator C, and it is noticeably above the MSE of the optimal seventeen-point estimator. On the other hand, a random choice between A and B leads to an MSE of 1.7364 on the same sample.

3. It could be objected that LOO is just one more flavor of cross-validation, so results featuring this particular estimator do not necessarily have any relevance to CV methods as a whole. Let us then compare the performance of m-fold CV on the original sixteen-point sample. It consists of dividing the sample into m validation sets and averaging the validation performance of the m estimators obtained on the remaining data. For 10^5 samples, we get (CV_m is the m-fold CV estimator):

Estimator   A        B        CV_16 = C_LOO   CV_8     CV_4     CV_2
MSE         0.0624   3.4141   0.0624          0.0624   0.0624   0.0628
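A minimal sketch of the m-fold variant just described (helper names ours; the folds partition the sixteen-point sample, and the fold-averaged validation MSE decides between A and B):

```python
import numpy as np

rng = np.random.default_rng(2)

def mfold_score(sample, estimator, m):
    """m-fold CV score: average validation MSE over m folds, fitting
    the estimator on the data outside each fold."""
    folds = np.array_split(rng.permutation(sample), m)
    return np.mean([np.mean((estimator(np.concatenate(folds[:i] + folds[i+1:]))
                             - folds[i]) ** 2) for i in range(m)])

def C_mfold(sample, m):
    """Choose between the mean and the max by m-fold CV."""
    if mfold_score(sample, np.mean, m) <= mfold_score(sample, np.max, m):
        return np.mean(sample)
    return np.max(sample)

x = rng.normal(0.0, 1.0, 16)
print({m: round(C_mfold(x, m), 4) for m in (2, 4, 8, 16)})
```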
The slight decrease in performance for CV_2 is due to the fact that we average over only two validation sets. If we resample two additional sets, the performance is identical to that of all the other CV estimators.

4. While requiring additional computation, none of the CV estimators above gains anything on A (even with the help of one additional point). Better performance can, however, be observed with a different choice of estimators. Consider, for example, D(n), the median of the sample, and E(n), the average of the minimum and the maximum. Using 10^5 samples, we compare the CV estimator to D and E calculated on sixteen-point samples:

D(16): 0.0949    E(16): 0.1529    C_LOO(16): 0.0931
Here cross-validation outperforms both estimators it is based on.

5 Conclusion

The NFL results imply that for every situation where a learning method performs better than random guessing, another situation exists where it performs correspondingly worse. Numerous reports of successful applications of common learning procedures suggest that the “unspecified prior” under which they outperform random guessing holds in a number of practical cases. Exhibiting these assumptions is important in order to check whether the conditions for success hold when tackling a new problem. However, it is unlikely that such a simple setting could challenge these yet-unknown assumptions.

Cross-validation has many drawbacks, and it is far from being the most efficient learning method (even among non-Bayesian frameworks). In this simple case, though, it provides perfectly decent results. We now know that there is no free lunch for cross-validation. However, the task of exhibiting an easily understandable, nondegenerate case where it fails has yet to be completed. Furthermore, the task of exhibiting the hidden prior under which cross-validation is beneficial provides challenging prospects for the future.

Acknowledgments

This work was partially supported by a research fellowship from the Technical University of Denmark, and partially performed at LAFORIA (URA 1095), Université Pierre et Marie Curie, Paris. I am grateful to the referees for constructive criticism and to Huaiyu Zhu and Lars Kai Hansen for challenging remarks on drafts of the article.
References

FAQ. (1996). Neural networks frequently asked questions (FAQ in comp.ai.neural-nets, part 3). ftp://ftp.sas.com/pub/neural/FAQ3.html.
Ripley, B. D. (1996). Pattern recognition and neural networks. Cambridge: Cambridge University Press.
Wolpert, D. H., & Macready, W. G. (1995). The mathematics of search (Tech. Rep. No. SFI-TR-95-02-010). Santa Fe: Santa Fe Institute.
Zhu, H., & Rohwer, R. (1996). No free lunch for cross validation. Neural Computation, 8(7), 1421–1426.
Received October 9, 1996; accepted February 4, 1997.
LETTERS
Communicated by Paul Bush
Gamma Oscillation Model Predicts Intensity Coding by Phase Rather than Frequency

Roger D. Traub
IBM Research Division, T. J. Watson Research Center, Yorktown Heights, NY 10598, and Department of Neurology, Columbia University, New York, NY 10032, U.S.A.
Miles A. Whittington
Department of Physiology, Imperial College School of Medicine at St. Mary’s, London W2 1PG, U.K.
John G. R. Jefferys
Department of Physiology, University of Birmingham School of Medicine, Birmingham B15 2TT, U.K.
Gamma-frequency electroencephalogram oscillations may be important for cognitive processes such as feature binding. Gamma oscillations occur in hippocampus in vivo during the theta state, following physiological sharp waves, and after seizures, and they can be evoked in vitro by tetanic stimulation. In neocortex, gamma oscillations occur under conditions of sensory stimulation as well as during sleep. After tetanic or sensory stimulation, oscillations in regions separated by several millimeters or more occur at the same frequency, but with phase lags ranging from less than 1 ms to 10 ms, depending on the conditions of stimulation. We have constructed a distributed network model of pyramidal cells and interneurons, based on a variety of experiments, that accounts for near-zero phase lag synchrony of oscillations over long distances (with axon conduction delays totaling 16 ms or more). Here we show that this same model can also account for fixed positive phase lags between nearby cell groups coexisting with near-zero phase lags between separated cell groups, a phenomenon known to occur in visual cortex. The model achieves this because interneurons fire spike doublets and triplets that have average zero phase difference throughout the network; this provides a temporal framework on which pyramidal cell phase lags can be superimposed, the lag depending on how strongly the pyramidal cells are excited.

1 Introduction

Gamma-frequency electroencephalogram (EEG) oscillations occur in cortical (and diencephalic) structures during a variety of behavioral states and stimulus paradigms (reviewed by Eckhorn, 1994; Gray, 1994; Singer & Gray,
1995). Examples for the hippocampus include the theta state (Soltesz & Deschênes, 1993; Bragin, Jandó, Nádasdy, Hetke, & Wise, 1995; Sik, Penttonen, Ylinen, & Buzsáki, 1995), during which gamma activity is continuous; following limbic seizures (Leung, 1987); transient (few hundred ms) gamma following synchronized pyramidal cell firing, such as physiological sharp waves in vivo (Traub, Whittington, Colling, Buzsáki, & Jefferys, 1996); epileptiform bursts, either in vivo or in vitro (Traub, Whittington, Colling, Buzsáki, & Jefferys, 1996); and following brief tetanic stimulation in vitro (Whittington, Traub, & Jefferys, 1995; Traub, Whittington, Stanford, & Jefferys, 1996). Situations where gamma-frequency oscillations occur in the neocortex include sleep—both slow-wave sleep, in which gamma is localized to a spatial scale of 1–2 mm (Steriade, Amzica, & Contreras, 1995; Steriade, Contreras, Amzica, & Timofeev, 1996), and rapid eye movement sleep, in which gamma is widespread (Llinás & Ribary, 1993); during complex motor performance in behaving monkeys (Murthy & Fetz, 1992); following auditory stimulation (Joliot, Ribary, & Llinás, 1994); and following visual stimulation (Gray & Singer, 1989; Frien, Eckhorn, Bauer, Woelbern, & Kehr, 1994). Sensory-evoked gamma oscillations have also been observed in the olfactory bulb (Adrian, 1950) and olfactory cortex (Freeman, 1959).

The variety of states associated with gamma oscillations makes it unlikely that gamma serves a simple unified function. Somewhat along the lines of other authors (Singer & Gray, 1995), we shall adopt the following working hypothesis: gamma-frequency oscillations that occur sufficiently locally may perhaps be epiphenomenal, but gamma-frequency oscillations that are synchronized (or at least phase locked) over longer distances (say, several millimeters) act to coordinate pools of neurons devoted to a common task. This hypothesis is especially attractive in the visual system, because stimulation with a long bar elicits oscillations that are tightly synchronized (phase lags can be less than 1 ms) across distances of several millimeters in cortical sites specific for receptive field and orientation preference (Gray, König, Engel, & Singer, 1989). If the hypothesis is correct, it becomes important to work out the cellular mechanisms of gamma oscillations. This could provide experimental tools for manipulating the oscillation and insight into the usefulness for coding of phase versus frequency of oscillations in neuronal populations.

With respect to this latter issue, König, Engel, Roelfsema, and Singer (1995) have shown that visual stimulus properties are not encoded solely by precisely synchronized oscillation of selected neuronal populations. Rather, certain optimally driven populations oscillate with near-zero phase difference, even though these populations are spatially separated, whereas other neuronal populations—near to the index population but less optimally driven (by virtue of off-angle orientation tuning)—oscillate with phase delays of up to 4.5 ms. This observation suggests that oscillation phase, but not frequency, could be used to encode stimulus features.

In this article, we present a testable cellular mechanism for the observation
of König et al. (1995), based on an experimental and computer model of gamma oscillations. This model derives from hippocampal and neocortical slice studies, and has the following basic features:

1. Populations of synaptically connected GABAergic interneurons can generate synchronized gamma-frequency oscillations, without a requirement for ionotropic glutamate input from pyramidal cells, provided the interneurons are tonically excited (Whittington et al., 1995). This can be shown experimentally in hippocampal or neocortical slices, during pharmacological blockade of AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid) and NMDA (N-methyl-D-aspartate) receptors, provided there is stimulation of the interneurons with glutamate itself or with the metabotropic glutamate agonist (1S,3R)-ACPD ((1S,3R)-1-aminocyclopentane-1,3-dicarboxylic acid). The physical mechanism by which this synchrony obtains is similar to that described by Wang and Rinzel (1993) for synchronization of GABAergic nucleus reticularis thalami neurons, at near-zero phase lag, by mutual inhibition; it is possible when the inhibitory postsynaptic conductance (IPSC) time course is slow compared to the intrinsic frequency of the uncoupled cells (“intrinsic frequency” as used here means the frequency at which the metabotropically excited cells would fire when synaptically uncoupled). (See also van Vreeswijk, Abbott, & Ermentrout, 1994.)

2. The synchronized firing of a population of pyramidal cells can elicit gamma-frequency oscillations in networks of interneurons, probably in part via a slow, metabotropic excitatory postsynaptic potential (EPSP) (Traub, Whittington, Colling, Buzsáki, & Jefferys, 1996).

3. Gamma-frequency network oscillations can occur simultaneously with the firing of pyramidal cells, for example, after tetanic stimulation in vitro (Traub, Whittington, Stanford, & Jefferys, 1996).

4. Long-range (at least 4 mm) synchrony of gamma-frequency network oscillations can occur with near-zero phase lags, despite axon conduction delays. This was predicted to occur, based on simulations, when interneurons fire spike doublets, the second spike being elicited by recurrent excitation from both nearby and more distant pyramidal cells. Experiments in the hippocampal slice then showed that interneurons do indeed fire doublets when oscillations occur at two distant sites with near-zero phase lag but fire singlets when oscillations occur only locally (Traub, Whittington, Stanford, & Jefferys, 1996). Also as predicted by this model, the first action potential in the interneuron doublet occurs simultaneously with the pyramidal cell action potential (Traub, Whittington, Stanford, & Jefferys, 1996; Whittington, Stanford, Colling, Jefferys, & Traub, in press). (The model explicated by Bush and Sejnowski (1996) produces tight synchrony between two
columns—also separated by conduction delays—without interneuron doublets but with interneurons firing short bursts. It is not known, however, whether longer-range synchrony can be produced by this model. It is possible that bursts of action potentials, fired by interneurons in the model of Bush and Sejnowski, have a physically similar effect to the doublets and triplets in our model and to the doublets in hippocampal experiments.)

5. In local networks (those without axon conduction delays) that generate gamma-frequency oscillations, mean excitation of pyramidal cells is encoded by firing phase relative to the population oscillation. This is a model prediction, not yet verified experimentally (Traub, Jefferys, & Whittington, in press).

We shall demonstrate how, in a network of appropriate topology, the above properties of gamma oscillations can combine so that distant groups of cells oscillate in phase (when both groups are driven strongly), while at the same time, weakly driven cell groups oscillate at the same frequency but with a phase lag, thereby replicating the result of König et al. (1995).

2 Methods

The approach to the network modeling is similar to that described elsewhere (Traub et al., in press; Traub, Whittington, Stanford, & Jefferys, 1996). Briefly, pyramidal cells (e-cells) are simulated as in Traub et al. (1994), and GABAergic interneurons (i-cells) as in Traub and Miles (1995). Pyramidal cells, while capable of intrinsic bursting, are driven by injection of sufficiently large currents to force them into a repetitive firing mode (Wong & Prince, 1981), consistent with observations on actual pyramidal cell firing patterns after tetanic stimulation in vitro (Whittington, Stanford, Colling, Jefferys, & Traub, in press). Synaptic interactions in the model are:

1. e → i, onto interneuron dendrites, with unitary AMPA receptor EPSP large enough to cause the i-cell to fire (Gulyás et al., 1993). The unitary AMPA receptor conductance is $8te^{-t}$ nS, with t in ms. In addition, pyramidal cell firing elicits NMDA receptor-mediated EPSPs in interneurons, with kinetics and saturation as described previously (Traub et al., in press). Most likely, the dominant relevant EPSPs for gamma oscillations are metabotropic glutamate, but as long as the EPSPs are slow enough (and a relaxation time constant of 60 ms works), there is little functional significance of this distinction for the model.

2. i → e, onto pyramidal cell somata and proximal dendrites, as would be made by basket cells. The unitary IPSCs are GABA_A receptor-mediated, with abrupt onset to 2 nS, then exponential decay with time constant 10 ms (Whittington, Jefferys, & Traub, 1996).
3. i → i, onto interneuron dendrites. These also are GABA_A receptor-mediated, with abrupt onset to 0.75 nS, then exponential decay with time constant 10 ms (Traub, Whittington, Colling, Buzsáki, & Jefferys, 1996).

4. e → e synaptic interactions are omitted for the sake of simplicity. Simulation of both local and distributed networks suggests that gamma oscillation properties persist when e → e connections are present, provided the conductances are not too large. In particular, recurrent excitation must be weak enough not to evoke dendritic calcium spikes and cellular bursts (Traub et al., in press). Metabotropic glutamate receptor activation, of likely importance for gamma oscillations, is expected to reduce excitatory synaptic conductances (Baskys & Malenka, 1991; Gereau & Conn, 1995), which might favor the stability of the oscillation. If recurrent excitation occurs in dendritic regions that do not develop full calcium spikes, stability of the oscillation is also favored (Traub, unpublished data).

All three types of synaptic interaction (e → i, i → e, i → i) take place within columns and between columns. As before, pyramidal cells and interneurons are organized into groups (columns), in the present case each having eight pyramidal cells and eight interneurons. The global organization is shown in Figure 1. Such an organization allows stimulation parameters and axon conduction delays to be specified readily. In addition, the network has a form that assumes functional significance. For example, each column might correspond to a visual stimulus orientation/receptive field. If the global visual stimulus is a long horizontal bar, it would excite columns A1 and B1 maximally, and the other columns less; we simulate this by using maximal driving currents to the cells of A1 and B1, with smaller driving currents to the other columns. Hence, current to A1,B1 > A2,B2 > A3,B3 > A4,B4.

Figure 1: Structure of the model. The network consists of two “hypercolumns,” A and B, each consisting of four columns. A hypercolumn can be thought of as corresponding to a piece of receptive field and a column as corresponding to a piece of receptive field with a certain orientation. Structurally, a column has eight pyramidal cells and eight GABAergic interneurons, with between-column connections as indicated by the arrows. When two columns are connected by an arrow (or a column to itself), it means that each cell (pyramidal or interneuron) in one column is connected to each cell (pyramidal or interneuron) in the other. Axon conduction delays are less than 0.5 ms within a column and, in later figures, 2 ms between columns in the same hypercolumn and 4 ms between hypercolumns.

Specific details are as follows. Pyramidal cells and interneurons receive synaptic input from all pyramidal cells and all interneurons in the same column and in connected columns (see Figure 1). In the example to be illustrated, the average driving current to e-cells in columns A1 and B1 was 2.92 and 3.12 nA, respectively; to columns A2 and B2, 2.36 and 2.52 nA; to A3 and B3, 1.8 and 1.92 nA; to A4 and B4, 1.18 and 1.26 nA. (Thus, the current ratios for columns A1:A2:A3:A4, respectively B1:B2:B3:B4, are 1:0.8:0.6:0.4.) For the example simulation illustrated below, axon conduction delays were less than 0.5 ms within a column, 2 ms for distinct columns within a hypercolumn, and 4 ms between hypercolumns. Qualitatively similar results were obtained with different delays, including 3 ms between distinct columns in a hypercolumn and 6 ms between hypercolumns; and, respectively, 2.5 and 7.5 ms. Also, similar results were obtained with different distributions of driving currents. For example, the following current ratios were also used: 1:x:x:x (where x could be 0.2, 0.5, 0.6, or 0.7); 1:0.8:0.7:0.6; 1:0.7:0.6:0.5; 1:0.6:0.5:0.4; 1:0.6:0.4:0.2. Hence, no particular parameter tuning was necessary to produce the qualitative behavior described below.

A total of 29 simulations provided background data for this study, along with over 200 simulations of models with different distributed network topology. Programs were written in FORTRAN augmented with special instructions for IBM SP1 and SP2 parallel computers. Simulation of 1500 ms of neural activity took 3.6 hours on the SP2 and 5.2 hours on the SP1, both using eight nodes. This study thus used 835 SP2 node-hours. (For further details on the simulations, send e-mail to [email protected].)
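To keep the quantitative assumptions of this section in one place, here is a minimal sketch, in code, of the synaptic kernels, conduction delays, and driving currents given above. The organization and names are ours, not the authors' FORTRAN implementation.

```python
import numpy as np

# Unitary synaptic conductance kernels (t in ms, conductance in nS).
def g_ampa(t):                       # e -> i: 8 t e^{-t} nS
    return np.where(t >= 0, 8.0 * t * np.exp(-t), 0.0)

def g_gaba_ie(t):                    # i -> e: onset to 2 nS, tau = 10 ms
    return np.where(t >= 0, 2.0 * np.exp(-t / 10.0), 0.0)

def g_gaba_ii(t):                    # i -> i: onset to 0.75 nS, tau = 10 ms
    return np.where(t >= 0, 0.75 * np.exp(-t / 10.0), 0.0)

# Two hypercolumns of four columns each; eight e-cells and eight
# i-cells per column, all-to-all between connected columns.
columns = [f"{h}{k}" for h in "AB" for k in range(1, 5)]

def delay_ms(c1, c2):
    """Axon conduction delays in the illustrated simulation."""
    if c1 == c2:
        return 0.5                   # within a column (< 0.5 ms)
    if c1[0] == c2[0]:
        return 2.0                   # distinct columns, same hypercolumn
    return 4.0                       # between hypercolumns

# Mean driving currents to e-cells (nA); ratios 1:0.8:0.6:0.4.
drive_nA = {"A1": 2.92, "A2": 2.36, "A3": 1.80, "A4": 1.18,
            "B1": 3.12, "B2": 2.52, "B3": 1.92, "B4": 1.26}

# Sanity check: the AMPA kernel peaks near t = 1 ms.
t = np.arange(0.0, 50.0, 0.1)
print(t[np.argmax(g_ampa(t))])
```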
3 Results

We shall illustrate in detail the results of one particular simulation. In this simulation, the connected columns A1 and B1, while in distinct hypercolumns and separated by 4 ms axon-conduction delays, are both driven maximally (and nearly equally); all other columns are driven less (but not uniformly less). This is the situation that might be expected to occur in visual cortex following stimulation with a long bar.
Figure 2: Behavior of two maximally stimulated columns in different hypercolumns, separated by 4 ms axon conduction delay. The average of four pyramidal cell voltages (respectively, interneuron voltages) is plotted for cells from hypercolumn A, column 1 (A1) and hypercolumn B, column 1 (B1). The population oscillation (35 Hz) is apparent. Pyramidal cell firing in A1 may lag firing in B1 (−) or lead it (+). Interneurons fire synchronized doublets and triplets. (The peaks in these signals that are less than full action potential height result from averaging a small number of voltage signals, in which action potentials are not perfectly synchronized.)
The behavior of the system can be summarized as follows:

1. All columns oscillate at the same frequency (35 Hz).

2. Pyramidal cell firing in the maximally stimulated columns, A1 and B1, occurs with some relative jitter (see Figure 2), but the mean phase lag in the cross-correlation is less than 1 ms (see Figure 4). This occurs despite the axon conduction delay of 4 ms (or up to 7.5 ms in other simulations, not shown) between hypercolumns A and B.

3. In contrast, pyramidal cells in nearby (but differently excited) columns A1 and A3 fire consistently out of phase, with maximally driven A1 always leading A3 (see Figure 3), and a mean phase lag of 2.8 ms (see Figure 4). This takes place even though the conduction delay is only 2 ms between A1 and A3.

4. Interneurons fire in synchronized doublets and triplets (see Figures 2 and 3), in a pattern similar to that described previously (Traub, Whittington, Stanford, & Jefferys, 1996).
Figure 3: Behavior of two nearby columns, separated by 2 ms axon conduction delay, one column stimulated maximally and the other not. The averages of four pyramidal cell voltages (respectively, interneuron voltages) are plotted for cells from hypercolumn A, column 1 (A1) and hypercolumn A, column 3 (A3). The pyramidal cells in A3 receive 0.6 times the mean driving current of pyramidal cells in A1. Pyramidal cell firing in A1 consistently leads pyramidal cell firing in A3 (+).
5. The interneuron cross-correlations are, at least approximately, distributed symmetrically about zero (see Figure 5).

4 Discussion

Our model replicates an interesting experimental observation on stimulus-evoked gamma-frequency oscillations in the visual cortex: distant (at least 7 mm experimentally), but strongly stimulated, columns can oscillate with near-zero phase lag, while nearby, but unequally stimulated, columns can oscillate at the same frequency, but with the weakly stimulated column lagging the strongly stimulated one. Thus, oscillation phase would encode one aspect of how strongly driven neurons are by a nonlocal stimulus (Hopfield, 1995). The essential idea is this: in local networks, current drive to pyramidal cells alters phase relative to the mean oscillation; in distributed networks with uniform stimulation, phase delays can be small despite long axon conduction delays; and, at least with a simple network topology (and without recurrent excitation), these two principles can work together. In order for the replication of data to be useful, the model must meet two
Figure 4: Auto- and cross-correlations of mean pyramidal cell voltages. Autoand cross-correlations were performed on 750 ms of data, the average voltage of four pyramidal cells from each of three columns (A1, A3, and B1). Each column oscillates at 35 Hz. Columns A1 and B1, both maximally driven but also with maximum respective axon conduction delay, oscillate with 0.9 ms mean phase difference. Column A1 leads column A3 by an average of 2.8 ms. Columns A1 and A3 have only 2 ms mutual axon conduction delay, but A3 receives 0.6 times the drive of A1. (The auto- and cross-correlations appear so smooth because it is intracellular voltage signals, including subthreshold voltages, that are being correlated, rather than extracellular unit firing signals, as might be used with in vivo data.)
requirements: it must rest on a reasonable experimental basis, and it must make experimental predictions. Selected, but critical, elements of the network model are supported by the following data:

1. The subnetwork consisting of a local group of interneurons, synaptically interconnected, correctly predicts the dependence of oscillation frequency on GABA_A conductance and IPSC time constant; it also correctly predicts the breakup of the oscillation as a sufficiently low frequency is reached (Whittington, Traub, & Jefferys, 1995; Traub, Whittington, Colling, Buzsáki, & Jefferys, 1996; Wang & Buzsáki, 1996).

2. The model correctly predicted the existence of a “tail” of gamma-frequency oscillations following the synchronous firing of pyramidal cells (Traub, Whittington, Colling, Buzsáki, & Jefferys, 1996).
Figure 5: Auto- and cross-correlations of mean interneuron voltages. Auto- and cross-correlations were performed on 750 ms of data, the average voltage of four interneurons from each of three columns (A1, A3, and B1). Synchronized multiplet firing shows up as the secondary peaks in the autocorrelations at ±4 to 5 ms. The A1/A3 cross-correlation has a trough near 0 ms, with peaks at +1.7 and −2.4 ms.
3. The model correctly predicted several nonintuitive features of collective gamma oscillations in the hippocampal slice: oscillations coherent over long distances (up to 4 mm), but not oscillations occurring only locally, are associated with firing of interneuron doublets (intradoublet interval usually about 4–5 ms) at gamma frequency; the first interneuron spike of the doublet is synchronized with the pyramidal cell action potential; population oscillations slow in frequency as larger neuronal ensembles are entrained (Traub, Whittington, Stanford, & Jefferys, 1996; Whittington, Stanford, Colling, Jefferys, & Traub, in press).

4. Experiments indicate that tetanic stimuli, when of different amplitudes, applied to different subregions of CA1 (in the hippocampal slice) lead to simultaneous oscillations in the respective subregions, of common frequency, but different phase (Whittington, Stanford, Colling, Jefferys, & Traub, in press).

These data provide confidence that certain structural underpinnings of the gamma oscillation model are accurate. Of course, it is true that most
of the experimental data supporting this model derive from the hippocampal CA1 region, wherein recurrent excitation is sparse, while we would like to draw conclusions about the neocortex, wherein recurrent excitation is dense. Further experiments and simulations are clearly necessary. The experimental analysis of recurrent excitation may prove difficult. For example, nonspecific blockade of AMPA receptors will block AMPA receptors on interneurons, which is predicted to disrupt long-range synchronization (Traub, Whittington, Stanford, & Jefferys, 1996). While a selective blocker for interneuron AMPA receptors exists (Iino, Koike, Isa, & Ozawa, 1996), what is required instead is a blocker specific for pyramidal cell AMPA receptors. An alternative approach, used here, is to generate testable predictions in a model that simply omits (for now) e → e connections.

What is different about the model explained here, as compared with the model in previous publications, is the more complex topological structure, which attempts to capture two connected “hypercolumns” rather than simply a one-dimensional chain of cell groups, and the use of different axon conduction delays for nearby columns, as opposed to more distant ones. One result of this additional complexity is the more complicated firing patterns of the interneurons in this, as compared with earlier, network models (Traub et al., in press; Traub, Whittington, Stanford, & Jefferys, 1996): triplets in the present case versus predominantly doublets before. This feature may provide one critical experimental test of this model: if interneurons exist that connect between cortical columns and hypercolumns, possibly layer III large basket cells (Kisvárday, 1992; Kisvárday, Beaulieu, & Eysel, 1993; Kisvárday & Eysel, 1993), then these interneurons should fire in short bursts during visual stimulation with large objects, objects that evoke gamma oscillations synchronized over many millimeters. (See also Bush & Sejnowski, 1996.) Furthermore, on average, interneuron bursts should be synchronized with one another.
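The phase-lag readout used in the Results (the lag at the peak of the cross-correlation of averaged voltage traces) is easy to illustrate on synthetic data; in the sketch below, 35 Hz sinusoids stand in for the model's averaged pyramidal cell voltages.

```python
import numpy as np

def peak_lag_ms(x, y, dt_ms):
    """Lag (ms) at the peak of the cross-correlation of two traces;
    positive values mean x leads y."""
    x, y = x - x.mean(), y - y.mean()
    cc = np.correlate(y, x, mode="full")
    lags = np.arange(-len(x) + 1, len(x)) * dt_ms
    return lags[np.argmax(cc)]

dt = 0.1                                  # ms per sample
t = np.arange(0.0, 750.0, dt)             # 750 ms of "data", as in Figure 4
f = 35.0 / 1000.0                         # 35 Hz, in cycles per ms
a1 = np.sin(2 * np.pi * f * t)            # stand-in for column A1
a3 = np.sin(2 * np.pi * f * (t - 2.8))    # A3 lagging A1 by 2.8 ms
print(peak_lag_ms(a1, a3, dt))            # ~ +2.8 ms (A1 leads A3)
```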
Acknowledgments

This work was supported by IBM and the Wellcome Trust. We are grateful for helpful discussions to Hannah Monyer, Rodolfo Llinás, Charles Gray, Wolf Singer, György Buzsáki, and Nancy Kopell.
References

Adrian, E. D. (1950). The electrical activity of the mammalian olfactory bulb. Electroenceph. Clin. Neurophysiol., 2, 377–388.
Baskys, A., & Malenka, R. C. (1991). Agonists at metabotropic glutamate receptors presynaptically inhibit EPSCs in neonatal rat hippocampus. J. Physiol., 444, 687–701.
Bragin, A., Jandó, G., Nádasdy, Z., Hetke, J., Wise, K., & Buzsáki, G. (1995). Gamma (40–100 Hz) oscillation in the hippocampus of the behaving rat. J. Neurosci., 15, 47–60.
Bush, P. C., & Sejnowski, T. J. (1996). Inhibition synchronizes sparsely connected cortical neurons within and between columns in realistic network models. J. Comput. Neurosci., 3, 91–110.
Eckhorn, R. (1994). Oscillatory and non-oscillatory synchronizations in the visual cortex and their possible roles in associations of visual features. Prog. Brain Res., 102, 405–426.
Freeman, W. J. (1959). Distribution in space and time of prepyriform electrical activity. J. Neurophysiol., 22, 644–666.
Frien, A., Eckhorn, R., Bauer, R., Woelbern, T., & Kehr, H. (1994). Stimulus-specific fast oscillations at zero phase between visual areas V1 and V2 of awake monkey. NeuroReport, 5, 2273–2277.
Gereau, R. W., IV, & Conn, P. J. (1995). Multiple presynaptic metabotropic glutamate receptors modulate excitatory and inhibitory synaptic transmission in hippocampal area CA1. J. Neurosci., 15, 6879–6889.
Gray, C. M. (1994). Synchronous oscillations in neuronal systems: Mechanisms and functions. J. Comput. Neurosci., 1, 11–38.
Gray, C. M., König, P., Engel, A. K., & Singer, W. (1989). Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature, 338, 334–337.
Gray, C. M., & Singer, W. (1989). Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proc. Natl. Acad. Sci. USA, 86, 1698–1702.
Gulyás, A. I., Miles, R., Sik, A., Tóth, K., Tamamaki, N., & Freund, T. F. (1993). Hippocampal pyramidal cells excite inhibitory neurons through a single release site. Nature, 366, 683–687.
Hopfield, J. J. (1995). Pattern recognition computation using action potential timing for stimulus representation. Nature, 376, 33–36.
Iino, M., Koike, M., Isa, T., & Ozawa, S. (1996). Voltage-dependent blockage of Ca2+-permeable AMPA receptors by Joro spider toxin in cultured rat hippocampal neurones. J. Physiol., 496, 431–437.
Joliot, M., Ribary, U., & Llinás, R. (1994). Human oscillatory brain activity near 40 Hz coexists with cognitive temporal binding. Proc. Natl. Acad. Sci. USA, 91, 11748–11751.
Kisvárday, Z. F. (1992). GABAergic networks of basket cells in the visual cortex. Prog. Brain Res., 90, 385–405.
Kisvárday, Z. F., Beaulieu, C., & Eysel, U. T. (1993). Network of GABAergic large basket cells in cat visual cortex (area 18): Implication for lateral disinhibition. J. Compar. Neurol., 327, 398–415.
Kisvárday, Z. F., & Eysel, U. T. (1993). Functional and structural topography of horizontal inhibitory connections in cat visual cortex. Eur. J. Neurosci., 5, 1558–1572.
König, P., Engel, A. K., Roelfsema, P. R., & Singer, W. (1995). How precise is neuronal synchronization? Neural Comp., 7, 469–485.
Leung, L. S. (1987). Hippocampal electrical activity following local tetanization. I. Afterdischarges. Brain Res., 419, 173–187.
Llinás, R., & Ribary, U. (1993). Coherent 40-Hz oscillation characterizes dream state in humans. Proc. Natl. Acad. Sci. USA, 90, 2078–2081.
Murthy, V. N., & Fetz, E. E. (1992). Coherent 25- to 35-Hz oscillations in the sensorimotor cortex of awake behaving monkeys. Proc. Natl. Acad. Sci. USA, 89, 5670–5674.
Sik, A., Penttonen, M., Ylinen, A., & Buzsáki, G. (1995). Hippocampal CA1 interneurons: An in vivo intracellular labeling study. J. Neurosci., 15, 6651–6665.
Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annual Rev. Neurosci., 18, 555–586.
Soltesz, I., & Deschênes, M. (1993). Low- and high-frequency membrane potential oscillations during theta activity in CA1 and CA3 pyramidal neurons of the rat hippocampus under ketamine-xylazine anesthesia. J. Neurophysiol., 70, 97–116.
Steriade, M., Amzica, F., & Contreras, D. (1995). Synchronization of fast (30–40 Hz) spontaneous cortical rhythms during brain activation. J. Neurosci., 16, 392–417.
Steriade, M., Contreras, D., Amzica, F., & Timofeev, I. (1996). Synchronization of fast (30–40 Hz) spontaneous oscillations in intrathalamic and thalamocortical networks. J. Neurosci., 16, 2788–2808.
Traub, R. D., Jefferys, J. G. R., & Whittington, M. A. (in press). Simulation of gamma rhythms in networks of interneurons and pyramidal cells. J. Comput. Neurosci.
Traub, R. D., & Miles, R. (1995). Pyramidal cell-to-inhibitory cell spike transduction explicable by active dendritic conductances in inhibitory cell. J. Comput. Neurosci., 2, 291–298.
Traub, R. D., Whittington, M. A., Colling, S. B., Buzsáki, G., & Jefferys, J. G. R. (1996). Analysis of gamma rhythms in the rat hippocampus in vitro and in vivo. J. Physiol., 493, 471–484.
Traub, R. D., Whittington, M. A., Stanford, I. M., & Jefferys, J. G. R. (1996). A mechanism for generation of long-range synchronous fast oscillations in the cortex. Nature, 383, 621–624.
van Vreeswijk, C., Abbott, L. F., & Ermentrout, G. B. (1994). When inhibition not excitation synchronizes neural firing. J. Comput. Neurosci., 1, 313–321.
Wang, X.-J., & Buzsáki, G. (1996). Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model. J. Neurosci., 16, 6402–6413.
Wang, X.-J., & Rinzel, J. (1993). Spindle rhythmicity in the reticularis thalami nucleus: Synchronization among mutually inhibitory neurons. Neuroscience, 53, 899–904.
Whittington, M. A., Stanford, I. M., Colling, S. B., Jefferys, J. G. R., & Traub, R. D. (in press). Spatiotemporal patterns of gamma-frequency oscillations tetanically induced in the rat hippocampal slice. J. Physiol.
Whittington, M. A., Jefferys, J. G. R., & Traub, R. D. (1996). Effects of intravenous anaesthetic agents on fast inhibitory oscillations in the rat hippocampus in vitro. Br. J. Pharmacol., 118, 1977–1986.
Whittington, M. A., Traub, R. D., & Jefferys, J. G. R. (1995). Synchronized oscillations in interneuron networks driven by metabotropic glutamate receptor activation. Nature, 373, 612–615.
Wong, R. K. S., & Prince, D. A. (1981). Afterpotential generation in hippocampal pyramidal cells. J. Neurophysiol., 45, 86–97.
Received September 27, 1996; accepted January 22, 1997.
Communicated by William Skaggs
Multiunit Normalized Cross Correlation Differs from the Average Single-Unit Normalized Correlation

Purvis Bedenbaugh
Department of Otolaryngology and Keck Center for Integrative Neuroscience, University of California at San Francisco, San Francisco, CA 94143, U.S.A.
George L. Gerstein
Department of Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
As the technology for simultaneously recording from many brain locations becomes more available, more and more laboratories are measuring the cross-correlation between single-neuron spike trains, and between composite spike trains derived from several undiscriminated cells recorded on a single electrode (multiunit clusters). The relationship between single-unit correlations and multiunit cluster correlations has not yet been fully explored. We calculated the normalized cross-correlation (NCC) between single-unit spike trains and between small clusters of units recorded in the rat somatosensory cortex. The NCC between small clusters of units was larger than the NCC between single units. To understand this result, we investigated the scaling of the NCC with the number of units in a cluster. Multiunit cross-correlation can be a more sensitive detector of neuronal relationship than single-unit cross-correlation. However, changes in multiunit cross-correlation are difficult to interpret uniquely because they depend on the number of cells recorded on each electrode and because they can arise from changes in the correlation between cells recorded on a single electrode or from changes in the correlation between cells recorded on two electrodes.

1 Introduction

One of the great challenges of contemporary neuroscience is to understand the distributed patterns of brain activity associated with learned behaviors and remembered stimuli. Increasing evidence supports the view that assemblies of neurons (Hebb, 1949; Gerstein, Bedenbaugh, & Aertsen, 1989) subserving particular percepts and behaviors are embedded in a nonlinear, cooperative network. The collective activity of the cells within a neuronal assembly provides a context for interpreting the activity of each of its component cells. The experimental detection of such neuronal assemblies
and study of their properties depend on simultaneously recording from many neurons and on measurement of timing interrelationships among the several neuronal activities. Analysis of the interactions between neuronal assemblies is even more demanding of recording technology and intensive calculation.

For over thirty years, the cross-correlation between two neuronal spike trains (Perkel, Gerstein, & Moore, 1967b), its generalizations (Gerstein & Perkel, 1972; Aertsen, Gerstein, Habib, & Palm, 1989), and its homolog in the frequency domain (Perkel, 1970; Rosenberg, Amjad, Breeze, Brillinger, & Halliday, 1989) have been the principal tools for assessing neuronal relationships. Methods have been developed to normalize the cross-correlation so that the resulting correlation coefficient may be made independent both of firing rate and of modulations in firing rate (Aertsen et al., 1989; Palm, Aertsen, & Gerstein, 1988) and may be fairly compared across different observations. When these methods are applied to single-unit recordings, correlating the spikes of two simultaneously recorded single neurons, the results are unambiguous. It is clear which particular neurons are correlated; changes in correlation are due to a change in the coordination of particular cells, not to changes in recruitment or to poor recording stability.

There are, however, difficulties in using spike train pair correlation to study neural assembly properties. Single-unit cross-correlation is a relatively insensitive measure of neuronal relationship (Bedenbaugh, 1993). Given the low firing rates of most cortical neurons, this makes it difficult to detect a relationship between simultaneously recorded spike trains within a reasonably short recording time. In turn, this translates into fewer data points per experiment, a major limitation for investigations that seek to explore the distributed pattern of neuronal responses and their coordination. The development of spike-shape sorting technology has partially improved the situation with respect to sampling and efficiency problems, since it allows separation of multiunit observations from a single electrode into spike trains that originate from different neurons. However, the price is a complex technology, and also a combinatorial explosion of pairs as the number of separable simultaneously recorded neurons increases. The latter problem has been partially mitigated, although at the cost of intensive computation (Gerstein, Perkel, & Dayhoff, 1985; Gerstein & Aertsen, 1985).

In reaction to these difficulties, a number of investigators have begun to use multiunit recordings without separation of waveforms; that is, they use a composite data stream consisting of several superimposed spike trains from each electrode. The rationale is that cells in localized cortical columns have homogeneous response properties. Changes in the firing rate of a multiunit recording often reflect changes in the activity of the individual neurons in the population. If we assume that possible confounding factors, such as recruitment of additional neurons or recording instability, can be minimized, it is reasonable and increasingly common to study neuronal assembly properties by cross-correlation of multiunit recordings.
We might expect that the normalized cross-correlation (NCC) between two multiunit spike trains would correspond to the average cross-correlation between the single units recorded on different electrodes. However, for our recordings in rat somatosensory cortex, we found that the multiunit normalized cross-correlation was larger than the single-unit normalized cross-correlation. The analysis in this article suggests that this may be because the correlation between units recorded on a single electrode influences the multiunit cross-correlation. This view is also consistent with Kreiter and Singer's (1995) recent report of dependence of multiunit cross-correlation on the local correlation between the single-unit spike trains that comprise the multiunit spike trains.

2 Methods

2.1 Experimental. Extracellular neuronal potentials were recorded from the forepaw region of the somatosensory cortex of seven rats using either three or five tungsten microelectrodes and conventional methods, described elsewhere (Bedenbaugh, 1993; Bedenbaugh, Gerstein, & Bolton, 1997). Recordings from each of the several electrodes used in each experiment were separated into a total of eighty-nine single-unit spike trains with 0.5 millisecond time resolution using hardware principal component spike sorters (Gerstein, Bloom, Espinosa, Evanczuk, & Turner, 1983). Of these, 638 pairs of single-unit trains had been simultaneously recorded and were available for cross-correlation analysis. Finally, a total of twenty-one multiunit spike trains were created by merging the sorted single-unit spike trains. Each cell pair was recorded under four different somatic stimulation conditions: no stimulus (spontaneous activity), touch to the best receptive field of the cells recorded by one electrode, touch to the best receptive field of the cells recorded by another electrode, and a simultaneous touch to both skin sites (Bedenbaugh, 1993; Bedenbaugh et al., 1997).

The data were blocked into a sequence of stimulus-dependent trials, and peri-stimulus-time (PST) histograms and raw cross-correlograms were computed for each stimulus condition. The normalized cross-correlation (correlation coefficient) histogram for a pair of spike trains was calculated by subtracting the cross-correlation of the two PST histograms divided by the number of stimuli (the PST predictor) (Perkel et al., 1967a) from the raw cross-correlogram, then normalizing the result by the geometric mean variance of the two point processes (Bedenbaugh, 1993; Bedenbaugh et al., 1997).
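One concrete reading of this normalization recipe is sketched below. It is an illustration, not the authors' implementation: the function and variable names are ours, the circular shift is a stand-in for proper edge handling, and the final normalization is our interpretation of "geometric mean variance."

```python
import numpy as np

def ncc_histogram(trials_a, trials_b, max_lag):
    """Normalized cross-correlation histogram with the PST predictor
    removed.  trials_a, trials_b: (n_trials, n_bins) arrays of 0/1
    spike indicators for the two spike trains."""
    n_trials = trials_a.shape[0]
    pst_a, pst_b = trials_a.sum(axis=0), trials_b.sum(axis=0)
    lags = np.arange(-max_lag, max_lag + 1)
    # Raw cross-correlogram: coincidence counts at each lag, all trials.
    raw = np.array([sum(np.dot(np.roll(a, lag), b)
                        for a, b in zip(trials_a, trials_b))
                    for lag in lags])
    # PST predictor: cross-correlation of the two PST histograms,
    # divided by the number of stimuli.
    pred = np.array([np.dot(np.roll(pst_a, lag), pst_b)
                     for lag in lags]) / n_trials
    # Normalize by the geometric mean variance of the two point processes.
    norm = np.sqrt(trials_a.var() * trials_b.var()) * trials_a.size
    return (raw - pred) / norm
```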
2.2 Analytical. Consider the situation illustrated schematically in Figure 1. There are two electrodes, A and B, each recording from m cells, so that the spike train recorded on electrode A is $A(t) = a_1(t) + a_2(t) + \cdots + a_m(t)$, and the spike train recorded on electrode B is $B(t) = b_1(t) + b_2(t) + \cdots + b_m(t)$, where t is time, and m = 2 in the figure. Discretize time into $N_T$ total bins of equal duration, short enough that any one cell may fire no more than one spike in one bin-width of time. If we assign the value 1 to the occurrence of a spike, and 0 to the absence of a spike in a bin, then the expected number of spikes per unit time of any individual cell, say $a_i$, is $N_i/N_T$, where $N_i = \sum_{k=1}^{N_T} a_{ik}$ and k denotes a particular time bin. Since the point process derived from the spike train can only take the values 0 or 1, the expectation (mean value) $E[a_{ik}^2]$ is also $N_i/N_T$. Further, assuming that all of the individual spike trains contain the same number of spikes and that the number of times any two cells recorded on a single electrode both fire in one time bin is $N_{close}$, we see that the mean and variance of a multiunit process, say A, are

$$\mu_A = E[A_k] = \sum_{i=1}^{m} E[a_{ik}] = m \frac{N_i}{N_T},$$

$$\sigma_A^2 = E[A_k^2] - E^2[A_k] = E\left[ \sum_{i=1}^{m} \sum_{j=1}^{m} a_{ik} a_{jk} \right] - \mu_A^2 = \sum_{i=1}^{m} \sum_{j=1}^{m} E[a_{ik} a_{jk}] - \mu_A^2 = m(m-1) \frac{N_{close}}{N_T} + m \frac{N_i}{N_T} - m^2 \frac{N_i^2}{N_T^2},$$
where expectation is taken across the time index k. The assumptions leading to these equations are equivalent to saying that all of the cells recorded by a single electrode have the same firing rate, and the normalized cross-correlation between any two of the single-unit spike trains recorded by a single electrode has the same value, ρclose. If we further assume that the correlation coefficient between the spike trains of any two cells recorded on different electrodes has the same value, ρdistant, and that the average firing rate of all of the cells recorded on both electrodes is the same, we can find that the covariance of the two multiunit spike trains is

$$C_{AB} = E[A_k B_k] - E[A_k] E[B_k] = E\left[ \sum_{i=1}^{m} \sum_{j=1}^{m} a_{ik} b_{jk} \right] - \mu_A \mu_B = \sum_{i=1}^{m} \sum_{j=1}^{m} E[a_{ik} b_{jk}] - \mu_A \mu_B = m^2 \frac{N_{distant}}{N_T} - m^2 \frac{N_i^2}{N_T^2}.$$
Figure 1: Analytical paradigm. The analysis considers the cross-correlation between two multiunit spike trains recorded by electrode A (recording cells 1 and 2) and electrode B (recording cells 3 and 4). The analysis considers the situation in which all spike trains recorded on a single electrode have one value of normalized cross-correlation (ρclose, solid lines) and spike trains recorded on different electrodes have another value of NCC (ρdistant, dashed lines).
Normalizing by the geometric mean of the variance of the two multiunit spike trains and canceling terms, we find that the normalized cross-correlation (correlation coefficient) function for the multiunit spike trains is

$$\rho_{AB} = \frac{C_{AB}}{\sqrt{\sigma_A^2 \sigma_B^2}} = \frac{m \rho_{distant}}{(m-1)\rho_{close} + 1}.$$
Although it might appear that this expression for ρAB could range beyond the interval [−1, 1], this possibility is excluded by the assumptions that the correlation between any two cells recorded on one electrode is the same and that the correlation between any two cells recorded on different electrodes is the same. Moreover, in the usual situation for neurophysiological recordings, with the number of spikes small compared to the total number of time bins, only positive and very small negative values for the cross-correlation are possible.
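Tabulating this expression makes the scaling explicit; a minimal sketch (function name ours):

```python
def rho_multiunit(m, rho_close, rho_distant):
    """Multiunit NCC predicted for two electrodes recording m cells
    each, with within-electrode correlation rho_close and
    between-electrode correlation rho_distant."""
    return m * rho_distant / ((m - 1) * rho_close + 1)

# With uncorrelated cells on each electrode (rho_close = 0), the
# multiunit NCC grows linearly with the cluster size m ...
print([rho_multiunit(m, 0.0, 0.05) for m in (1, 2, 4, 8)])
# ... while local correlation damps the amplification.
print([rho_multiunit(4, rc, 0.05) for rc in (0.0, 0.25, 0.5, 1.0)])
```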
[Figure 2: histograms of pair counts versus NCC peak area, for single-unit pairs (left) and multiunit pairs (right).]
Figure 2: Single- and multiunit peak NCC (normalized cross correlation). The peak NCC for our population of pairs of single-unit spike trains is shown at the left. At the right, the peak NCC for the corresponding population of multiunit spike trains is shown. The multiunit correlation peaks are much larger.
3 Results

3.1 Experimental. Figure 2 shows the area under the central peak (τ = ±10 msec, where τ is the time shift variable) of the normalized cross-correlation for our rat somatosensory system data set (Bedenbaugh, 1993; Bedenbaugh et al., 1997). The normalized cross-correlation for the multiunit spike trains is clearly larger than the normalized cross-correlation for the single-unit spike trains. Note that the scale on the ordinate is different for the two graphs; since only one multiunit spike train was obtained from each electrode, the data set included many more single-unit cross-correlograms.

3.2 Analytical. Figure 3 shows that for our model system, the multiunit normalized cross-correlation (ρmultiunit) is a linear function of the NCC between the single-unit spike trains recorded on different electrodes, ρdistant. The slope of this relationship is determined by the NCC between spike trains recorded on the same electrode. Counting the correlation between each of the cells recorded on one electrode and each cell recorded on the
[Figure 3: ρmultiunit plotted against ρdistant, with lines for ρclose = −0.5, 0.0, 0.5, and 1.0.]
Figure 3: Multiunit NCC as a function of distant NCC (m = 2). The multiunit NCC, ρmultiunit , is proportional to the distant NCC, ρdistant . The slope is linearly proportional to m, the number of cells recorded on a single electrode, and is inversely related to the close NCC, ρclose . Note that the minimum slope is 1, when m = 1 and ρclose = 1.
other electrode effectively amplifies the multiunit NCC. Each spike train has multiple opportunities to contribute to the multiunit NCC. Note that the linear dependence of ρmultiunit on ρdistant with zero intercept implies that a difference in the sign of two ρmultiunit measurements may be unambiguously interpreted as arising from a difference in the sign of ρdistant . This is also evident in the symmetry about the abscissa of ρmultiunit as a function of ρclose , as seen in Figure 4. The special case of ρclose = ρdistant = ρ is illustrated in Figure 5. Note that the slope for negative correlation is steeper than for positive correlation, implying that in this circumstance multiunit NCC is a more sensitive detector of negative cross-correlation than of positive cross-correlation. The NCC is plotted as an inset in Figure 5 for the case of two cells recorded on each electrode. The special case of ρclose = ρdistant courses a trajectory diagonally across this surface.
[Figure 4: ρmultiunit plotted against ρclose, with curves for ρdistant = ±0.1, ±0.5, and ±0.9.]
Figure 4: Multiunit NCC as a function of close NCC (m = 2). The multiunit NCC, ρmultiunit , is a nonlinear function of close NCC, ρclose . Note that in the situation we considered, positive values for ρdistant lead to positive values for ρmultiunit , and negative values for ρdistant lead to negative values for ρmultiunit .
3.3 Discussion. This study was inspired by the observation that the normalized cross-correlation (correlation coefficient) between multiunit spike trains recorded from the rat somatosensory cortex was significantly larger than the cross-correlation between the single-unit spike trains composing the multiunit recordings. Our analysis suggests why this should be so and suggests that multiple-unit cross-correlation is an intrinsically more sensitive detector of neuronal relationships than is single-unit cross-correlation.

Two technical points should be noted. First, the analysis here is valid for any bin in a normalized cross-correlation histogram, for both exact coincidences and delayed coincidences. Second, the analysis presented here does not take into account the refractory period of the recording system measuring the multiunit spike trains, making it incapable of resolving simultaneous events on a single electrode. This dead time deletes some nearly simultaneous spikes from different cells from the multiunit spike train, and
Figure 5: Multiunit NCC when all single-unit NCCs are equal; curves are shown for m = 2 and m = 4. In the special case where the NCC between any two cells is the same, regardless of location (ρclose = ρdistant = ρ), ρmultiunit is a nonlinear function of ρ. Given our assumptions, it is an even more sensitive indicator of negative cross-correlation than of positive cross-correlation. Note, however, that the range of negative correlations that can satisfy the restriction of all recorded cells equally correlated is small. The inset plots the multiunit correlation as a surface parameterized by ρclose and ρdistant for the case of two neurons recorded on each electrode. The large curve for m = 2 courses a trajectory diagonally across this surface.
so should reduce the sensitivity of the multiunit cross-correlation to correlations between spike trains recorded on a single electrode. This analysis considered the case with all observed neurons having the same, constant firing rate. If firing rate changes slowly, these results should still apply. If some of the neurons within a multiunit cluster fire at different rates, we expect less dependence of cross-correlation on the number of neurons in a cluster. For this model system, the multiunit NCC is larger than the single-unit cross-correlation between cells recorded on different electrodes. This means
that although the NCC between two multiunit spike trains is related to the NCC between the individual spike trains recorded by the two electrodes, it is not an unambiguous quantitative measure of that relationship. For example, if we take ρclose = 0, the multiunit NCC increases linearly with the number of cells comprising the multiunit spike trains, so that a change in recruitment may masquerade as a change in multiunit cross-correlation. More generally, a change in multiunit NCC can also result from a change in either the close NCC between cells recorded on a single electrode or the distant NCC between cells recorded on different electrodes, unless the change produces a sign change, which can be attributed unambiguously to ρdistant. These ambiguities limit the utility of comparisons among individual multiunit cross-correlograms. There is a place for comparing populations of multiunit cross-correlograms across treatment groups, though possible changes in the typical number of cells recorded due to the treatment would remain a concern. On the other hand, multiunit NCC is a more sensitive indicator of a relationship between spike trains than is single-unit NCC, and if the recordings are stable and stationary, a change in multiunit cross-correlation implies some underlying change in the cross-correlation between the single-unit spike trains. Multiunit NCC may be especially useful for detecting negative cross-correlations, which have been observed more rarely than have positive cross-correlations. The ambiguity with respect to the underlying correlation relationship and the difficulty of assessing the stationarity (uniform recruitment) and stability of multiunit recordings limit the conclusions that can be drawn by comparing multiunit cross-correlation measurements. With appropriate caution, the technical ease with which such measurements can be obtained, and their special sensitivity for detecting cross-correlation, make them useful tools for neuroscientists nonetheless.

Acknowledgments

A preliminary version of this work was presented as a poster at the 1993 annual meeting of the Society for Neuroscience (Bedenbaugh & Gerstein, 1993). It was supported by NIDCD F32-DC00144 and RO1-DC01249 and by NIMH MH R37-467428. Thanks to Michael Brecht and Heather Read for reading the manuscript. Thanks to Hagai Attias for helpful discussions. Portions of this work were performed in the laboratory of Michael M. Merzenich.

References

Aertsen, A. M. H. J., Gerstein, G. L., Habib, M. K., & Palm, G. (1989). Dynamics of neuronal firing correlation: Modulation of "effective connectivity." Journal of Neurophysiology, 61(5), 900–917.
Bedenbaugh, P. H. (1993). Plasticity in the rat somatosensory cortex induced
by local microstimulation and theoretical investigations of information flow through neurons. Unpublished doctoral dissertation, University of Pennsylvania, Philadelphia.
Bedenbaugh, P. H., & Gerstein, G. L. (1993). Multiunit normalized cross correlation differs from the average single unit normalized correlation in rat somatosensory cortex. Society for Neuroscience Abstracts, 19(2), 1566.
Bedenbaugh, P. H., Gerstein, G. L., & Bolton, M. (1997). Intracortical microstimulation in rat somatosensory cortex: Receptive field plasticity and somatic stimulus dependent spike train correlations. Unpublished manuscript.
Gerstein, G. L., & Aertsen, A. M. H. J. (1985). Representation of cooperative firing activity among simultaneously recorded neurons. Journal of Neurophysiology, 54(6), 1513–1528.
Gerstein, G. L., Bedenbaugh, P., & Aertsen, A. M. H. J. (1989). Neuronal assemblies. IEEE Transactions on Biomedical Engineering, 36(1), 4–14.
Gerstein, G. L., Bloom, M. J., Espinosa, I. E., Evanczuk, S., & Turner, M. R. (1983). Design of a laboratory for multineuron studies. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(5), 668–676.
Gerstein, G. L., & Perkel, D. H. (1972). Mutual temporal relationships among neuronal spike trains. Biophysical Journal, 12, 453–473.
Gerstein, G. L., Perkel, D. H., & Dayhoff, J. E. (1985). Cooperative firing activity in simultaneously recorded populations of neurons: Detection and measurement. Journal of Neuroscience, 5(4), 881–889.
Hebb, D. O. (1949). The organization of behavior. New York: Wiley.
Kreiter, A. K., & Singer, W. (1995). Dynamic patterns of synchronization between neurons in area MT of awake macaque monkeys. Society for Neuroscience Abstracts, 21(2), 905.
Palm, G., Aertsen, A. M. H. J., & Gerstein, G. L. (1988). On the significance of correlations among neuronal spike trains. Biological Cybernetics, 59, 1–11.
Perkel, D. H. (1970). Spike trains as carriers of information. In F. O. Schmitt (Ed.), The neurosciences: A second study program (pp. 587–596). New York: Rockefeller University Press.
Perkel, D. H., Gerstein, G. L., & Moore, G. P. (1967a). Neuronal spike trains and stochastic point processes: II. Simultaneous spike trains. Biophysical Journal, 7(4), 419–440.
Perkel, D. H., Gerstein, G. L., & Moore, G. P. (1967b). Neuronal spike trains and stochastic point processes: I. The single spike train. Biophysical Journal, 7(4), 391–418.
Rosenberg, J. R., Amjad, A. M., Breeze, P., Brillinger, D. R., & Halliday, D. M. (1989). The Fourier approach to the identification of functional coupling between neuronal spike trains. Progress in Biophysics and Molecular Biology, 53, 1–31.
Received May 14, 1996; accepted November 12, 1996.
Communicated by Garrison Cottrell
A Simple Common Contexts Explanation for the Development of Abstract Letter Identities Thad A. Polk Department of Psychology, University of Michigan, Ann Arbor, MI 48109, U.S.A.
Martha J. Farah Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
Abstract letter identities (ALIs) are an early representation in visual word recognition that are specific to written language. They do not reflect visual or phonological features, but rather encode the identities of letters independent of case, font, sound, and so forth. How could the visual system come to develop such a representation? We propose that because many letters look similar regardless of case, font, and other characteristics, these provide common contexts for visually dissimilar uppercase and lowercase forms of other letters (e.g., e between k and y in key and E in the visually similar context K-Y). Assuming that the distribution of words’ relative frequencies is comparable in upper- and lowercase (that just as key is more frequent than pew, KEY is more frequent than PEW), these common contexts will also be similarly distributed in the two cases. We show how this statistical regularity could lead Hebbian learning to produce ALIs in a competitive architecture. We present a self-organizing artificial neural network that illustrates this idea and produces ALIs when presented with the most frequent words from a beginning reading corpus, as well as with artificial input. 1 Introduction People can read text written in many different ways: in UPPERCASE, lowercase, or even AlTeRnAtInG cAsE; in different sizes and in different fonts. In many cases, these different forms look very different (e.g., compare are and ARE) and yet we effortlessly identify them as the same word. How do we do it? One popular hypothesis is that an early stage in reading involves the computation of abstract letter identities (ALIs), that is, a representation of letters that denotes their identity but abstracts away from their visual appearance (uppercase versus lowercase, font, size, etc.) (Coltheart, 1981; Besner, Coltheart, & Davelaar, 1984; Bigsby, 1988; Mozer, 1989; Prinzmetal, Hoffman, & Vest, 1991), and a number of empirical studies have provided support for this hypothesis. For example, even when subjects are asked to classify pairs Neural Computation 9, 1277–1289 (1997)
© 1997 Massachusetts Institute of Technology
of letter strings as same or different based purely on physical criteria, letter strings that differ in case but share the same letter identities (e.g., HILE/hile) are distinguished less efficiently than are strings with different spellings but the same phonological code (e.g., HILE/hyle) (Besner et al., 1984). This result suggests that subjects do compute ALIs (identical ALIs [HILE/hile] impair performance), and this computation cannot be turned off and is thus architectural (the task is based on physical equivalence, not ALI equivalence, and yet ALIs affected performance). Also, just as subjects tend to underestimate the number of letters in a string of identical letters (e.g., DDDD) compared with a heterogeneous string (e.g., GBUF), subjects also tend to underestimate the total number of Aa’s and Ee’s in a display more often when uppercase and lowercase instances of a single target appear (e.g., Aa) than when one of each target appears (e.g., Ae; Mozer, 1989). The repetition of ALIs must be responsible for this effect because visual forms were not repeated in the mixed-case displays (e.g., uppercase A and lowercase a look different). Notice that these effects cannot be due simply to the visual similarity of different visual forms for a given letter because they also show up for letters whose visual forms are not visually similar (e.g., A/a, D/d, E/e). It appears that this representation is not a phonological code but is computed earlier by the visual system itself. For example, Coltheart (1981) reported a brain-damaged patient who had lost the ability to produce or distinguish the sounds of words and letters, but was able to retrieve the meanings of written words and to determine whether nonwords that differed in case were composed of the same letters (ANER/aner versus ANER/aneg). This result suggests that the representation of ALIs does not depend on letter phonology because access to ALIs was preserved even when letter phonology was severely impaired. Similarly, Reuter-Lorenz and Brunn (1990) described a patient with damage to the left extrastriate visual cortex who was impaired at extracting letter identities and was thus unable to read. In a functional magnetic resonance imaging study, Polk et al. (1996) found an area in left inferior extrastriate visual cortex that responded significantly more to alternating case words and pseudowords than to consonant strings. A natural interpretation of this result is that the brain area is sensitive to abstract, rather than visual, letter and word forms, because the stimuli were visually unfamiliar. Finally, Bigsby (1988) presented empirical evidence in normal subjects suggesting that ALIs are computed prior to phonological or lexical access. The evidence, then, is that a relatively early representation in the visual processing of orthography (before lexical access or the representation of phonology) is specific to our writing system and does not reflect fundamental visual properties such as shape. Different stimuli that have little or no visual similarity (e.g., A and a) are represented with similar codes. What would lead visual areas to develop such a representation?
2 The Common Contexts Hypothesis We propose that the visual contexts in which different forms of the same letter appear interact with correlation-based Hebbian learning in the visual system of the brain to produce ALIs. Specifically, because many letters look similar regardless of case, font, and so forth, we assume that different visual forms of the same letter share similar distributions of visual contexts and that this correlation leads the visual system to produce representations corresponding to ALIs. People are exposed to any given word written in a variety of different ways: all capitals, lowercase, in different fonts, in different sizes, and so on. Because the shapes of some letters are relatively invariant over these transformations, they tend to make the contexts in which one visual form of a letter appears similar to the contexts in which other visual forms of that same letter appear. For example, if the visual form a occurs in certain contexts (e.g., between c and p in cap, before s in as), then the visual form A will also occur in visually similar contexts (between C and P in CAP, before S in AS). Of course, the different visual forms of some letters are fairly different (e.g., D versus d), and so there will be some contexts that are unique to one visual form of a letter (e.g., a but not A occurs in the context d-d). But given that a number of letters are similar in their uppercase and lowercase forms (Cc, Kk, Oo, Pp, Ss, Uu, Vv, Ww, Xx, Zz, and to some degree Bb, Ff, Hh, Ii, Jj, Ll, Mm, Nn, Tt, and Yy; the obvious exceptions are Aa, Dd, Ee, Gg, Qq, and Rr) and that letters in different fonts and sizes tend to look similar, there will be a significant proportion of common contexts for dissimilar-looking uppercase and lowercase letter forms.
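As a toy illustration of this statistic (ours, not from the article), one can collapse the case-similar letters listed above into single visual forms and then compare the contexts in which the remaining letters' two forms occur; cap and CAP then provide literally identical visual contexts for a and A:

```python
# The 20 letters whose upper- and lowercase shapes are treated as similar.
CASE_SIMILAR = set("CKOPSUVWXZBFHIJLMNTY")

def visual_form(ch):
    """Collapse case for case-similar letters; keep distinct forms otherwise."""
    return ch.upper() if ch.upper() in CASE_SIMILAR else ch

def contexts(word, target):
    """Visual (left, right) neighbor pairs around each form of `target`."""
    w = [visual_form(c) for c in word]
    return {(w[i - 1] if i > 0 else "#", w[i + 1] if i < len(w) - 1 else "#")
            for i, c in enumerate(word) if c.upper() == target}

print(contexts("cap", "A"))  # {('C', 'P')}: the context of lowercase a
print(contexts("CAP", "A"))  # {('C', 'P')}: the same context for uppercase A
```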
3 A Neural Network Model Why would common contexts lead the visual system to produce representations corresponding to ALIs? Figure 1 presents a simple and natural mechanistic model that demonstrates one possibility. The model is a two-layer neural network that uses a Hebbian learning rule to modify the weights of the connections between the input and output layers. Hebbian learning is a neurophysiologically plausible mechanism that generally corresponds to the following rule: if two units are both firing, then their connection is strengthened; if only one unit of a pair is firing, then their connection is weakened (Hebb, 1949). The input layer represents the visual forms of input letters using a localist representation (each unit represents a visual form, and similar visual forms, such as C and c, are represented by the same unit). This representation does not code letter position because position does not play a role in our explanation. Initially, the output layer does not represent anything (since the connections from the input layer are initially random), but with training it should self-organize to represent ALIs.
Figure 1: A neural network model of the development of abstract letter identities. The input layer (top) represents the visual forms of input letters but does not code their position. Letters whose uppercase and lowercase forms are visually similar are represented by a single unit (e.g., C, P, S). In this example, the word cap is presented. Initially, the output representation is random (left), but eventually a cluster of activity develops (right), and Hebbian learning strengthens the connections to it from the input letters.
Each unit in the output layer is connected to its eight surrounding neighbors by excitatory connections (except for units on the edges of the output layer; there is no wraparound), and units farther away are connected by inhibitory connections, in keeping with previous models of cortical self-organization (von der Malsburg, 1973). As the last simulation will demonstrate, the critical assumption is that there is competition in the output layer (implemented here by inhibitory connections); the excitatory connections are not necessary. Figure 1 illustrates the model's behavior when the first word (say, cap) is initially presented. The pattern of output activity is initially random (see Figure 1, left), reflecting the random initial connection strengths. The short-range excitatory connections lead to clusters around the most active units (see Figure 1, middle), and these in turn drive down activity elsewhere via the longer-range inhibitory connections, leading to a single cluster (or a small number of clusters) (see Figure 1, right). The Hebbian rule then strengthens the connections to this active cluster from the active input units (the letters c, a, and p), but weakens the connections from other (inactive) inputs as well as the connections from the active inputs to the inactive output units.
Because of these weight changes, c, a, and p will subsequently be biased toward activating units in that cluster, while other inputs (e.g., d) will be biased away from that cluster. So, for example, if the word dog was presented next, it would tend to excite units outside the cluster excited by cap. Now suppose we present the same word, but in uppercase (CAP; see Figure 2). Because C and P are visually similar to c and p, their input representations will also be similar (in this simple localist model, that means they excite the same units; in a more realistic distributed model, the representations would share many units rather than being identical). As a result, C and P will be biased toward exciting some of the same output units that cap excited. The input A, however, has no such bias. Indeed, its connections to the cap cluster would have been weakened (because it was inactive when the cluster was previously active), and it excites units outside this cluster (top of the output layer at the left of Figure 2). The cluster inhibits these units via the long-range inhibitory connections, and it eventually wins out (right of Figure 2). Hebbian learning again strengthens the connections from the active inputs (including A) to the cluster. The result is that a and A are biased toward exciting nearby units despite the fact that they are visually dissimilar and initially excited quite different units. An ALI has emerged. If this were the whole story, then one might expect all letters to converge on the same output representation because many different letters occasionally occur in common contexts (e.g., a and o in c-t). The reason this does not happen is that the distributions of contexts in which different letters appear are very different. For most contexts in which a occurs, there is a visually similar context in which A occurs (in the same word written in uppercase). Furthermore, these contexts will occur with comparable relative frequencies because words that are frequent in lowercase will also tend to be frequent in uppercase.1 So if a frequently occurs in a given context (e.g., says), then the corresponding visually similar context will also be frequent for A (e.g., SAYS). And infrequent contexts for a will be infrequent for A (zap and ZAP). As a result, the same forces that shape the representation of a (that it looks more like s than like z) will shape the representation of A, and the two inputs will end up having similar output representations. The same is not true for different letters. Although a and o occur in some of the same contexts (cat and cot), there are many contexts in which only one of the two letters can appear. Furthermore, even contexts that admit either letter will not occur with comparable relative frequencies (cat is roughly fifty times more frequent than cot in beginning reading books).

1 The claim is that these contexts appear with comparable relative, not absolute, frequencies. Lowercase contexts are more frequent than uppercase contexts in most corpora, but the relative frequencies are similar. So just as says is more frequent than zap, SAYS is more frequent than ZAP, and it does not matter much to the model whether SAYS is more or less frequent than zap.
Figure 2: The behavior of the network from Figure 1 when presented with the word CAP. C and P excite the previous cluster because of their visual similarity to the c and p in cap, but at first A excites a distinct set of units (top of the output layer). These two regions of activity compete until the previous cluster wins out. Hebbian learning then strengthens the connections from all the active inputs (including A) to the cluster, biasing a and A toward exciting nearby units.
Thus, very different forces will shape the representation of these letters, and they will end up with dissimilar output representations. 4 Results Figure 3 shows the results of presenting a network like this with simple stimuli that satisfy the constraints outlined above. The stimuli consisted of a random sequence of thirty-six three-letter words: twelve in uppercase and twenty-four in lowercase (the same twelve words that appeared in uppercase were presented in lowercase twice in the random sequence). Each word contained one letter from a set of three ALI candidate letters—each of these letters had two possible visual forms (e.g., uppercase and lowercase)—and each such letter appeared in four of the twelve words. The other two letters were randomly chosen from a set of twenty whose visual forms were similar in uppercase and lowercase. Figure 3 shows the output activity when presenting each of the three candidate ALI letters in each of their visual forms after training. In all three cases, the output representations for the two different visual forms are virtually identical; the output representation corresponds to an ALI for the letter.
Figure 3: The patterns of output activity when presenting two different visual forms for each of three letters to the initial network model after training. In all three cases, the two different visual forms activate similar output representations, that is, ALIs. Note that each of the three letters has a distinct representation.
Also note that the output representations of the three different letters were distinct. In fact, these output patterns were unique; no other inputs produced these patterns. 5 Other Simulations This simple simulation demonstrates the feasibility of the common contexts hypothesis, but it does not demonstrate that the idea would work for more realistic input sets. Accordingly, we developed three other simulations using such sets. The third is the most realistic model of the three, but because the first two led to some important insights incorporated in the third, it is worth describing them briefly. In order to allow for all twenty-six letters, we increased the number of input units from twenty-six to thirty-two (twenty-six lowercase letters and the uppercase forms for the six letters that are most dissimilar in the two forms: A, D, E, G, Q, and R). Once again, all input units were connected to all output units, and connections between neighboring output units were excitatory while those between nonneighboring units were inhibitory. Our first attempt at a more realistic input set was Dr. Seuss's Hop on Pop. After "reading" this book ten times, the
simulation developed similar representations for A and a but not for the other five letters with dissimilar forms. This corpus was too small for upper- and lowercase contexts to be comparably distributed (e.g., many words that occurred frequently in lowercase never appeared in uppercase, and vice versa), however, so it is not surprising that the other letters failed. Indeed, it was the failure of these other letters that suggested the importance of the overall distribution of contexts. Accordingly, we used a more realistic corpus from beginning reading books for our next simulation (Baker & Freebody, 1989). This corpus is composed of the words from a large number of school books that are used in teaching children to read in Australia. It was far too large to train our simulation on the entire corpus, so we constructed a corpus of 3750 words from the 50 most frequent words with appropriate relative frequencies in random order (50 percent of all the words in the corpus came from this set of 50 words). Also, the corpus did not distinguish different visual forms of the same word, so we presented the same number of UPPERCASE, lowercase, and Initial Capital forms for each word.2 This simulation consistently developed an ALI for E/e, but not for A/a or D/d, and the other three letters with dissimilar forms (G, Q, R) developed almost no representations at all. Both models demonstrated that the hypothesized interaction between common contexts and correlation-based learning can lead to some ALIs. The reason ALIs did not develop for all the letters is that the simple Hebbian learning rule we were using puts more and more emphasis on frequent letters and less and less on infrequent letters. Whenever an input is inactive, its connections to any active outputs are weakened. As a result, the connections from infrequent letters to output units that are activated with any regularity are constantly reduced in strength. Conversely, the connections from frequent inputs to the outputs they activate are regularly being strengthened. The result is that frequent letters come to dominate the representation of words. This undermines the development of ALIs in two ways. First, frequent letters such as A dominate the output representations of words in which they occur, making the surrounding context letters irrelevant. Thus, if A and a start out with different output representations, as they usually do, the common contexts in which they occur become irrelevant because A and a dominate the representations rather than the context letters. Second, whatever ALIs may develop for infrequent letters quickly get squashed by the weakening of their connection strengths when those letters are not present (and they normally are not present). This problem with the simple Hebbian learning rule we used has led previous researchers to reject it in favor of more sophisticated alternatives in which weights are normalized in some way so that frequent inputs will not completely dominate infrequent ones.
2 There were two exceptions. The lowercase forms of proper names still had the first letter capitalized and the Initial Capital forms of verbs that are not imperatives (which would rarely occur as the first word of a sentence) were presented in lowercase.
Figure 4: The patterns of output activity when presenting two different visual forms for each of six letters to the final network model after training. In most cases, the two different visual forms again activate similar output representations, that is, ALIs.
After all, real neural learning mechanisms are quite adept at exploiting input features that are predictive, even if they are very infrequent. In our final simulation, we adopted a zero-sum Hebbian learning rule based on a number of neurophysiological constraints that has been shown to have nice convergence properties in both activation space and weight space (O'Reilly & McClelland, 1992; see Miller, 1990, and Rolls, 1989, for similar rules). The appendix gives the details of the model. Figure 4 presents the results of training the revised architecture on the beginning reading corpus using the zero-sum Hebbian learning rule. O'Reilly and McClelland (1992) used this rule in a winner-take-all architecture with inhibitory connections but no excitatory connections within a layer, so we tried it both with and without excitatory connections. The results were similar, so we present only the results without excitatory connections. As Figure 4 shows, the upper- and lowercase forms of five of the six letters of interest converged on similar representations. The representations of R and r were different, but only because the weights from r were smaller than those from R, so only the most active output passed the threshold of the all-or-none style sigmoid activation function that was used. The underlying patterns of weights were similarly distributed; the weights from R were simply stronger. Also, notice that the representations of the infrequent letters (G, Q, and R) are similar to each other. The reason is that the zero-sum Hebbian rule is weight conserving: the positive and negative weight changes sum to zero. As a result, when the weights from infrequent inputs to frequently activated outputs are weakened, the connections from those infrequent inputs to the infrequent outputs are strengthened. Thus, the infrequent outputs are
shared by all the infrequent inputs, and their representations look similar. There are certainly differences among the representations of the infrequent letters, but these results indicate that a substantial part of the representation of these letters signals that the letter is infrequent, not that it is a G, Q, or R. In fact, Q and q developed similar representations despite never appearing in the corpus at all. Finally, it is worth pointing out that although the different visual forms for each letter converged on similar representations, the similarity is not as strong as it was for the first simulation, which used artificial input. The reason is that the realistic input set used in this simulation often violated the assumptions of the common contexts hypothesis. Many of the words in the beginning reading corpus are composed of multiple letters that look different in upper- and lowercase (e.g., all three letters in are/ARE look different in upper- and lowercase). These words, some of which are among the most frequent in the corpus, tend to undermine the development of ALIs because they do not provide common visual contexts in their upper- and lowercase forms. Do humans represent A and a with similar codes, or do they compute an identical representation at some point during processing? The empirical evidence reviewed above does not distinguish these alternatives. Like identical codes, similar codes would make it harder to distinguish strings that differed only in case and could lead subjects to underestimate the number of target letters in a display in which upper- and lowercase instances of a single target appear. In any case, the common contexts hypothesis does not take a stand on this issue. The reason is that unsupervised neural networks are exceptionally good at learning to map similar inputs to identical outputs, so it would be straightforward to compute identical representations for different visual forms of individual letters from the outputs of the current model.
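For instance (our illustration, not part of the article's model), a conventional nearest-prototype readout collapses similar codes onto one discrete label:

```python
import numpy as np

def identical_code(pattern, prototypes):
    """Assign an output pattern to its nearest stored prototype, so that
    similar codes (e.g., those for A and a) receive an identical label."""
    return int(np.argmin(np.linalg.norm(prototypes - pattern, axis=1)))

prototypes = np.array([[1.0, 0.9, 0.0],   # hypothetical cluster for A/a
                       [0.0, 0.1, 1.0]])  # hypothetical cluster for another letter
assert identical_code(np.array([0.9, 1.0, 0.1]), prototypes) == 0
assert identical_code(np.array([1.0, 0.8, 0.0]), prototypes) == 0
```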
with correlation-based learning in the brain to form representations of letter identities that abstract away from differences in physical form (e.g., a, A). A variant of this hypothesis has previously been used to explain the observed neural localization of arbitrary and noninnate categories (e.g., letters and digits) (Polk & Farah, 1995a, 1995b; Farah, Polk, et al., 1996). To demonstrate the feasibility of the common contexts hypothesis as an explanation for the development of ALIs, we showed that simple competitive Hebbian networks spontaneously self-organize to produce ALIs when exposed to orthographic input that preserves a basic statistical feature of real text, namely, that different visual forms of the same letter appear in similarly distributed common contexts. An obvious alternative to the proposed explanation for the development of ALIs is that ALIs emerge from hearing the same sound associated with the different visual forms of a given letter, for example, when children are learning to read. Any supervised learning algorithm, such as backpropagation, could obviously learn to group the different visual forms of a single letter if it was provided with phonological feedback. This explanation is certainly viable, but the common contexts hypothesis does have certain advantages. First, supervised learning does not have the independent physiological plausibility that Hebbian learning has. Second, if phonology plays such a crucial role, one might expect an impairment in letter phonology to undermine any attempts to read. But phonological impairments need not destroy reading (Coltheart, 1981). Thus, one would need to assume that phonology plays a critical role in learning ALIs, but that once they are acquired, they no longer depend on phonology. Third, this hypothesis must assume that phonology, a representation that is relatively far downstream, influences the development of a representation used in a simple visual matching task, which does not even require recognizing that letters are present, much less identifying them and retrieving their names (Besner et al., 1984). The common contexts hypothesis, on the other hand, proposes that ALIs arise bottom up, from an interaction between the statistics of shapes present in the visual input and the nature of cortical learning mechanisms. It therefore accommodates in a natural way the findings that ALIs are independent of phonology and that ALI effects show up in simple visual matching tasks. Furthermore, the hypothesis does not depend on supervised learning, but rather is based on a simple, unsupervised Hebbian mechanism that has independent physiological plausibility. Empirical work is needed to distinguish these hypotheses conclusively, but the common contexts hypothesis does offer a viable and attractive alternative to the phonological account. Appendix: Details of the Final Neural Network Model The network has seventy-four total units. Thirty-two are inputs (6 ALI candidate letters × 2 visual forms each + the 20 other letters) and the other
forty-two are outputs in a 7 × 6 2D arrangement. All input units are connected to all output units with plastic connections. Output units are connected to each other by fixed inhibitory connections (weight = −0.1). The minimum and maximum unit firing rates are fixed at 0.0 and 100.0, and the minimum and maximum connection weights are fixed at 0.0 and 1.0. Initially, the activity of output units is uniform random between 0.0 and 10.0, and the connection weights from inputs to outputs are uniform random between 0.0 and 0.6. The following zero-sum Hebbian learning rule based on firing rate is used after every cycle to update connection strengths between input and output units (for details and motivation, see O'Reilly & McClelland, 1992):

\text{change}_{ij} = 0.01\,(o_j - \langle o \rangle_{\text{out}})(o_i - \langle o \rangle_{\text{in}})

\text{if change}_{ij} > 0:\quad w_{ij} = w_{ij} + \text{change}_{ij}\,(1.0 - w_{ij})
\text{else:}\quad w_{ij} = w_{ij} + \text{change}_{ij}\,(w_{ij} - 0.0)

where o_i is the activation of input unit i, o_j is the activation of output unit j, \langle o \rangle_{\text{in}} and \langle o \rangle_{\text{out}} are the average activations in the input and output layers, and w_{ij} is the weight of the connection between unit i and unit j. The output units use a sigmoid transfer function: \text{output} = 100.0/(1 + e^{-(\text{input} - 40.0)}). The total input to each output unit (the weighted sum of the activations of the units to which it is connected) is multiplied by a 0.9 gain factor before passing through the transfer function. The input units are clamped to their values (0.0 when not firing, 100.0 when firing) and do not decay.

Acknowledgments

This research was supported by a grant-in-aid for training from the McDonnell-Pew program in cognitive neuroscience to T.A.P. and by grants from the NIH, ONR, Alzheimer's Disease Association, and the Research Association of the University of Pennsylvania. We gratefully acknowledge Max Coltheart for helpful comments on this research and article. Requests for reprints should be sent to Thad Polk, Department of Psychology, University of Michigan, 525 E. University, Ann Arbor, Michigan 48109-1109, U.S.A.
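For concreteness, the appendix's update rule and transfer function translate into a few lines of Python (a minimal sketch, not the authors' code; the fixed −0.1 lateral inhibition and the settling of output activity before each update are omitted, and the input indices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT = 32, 42                      # 32 inputs; 7 x 6 output sheet
w = rng.uniform(0.0, 0.6, (N_IN, N_OUT))  # plastic input-to-output weights

def transfer(total_input):
    """Appendix sigmoid with the 0.9 gain factor folded in."""
    return 100.0 / (1.0 + np.exp(-(0.9 * total_input - 40.0)))

def zero_sum_update(w, o_in, o_out, lr=0.01):
    """Zero-sum Hebbian rule: raw change lr*(o_j - <o>_out)*(o_i - <o>_in),
    with positive changes scaled by the headroom to the maximum weight (1.0)
    and negative changes scaled by the distance to the minimum weight (0.0)."""
    change = lr * np.outer(o_in - o_in.mean(), o_out - o_out.mean())
    return w + np.where(change > 0.0, change * (1.0 - w), change * w)

# One cycle on a clamped three-letter input (firing rates 0 or 100):
o_in = np.zeros(N_IN)
o_in[[2, 0, 15]] = 100.0                  # illustrative letter indices
o_out = transfer(o_in @ w)                # feedforward activation only
w = zero_sum_update(w, o_in, o_out)
```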
References

Baker, C. D., & Freebody, P. (1989). Children's first school books: Introductions to the culture of literacy. New York: Blackwell.
Besner, D., Coltheart, M., & Davelaar, E. (1984). Basic processes in reading: Computation of abstract letter identities. Canadian Journal of Psychology, 38, 126–134.
Bigsby, P. (1988). The visual processor module and normal adult readers. British Journal of Psychology, 79, 455–469.
Coltheart, M. (1981). Disorders of reading and their implications for models of normal reading. Visible Language, 15, 245–286.
Farah, M., Polk, T. A., Stallcup, M., Aguirre, G., Alsop, D., D'Esposito, M., Detre, J., & Zarahn, E. (1996). Localization of a fine-grained category of shape: An extrastriate letter area revealed by fMRI. Society for Neuroscience Abstracts, 291(1).
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. New York: Wiley.
Miller, K. D. (1990). Correlation-based models of neural development. In M. Gluck & D. Rumelhart (Eds.), Neuroscience and connectionist theory. Hillsdale, NJ: Erlbaum.
Mozer, M. (1989). Types and tokens in visual letter perception. Journal of Experimental Psychology: Human Perception and Performance, 15, 287–303.
O'Reilly, R. C., & McClelland, J. L. (1992). The self-organization of spatially invariant representations (Tech. Rep. No. PDP.CNS.92.5). Pittsburgh, PA: Carnegie Mellon University.
Petersen, S. E., Fox, P. T., Snyder, A. Z., & Raichle, M. E. (1990). Activation of extrastriate and frontal cortical areas by visual words and word-like stimuli. Science, 249, 1041–1044.
Polk, T. A., & Farah, M. (1995a). Late experience alters vision. Nature, 376, 648–649.
Polk, T. A., & Farah, M. (1995b). Brain localization for arbitrary stimulus categories: A simple account based on Hebbian learning. Proceedings of the National Academy of Sciences USA, 92, 12370–12373.
Polk, T. A., Stallcup, M., Aguirre, G., Alsop, D., D'Esposito, M., Detre, J., Zarahn, E., & Farah, M. (1996). Abstract, not just visual, orthographic knowledge encoded in extrastriate cortex: An fMRI study. Society for Neuroscience Abstracts, 291(2).
Prinzmetal, W., Hoffman, H., & Vest, K. (1991). Automatic processes in word perception: An analysis from illusory conjunctions. Journal of Experimental Psychology: Human Perception and Performance, 17, 902–923.
Reuter-Lorenz, P. A., & Brunn, J. L. (1990). A prelexical basis for letter-by-letter reading: A case study. Cognitive Neuropsychology, 7(1), 1–20.
Rolls, E. T. (1989). Functions of neuronal networks in the hippocampus and neocortex in memory. In J. H. Byrne & W. O. Berry (Eds.), Neural models of plasticity: Experimental and theoretical approaches. San Diego: Academic Press.
von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik, 14, 85–100.
Received May 10, 1995; accepted November 11, 1996.
Communicated by Graeme Mitchison
A Unifying Objective Function for Topographic Mappings Geoffrey J. Goodhill Sloan Center for Theoretical Neurobiology, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A. and Georgetown Institute for Cognitive and Computational Sciences, Georgetown University Medical Center, Washington, DC 20007, U.S.A.
Terrence J. Sejnowski The Howard Hughes Medical Institute, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A. and Department of Biology, University of California San Diego, La Jolla, CA 92037, U.S.A.
Many different algorithms and objective functions for topographic mappings have been proposed. We show that several of these approaches can be seen as particular cases of a more general objective function. Consideration of a very simple mapping problem reveals large differences in the form of the map that each particular case favors. These differences have important consequences for the practical application of topographic mapping methods. 1 Introduction The notion of a topographic mapping that takes nearby points in one space to nearby points in another appears in many domains, both biological and practical. A number of algorithms have been proposed that claim to construct such mappings (reviewed in Goodhill & Sejnowski, 1996). However, a fundamental problem with these claims, or the claim that a particular given mapping is topographic, is that the computational-level meaning of topography has not been formally defined. Given the wide variety of contexts in which topography or degrees of topography are discussed, it is unlikely that an application-independent definition would be generally useful. A more productive goal is to attempt to lay bare the assumptions behind different approaches to topographic mappings, and thus clarify the relationships among them. In this way, the hope is to make it easier to choose the appropriate approach for a particular problem. In this article, we introduce an objective function we call the C measure and show that it has the capacity to unify several different approaches to topographic mapping. These approaches constitute particular instantiations of the functions from which the C measure is constructed. We then consider a simple mapping problem and explicitly optimize different versions of the C measure. It becomes apparent that the “optimally topographic” map can radically change depending on the particular measure employed. Neural Computation 9, 1291–1303 (1997)
© 1997 Massachusetts Institute of Technology
2 The C measure For the purposes of this article, we consider only bijective (1-1) mappings and a finite number of points in each space. Some mapping problems intrinsically have this form, and many others can be reduced to this by separating out a clustering or vector quantization step from the mapping step. Consider an input space Vin and an output space Vout, each of which contains N points (see Figure 1). Let M be the mapping from points in Vin to points in Vout. We use the word space in a general sense: either or both of Vin and Vout may not have a geometric interpretation. Assume that for each space there is a symmetric similarity function that, for any given pair of points in the space, specifies by a nonnegative scalar value how similar (or dissimilar) they are. Call these functions F for Vin and G for Vout. Then we define a cost function C as follows,

C = \sum_{i=1}^{N} \sum_{j} F(i, j)\, G(M(i), M(j)),  (2.1)
where i and j label points in Vin, and M(i) and M(j) are their respective images in Vout. The sum is over all possible pairs of points in Vin. Since we have assumed that M is a bijection, it is therefore invertible, and C can equivalently be written as

C = \sum_{i=1}^{N} \sum_{j} F(M^{-1}(i), M^{-1}(j))\, G(i, j),  (2.2)
where i and j label points in Vout, and M^{-1} is the inverse map. A good mapping is one with a high value of C.
Figure 1: The mapping framework.
However, if one of F or G is given as a dissimilarity function (i.e., increasing with decreasing similarity), then a good mapping has a low value of C. How F and G are defined is problem specific. They could be Euclidean distances in a geometric space, some (possibly nonmonotonic) function of those distances, or they could just be given, in which case it may not be possible to interpret the points as lying in some geometric space. C measures the correlation between the F's and the G's. It is straightforward to show that if a mapping that preserves orderings exists, then maximizing C will find it. This is equivalent to saying that for two vectors of real numbers, their inner product is maximized over all permutations within the two vectors if the elements of the vectors are identically ordered, a proof of which can be found in Hardy, Littlewood, and Pólya (1934, p. 261). 3 Measures Related to C Several objective functions for topographic mapping can be seen as versions of the C measure, given particular instantiations of the F and G functions. 3.1 Metric Multidimensional Scaling. Metric multidimensional scaling (metric MDS) is a technique originally developed in the context of psychology for representing a set of N entities (e.g., subjects in an experiment) by N points in a low- (usually two-) dimensional space. For these entities one has a matrix that gives the numerical dissimilarity between each pair of entities. The aim of metric MDS is to position points representing entities in the low-dimensional space so that the set of distances between each pair of points matches as closely as possible the given set of dissimilarities. The particular objective function optimized is the summed squared deviations of distances from dissimilarities. The original method was presented in Torgerson (1952); for reviews, see Shepard (1980) and Young (1987). In terms of the framework presented earlier, the MDS dissimilarity matrix is F. Note that there may not be a geometric space of any dimensionality for which these dissimilarities can be represented by distances (for instance, if the dissimilarities do not satisfy the triangle inequality), in which case Vin does not have a geometric interpretation. Vout is the low-dimensional, continuous space in which the points representing entities are positioned, and G is Euclidean distance in Vout. Metric MDS selects the mapping M by adjusting the positions of points in Vout, which minimizes

\sum_{i=1}^{N} \sum_{j} \big(F(i, j) - G(M(i), M(j))\big)^2.  (3.1)
Under our assumptions, this objective function is identical to the C measure. Expanding out the square in equation 3.1 gives

\sum_{i=1}^{N} \sum_{j} \big(F(i, j)^2 + G(M(i), M(j))^2 - 2\,F(i, j)\,G(M(i), M(j))\big).  (3.2)
The last term is twice the C measure. The entries in F are fixed, so the first term is independent of the mapping. In metric MDS, the sum over the G's varies as the entities are moved in the output space. If one instead considers the case where the positions of the points in Vout are fixed, and the problem is to find the assignment of entities to positions that minimizes equation 3.1, then the sum over the G's is also independent of the map. In this case the metric MDS measure becomes exactly equivalent to C. 3.2 Minimal Wiring. In minimal wiring (Mitchison & Durbin, 1986; Durbin & Mitchison, 1990), a good mapping is defined as one that maps points that are nearest neighbors in Vin as close as possible in Vout, where closeness in Vout is measured by, for instance, Euclidean distance raised to some power. The motivation here is the idea that it is often useful in sensory processing to perform computations that are local in some space of input features Vin. To do this in Vout (e.g., the cortex) the images of neighboring points in Vin need to be connected; the dissimilarity function in Vout is intended to capture the cost of the wire (e.g., axons) required to do this. Minimal wiring is equivalent to the C measure for

F(i, j) = \begin{cases} 1 & i, j \text{ neighboring} \\ 0 & \text{otherwise} \end{cases}, \qquad G(M(i), M(j)) = \|M(i) - M(j)\|^p.

For the cases of one- or two-dimensional square arrays investigated in Mitchison and Durbin (1986) and Durbin and Mitchison (1990), neighbors are taken to be just the two or four adjacent points in the array, respectively. 3.3 Minimal Path Length. In this scheme, a good map is one such that, in moving between nearest neighbors in Vout, one moves the least possible distance in Vin. This is, for instance, the mapping required to solve the traveling salesman problem (TSP), where Vin is the distribution of cities and Vout is the one-dimensional tour. This goal is implemented by the elastic net algorithm (Durbin & Willshaw, 1987; Durbin & Mitchison, 1990; Goodhill & Willshaw, 1990), which measures dissimilarity in Vin by squared distances:

F(i, j) = \|v_i - v_j\|^2, \qquad G(p, q) = \begin{cases} 1 & p, q \text{ neighboring} \\ 0 & \text{otherwise} \end{cases}
where v_k is the position of point k in Vin (we have considered here only the regularization term in the elastic net energy function, which also includes a term matching input points to output points). Thus, minimal wiring and minimal path length are symmetrical cases under equation 2.1, with respect to exchange of Vin and Vout. Their relationship is discussed further in Durbin and Mitchison (1990), where the abilities of minimal wiring and minimal path length are compared with regard to reproducing the structure of the map of orientation selectivity in the primary visual cortex (see also Mitchison, 1995). 3.4 The Approach of Jones, Van Sluyters, and Murphy (1991). Jones, Van Sluyters, and Murphy (1991) investigated the effect of the shape of the cortex (Vout) relative to the lateral geniculate nuclei (Vin) on the overall pattern of ocular dominance columns in the cat and monkey, using an optimization approach. They desired to keep both neighboring cells in each LGN (as defined by a hexagonal array), and anatomically corresponding cells between the two LGNs, nearby in the cortex (also a hexagonal array). Their formulation of this problem can be expressed as a maximization of C when

F(i, j) = \begin{cases} 1 & i, j \text{ neighboring or corresponding} \\ 0 & \text{otherwise} \end{cases}

and

G(p, q) = \begin{cases} 1 & p, q \text{ first or second nearest neighbors} \\ 0 & \text{otherwise.} \end{cases}
For two-dimensional Vin and Vout they found a solution such that if F(i, j) = 1, then G(M(i), M(j)) = 1, ∀ i, j. Alternatively this problem could be expressed as a minimization of C when G(p, q) is the stepping distance (see below) between positions in the Vout array. They found this gave appropriate behavior for the problem addressed. 3.5 Minimal Distortion. Luttrell and Mitchison have introduced mapping functionals for the continuous case. Under appropriate assumptions to reduce them to the discrete case, these are equivalent to C. Luttrell (1990, 1994) defined a minimal distortion principle that can be interpreted as a measure of mapping quality. He defined "distortion" D as

D = \int dx\, P(x)\, d\{x, x'[y(x)]\}.

x and y are vectors in the input and output spaces, respectively. y is the map in the input to output direction, x' is the (in general different) map back again, and P(x) is the probability of occurrence of x. x' and y are suitably adjusted to minimize D. An augmented version of D includes additive noise in the output space,

D = \int dx\, P(x) \int dn\, \pi(n)\, d\{x, x'[y(x) + n]\},

where π(n) is the probability density of the noise vector n. Intuitively, the aim is now to find the forward and backward maps so that the reconstructed value of the input vector is as close as possible to its original value after being corrupted by noise in the output space. In Luttrell (1990), the d function was taken to be \{x - x'[y(x) + n]\}^2. In this case, Luttrell showed that the minimal distortion measure can be differentiated to produce a learning rule that is almost the self-organizing map rule of Kohonen (1982). For the discrete version of this, it is necessary to assume that appropriate quantization has occurred so that y(x) defines a 1-1 map, and x'(y) defines the same map in the opposite direction. In this case the minimal distortion measure becomes equivalent to the C measure, with F(i, j) the squared Euclidean distance between the positions of vectors i and j (assuming these to lie in a geometric space) and G(p, q) the noise process in the output space (generally assumed gaussian). The minimal distortion principle was generalized in Mitchison (1995) by allowing the noise process and reconstruction error to take arbitrary forms. For instance, they can be reversed, so that F is a gaussian and G is Euclidean distance. In this case, the measure can be interpreted as a version of the minimal wiring principle, establishing a connection between minimal distortion (and hence the SOM algorithm) and minimal wiring. This identification also yields a self-organizing algorithm similar to the SOM for solving minimal wiring problems, the properties of which are briefly explored in Mitchison (1995). Luttrell (1994) generalized minimal distortion to allow a probabilistic match between points in the two spaces, which in the discrete case can be expressed as

D = \sum_{i} \sum_{j} \sum_{k} F(i, j)\, P(k \mid i)\, P(j \mid k),
where D is the distortion to be minimized, F is Euclidean distance in the input space, P(k|i) is the probability that output state k occurs given that input state i occurs, and P(j|k) is the corresponding Bayes inverse probability that input state j occurs given that output state k occurs. This reduces to C in the special case that P(k|i) = P(i|k), which is true for bijections. 3.6 Dynamic Link Matching. Bienenstock and von der Malsburg (1987a, 1987b) considered position-invariant pattern recognition as a problem of graph matching. Their formalism consists of two sets of nodes, one with links expressing neighborhood relationships given by F(i, j) and one with analogous links given by G(p, q). The aim is to find the bijective mapping between the two sets of nodes that optimally preserves the link structure. By
approximating the Hamiltonian for a particular type of spin neural network, they derived an objective function H that in our notation (taking p = M(i), q = M(j)) can be written as

H = -\sum_{i,j=1}^{N} \sum_{p,q=1}^{N} F(i, j)\, G(p, q)\, w_{ip} w_{jq} + \gamma \sum_{i=1}^{N} \left( \sum_{p=1}^{N} w_{ip} - a \right)^{2} + \gamma \sum_{p=1}^{N} \left( \sum_{i=1}^{N} w_{ip} - a \right)^{2}.  (3.3)

Here γ and a are constants, and w_{ip} is the strength of connection between node i and node p. The first term is a generalized version of the C measure, which allows a weighted match between all pairs of nodes i and p. The second term enforces the regularization constraint that the sum of the weights from or to each node is equal to a. When a = 1 and γ is large, the minima of H are clearly the same as the maxima of C. More recent work has investigated its potential as a general objective function for topographic mappings (Wiskott & Sejnowski, 1997). 4 Application to a Simple Mapping Problem To show the range of optimal maps that the C measure implies for different F and G, we consider the very simple case of mapping between a regular 10 × 10 square array of points (Vin) and a regular 1 × 100 linear array of points (Vout). This is a paradigmatic example of dimension reduction (also used in Durbin & Mitchison, 1990, and Mitchison, 1995). It has the virtue that solutions are easily represented (see Figures 2 and 3) and easily compared with intuition. Calculation of the global optimum for this problem by exhaustive search is clearly impractical, since there are of the order of 100! possible mappings. Instead we employ simulated annealing (Kirkpatrick, Gelatt, & Vecchi, 1983) to find a solution close to optimal. The parameters used are given in the legend to Figure 2. Figure 2 shows the best maps obtained by this method for the minimal path, minimal wiring, and metric MDS versions of the C measure. Also shown for comparison is a solution to this problem produced by the elastic net algorithm, taken from Durbin and Mitchison (1990). An optimal map for the minimal path version (A) is one with no diagonal segments. Our simulated annealing algorithm found one with three diagonal segments and a cost of 100.243, 1.3 percent larger than the optimal of 99.0. The optimal map for the minimal wiring measure has been explicitly calculated (Mitchison & Durbin, 1986), and has a cost of 914.0. Our algorithm found the map shown in (B), which is similar in shape and has a cost of 917.0 (0.3 percent larger than the optimal value). These results confirm that the simulated annealing algorithm employed can find close to optimal solutions.
1298
Geoffrey J. Goodhill and Terrence J. Sejnowski
Figure 2: Maps found by optimizing using simulated annealing. (A) Minimal path length solution. (B) Minimal wiring solution. (C) Metric MDS solution. (D) Map found by the elastic net algorithm (see Durbin & Mitchison, 1990). Parameters used for simulated annealing were as follows (for further explanation, see van Larhoven & Aarts, 1987). Initial map: random. Move set: pairwise interchanges. Initial temperature: 3 × the mean energy difference over 10,000 moves. Cooling schedule: exponential. Cooling rate: 0.998. Acceptance criterion: 1000 moves at each temperature. Upper bound: 10,000 moves at each temperature. Stopping criterion: zero acceptances at upper bound.
metric MDS map (C) is 6,352,324. The illusion of multiple ends to the line is due to the map frequently doubling back on itself. For instance, in the fourth row up the square, the horizontal component for neighboring points in the line progresses in the order 5, 6, 4, 7, 3, 8, 2, 9, 10. Local discontinuity arises because this measure takes into account neighborhood preservation at all scales: local continuity is not privileged over global continuity. The costs of the map produced by the elastic net (D) are 102.3 for the minimal
Unifying Objective Function
1299
Figure 3: Minimal distortion solutions. (A) σ = 1.0, cost = 43.3. (B) σ = 2.0, cost = 214.7. (C) σ = 4.0, cost = 833.2. (D) σ = 20.0, cost = 18467.1. Simulated annealing parameters as in Figure 2.
path function, 1140 for the minimal wiring function, and 6,455,352 for the metric MDS function. In each case this is a higher cost than for the maps shown in (A), (B), and (C), respectively. Figure 3 shows close-to-optimal maps for the minimal distortion version of the C measure (F is squared Euclidean distance in the square, G is given 2 2 by e−d /σ where d is Euclidean distance along the line), for varying σ . For small σ , the solution resembles the minimal path optimum of Figure 2A, since the contribution from more distant neighbors on the line compared to nearest neighbors is negligible. However, as σ increases, the map changes form. Local continuity becomes less important compared to continuity at the scale of σ , the map becomes spikier, and the number of large-scale folds in the map gradually decreases until at σ = 20 there is just one. This
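A minimal Python sketch of this annealing procedure, for the minimal path version of the C measure (the temperature constants and move budget here are reduced-scale illustrations, not the exact settings behind Figure 2, and the full path cost is recomputed per move for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 x 10 square of input points, mapped onto a 1 x 100 line.
side = 10
pts = np.array([(x, y) for y in range(side) for x in range(side)], float)
n = len(pts)
M = rng.permutation(n)   # M[p] = square point assigned to line position p

def path_cost(M):
    # Minimal path version of C: F = Euclidean distance in the square,
    # G(p, q) = 1 for neighboring positions on the line, 0 otherwise.
    return np.linalg.norm(pts[M[1:]] - pts[M[:-1]], axis=1).sum()

T, cool = 3.0, 0.998              # exponential cooling schedule
cost = path_cost(M)
while T > 0.01:
    accepted = 0
    for _ in range(1000):         # moves per temperature (scaled down)
        i, j = rng.integers(n, size=2)
        M[i], M[j] = M[j], M[i]   # move set: pairwise interchange
        new = path_cost(M)
        if new < cost or rng.random() < np.exp((cost - new) / T):
            cost, accepted = new, accepted + 1
        else:
            M[i], M[j] = M[j], M[i]   # reject: undo the swap
    if accepted == 0:             # stopping criterion: zero acceptances
        break
    T *= cool

print(f"final path length: {cost:.1f}")   # optimum for this problem is 99.0
```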
Figure 3: Minimal distortion solutions. (A) σ = 1.0, cost = 43.3. (B) σ = 2.0, cost = 214.7. (C) σ = 4.0, cost = 833.2. (D) σ = 20.0, cost = 18467.1. Simulated annealing parameters as in Figure 2.

Figure 3 shows close-to-optimal maps for the minimal distortion version of the C measure (F is squared Euclidean distance in the square, G is given by e^{−d²/σ²} where d is Euclidean distance along the line), for varying σ. For small σ, the solution resembles the minimal path optimum of Figure 2A, since the contribution from more distant neighbors on the line compared to nearest neighbors is negligible. However, as σ increases, the map changes form. Local continuity becomes less important compared to continuity at the scale of σ, the map becomes spikier, and the number of large-scale folds in the map gradually decreases until at σ = 20 there is just one. This
last map also shows some of the frequent doubling-back behavior seen in Figure 2C.¹

5 Discussion

5.1 Types of Mappings. Several qualitatively different types of map are apparent in Figures 2 and 3. For the metric MDS measure, Figure 2C, the line progresses through the square by a series of locally discontinuous jumps along one axis of the square and a comparatively smooth progression in the orthogonal direction. One consequence is that nearby points in the square never map too far apart on the line. For the minimal distortion measure with σ = 20 (see Figure 3D), the strategy is similar except that there is now one fold to the map. This decreases the size of local jumps in following the line through the square, at the cost of introducing a seam across which nearby points in the square map a very long way apart on the line. For the minimal distortion measure with small σ (see Figures 3A–C), the strategy is to introduce several folds. This gives strong local continuity, at the cost that now many points that are nearby in the square map to points that are far apart on the line.

What do these maps suggest about which measures are most appropriate for different problems? If it is desired that generally nearby points should always map to generally nearby points as much as possible in both directions, and one is not concerned about very local continuity, then something like the MDS measure is useful. This may be appropriate for some data visualization applications where the overall structure of the map is more important than the fine detail. If, on the other hand, one desires a smooth progression through the output space to imply a smooth progression through the input space, one should choose something like minimal path or minimal distortion for small σ. This may be important for data visualization where it is believed the data actually lie on a lower-dimensional manifold in the high-dimensional space. However, an important weakness of this representation is that some neighborhood relationships between points in the input space may be completely lost in the resulting representation. For understanding the structure of cortical mappings, self-organizing algorithms that optimize objectives of this type have proved useful (Durbin & Mitchison, 1990; Obermayer, Blasdel, & Schulten, 1992). However, very few other objectives have been applied to this problem, so it is still an open question which are most appropriate. The type of map seen in Figure 3D has not been much investigated. There may be some applications for which such maps are worthwhile, perhaps in a neurobiological context for understanding why different input variables are sometimes mapped into different areas rather than interdigitated in the same area.

¹ We have also explicitly optimized the objective functions proposed by Sammon (1969) and Bezdek and Pal (1995). The former produces a map similar to that of Figure 2C, while the latter resembles Figure 3D (for more details, see Goodhill & Sejnowski, 1996).

5.2 Relation of C to Quadratic Assignment Problems. Formulating neighborhood preservation in terms of the C measure sets it within the well-studied class of quadratic assignment problems (QAPs). These occur in many different practical contexts and take the form of finding the minimal or maximal value of an equation similar to C (see Burkard, 1984, for a general review and Lawler, 1963, and Finke, Burkard, & Rendl, 1987, for more technical discussions). An illustrative example is the optimal design of typewriter keyboards (Burkard, 1984). If F(i, j) is the average time it takes a typist to sequentially press locations i and j on the keyboard, while G(p, q) is the average frequency with which letters p and q appear sequentially in the text of a given language (note that in this example F and G are not necessarily symmetrical), then the keyboard that minimizes average typing time will be the one that minimizes the product

Σ_{i=1}^{N} Σ_{j=1}^{N} F(i, j) G(M(i), M(j))
(see equation 2.1), where M(i) is the letter that maps to location i. The substantial amount of theory developed for QAPs is directly applicable to the C measure. As a concrete example, QAP theory provides several different ways of calculating bounds on the minimum and maximum values of C for each problem. This could be very useful for the problem of assessing the quality of a map relative to the unknown best possible. One particular case is the eigenvalue bound (Finke et al., 1987). If the eigenvalues of symmetric matrices F and G are λ_i and μ_i, respectively, such that λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n and μ_1 ≥ μ_2 ≥ ⋯ ≥ μ_n, then it can be shown that Σ_i λ_i μ_i gives a lower bound on the value of C. QAPs are in general known to be NP-hard. A large number of algorithms for both exact and heuristic solution have been studied (see, e.g., Burkard, 1984; the references in Finke et al., 1987; and Simić, 1991). However, particular instantiations of F and G may allow efficient algorithms for finding good local optima, or alternatively may beset C with many bad local optima. Such considerations provide an additional practical constraint on what choices of F and G are most appropriate.
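For concreteness, here is a minimal NumPy sketch of this eigenvalue bound; the random F, G, and mapping M are illustrative assumptions, not taken from the text:

```python
import numpy as np

def eigenvalue_lower_bound(F, G):
    """Eigenvalue bound (Finke et al., 1987) for symmetric F and G:
    with the eigenvalues of F sorted ascending and those of G sorted
    descending, sum_i lambda_i * mu_i lower-bounds
    C = sum_ij F(i, j) G(M(i), M(j)) over all bijective mappings M."""
    lam = np.sort(np.linalg.eigvalsh(F))        # ascending
    mu = np.sort(np.linalg.eigvalsh(G))[::-1]   # descending
    return float(lam @ mu)

rng = np.random.default_rng(0)
n = 8
F = rng.standard_normal((n, n)); F = F + F.T    # symmetric F
G = rng.standard_normal((n, n)); G = G + G.T    # symmetric G
M = rng.permutation(n)                          # an arbitrary mapping
C = sum(F[i, j] * G[M[i], M[j]] for i in range(n) for j in range(n))
print(eigenvalue_lower_bound(F, G), "<=", C)
```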
5.3 Many-to-One Mappings. In many practical contexts, there are many more points in V_in than V_out, and it is necessary also to specify a many-to-one mapping from points in V_in to N exemplar points in V_in, where N is the number of points in V_out. It may be desirable to do this adaptively while simultaneously optimizing the form of the map from V_in to V_out. For instance, shifting a point from one cluster to another may increase the clustering cost, but by moving the positions of the cluster centers decrease the sum of this and the continuity cost. The elastic net, for instance, trades off these two contributions explicitly with a ratio that changes during the minimization, so that finally each cluster contains only one point and the continuity cost dominates (Durbin & Willshaw, 1987; for discussions, see Simić, 1990, and Yuille, 1990). The self-organizing map of Kohonen (1982) trades off these two contributions implicitly. Luttrell (1994) discusses allowing each point in the input space to map to many in the output space in a probabilistic manner, and vice versa. The H function (Bienenstock & von der Malsburg, 1987a, 1987b) offers an alternative way of introducing a weighted match between input and output points.

Acknowledgments

We thank Steve Finch for very useful discussions at an earlier stage of this work. We are grateful to Graeme Mitchison, Steve Luttrell, Hans-Ulrich Bauer, and Laurenz Wiskott for helpful discussions and comments on the manuscript. Research was supported by the Sloan Center for Theoretical Neurobiology at the Salk Institute and the Howard Hughes Medical Institute.

References

Bezdek, J. C., & Pal, N. R. (1995). An index of topological preservation for feature extraction. Pattern Recognition 28:381–391.
Bienenstock, E., & von der Malsburg, C. (1987a). A neural network for the retrieval of superimposed connection patterns. Europhys. Lett. 3:1243–1249.
Bienenstock, E., & von der Malsburg, C. (1987b). A neural network for invariant pattern recognition. Europhys. Lett. 4:121–126.
Burkard, R. E. (1984). Quadratic assignment problems. Europ. J. Oper. Res. 15:283–289.
Durbin, R., & Mitchison, G. (1990). A dimension reduction framework for understanding cortical maps. Nature 343:644–647.
Durbin, R., & Willshaw, D. J. (1987). An analogue approach to the traveling salesman problem using an elastic net method. Nature 326:689–691.
Finke, G., Burkard, R. E., & Rendl, F. (1987). Quadratic assignment problems. Annals of Discrete Mathematics 31:61–82.
Goodhill, G. J., & Sejnowski, T. J. (1996). Quantifying neighbourhood preservation in topographic mappings. In Proceedings of the 3rd Joint Symposium on Neural Computation (pp. 61–82). Pasadena: California Institute of Technology. Available from http://www.cnl.salk.edu/~geoff/.
Goodhill, G. J., & Willshaw, D. J. (1990). Application of the elastic net algorithm to the formation of ocular dominance stripes. Network 1:41–59.
Hardy, G. H., Littlewood, J. E., & Pólya, G. (1934). Inequalities. Cambridge: Cambridge University Press.
Jones, D. G., Van Sluyters, R. C., & Murphy, K. M. (1991). A computational model for the overall pattern of ocular dominance. J. Neurosci. 11:3794–3808.
Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science 220:671–680.
Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biol. Cybern. 43:59–69.
Lawler, E. L. (1963). The quadratic assignment problem. Management Science 9:586–599.
Luttrell, S. P. (1990). Derivation of a class of training algorithms. IEEE Trans. Neural Networks 1:229–232.
Luttrell, S. P. (1994). A Bayesian analysis of self-organizing maps. Neural Computation 6:767–794.
Mitchison, G. (1995). A type of duality between self-organizing maps and minimal wiring. Neural Computation 7:25–35.
Mitchison, G., & Durbin, R. (1986). Optimal numberings of an N×N array. SIAM J. Alg. Disc. Meth. 7:571–581.
Obermayer, K., Blasdel, G. G., & Schulten, K. (1992). Statistical-mechanical analysis of self-organization and pattern formation during the development of visual maps. Phys. Rev. A 45:7568–7589.
Sammon, J. W. (1969). A nonlinear mapping for data structure analysis. IEEE Trans. Comput. 18:401–409.
Shepard, R. N. (1980). Multidimensional scaling, tree-fitting and clustering. Science 210:390–398.
Simić, P. D. (1990). Statistical mechanics as the underlying theory of "elastic" and "neural" optimizations. Network 1:89–103.
Simić, P. D. (1991). Constrained nets for graph matching and other quadratic assignment problems. Neural Computation 3:268–281.
Torgerson, W. S. (1952). Multidimensional scaling, I: Theory and method. Psychometrika 17:401–419.
van Laarhoven, P. J. M., & Aarts, E. H. L. (1987). Simulated annealing: Theory and applications. Dordrecht: Reidel.
Wiskott, L., & Sejnowski, T. J. (1997). Objective functions for neural map formation (Tech. Rep. INC-9701). Institute for Neural Computation.
Young, F. W. (1987). Multidimensional scaling: History, theory, and applications. Hillsdale, NJ: Erlbaum.
Yuille, A. L. (1990). Generalized deformable models, statistical physics, and matching problems. Neural Computation 2:1–24.
Received February 1, 1996; accepted October 28, 1996.
Communicated by Klaus Obermayer
Faithful Representation of Separable Distributions

Juan K. Lin
David G. Grier
Department of Physics, University of Chicago, Chicago, IL 60637, U.S.A.

Jack D. Cowan
Department of Mathematics, University of Chicago, Chicago, IL 60637, U.S.A.
A geometric approach to data representation incorporating information-theoretic ideas is presented. The task of finding a faithful representation, where the input distribution is evenly partitioned into regions of equal mass, is addressed. For input consisting of mixtures of statistically independent sources, we treat independent component analysis (ICA) as a computational geometry problem. First, we consider the separation of sources with sharply peaked distribution functions, where the ICA problem becomes that of finding high-density directions in the input distribution. Second, we consider the more general problem for arbitrary input distributions, where ICA is transformed into the task of finding an aligned equipartition. By modifying the Kohonen self-organized feature maps, we arrive at neural networks with local interactions that optimize coding while simultaneously performing source separation. The local nature of our approach results in networks with nonlinear ICA capabilities.

1 Introduction

Barlow, Linsker, Atick, Redlich, and Field have all noted the primary importance of information theory in modeling sensory coding (Barlow, 1989; Linsker, 1989; Atick & Redlich, 1990; Field, 1994). Recently, Bell and Sejnowski (1995a, 1995b) developed an information theory-based algorithm for blind source separation. In this article, a simple synthesis of current research on coding and processing is presented by geometrically incorporating information-theoretic ideas into self-organized feature maps.

Ritter and Schulten (1986) demonstrated that, contrary to intuition, the steady state of Kohonen's self-organized feature maps (SOFM) (Kohonen, 1995) is not one in which the density of neural units is proportional to the input density. In one dimension, they showed that the Kohonen net underrepresents high-density and overrepresents low-density regions. Intuitively, an ideally formed input representation should have the density of the neural (output) units be strictly proportional to the input density. Loosely speaking, this is what is meant by a "faithful representation."

Neural Computation 9, 1305–1320 (1997)
© 1997 Massachusetts Institute of Technology
Past faithful representation SOFMs of Linsker (1989) and DeSieno (1988) have relied on a form of memory to track the cumulative activity of individual units, which the authors termed "historical information" and "conscience," respectively. Recently, Bauer, Der, and Herrmann (1996) used an adaptive step size to accomplish the task. This article continues along the same path and presents modifications to both the update rule and the partition to allow the SOFM to perform blind signal processing.

Section 2 provides preliminary definitions. Two modifications of the one-dimensional Kohonen SOFM are presented in section 3, which allow for an arbitrary power law relationship between input (data) and output (representation) densities. Section 4 generalizes the faithful representation SOFMs to higher dimensions for independent component analysis. The implications for source separation and optimal coding via the representations are made clear with examples.

2 Preliminaries

We begin with some preliminary definitions and background results. Given an input set {ξ} with ξ_j in the input vector space V, we say that {ξ} is a separable (input) distribution if it is constructed from nonsingular mixtures of statistically independent scalar sources. So {ξ} = {As}, where A is a nonsingular matrix, and the components of s are all statistically independent. Thus the joint probability factorizes: P(s) = ∏_k P_k(s_k). By a representation of {ξ}, we mean a collection {Ω(w)} of regions in V, Ω(w_i) ⊂ V. The unit w_i is said to represent ξ_j if ξ_j ∈ Ω(w_i). A nonoverlapping representation is defined to be one in which the regions are pairwise disjoint: Ω(w_i) ∩ Ω(w_{i′}) = ∅ for all pairs i ≠ i′ of regions in the representation. A covering is a representation where V = ∪_i Ω(w_i). So the entire input vector space V is represented in a covering, and any element in V is represented by exactly one unit in a nonoverlapping covering (partition). Focusing on nonoverlapping coverings simplifies things because probability is already normalized, Σ_i P(w_i) = 1. The assignment of probability in overlapping regions does not have to be considered. For a nonoverlapping covering {Ω(w)} of {ξ}, P(w|ξ) = 1 if w represents ξ, and 0 otherwise. The mutual information H(ξ; W) = H(W) + Σ_{ξ,w} P(ξ, w) log P(w|ξ) = H(W). This article deals exclusively with nonoverlapping coverings.

Although equipartition is a better mathematical term, to incorporate some intuition into our terminology, we use the term faithful representation for a nonoverlapping covering in which the probability measures of all the regions are equal. Thus, faithful representations evenly partition the input distribution and maximize the mutual information between the input data and the representation. With M units in the faithful representation, the mutual information of the system is H(ξ; W) = H(W) = log M, the capacity of a noiseless channel (Shannon & Weaver, 1949). In the continuum limit, a faithful input representation's local density of units is proportional to the
local input density. We now describe two local algorithms for obtaining faithful representations.

3 Modifications of the One-Dimensional SOFM

For simplicity, we start with the one-dimensional case, V = … where sgn(i) = 1 for i > 0, 0 for i = 0, and −1 for i < 0.
of the winning unit. The position of the input ξ does not directly enter in the n = 0 form of this variant. The network responds only according to its partition of the input space. Following the continuum limit analysis of Ritter and Schulten (1986), to order O(ε²), the steady-state configuration of both of the above modified SOFMs is (see appendix A)
(w′(x))^{−1} ∝ [P(w)]^{2/(n+2)},    (3.6)
where (w′(x))^{−1} ≡ D(w) is the density of the output units. The Kohonen algorithm is comparable to the n = 1 case, with the well-known Ritter and Schulten D(w) ∝ P(w)^{2/3} result for the steady state, in which the network undersamples high-density and oversamples low-density regions. If −2 < n < 0, the network oversamples high-density regions, but the most interesting case occurs for n = 0, when D(w) ∝ P(w). In this case, the network forms a faithful representation of the input distribution.

We performed simple Monte Carlo simulations demonstrating the power law behavior for both update rules. The results are shown in Figure 1. For the n = 0 digital learning rules, the second variant (see equations 3.4 and 3.5) is not very good at untangling folds in the network because the position of the input has been discarded. Numerically, instead of algorithmically imposing a no-folding constraint, we took a shortcut by sorting and relabeling the units periodically. The second learning rule is purely a counting algorithm for achieving the equipartition steady state, so it must rely on the fact that the proper topology has already been formed. The first variant has the advantage of retaining the direction of ξ relative to the network units, necessary for the formation of the proper topology. However, this advantage comes at the cost of losing the pure counting aspect, which is essential for exact equipartition. Both n = 0 faithful representation learning rules are digital. This is interesting from both biological and computational perspectives; with the addition of an inhibitory population, the update rule can be implemented in a network with only binary communication between its elements!

Figure 1: Plot of log w(x) versus log(x) for P(w) ∝ w, showing power law behavior in a network with 100 units. The networks in the top, middle, and bottom plots were trained with n = 1, n = 0, and n = −1 learning rules, respectively. Equations 3.4 and 3.5 were used for n = 1 and n = 0 simulations; equation 3.3 was used for the n = −1 net. The three lines plotted are for log(w) ∝ (3/5) log(x), (1/2) log(x), and (1/3) log(x), corresponding to D(w) ∝ P(w)^{2/3}, P(w), and P(w)², respectively. Forty thousand data points were used in the n = 1 and n = 0 simulations; the units were sorted and relabeled every 5000 data points. For n = 1, κ = .005, σ = 10 for the first 20,000 points, and κ = .1, σ = 2 for the remaining 20,000 points. For n = 0, κ = .001; σ = 10 for the first 20,000 points, and σ = 5 for the remaining 20,000 points. For n = −1, to reduce the destabilizing effect of the singularity, the |ξ − w_i|^{−1} term was capped at 1000, and the units were sorted every 1000 data points. The learning rate κ = .00002. For the first 30,000 points, σ = 20; then it was reduced to σ = 5 for 10,000 points.

4 Faithful Representation Networks for Independent Component Analysis

4.1 Branch Net for Finding High-Density Regions. Mixtures of sources with sharply peaked distribution functions will contain high-density directions corresponding to the independent component axes. Blind separation can be performed rapidly for this case in a net with one-dimensional branch topology, the conventional Voronoi partition, and the generalization of the n = 0 update rule given in equation 3.3:²

Δw_i = κ Λ(ε) · sgn(ξ − w_i).    (4.1)
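A minimal Python sketch of this digital update on a single one-dimensional chain of units in a two-dimensional input space; the chain topology (instead of the paper's two cross-linked branches), the gaussian neighborhood, and all constants are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, kappa, sigma = 50, 0.01, 5.0
w = 0.1 * rng.standard_normal((n_units, 2))   # chain of units in R^2

def digital_update(xi, w, kappa, sigma):
    # Winner = closest unit (conventional Voronoi partition).
    winner = np.argmin(np.linalg.norm(w - xi, axis=1))
    # eps = distance to the winner along the chain topology;
    # Lambda(eps) is taken to be a gaussian neighborhood function.
    eps = np.abs(np.arange(len(w)) - winner)
    lam = np.exp(-eps**2 / (2.0 * sigma**2))
    # Equation 4.1: the sign function acts componentwise, so every unit
    # moves by a fixed step per component, scaled only by the neighborhood.
    return w + kappa * lam[:, None] * np.sign(xi - w)

for _ in range(4000):
    xi = rng.laplace(size=2)   # stand-in for a sharply peaked mixture
    w = digital_update(xi, w, kappa, sigma)
```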
Numerically, we performed source separation and coding of two mixed signals in a net with the topology of two cross-linked branches (see Figure 2). The elements of the mixing matrix A were chosen randomly with uniform distribution between 1 and −1. The input was first prewhitened following Bell and Sejnowski (1995b) to reduce the amount of dilation and shearing required of the branch net. A typical simulation is shown in Figure 2. As seen in the figure, the branch net rotates very well, and since prewhitening tends to orthogonalize the independent component axes, much of the processing that remains after whitening is rotation to find the independence axes. The net quickly zeroes in on the independent component axes with little annealing. The neighborhood function is taken to be gaussian, where ε is the distance to the winning unit along the branch structure.

To demonstrate the generality of our approach, we show a faithful representation of a nonlinear mixture. Because our network has local dynamics, with enough units, the network can follow the curved independent component contours of the input distribution. The result is shown in Figure 3. With nonlinear mixtures, however, the transformation must be one-to-one in the region of interest. To unmix the input, the independent component contours can be found by using a parametric least-squares fit to the branch contours. For the source separation in Figure 2, inserting the axes directions into an unmixing matrix W, we find a reduction of the amplitudes of the mixtures to less than 1 percent of the signal.³ This is typical of the quality of separation obtained in our simulations. Alternatively, taking full advantage of the input representation formed by the net, an approximation of the independent sources can be constructed from the positions in V of the winning unit (w_{i*}). Or, as we show in Figure 4 for the nonlinear separation, the cell labels i* can be used to give a pseudowhitened signal approximation. Since there is only one winning unit along one branch, only one output channel is active at any given time. For sharply peaked source distributions such as speech, this does not hinder the fidelity of the signal approximation much since the input sources hover around zero most of the time. This property also has the potential for utilization in compression. However, for a full, rigorous whitened source representation, we must turn to a network with a topology that matches the dimensionality of the input data.
² Here the sign function acts componentwise on the vector.
³ For the separation shown in Figure 2, WA = (1, 0.0068; 0.0054, 1), where the largest elements of each row have been normalized to one.
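The prewhitening step referred to above can be sketched as follows; this is a standard PCA whitening in the spirit of the cited preprocessing, with the function name and conventions being ours:

```python
import numpy as np

def prewhiten(X):
    """Center X (one observation per column) and rotate/rescale it so
    that its covariance matrix becomes the identity."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    return np.diag(d ** -0.5) @ E.T @ X
```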
Figure 2: Independent component analysis by a branch architecture network. The two sources are audio files containing spoken Japanese phrases. The input distribution was prewhitened first. The branches initially lie along the ξ_1 = ξ_2 and ξ_1 = −ξ_2 directions and quickly spiral around to align with the independent component axes (dashed lines). The net is trained on 4000 randomly selected data points. For the annealing, κ = .0007; σ = 5 for the first 2000 points and σ = 2 for the remaining 2000 points. The configuration of the net is shown after every 200 data points, with the final unit positions marked with dots.
4.2 Aligned (M, N) Equipartition Net. The simplest equipartition of a separable distribution is the trivial partition generated from the independent component axes, which decouples the N-dimensional problem into N one-dimensional ones. This partition consists of N distinct hyperplane orientations, with cuts by, say, M parallel hyperplanes along each of the N orientations. We define this to be an (M, N) partition. Thus, the MN hyperplanes partition the input vector space into (M + 1)^N regions. Our goal is to achieve the trivial equipartition using an SOFM with hypercube architecture. Here M + 1 is the number of units per dimension in the hypercube. For an (M, N) equipartition, since the number of constraints grows exponentially in N, while the number of degrees of freedom to define the MN hyperplanes grows only quadratically in N, with enough units per dimension
Figure 3: Input representation of a nonlinear mixture. The mixture is given by ξ_1 = sgn(s_2) · s_2² + s_1 − .2 s_2, ξ_2 = sgn(s_1) · s_1² − .4 s_1 + s_2. The network is trained on 6000 data points, with the position of the units shown every 200 data points in the figure. The learning rate κ = .0008; σ = 5 for the first 2000 points, σ = 2 for the next 2000 points, and σ = 1 for the remaining 2000 points. The independent component contours are drawn in the dashed lines. The units along the branches conform nicely to the statistically independent contours.
in the hypercube, the desired trivial equipartition will be the only (M, N) equipartition. Currently we believe as few as three units per dimension (M = 2) suffice for uniqueness. In order to achieve this independent component analysis (ICA) equipartition, the learning rule must be extended to many dimensions, and the definition of the partition needs to be modified. The second faithful representation learning rule is generalized to multidimensions as follows. With V = …

K(τ)_{a,b} ≡ ⟨σ_a(t) σ_b(t − τ)⟩_c    (3.7)

is diagonal, with nonzero diagonal elements:

K(τ)_{a,b} = δ_{a,b} K_a(τ),    (3.8)

then one can state:

Theorem 4. Let all the cumulants K_a(τ), as defined in equation 3.8, be different from one another; then J is equal to the inverse of M (up to a sign permutation and a rescaling, as explained above) if and only if one has:
for every i, i′,

⟨h_i(t) h_{i′}(t)⟩_c = δ_{i,i′},
⟨h_i(t) h_{i′}(t − τ)⟩_c = 0 for i ≠ i′,    (3.9)
where h = {h_i, i = 1, . . . , N} is the output vector as defined in equation 2.4. If one or several sets of sources have a common value for K_a(τ) (but different from one set to the other), then any J solution of equation 3.9 is the product of a sign permutation by a matrix that separates the sources having a distinct K_a(τ) cumulant, and separates the sets globally. The restriction of J M K_0^{1/2} to the subspace of a given set is still an arbitrary orthogonal matrix.

The proof is essentially the same as the one of Theorem 3. One writes

⟨h_i(t) h_{i′}(t − τ)⟩_c = Σ_a X_{i,a} K_a(τ) X_{i′,a}.    (3.10)
What we want is to impose

⟨h_i(t) h_{i′}(t − τ)⟩_c = Δ_i δ_{i,i′},    (3.11)

where the Δ_i are yet indeterminate constants. Equations 3.10 and 3.11 play the same role as equations 3.5 and 3.6 in the case of Theorem 3, and the rest of the proof follows in the same way.

Averages are conveniently computed as time averages over a time window of some size T. In general, it could happen that the two-point correlation K_a(τ) is a function of the particular time window [t − T, t] considered. However, if, on that time window, the sources are sufficiently independent (K(τ) is diagonal), since the mixture matrix does not depend on time, the solution J obtained from the data collected on that single time window is an exact, time-independent solution.

3.3 Comparison with Other Criteria. To conclude this section, we comment briefly on other criteria, namely, those proposed by Comon (1994). This author has shown that ICA is obtained from maximizing the sum of the squares of the k-order cumulants, for any chosen k > 2. That is, it is sufficient to find an (absolute) minimum of
C ≡ − Σ_{i=1}^{N} ⟨(h_i)^k⟩_c²    (3.12)
for any given k > 2, after whitening has been performed. This is a nice result since it involves only N cumulants. However, defining a cost function is not exactly the same as giving explicit conditions on the cumulants (although
one can easily derive a cost function from these conditions, as we will do in section 6). In fact, the minimization of a cost function such as equation 3.12 does involve implicitly the computation of order N² cumulants, as it should (see the counting argument at the beginning of this subsection). This can be seen in two ways. First, since the value of the k-cumulants at the minimum is not known, the minimization of C given in equation 3.12 is not equivalent to a set of equations for these cumulants; then, if one wants to perform a gradient descent, one has to take the derivative of the cost function with respect to the couplings J, and this will generate cross cumulants. It is the resulting fixed-point equations that then matter in the counting argument (we will come back to the algorithmic aspects in section 6). Second, Comon showed also that the minimization of equation 3.12 is equivalent to setting to zero all the nondiagonal cumulants of the same order k (still in addition to whitening):

⟨h_{i_1} h_{i_2} ⋯ h_{i_k}⟩_c = 0, for every set of k nonidentical indices.    (3.13)
Hence, what we have obtained is that only a small subset of these cumulants has to be considered.

4 Algebraic Solutions

In this section we present four families of algebraic solutions. We point out their advantages and drawbacks, insisting on their simplicity when formulated within the framework described in section 2 and relating them to the results of section 3.

4.1 Using Time-Delayed Correlations. In the case where each source σ_a shows time correlations, it has been shown (Féty, 1988; Tong et al., 1990; Belouchrani, 1993; Molgedey & Schuster, 1994) that there is a simple algebraic solution using only second-order cumulants. More precisely, let us assume that the two-point correlation matrix K(τ) for some time delay τ > 0,

K(τ)_{a,b} ≡ ⟨σ_a(t) σ_b(t − τ)⟩_c,    (4.1)

is diagonal, with nonzero diagonal elements:

K(τ)_{a,b} = δ_{a,b} K_a(τ).    (4.2)
It follows (Molgedey & Schuster, 1994) that source separation is obtained by asking for the rows of J to be the left eigenvectors of the nonsymmetric matrix

C(τ) C_0^{−1},    (4.3)
where C(τ) is the second-order cumulant of the inputs at time delay τ:

C(τ) = ⟨S(t) S^T(t − τ)⟩_c.    (4.4)
The equivalent, but more natural, approach of Féty (1988) and Tong et al. (1990) is to work with a symmetric matrix, making use of the reduction to the search for an orthogonal matrix presented in section 2. Indeed, let us first perform whitening (note that this implies the resolution of the eigenproblem for C_0, a task of the same complexity as the computation of its inverse, which is needed in equation 4.3). We then compute the correlations at time delay τ of the projected inputs h′ (as given by equation 2.13), that is, the matrix ⟨h′(t) h′^T(t − τ)⟩_c. Using the expression, equation 2.18, of h′ as a function of the sources, one sees that this matrix is in fact symmetric and given by:

⟨h′(t) h′^T(t − τ)⟩_c = O′^T K_0^{−1/2} K(τ) K_0^{−1/2} O′.    (4.5)
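A minimal NumPy sketch of this second-order procedure, where the toy sources, mixing matrix, and delay are illustrative assumptions: whiten the inputs, form the (symmetrized) delayed correlation of equation 4.5, and read the remaining orthogonal matrix off its eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy sources with different time structure, linearly mixed.
t = np.arange(20_000)
sources = np.vstack([np.sin(0.05 * t), np.sign(np.sin(0.0131 * t))])
S = np.array([[1.0, 0.6], [0.4, 1.0]]) @ sources      # observed inputs S(t)

# Whitening: project onto principal components with unit variance.
S = S - S.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(S @ S.T / S.shape[1])           # eigenproblem for C_0
h = np.diag(d ** -0.5) @ E.T @ S                      # whitened inputs h'

# Delayed correlation of the whitened inputs (equation 4.5); symmetrize
# to suppress finite-sample asymmetry, then eigendecompose.
tau = 5
C_tau = h[:, tau:] @ h[:, :-tau].T / (h.shape[1] - tau)
C_tau = 0.5 * (C_tau + C_tau.T)
_, O = np.linalg.eigh(C_tau)

y = O.T @ h    # recovered sources, up to permutation, sign, and scale
```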
This shows that the desired orthogonal matrix is obtained from solving the eigendecomposition of this correlation matrix (see equation 4.5).

Remark. In this section, averages might be conveniently time averages, such as ⟨A(t)⟩ = ∫ dt′ A(t − t′) exp(−t′/T). In that case, τ has to be small compared to T in such a way that, for example, K_0 is the same when averaging on t′ < t and on t′ < t − τ.

4.2 Using Correlations at Equal Time. Shraiman (1993) has shown how to reduce the problem of source separation to the diagonalization of a certain symmetric matrix D built on third-order cumulants. We refer readers to Shraiman (1993) for the elegant derivation of this result. Here we will show how to derive this matrix from the approach introduced in sections 2.2 and 3. We will do so by working with the k-order cumulants, giving a generalization of Shraiman's work to any k at least equal to 3. We start with the expression of the data projected onto the principal components as in equation 2.18:

h′(t) = O′^T K_0^{−1/2} σ(t).    (4.6)
One computes the k-order statistics ⟨h′_{i_1} h′_{i_2} ⋯ h′_{i_k}⟩_c. From equation 4.6, these cumulants have the following expression in terms of source cumulants:

⟨h′_{i_1} h′_{i_2} ⋯ h′_{i_k}⟩_c = Σ_{a=1}^{N} O′_{a,i_1} ⋯ O′_{a,i_k} ζ_k^a,    (4.7)
where the ζ_k^a are the normalized cumulants as defined in equation 2.5. Now, one multiplies two such cumulants having k − 1 identical indices and sums over all the possible values of these indices. This will produce the contraction of k − 1 matrices O′ with their transposes, leading to 1_N, and only two terms O′ will remain. Explicitly, we consider the symmetric matrix, to be referred to as the k-Shraiman matrix,

D_{i,i′} = Σ_{i_1,i_2,…,i_{k−1}} ⟨h′_i h′_{i_1} ⋯ h′_{i_{k−1}}⟩_c ⟨h′_{i′} h′_{i_1} ⋯ h′_{i_{k−1}}⟩_c,    (4.8)
which, from equation 4.7, is equal to

D_{i,i′} = Σ_{a=1}^{N} O′_{a,i} O′_{a,i′} (ζ_k^a)².    (4.9)
The above formula is nothing but an eigendecomposition of the matrix D. This shows that the rows of O′ are the eigenvectors of the k-Shraiman matrix, the eigenvalues being (ζ_k^a)² for a = 1, . . . , N. A solution of the source separation problem is thus given by the diagonalization of one k-Shraiman matrix (e.g., taking k = 3 or k = 4).

4.3 A Simple Solution Using Fourth-Order Cumulants. We now consider the solution based on fourth-order cumulants (Cardoso, 1989), which is directly related to the results obtained in section 3.2.1. Let us consider the following cumulants of the input data projected onto the principal components, h′:

(C_4)_{i,i′} ≡ ⟨ h′_i ( Σ_{i″=1}^{N} h′_{i″}² ) h′_{i′} ⟩_c.    (4.10)
In terms of the orthogonal matrix O′ and of the cumulants ζ_4^a, it reads

(C_4)_{i,i′} = Σ_{a=1}^{N} O′_{a,i} ζ_4^a Σ_{i″=1}^{N} (O′_{a,i″})² O′_{a,i′}.    (4.11)
Since O′ is orthogonal, this reduces to the equation

C_4 = O′ K_4 O′^T,    (4.12)

where K_4 is the diagonal matrix of the fourth-order cumulants,

(K_4)_{a,b} = δ_{a,b} ζ_4^a.    (4.13)

This shows that O′ can be found by solving for the eigendecomposition of the cumulant matrix C_4.
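A minimal sketch of this fourth-order solution (often called FOBI in the later literature); the toy data are an assumption. For whitened, zero-mean h′ the Wick terms of the cumulant in equation 4.10 reduce to (N + 2)·1_N, so C_4 can be estimated as E[h′ h′^T ‖h′‖²] − (N + 2)·1_N:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two i.i.d. sources with different fourth-order cumulants, mixed linearly.
s = np.vstack([rng.laplace(size=50_000), rng.uniform(-1, 1, 50_000)])
S = np.array([[1.0, 0.5], [0.3, 1.0]]) @ s

# Whiten the inputs.
S = S - S.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(S @ S.T / S.shape[1])
h = np.diag(d ** -0.5) @ E.T @ S

# Estimate C_4 (equation 4.10), subtracting the gaussian (Wick) part.
N, T = h.shape
C4 = (h * (h ** 2).sum(axis=0)) @ h.T / T - (N + 2) * np.eye(N)

# The eigenvector basis of C_4 gives the remaining rotation (equation 4.12).
_, O = np.linalg.eigh(C4)
y = O.T @ h                   # recovered sources, up to permutation and sign
```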
All of the algebraic solutions considered thus far in this section are based on the same two facts: (1) the diagonalization of a real positive symmetric matrix leaves an arbitrary orthogonal matrix, which can be used to diagonalize another symmetric matrix; (2) but because one is dealing with a linear mixture, applying this to two well-chosen correlation matrices is precisely enough to solve the BSS problem (in particular, we have seen that working with two symmetric matrices provides at least as many equations as unknowns).

4.4 The Joint Diagonalization Approach. All of the algebraic solutions discussed so far suffer from the same drawback, which is that sources having the same statistics at the orders under consideration will not be separated. Moreover, numerical instability may occur if these statistics are different but very close to one another. One may wonder whether it would be possible to work with an algebraic solution involving more equations than unknowns, in such a way that indetermination cannot occur. A positive answer is given by the joint diagonalization method of Cardoso and Souloumiac (1993) and Belouchrani (1993). Here we give a different (and slightly more general) presentation from the one in Cardoso and Souloumiac (1993). In particular, we will make use of the theorems of section 3.1. We will consider only correlations at equal time. The case of time correlations is discussed in Belouchrani (1993). The basic idea is to jointly diagonalize a family of matrices

Γ^r_{α,β} = ⟨h′_α h′_β Q_r(h′)⟩_c,    (4.14)
where h′ is the principal component vector as defined in equation 2.13 and the Q_r are well-chosen scalar functions of it, the index r labeling the function (hence the matrix) in the family. One possible example is the family defined by taking for r all possible choices of k − 2 indices with k ≥ 3,

r ≡ (α_1, . . . , α_{k−2}), 1 ≤ α_1 ≤ α_2 ≤ ⋯ ≤ α_{k−2} ≤ N,    (4.15)

and

Q_{r=(α_1,…,α_{k−2})}(h′) = h′_{α_1} ⋯ h′_{α_{k−2}}.    (4.16)
The case considered in Cardoso and Souloumiac (1993) is k = 4. Using the expression of h′ as a function of the normalized sources,

h′(t) = O′^T K_0^{−1/2} σ(t),    (4.17)

where O′ is the orthogonal matrix that we want to compute (see section 2.2), one can write

Γ^r = O′^T Λ^r O′,    (4.18)
where Λ^r is a diagonal matrix with components

Λ^r_a = ζ_k^a O′_{a,α_1} ⋯ O′_{a,α_{k−2}}.    (4.19)
As is obvious in equation 4.18, the matrices Γ^r are jointly diagonalizable by the orthogonal matrix O′. However, if for at least a pair a, b one has Λ^r_a = Λ^r_b for every r, O′ is not the only solution (up to a sign permutation). Actually this never happens, as shown in Cardoso and Souloumiac (1993) for k = 4. Let us give a direct proof valid for any k. We consider one particular orthogonal matrix O, which jointly diagonalizes all the matrices of the family, and let h = O h′. By hypothesis, the matrix O Γ^r O^T is diagonal, that is,

⟨h_i h_{i′} h′_{α_1} ⋯ h′_{α_{k−2}}⟩_c = 0 for i ≠ i′,    (4.20)
and this for every choice of the k − 2 indices. Multiplying the left-hand side by O_{i_1,α_1} ⋯ O_{i_{k−2},α_{k−2}} and summing over the Greek indices, one gets that for any choice of i_1, . . . , i_{k−2},

⟨h_i h_{i′} h_{i_1} ⋯ h_{i_{k−2}}⟩_c = 0 for i ≠ i′.    (4.21)
We can now make use of Theorem 2: one can write equation 4.21 for the particular choice i_1 = i_2 = ⋯ = i_{k−2} = i′, which gives exactly the conditions (see equation 3.2) for k and m = 2, and we can thus apply Theorem 2 (equivalently, one can deduce from equation 4.21 that these cumulants are zero whenever any two indices are different, and then use equation 3.13, that is, Comon's 1994 result). For the simplest case k = 3, these conditions are exactly those of Theorem 2: jointly diagonalizing the N matrices Γ^α = ⟨h′ h′^T h′_α⟩_c is strictly equivalent to imposing the conditions (see equation 3.2) for k = 3 (and m = 2) (note, however, that the number of conditions is larger than the minimum required according to Theorem 1). For k > 3, the number of conditions in equation 4.21 is larger than the number of conditions in equation 3.2. To conclude, one sees that at the price of having a number of conditions larger than the minimum required (in order to guarantee that no indetermination will occur), source separation can be done with an algebraic method, even when there are identical source cumulants.

Remark. In practical applications, cumulants are empirically computed, and thus the matrices under consideration are not exactly jointly diagonalizable. For this reason, a criterion is considered in Cardoso and Souloumiac (1993) that, if maximized, provides the best possible approximation to joint diagonalization. In this article, we do not consider this aspect of the problem.
5 Cost Functions Derived from Information Theory

We switch now to the study of adaptive algorithms. To do so, one first has to define proper cost functions. Whereas in the next section we will consider cost functions based on cumulants, here we consider the particular costs derived from information theory. In both cases we will take advantage of the results obtained in section 3. We will see that an important outcome is the derivation of updating rules for the synaptic efficacies closely related to the Bienenstock, Cooper, and Munro (BCM) theory of cortical plasticity (Bienenstock, Cooper, & Munro, 1982).

5.1 From Infomax to Redundancy Reduction. Our starting point is the main result obtained in Nadal and Parga (1994), namely, that maximization of the mutual information between the input data and the output (neural code) leads to redundancy reduction, hence to source separation for both linear and nonlinear mixtures. To be more specific, we first give a short derivation of that fact (for more details see Nadal & Parga, 1994). We consider a network with N inputs and p outputs, and nonlinear transfer functions f_i, i = 1, . . . , p. Hence the output V is given by a gain control after some (linear or nonlinear) processing:

V_i(t) = f_i(h_i(t)), i = 1, . . . , p.    (5.1)
In the simplest case (in particular, in the context of BSS), h is given by the linear combination of the inputs:

h_i(t) = Σ_{j=1}^{N} J_{i,j} S_j(t), i = 1, . . . , p.    (5.2)
However, here the h_i(t) can as well be any deterministic (hence not necessarily linear) functions of the inputs S(t). In particular, we will make use of this fact in section 7: there, h_i will be the local field at the output layer of a one-hidden-layer network with nonlinear transfer functions.

The mutual information I between the input and the output is given by (see Blahut, 1988):

I ≡ ∫ d^p V d^N S P(V, S) log [ P(V, S) / (Q(V) P(S)) ].    (5.3)

This quantity is well defined only if noise processing is taken into account (e.g., resolution noise). In the limit of vanishing additive noise, one gets that maximizing the mutual information is equivalent to maximizing the (differential) output entropy H(Q) of the output distribution Q = Q(V),

H(Q) = − ∫ d^p V Q(V) log Q(V).    (5.4)
On the right-hand side of equation 5.4, one can make the change of variable V → h, using

∏_{i=1}^{p} dV_i Q(V) = ∏_{i=1}^{p} dh_i Ψ(h)    (5.5)

and

dV_i = f_i′(h_i) dh_i, i = 1, . . . , p.    (5.6)

This gives

H(Q) = − ∫ dh Ψ(h) ln [ Ψ(h) / ∏_{i=1}^{p} f_i′(h_i) ].    (5.7)
This implies that H(Q), hence I, is maximal when Ψ(h) factorizes,

Ψ(h) = ∏_{i=1}^{p} Ψ_i(h_i),    (5.8)
and at the same time, for each output neuron, the transfer function f_i has its derivative equal to the corresponding marginal probability distribution:

f_i′(h_i) = Ψ_i(h_i), i = 1, . . . , p.    (5.9)
As a result, infomax implies redundancy reduction. The optimal neural representation is a factorial code, provided it exists.

5.2 The Specific Case of BSS. Let us now return to the BSS problem, for which the h are taken as linear combinations of the inputs. By hypothesis, the N-dimensional input is a linear mixture of N independent sources. In the following we consider only p = N. Note that the factorial code is obtained by the network processing before applying the nonlinear function at each output neuron. From the algorithmic aspect, as suggested in Nadal and Parga (1994), this gives us two possible approaches. One is to optimize globally, that is, to maximize the mutual information over both the synaptic efficacies and the transfer functions. In that case, infomax is used in order to perform ICA, the nonlinear transfer functions being there just to enforce factorization. Another possibility is first to find the synaptic efficacies leading to a factorial code, and then compute the optimal transfer functions (which depend on the statistical properties of the stimuli). In that case, one may say that it is ICA that is used in order to build the network that maximizes information transfer. Still, if one considers that the transfer functions are chosen at each
instant of time according to equation 5.9, the mutual information becomes precisely equal to minus the redundancy cost function R:

R ≡ ∫ dh Ψ(h) ln [ Ψ(h) / ∏_{i=1}^{p} Ψ_i(h_i) ].    (5.10)
In the context of blind source separation, the relevance of the redundancy cost function has been recognized by Comon (1994) (we also note a work by Burel [1992], where a different but related cost function is considered).

Remark on the terminology. The quantity (see equation 5.10), called here and in the literature related to sensory coding the redundancy, is called in the signal processing literature (in particular in Comon, 1994) the mutual information, short for the mutual information between the random variables h_i (the outputs). But this mutual information, that is, the redundancy (see equation 5.10), should not be mistaken for the mutual information (see equation 5.3) we introduced above, which is defined in the usual way, that is, between the input and the output of a processing channel (Blahut, 1988). To avoid confusion, we will consistently use redundancy for equation 5.10 and mutual information for equation 5.3.

Although it is appealing to work with either the mutual information or the redundancy, doing so may not be easy. It is convenient to rewrite the output entropy, changing the variable h → S in equation 5.7, as done in Bell and Sejnowski (1995). Since the input entropy H(P) is a constant (it does not depend on the couplings J), the quantity that has to be maximized is
E = ln |J| + Σ_i ⟨log c_i(h_i)⟩,    (5.11)
where |J| is the absolute value of the determinant of the coupling matrix J and ⟨·⟩ is the average over the output activity h_i. The function c_i can be given two interpretations: it is equal either to f_i′, if one considers the mutual information, or to Ψ_i, if one considers the redundancy (the mutual information for the optimal transfer function at a given J). In the first case, one has to find an algorithm for searching for the optimal transfer functions; in the second case, one has to estimate the marginal distributions.

The cost (see equation 5.11) can be given another interpretation. In fact, it was first derived in a maximum likelihood approach (Gaeta & Lacoume, 1990; Pham, Garrat, & Jutten, 1992): it is easy to see that equation 5.11, with c_i = Ψ_i, is equal to the average of the log likelihood of the observed data (the inputs S), given that they have been generated as a linear combination of independent sources with the Ψ_i as marginal distributions. In the two following subsections we consider practical approaches.
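Differentiating equation 5.11 with respect to J gives the generic updating direction (J^T)^{−1} + ⟨ψ(h) S^T⟩, with ψ_i = c_i′/c_i applied componentwise. A minimal sketch follows; the particular choice of c_i (the derivative of tanh) and the toy data are our assumptions, not prescribed by the text:

```python
import numpy as np

def cost_gradient(J, S, psi):
    """Gradient of E = ln|det J| + sum_i <log c_i(h_i)> (equation 5.11).

    S holds one input sample per column; psi applies c_i'/c_i
    componentwise to the output activities h = J S."""
    h = J @ S
    return np.linalg.inv(J).T + (psi(h) @ S.T) / S.shape[1]

# If c_i is taken to be the derivative of tanh, then c_i'/c_i = -2 tanh.
psi = lambda h: -2.0 * np.tanh(h)

rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 10_000))          # toy sharply peaked inputs
J = np.eye(2)
for _ in range(200):
    J += 0.05 * cost_gradient(J, S, psi)   # plain gradient ascent on E
```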
5.3 Working with a Given Family of Transfer Functions. We have seen that at the end of the optimization process, the transfer functions will be related to the probability distributions of the independent sources. Since these distributions are not known (and cannot be estimated without first performing source separation), the choice of a proper parameterized family may be a problem. Still, any prior knowledge of the sources and any reasonable assumption may be used to limit the search to a family of functions controlled by a small number of parameters. A practical way to search for the best f_i′ is to restrict the search to an a priori chosen family of transfer functions. In Pham et al. (1992), a practical algorithm is proposed, based on a particular choice combined with an expansion of the cost close to a solution. Another, and very simple, strategy has been tried in Bell and Sejnowski (1995), where very promising results have been obtained on some specific applications. Their numerical simulations suggest that one can take transfer functions with a simple behavior (that is, for example, with one peak in the derivative when the data show only one peak in their distribution) and optimize just the gain and the threshold in each transfer function, which means fitting the location of the peak and its height.
5.4 Cumulant Expansion of the Marginal Distributions. When working with the redundancy, one would like to avoid having to estimate the marginal distributions from histograms, since this would take a lot of time. One may parameterize the marginal probability distributions and adapt the parameters at the same time one is adapting the couplings. This is exactly the same as working with the mutual information with a parameterized family of transfer functions. Another possibility, considered in Gaeta and Lacoume (1990) and Comon (1994), is to replace each marginal by a simple probability distribution with the same first cumulants as those of the actual distribution. Recently this approach has been used in Amari et al. (1996). We consider this expansion here with a slightly different point of view in order to relate this approach to the results of section 3.

We know that if we had gaussian distributions, every required computation would be easy. Now, if N is large, each field h_i is a sum of a large number of random variables, so that before adaptation (that is, with arbitrary synaptic efficacies), the marginal distribution for h_i is a gaussian. However, through adaptation, each h_i becomes proportional to one source σ_α, whose distribution is in general not a gaussian, and not necessarily close to gaussian. Still, there is another, and stronger, motivation for considering such an approximation. Indeed, the result of section 3, which is that conditions on a limited set of cumulants are sufficient in order to obtain factorization, strongly suggests replacing the unknown distribution with a simple distribution having the same first cumulants up to some given order.

Let us consider the systematic close-to-gaussian cumulant expansion of
Ψ_i(h_i) (Abramowitz & Stegun, 1972). At first nontrivial order, it is given by

Ψ_i(h_i) ≈ Ψ_i^1(h_i) ≡ Ψ^0(h_i) [ 1 + λ_i^{(3)} h_i(h_i² − 3)/6 ],    (5.12)
where Ψ^0(h_i) is the normal distribution

Ψ^0(h_i) ≡ (1/√(2π)) exp(−h_i²/2),    (5.13)
and λ_i^{(3)} is the third (true) cumulant of h_i:

λ_i^{(3)} ≡ ⟨h_i³⟩_c.    (5.14)
In expression 5.12, we have taken into account that, as explained in section 2, one can always take

⟨h_i⟩ = 0    (5.15)

and

⟨h_i²⟩_c = 1.    (5.16)
In the cost function (see equation 5.11), that is,

E = ln |J| + Σ_i ∫ dh_i Ψ_i(h_i) log Ψ_i(h_i),    (5.17)

we replace Ψ_i(h_i) by Ψ_i^1(h_i), and expand the logarithm ln[ 1 + λ_i^{(3)} h_i(h_i² − 3)/6 ]. Then the quantity to be maximized is, up to a constant,

E = ln |J| + (1/6) Σ_{i=1}^{N} [λ_i^{(3)}]².    (5.18)
Since optimization has to be done under the constraints in equation 5.16, we add Lagrange multipliers:

E^{(ρ)} = E − Σ_{i=1}^{N} (1/2) ρ_i (⟨h_i²⟩_c − 1).    (5.19)
Taking into account equation 5.15, one then obtains the updating equation for a given synaptic efficacy J_{ij}: ΔJ_{ij} ∝ − dE^{(ρ)}/dJ_{ij}, with

dE^{(ρ)}/dJ_{ij} = −(J^T)^{−1}_{ij} − ⟨h_i³⟩_c ⟨(h_i² − 1) S_j⟩ + ρ_i ⟨h_i S_j⟩.    (5.20)
We now consider the fixed-point equation, that is, ΔJ_{ij} = 0. Multiplying by J_{i′j} and summing over j, it reads:

δ_{ii′} = ρ_i ⟨h_i h_{i′}⟩_c − ⟨h_i³⟩_c ⟨h_i² h_{i′}⟩_c,    (5.21)
together with ⟨h_i²⟩_c = 1 for every i. The parameters ρ_i are obtained by writing the fixed-point equation 5.21 at i = i′, that is,

ρ_i = 1 + ⟨h_i³⟩_c².    (5.22)
Note that, in particular, ρ_i > 0 for all i. It follows from the result (see equation 3.1) of section 3 that the exact, desired solutions are particular solutions of the fixed-point equation 5.21, giving a particular absolute minimum of the cost function with the close-to-gaussian approximation. However, there is no guarantee that no other local minimum exists: there could be solutions for which equation 5.21 is satisfied with nondiagonal matrices ⟨h_i h_{i′}⟩_c and ⟨h_i² h_{i′}⟩_c.

Remark. One may wonder what happens if one first performs whitening, computing the h′, and then uses the mutual information between h′ and h. This is what is studied in Comon (1994), where at lowest order, the cost function (see equation 5.18) is found to be the sum of the squares of the third cumulants. This can be readily seen from equation 5.18, where J is now the orthogonal matrix that takes h′ to h, and thus ln |J| is a constant.

5.5 Link with the BCM Theory of Synaptic Plasticity. Let us now consider a possible stochastic implementation of the gradient descent (see equation 5.20). Since there are products of averages, it is not possible to have a simple stochastic version, where the variation of J_{ij} would depend on the instantaneous activities only. Still, by removing one of the averages in equation 5.20, one gets the following updating rule:

ΔJ_{ij} = ε { (J^T)^{−1}_{ij} − ρ_i h_i S_j + ⟨h_i³⟩_c h_i² S_j },    (5.23)
where ε is a parameter controlling the rate of variation of the synaptic efficacies. The parameters ρ_i can be taken at each time according to the fixed-point
equation 5.22. It is quite interesting to compare the updating equation 5.23 with the BCM theory of synaptic plasticity (Bienenstock et al., 1982). In the latter, a qualitative synaptic modification rule was proposed in order to account for experimental data on neural cell development in the early visual system. This BCM rule can be seen as a nonlinear variant of the Hebbian covariance rule. One of its possible implementations reads, in our notation:

ΔJ_{ij} = ε γ_i { −h_i S_j + Θ_i h_i² S_j },    (5.24)

where γ_i and Θ_i are parameters possibly depending on the current statistics of the cell activities. The particular choices Θ_i = ⟨h_i²⟩, γ_i = 1 or Θ_i^{−1} have been studied in some detail (Intrator & Cooper, 1992; Law & Cooper, 1994). The two main features of the BCM rule are: (1) there is a synaptic increase or decrease depending on the postsynaptic activity relative to some threshold Θ_i, which itself varies according to the cell mean activity level; (2) at low activity, there is no synaptic modification. Since ρ_i is positive, the rule we derived above is quite similar to equation 5.24, with a threshold Θ_i equal to ⟨h_i³⟩_c / ρ_i.
1446
Jean-Pierre Nadal and Nestor Parga
function (most often a combination of cumulants), may have unwanted fixed points (see Comon, 1994; Delfosse & Loubaton, 1995). Whatever the algorithm one is working with, the conditions derived in section 3 can be used in order to check whether a correct solution has been found. Clearly one can also define cost functions from these conditions by taking the sum of the square of every cumulant that has to be set to zero. We thus have a cost function for which, in addition to having only good solutions as absolute minima, the value of the cost at an absolute minimum is known: it is zero. Of course, many other families of cumulants could be used for the same purpose. The possible interest of the one we are dealing with is that it involves a small number of terms. However, this does not imply a priori any particular advantage as far as efficiency is concerned. For illustrative purposes, we consider with more detail a gradient descent for a particular choice of cost based on Theorem 1 in section 3. Specifically, we ask for the diagonalization of the two-point correlation and the thirdorder cumulants hhi h2i0 ic . Here again we use the reduction to the search for an orthogonal transformation, as explained in section 2. We thus consider the optimization of the orthogonal matrix. The cost is then defined by
E=
1 X 1 h hi h2i0 i2c − Tr[ρ(OOT − 1N )] 2 i6=i0 2
(6.1)
where ρ is a symmetric matrix of Lagrange multipliers, and hi has to be written in term of O (see equation 2.15): hi =
N X α=1
Oi,α h0α ,
(6.2)
the h0i being the projections of the inputs onto the principal components, as given by equation 2.13. The simplest gradient descent scheme is given by dE Oi,α = −ε , dt dOi,α
(6.3)
where ε is some small parameter. From equation 6.1, one derives the derivative of E with respect to Oi,α : dE dOi,α
=
Xh
hhi h2i0 ic h h0α h2i0 ic + 2hhi0 h2i ic hhi0 hi h0α ic
i0 (6=i)
−
X i0
ρi,i0 Oi0 ,α .
i
(6.4)
Redundancy Reduction and Independent Component Analysis
1447
One can either adapt ρ according to dρ dE = −ε , dt dρ or choose ρ imposing at each time OOT = 1N , which we do here. The equation for ρ is obtained by writing the orthonogonality condition for O, (O + dO)(OT + dOT ) = 1N , that is:
O dOT + dO OT = 0.
(6.5)
Paying attention to the fact that ρ is symmetric, one gets ρi,i0 =
i 1 Xh hhi h2k ic hhi0 h2k ic + 2hhk h2i ic hhk hi hi0 ic 2 k(6=i0 ) i 1 Xh hhi h2k ic hhi0 h2k ic + 2hhk h2i0 ic hhk hi hi0 ic . + 2 k(6=i)
(6.6)
Replacing in equation 6.4 ρ by its expression (6.6), multiplying both sides of equation 6.4 by Oi0 ,α for some i0 and summing over α, and using equation 6.2, one gets the rather simple equations for the projections of the variations of O onto the N vectors Oi : X α
Oi0 ,α
X dOi,α dE = −ε Oi0 ,α ≡ −ε ηi,i0 , dt d O i,α α
(6.7)
where: ηi,i0 =
i 3h 3 hhi0 ic hhi h2i0 ic − hh3i ic hhi0 h2i ic 2 h i X hhk hi hi0 ic hhk h2i ic − hhk h2i0 ic . +
(6.8)
k
From the above expression, one can easily write the (less simple) updating equations for either O (multiplying by Oi0 ,α and summing over i0 ), or J −1
(multiplying by [OΛ0 2 O0 ]i0 ,j and suming over i0 ). Note that the Lagrange multiplier ρ ensures that, starting from an arbitrary orthogonal matrix O(0) at time 0, O(t) remains orthogonal. In practice, since this orthogonality is enforced only at first order in ε, an explicit normalization will have to be done from time to time This is an efficient method used in statistical mechanics and field theory (Aldazabal, Gonzalez-Arroyo, & Parga, 1985). Although we derived the updating equation from a global cost function, one may also derive an adaptative version. One possibility is to use the
1448
Jean-Pierre Nadal and Nestor Parga
approach considered in the preceding section: in each term containing a product of averages, one average h.i is replaced by the instantaneous value. The remaining average is computed as a time average on a moving time window. An alternative approach is to replace each average by a time average, taking different time constants in order to obtain an estimate of the product of averages—and not the average of the product. 6.2 From Feedforward to Lateral Connections. We conclude with a general remark concerning the choice of the architecture. In all the above derivations, we worked with a feedforward network, with no lateral connections. As it is well known, one may prefer to work with adaptable lateral connections, as it is the case in the Herault-Jutten algorithm (Jutten & Herault, 1991). One can in fact perform any given linear processing with either one or the other architecture; it is only the algorithmic implementation that might be simpler with a given architecture. Let us consider here this equivalence. A standard way to use a network with lateral connections is the one in Jutten and Herault (1991). One has a unique link from each input Si to the output unit i and lateral connections L between output units. The dynamics of the postsynaptic potentials ui of the output cells is given by X dui =− Li,i0 ui0 + Si . dt i0
(6.9)
If L has positive eigenvalues, then the dynamics converge to a fixed point h given by L h = S. As a result, after convergence, the network gives the same linear processing as the one with feedforward connections J given by
L = J−1 .
(6.10)
In the particular case considered above, one gets the updating rule for Lj,i0 = 1
−1 2 0 Jj,i 0 by multiplying equation 6.7 by [OΛ0 O ]i,j and summing over i. For a given adaptive algorithm derived from the minimization of a cost function, one can thus work with either the feedforward or the lateral connections. It is clear that, in general, updating rules will look different whether they are for the lateral or feedforward connections. However, it is worth mentioning that if we consider the updating rule after whitening (which is to assume that a first network is performing PCA, providing the h0α as input to the next layer), then the updating rule for the feedforward and the lateral connections is essentially the same. Indeed, the feedforward coupling matrix that allows going from h0 to h is an orthogonal transformation O; hence the associated lateral network has as couplings the inverse of that orthogonal transformation, that is, its transposed, OT .
Redundancy Reduction and Independent Component Analysis
1449
7 Possible Extensions to Nonlinear Processing Some work has already proposed redundancy reduction criteria for nonlinear data processing (Nadal & Parga, 1994; Haft, Schlang, & Deco, 1995; Parra, 1996), and for defining unsupervised algorithms in the context of automatic data clustering (Hinton, Dayan, Frey, & Neal, 1995). Here we just point out that all the criteria and cost functions discussed in this article may be applied to the output layer of a multilayer network for performing independent component analysis on nonlinear data. Indeed, if a multilayer network is able to perform ICA, this implies that in the layer preceding the output, the data representation is a linear mixture. The main questions are, then, How many layers are required in order to find such a linear representation? and Is it always possible to do so? Assuming that there exists at least one (possibly nonlinear) transformation of the data leading to a set of independent features, we suggest two lines of research. The first is based on general results on function approximation. It is known that a network with one hidden layer with sufficiently many units is able to approximate a given function with any desired accuracy (provided sufficiently many examples are available) (Cybenko, 1989; Hornik, Stinchcombe, & White, 1991; Barron, 1993). Then there exists a network with one hidden layer and N outputs such that the ith output unit gives an approximation of the particular function that extracts the ith independent component from the data. Hence, we know that it should be enough to take a network with one hidden layer. A possible approach is to perform gradient descent onto a cost function defined for the output layer, which, if minimized, means that separation has been achieved (we know that the redundancy will do). If the algorithm does not give good results, then one may increase the number of hidden units. Another approach is suggested by the study of the infomax-redundancy reduction criteria (Nadal & Parga, 1994). It is easy to see that the cost function (see equation 5.11) has a straightforward generalization to a multilayer network where every layer has the same number N of units. Indeed, if one calls Jk the couplings in the kth layer, and cki (hki ) the derivative of the transfer function (or the marginal distribution; see section 5.2) of the ith neuron in the kth layer, the mutual information between the input and the output of the multilayer network can be written as
EL =
X k
ln |Jk | +
X X k
hlog cki (hki )i.
(7.1)
i
Hence the cost EL is a sum of terms, each tending to impose factorization in a given layer. This allows an easy implementation of a gradient descent algorithm. Moreover, this additive structure of the cost suggests a constructive approach. One may start with one layer; if factorization is not obtained,
1450
Jean-Pierre Nadal and Nestor Parga
one can add a second layer, and so on (note, however, that the couplings of a given layer have to be readapted each time a new layer is added). 8 Conclusion In this article, we have presented several new results on blind source separation. Focusing on the mathematical aspects of the problem, we obtained several necessary and sufficient conditions that, if fulfilled, guarantee that separation has been performed. These conditions are on a limited set of cross-cumulants and can be used for defining an appropriate cost function or just in order to check, when using any BSS algorithm, that a correct solution has been reached. Next, we showed how algebraic solutions can be easily understood, and for some of them generalized, within the framework of the reduction to the search for an orthogonal matrix. We then discussed adaptive approaches, the main focus being on cost functions based on information-theoretic criteria. In particular, we have shown that the resulting updating rule appears to be, in a loose sense, Hebbian and more precisely quite similar to the type proposed by Bienenstock, Cooper, and Munro in order to account for experimental data on the development of the early visual system (Bienenstock et al., 1982). We also showed how some cost functions could be conveniently used for nonlinear processing, that is, for, say, a multilayer network. In all cases, we paid attention to relate our work to other similar approaches. We showed how the reduction to the search for an orthogonal transformation is a convenient tool for analyzing the BSS problem and finding new solutions. This, of course, does not mean that one cannot perform BSS without whitening, and indeed there are interesting approaches to BSS in which whitening is not required (Laheld & Cardoso, 1994). Appendix A: Proof of Theorem 1 Let us consider a matrix J for which equation 3.1 is true. First we use the fact that J diagonalize the two-point correlation. Hence, with the notation and results of section 2, we have to determine the family of orthogonal matrices X such that, when h = XK 0 −1/2 σ
(A.1)
the k-order cumulants in equation 3.1 are zero. Using the above expression of h, we have for any k-order cumulant in equation 3.1, hi0 ic = hh(k−1) i
N X (Xi,a )(k−1) Xi0 ,a , ζka , a=1
(A.2)
Redundancy Reduction and Independent Component Analysis
1451
where the ζka are the normalized k-cumulants ζka =
hσak ic k/2
hσa2 ic
,
a = 1, . . . , N.
(A.3)
A.1 The Case of N Nonzero Source Cumulants. We first consider the case when for every a, ζka is not zero. Since we want the k-order cumulants to be zero whenever i 6= i0 , we can write 1i δi,i0 =
N X (Xi,a )(k−1) Xi0 ,a ζka
(A.4)
a=1
for some yet indeterminate constants 1i . Using the fact that X is orthogonal, T we multiply both sides of this equation by Xi0 ,a = Xa,i 0 for some a and sum over i0 . This gives 1i Xi,a = (Xi,a )(k−1) ζka .
(A.5)
There are now two possibilities for each pair (i, a): either Xi,a = 0, or Xi,a is nonzero (and then 1i as well), and we can write (k−2) Xi,a = εi,a
1i , ζka
(A.6)
where εi,a is 1 or 0. For k odd, one then has · Xi,a = εi,a
1i ζka
¸
1 k−2
.
(A.7)
We now use the fact that X is orthogonal; first, for each i, the sum over a of 2 is one, hence for at least one a ε is nonzero—and it follows also the Xi,a i,a P that for every i 1i is nonzero. Second, for every pair i 6= i0 , a Xi,a Xi0 ,a = 0. Then, from equation A.7, we have X a
2
εi,a εi0 ,a (ζka ) k−2 = 0.
(A.8)
The left-hand side is a sum of positive terms; hence each has to be zero. It follows that for every a, either εi,a or εi0 ,a is zero (or both). The argument can be repeated exchanging the roles of the indices i and a, so that it is also true that for each pair a 6= a0 , for each i either εi,a or εi,a0 is zero (or both). Hence X is a matrix with, on each row and on each column, a single nonzero element, which, necessarily, is then ±1: X is what we called a sign permutation, and this completes the proof of part (i) of the theorem for the case where the k-order cumulants are nonzero for every source.
1452
Jean-Pierre Nadal and Nestor Parga
Remark. For k even, there is a sign indetermination when going from P (k−2) Xi,a to Xi,a . Hence one cannot write that a Xi,a Xi0 ,a is a sum of positive numbers. In fact, taking k = 4 and N even, one can easily build an example where the conditions in equation 3.1 are fulfilled with at least one solution X that is not a sign permutation. For instance, let N = 4 and ζ4a = z for every a. Then the equations 3.1 are fulfilled for X defined by:
1 1 1 X= 1 2 1
1 1 −1 −1
1 −1 1 −1
−1 1 −1 1
(A.9)
A.2 The Case of L < N Nonzero Source Cumulants. Now we consider the case where only L < N k-order cumulants are nonzero (and this will include the case of only one source with zero cumulant). Without loss of generality, we will assume that they are the first L sources (a = 1, . . . , L). We have to reconsider the preceding argument. It started with equation A.4 in which the right-hand side can be considered as a sum over a = 1, . . . , L. It follows that a particular family of solutions is the set of block-diagonal matrices X, with an L × L block being a sign permutation matrix, followed by an (N − L) × (N − L) block being an arbitrary (N − L) × (N − L) orthogonal matrix. We show now that these are, up to a global permutation, the only solutions. Since the k-cumulants are zero for a > L, we have the following possibilities for each i: 1. 1i = 0 and Xi,a = 0, a = 1, . . . , L. 2. 1i 6= 0, Xi,a = 0, a > L, and for a = 1, . . . , L equation A.6 is valid with at least one εi,a nonzero. By applying an appropriate permutation, we can assume that it is the first ` indices, i = 1, . . . , `, for which 1i 6= 0. Hence X has a nonzero upperleft ` × L block, X1 , which satisfies all the equations derived previously (in particular, X1 X1T = 1N ), but with i = 1, . . . , ` and a = 1, . . . , L; and a nonzero lower-right (N−`)×(N−L) block, X2 , for which the only constraint is X2 X2T = X2T X2 = 1N . It follows from the discussion of the case with nonzero cumulants that X1 has a single nonzero element per line and per column, which implies that this is a square matrix: ` = N, and this completes the proof. In addition, if only one source has zero k-order cumulant, then the X2 matrix is just a 1 × 1 matrix, whose unique element is thus ±1. Hence, in that case, it is in fact all the N sources that are separated.
Redundancy Reduction and Independent Component Analysis
1453
Appendix B: Proof of Theorem 2 In the first part of the proof, we proceed exactly as in appendix A. We reduce the problem to the search of an orthogonal matrix X such that, when h = XK 0 −1/2 σ
(B.1)
the k-order cumulants in equation 3.2 are zero. Using the above expression of h, we have for any k-order cumulant in equation 3.2, hm−1 hi00 ic = hh(k−m) i i0
N X a (Xi,a )(k−m) Xim−1 0 ,a Xi00 ,a ζk
(B.2)
a=1
where the ζka are the normalized k-cumulants as in equation A.3. Now we write that these quantities are zero whenever at least two of the indices i, i0 , i00 are different: 1i δi,i0 δi0 ,i00 =
N X a (Xi,a )(k−m) Xim−1 0 ,a Xi00 ,a ζk
(B.3)
a=1
for some yet indeterminate constants 1i . Using the fact that X is orthogonal, we multiply both sides of this equation by Xi00 ,a for some a for which ζka is nonzero, and sum over i00 . This gives 1i Xi0 ,a δi,i0 = (Xi,a )(k−m) (Xi0 ,a )(m−1) ζka .
(B.4)
Now, either Xi0 ,a = 0, or Xi0 ,a 6= 0 and then 1i δi,i0 = (Xi,a )(k−m) (Xi0 ,a )(m−2) ζka .
(B.5)
At this point the proof differs from the one of Theorem 1 and is in fact simpler. For i 6= i0 the right-hand side of equation B.5 is 0. Because Xi0 ,a 6= 0 and k is strictly greater than m, we obtain Xi,a = 0. Hence, for every a such that ζka 6= 0, there is at most one i for which Xi,a 6= 0. Since X is orthogonal, this implies that its restriction to the subspace of nonzero ζka is a sign permutation. Acknowledgments This work was partly supported by the French-Spanish program Picasso, the E.U. grant CHRX-CT92-0063, the Universidad Autonoma ´ de Madrid, the Universit´e Paris VI, and Ecole Normale Sup´erieure. NP and JPN thank, respectively, the Laboratoire de Physique Statistique (ENS) and the Departamento de F´ısica Teorica ´ (UAM) for their hospitality. We thank B. Shraiman for communicating his results prior to publication and P. Del Giudice and A. Campa for useful discussions. We thank an anonymous referee for useful comments, which led us to elaborate on the joint diagonalization method.
1454
Jean-Pierre Nadal and Nestor Parga
References Abramowitz, M., & Stegun, I. A. (1972). Handbook of mathematical functions. New York: Dover. Aldazabal, G., Gonzalez-Arroyo, A., & Parga, N. (1985). The stochastic quantization of U(N) and SU(N) lattice gauge theory and Langevin equations for the Wilson loops. Journal of Physics, A18, 2975. Amari, S. I., Cichocki, A., & Yang, H. H. (1996). A new learning algorithm for blind signal separation. In D. S. Touretzky, M. C. Mozer, & M. E. Hasselmo (Eds.), Advances in neural information processing systems 8. Cambridge, MA: MIT Press. Atick, J. J. (1992). Could information theory provide an ecological theory of sensory processing? NETWORK, 3, 213–251. Attneave, F. (1954). Informational aspects of visual perception. Psychological Review, 61, 183–193. Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. In W. Rosenblith (ed.), Sensory communication (p. 217). Cambridge, MA: MIT Press. Barlow, H. B., Kaushal, T. P., & Mitchison, G. J. (1989). Finding minimum entropy codes. Neural Comp., 1, 412–423. Bar-Ness, Y. (1982, November). Bootstrapping adaptive interference cancelers: Some practical limitations. In The Globecom Conf. (pp. 1251–1255). Paper F3.7, Miami, Nov. 1982. Barron, A. R. (1993). Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. I. T., 39. Bell, A., & Sejnowski, T. (1995). An information-maximisation approach to blind separation and blind deconvolution. Neural Comp., 7, 1129–1159. Belouchrani, A., & M. (1993). S´eparation aveugle au second ordre de sources corr´el´ees. In GRETSI’93, Juan-Les-Pins, 1993. Bienenstock, E., Cooper, L., & Munro, P. W. (1982). Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex. Journal Neurosciences, 2, 32–48. Blahut, R. E. (1988). Principles and practice of information theory. Reading, MA: Addison-Wesley. Burel, G. (1992). Blind separation of sources: A nonlinear neural algorithm. Neural Networks, 5, 937–947. Cardoso, J.-F. (1989) Source separation using higher-order moments. In Proc. Internat. Conf. Acoust. Speech Signal Process.-89 (pp. 2109–2112). Glasgow. Cardoso, J.-F., & Souloumiac, A. (1993). Blind beamforming for non Gaussian signals. IEE Proceedings-F, 140(6), 362–370. Comon, P. (1994). Independent component analysis, a new concept? Signal Processing, 36, 287–314. Cybenko, G. (1989). Approximations by superpositions of a sigmoidal function. Math. Contr. Signals Syst., 2, 303–314. Delfosse, N., & Loubaton, Ph. (1995). Adaptive blind separation of independent sources: A deflation approach. Signal Processing, 45, 59–83.
Redundancy Reduction and Independent Component Analysis
1455
Del Giudice, P., Campa, A., Parga, N., & Nadal, J.-P. (1995). Maximization of mutual information in a linear noisy network: A detailed study. NETWORK, 6, 449–468. Deville, Y., & Andry, L. (1995). Application of blind source separation techniques to multi-tag contactless identification systems. Proceedings of NOLTA’95 (pp. 73-78). Las Vegas. F´ety, L. (1988). M´ethodes de traitement d’antenne adapt´ee aux radio-communications. Unpublished doctoral dissertation, ENST, Paris. Gaeta, M., & Lacoume, J. L. (1990). Source separation without apriori knowledge: The maximum likelihood approach. In L. Tores, E. MasGrau, & M. A. Lagunas (eds.), Signal Processing V, proceedings of EUSIPCO 90 (pp. 621-624). Haft, M., Schlang, M., & Deco, G. (1995). Information theory and local learning rules in a self-organizing network of ising spins. Phys. Rev. E, 52, 2860–2871. Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The wake-sleep algorithm for unsupervised neural networks. Science, 268, 1158–1160. Hopfield, J. J. (1991). Olfactory computation and object perception. Proc. Natl. Acad. Sci. USA, 88, 6462–6466. Hornik, K., Stinchcombe, M., & White, H. (1991). Multilayer feedforward networks are universal approximators. Neural Networks, 2, 359–366. Intrator, N., & Cooper, L. (1992). Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neural Networks, 5, 3–17. Jutten, C., & Herault, J. (1991). Blind separation of sources, part i: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24, 1–10. Laheld, B., & Cardoso, J.-F. (1994). Adaptive source separation without prewhitening. In Proc. EUSIPCO (pp. 183–186). Edinburgh. Law and Cooper, L. (1994). Formation of receptive fields in realistic visual environments according to the BCM theory. Proc. Natl. Acad. Sci. USA, 91, 7797– 7801. Li, Z., & Atick, J. J. (1994). Efficient stereo coding in the multiscale representation. Network: Computation in Neural Systems, 5, 1–18. Linsker, R. (1988). Self-organization in a perceptual network. Computer, 21, 105– 117. Molgedey, L., & Schuster, H. G. (1994). Separation of a mixture of independent signals using time delayed correlations. Phys. Rev. Lett., 72, 3634–3637. Nadal, J.-P., & Parga, N. (1994). Nonlinear neurons in the low-noise limit: A factorial code maximizes information transfer. NETWORK, 5, 565–581. Parra, L. C. (1996). Symplectic nonlinear component analysis. Preprint. Pham, D.-T., Garrat, Ph. & Jutten, Ch. (1992). Separation of a mixture of independent sources through a maximum likelihood approach. In Proc. EUSIPCO (pp. 771–774). Redlich, A. N. (1993). Redundancy reduction as a strategy for unsupervised learning. Neural Comp., 5, 289–304. Rospars, J.-P., & Fort, J.-C. (1994). Coding of odor quality: Roles of convergence and inhibition. NETWORK, 5, 121–145. Scofield, C. L., & Cooper, L. (1985). Development and properties of neural networks. Contemporary Physics, 26, 125–145.
1456
Jean-Pierre Nadal and Nestor Parga
Shraiman, B. (1993). Technical memomorandum (AT&T Bell Labs TM-11111930811-36). Tong, L., Soo, V., Liu, R., & Huang, Y. (1990). Amuse: A new blind identification algorithm. In Proc. ISCAS. New Orleans. van Hateren, J. H. (1992). Theoretical predictions of spatiotemporal receptive fields of fly lmcs, and experimental validation. J. Comp. Physiol. A, 171, 157– 170.
Received June 26, 1996; accepted February 19, 1997.
Communicated by Erkki Oja
Adaptive Online Learning Algorithms for Blind Separation: Maximum Entropy and Minimum Mutual Information Howard Hua Yang Shun-ichi Amari Laboratory for Information Representation, FRP, RIKEN Hirosawa 2-1, Wako-shi, Saitama 351-01, Japan
There are two major approaches for blind separation: maximum entropy (ME) and minimum mutual information (MMI). Both can be implemented by the stochastic gradient descent method for obtaining the demixing matrix. The MI is the contrast function for blind separation; the entropy is not. To justify the ME, the relation between ME and MMI is first elucidated by calculating the first derivative of the entropy and proving that the mean subtraction is necessary in applying the ME and at the solution points determined by the MI, the ME will not update the demixing matrix in the directions of increasing the cross-talking. Second, the natural gradient instead of the ordinary gradient is introduced to obtain efficient algorithms, because the parameter space is a Riemannian space consisting of matrices. The mutual information is calculated by applying the Gram-Charlier expansion to approximate probability density functions of the outputs. Finally, we propose an efficient learning algorithm that incorporates with an adaptive method of estimating the unknown cumulants. It is shown by computer simulation that the convergence of the stochastic descent algorithms is improved by using the natural gradient and the adaptively estimated cumulants. 1 Introduction Let us consider the case where a number of observed signals are mixtures of the same number of stochastically independent source signals. It is desirable to recover the original source signals from the observed mixtures. The blind separation problem is to find a transform that recovers original signals without knowing how the sources are mixed. A learning algorithm for blind source separation belongs to the category of unsupervised factorial learning (Deco & Brauer, 1996). It is related to the redundancy reduction principle (Barlow & Foldi´ ¨ ak, 1989; Nadal & Parga, 1994), one of the principles possibly employed by the nervous system to extract the statistically independent features for a better representation of the input without losing information. When the mixing model is arbitrarily nonlinear, it is generally Neural Computation 9, 1457–1482 (1997)
c 1997 Massachusetts Institute of Technology °
1458
Howard Hua Yang and Shun-ichi Amari
impossible to separate the sources from the mixture unless further knowledge about the sources and the mixing model is assumed. In this article, we discuss only the linear mixing model for which the inverse transform (the demixing transform) is also linear. Maximum entropy (ME) (Bell & Sejnowski, 1995a) and minimum mutual information (MMI) or independent component analysis (ICA) (Amari, Cichocki, & Yang, 1996; Comon, 1994) are two major approaches to derive blind separation algorithms. By the MMI, the mutual information (MI) of the outputs is minimized to find the demixing matrix. This approach is based on established (ICA) theory (Comon, 1994). The MI is one of the best contrast functions since it is invariant under transforms such as scaling, permutation, and componentwise nonlinear transforms. Since scaling and permutation are indeterminacy in the blind separation problem, it is desirable to choose a contrast function that is invariant to these transforms. All the global minima (zero points) of the MI are possible solutions to the blind separation problem. By the ME approach, the outputs of the demixing system are first componentwise transformed by sigmoid functions, and then the joint entropy of the transformed outputs is maximized to find a demixing matrix. The online algorithm derived by the ME is concise and often effective in practice, but the ME has not yet been rigorously justified except for the case when the sigmoid functions happen to be the cumulative density functions of the unknown sources (Bell & Sejnowski, 1995b). We study the relation between the ME and MMI, and give a justification to the ME approach. In order to realize the MMI by online algorithms, we need to estimate the MI. To this end, we apply the Gram-Charlier expansion and the Edgeworth expansion to approximate the probability density functions of the outputs. We then have the stochastic gradient type online algorithm similar to the one obtained from the ME. In order to speed up the algorithm, we show that the natural gradient descent method should be used to minimize the estimated MI rather than the conventional gradient. This is because the stochastic gradient descent takes place in the space of matrices. This space has a natural Riemannian structure from which the natural gradient is obtained. We also apply this idea to improve the online algorithms based on the ME. Although the background philosophies of the ME and MMI are different, both approaches result in algorithms of a similar form with different nonlinearity. Both include unknown factors to be fixed adequately or to be estimated. They are activation functions in the case of ME and cumulants κ3 and κ4 in the case of MMI. Their optimal values are determined by the unknown probability distributions of the sources. The point is that the algorithms work even if the specification of the activation functions or cumulants is not accurate. However, efficiency of the algorithms becomes worse by misspecification. In order to obtain efficient algorithms, we propose an adaptive method
Adaptive Online Learning Algorithms
1459
together with online learning algorithms in which the unknown factors are adequately estimated. The performances of the adaptive online algorithms are compared with the fixed online algorithms based on simulation results. It is shown by simulations that the adaptive algorithms work much better in various examples. The blind separation problem is described in section 2. The relation between the ME and the MMI is discussed in section 3. The gradient descent algorithms based on ME and MMI are derived in section 4. The natural gradient is also shown, but its derivation is given in appendix B. The GramCharlier expansion and the Edgeworth expansion are applied in this section to estimate the MI. The adaptive online algorithms based on the MMI, together with an adaptive method of estimating the unknown cumulants, are proposed in section 5. The performances of the adaptive algorithms are compared with the fixed ones by the simulations in section 6. Finally, the conclusions are made in section 7. 2 Problem Let us consider n unknown source signals Si (t), i = 1, . . . , n, which are mutually independent at any fixed time t. We denote random variables by uppercase letters and their specific values by the same letters in lowercase. The bold uppercase letters denote random vectors or matrices. We assume that the sources Si (t) are stationary processes, each source has moments of any order, and at most one source is gaussian. We also treat visual signals where t should be replaced by the spatial coordinates (x, y) with Si (x, y) representing the brightness of the pixel at (x, y). The model for the sensor outputs is
X (t) = AS (t), where A ∈ Rn×n is an unknown nonsingular mixing matrix, S (t) = [S1 (t), . . . , Sn (t)]T and X (t) = [X1 (t), . . . , Xn (t)]T and T denote the transposition. Without knowing the source signals and the mixing matrix, we want to recover the original signals from the observations X (t) by the following linear transform,
Y (t) = W X (t), where Y (t) = [Y1 (t), . . . , Yn (t)]T and W ∈ Rn×n is a matrix. When W is equal to A−1 we have Y (t) = S (t). It is impossible to obtain the original sources Si (t) in exact order and amplitude because of the indeterminacy of permutation of {Si } and scaling due to the product of two unknowns: the mixing matrix A and the source vector S (t). Nevertheless, subject to a permutation of indices, it is possible to obtain the scaled sources ci Si (t) where the constants ci are nonzero scalar factors. The source signals are identifiable in this sense. Our goal is to find a demixing
1460
Howard Hua Yang and Shun-ichi Amari
matrix W adaptively so that [Y1 , . . . , Yn ] coincides with a permutation of scaled [S1 , . . . , Sn ]. In this case, this demixing matrix W is written as
W = ΛP A−1 , where Λ is a nonsingular diagonal matrix and P a permutation matrix. 3 Maximizing Entropy versus Minimizing Mutual Information There are two well-known methods for blind separation: (1) maximizing the entropy (ME) of the transformed outputs and (2) minimizing the mutual information (MMI) of Y so that its components become independent. We discuss the relation between ME and MMI. 3.1 Some Properties of Entropy and Mutual Information.PThe idea of ME originated from neural networks. Let us transform ya = j waj xj by a sigmoid function ga (y) to za = ga (ya ), which is regarded as the output from an analog neuron. Let Z = (g1 (Y1 ), . . . , gn (Yn )) be the componentwise transformed output vector by sigmoid functions ga (y), a = 1, . . . , n. It is expected that the entropy of the output Z is maximized when the components Za of Z are mutually independent. The blind separation algorithm based on ME (Bell & Sejnowski, 1995a) was derived by maximizing the joint entropy H(Z ; W ) with respect to W by using the stochastic gradient descent method. The joint entropy of Z is defined by Z H(Z ; W ) = − p(z ; W ) log p(z ; W )dz , where p(z ; W ) is the joint probability density function (pdf) of Z determined by W and {ga }. The nonlinear transformations ga (y) are necessary for bounding the entropy in a finite range. Indeed, when g(y) is bounded c ≤ g(y) ≤ d, for any random variable Y the entropy of Z = g(Y) has an upper bound: H(Z) ≤ log(d − c). Therefore, the entropy of the transformed output vector is upper bounded: H(Z ; W ) ≤
n X
H(Za ) ≤ n log(d − c).
(3.1)
a=1
In fact, the above inequality holds for any bounded transforms, so the global maximum of the entropy H(Z ; W ) exists. H(Z ; W ) may also have many local maxima determined by the functions {ga } used to transform Y . By studying the relation between ME and MMI, we shall prove that some of these maxima are the demixing matrices with the form ΛP A−1 .
Adaptive Online Learning Algorithms
1461
The basic idea of MMI is to choose W that minimizes the dependence among the components of Y . The dependence is measured by the KullbackLeibler divergence I(W ) between the joint pdf p(y ; W ) of Y and its factor˜ y ; W ), which is the product of the marginal pdf’s of Y : ized version p( Z ˜ y ; W )] = I(W ) = D[p(y ; W ) k p( ˜ y; W ) = where p( pa (ya ; W ) =
Qn Z
a=1 pa (ya ; W )
p(y ; W ) log
p(y ; W ) dy ˜ y, W ) p(
(3.2)
and pa (ya ; W ) is the marginal pdf of y ,
ˇ , . . . , dyn , p(y ; W )dy1 , . . . , dy a
ˇ denoting that dya is missing from dy = dy1 , . . . , dyn . dy a The Kullback-Leibler divergence (see equation 3.2) gives the MI of Y and is written in terms of entropies, I(W ) = −H(Y; W ) +
n X
H(Ya ; W ),
(3.3)
a=1
where
R H(Y; W ) = − p(y ; W ) log p(y ; W )dy ,
and
R H(Ya ; W ) = − pa (ya ; W ) log pa (ya ; W )dya is the marginal entropy.
It is easy to show that I(W ) ≥ 0 and is equal to zero if and only if Ya are independent. As it is proved in Comon (1994), I(W ) is a contrast function for the ICA, meaning, I(W ) = 0 iff W = ΛP A−1 . In contrast to the entropy criterion, the mutual information I(W ) is invariant under componentwise monotonic transformations, scaling, and permutation of Y . Let {gi (y)} be differentiable monotonic functions and Z = (g1 (Y1 ), . . . , gn (Yn )) be the transformed outputs. It is easy to prove that the mutual information I(W ) is the same for Y and Z . This implies that the componentwise nonlinear transformation is not necessary as a preprocessing if we use the MI as a criterion. 3.2 Relation Between ME and MMI. The blind separation algorithm derived by the ME approach is concise and often very effective in practice. However, the ME is not rigorously justified except when the sigmoid transforms happen to be the cumulative distribution functions (cdf’s) of the unknown sources.
1462
Howard Hua Yang and Shun-ichi Amari
It is discussed in Nadal and Parga (1994) and Bell and Sejnowski (1995a) that the ME does not necessarily lead to a statistically independent representation. To justify the ME approach, we study how ME is related to MMI. We prove that when all sources have a zero mean, ME gives locally correct solutions; otherwise it is not true in general. Let us first consider the relation between the entropy and the mutual information. Since Z is a componentwise nonlinear transform of Y , it is easy to obtain H(Z ; W ) = H(Y ; W ) +
n Z X
dya p(ya ; W ) log g0a (ya )
a=1
= −I(W ) +
n X
H(Ya , W ) +
a=1
= −I(W ) −
n X
n Z X
dya p(ya ; W ) log g0a (ya )
a=1
D[p(ya ; W ) k g0 (ya )].
a=1
Hence, we have the following equation: ˜ y ; W ) k g0 (y )] −H(Z ; W ) = I(W ) + D[p(
(3.4)
where g 0 (y ) =
n Y
g0a (ya )
a=1
is regarded as a pdf of an independent random vector provided, for each a, ga (−∞) = 0, ga (∞) = 1, and g0a (y) is the derivative of ga (y). Let {ra (sa )} be the pdf’s of the independent sources Sa (t) at any t. The joint Q pdf of the sources is r(s) = na=1 ra (sa ). When W = A−1 , we have Y = S so that p(ya ; A−1 ) = ra (ya ). Hence, from equation 3.4, it is straightforward that if {ga (ya )} are the cdf’s of the sources, r(y ) = g0 (y ) so that D[r(y ) k g0 (y )] = 0. In this case, H(Z ; A−1 ) = −I(A−1 ) = 0. Since H(Z ; W ) ≤ 0 from equation 3.1, the entropy H(Z ; W ) achieves the global maximum at W = A−1 . Let r˜(y ) be the joint pdf of the scaled and permutated sources y = ΛP s. It is easy to prove that H(Z ; W ) achieves the global maximum at W = ΛP A−1 too. This was also discussed in Nadal and Parga (1994) and Bell and Sejnowski (1995b), from different perspectives. The above formula (see equation 3.4) can also be derived from the formula 24 in Nadal and Parga (1994). We now study the general case where g0 (y ) is different from r(y ). We decompose H(Z ; W ) as −H(Z ; W ) = I(W ) + D(W ) + C(W ),
(3.5)
Adaptive Online Learning Algorithms
1463
where I(W ) = I(y ; W ), ˜ y , W ) k r(y )], D(W ) = D[p( n Z n Z X X C(W ) = dya p(ya ; W ) log ka (ya ) = dy p(y ; W ) log ka (ya ), a=1
a=1
ra (ya ) ka (ya ) = 0 . ga (ya ) Since p(ya , A−1 ) = ra (ya ), I(W ) and D(W ) take the minimum value zero at W = A−1 . To understand the behavior of C(W ), we need to compute its gradient. To facilitate the process for calculating the gradient, we reparameterize W as
W = (I + B )A−1 around W = A−1 so that Y = (I + B )S . We call the diagonal elements {Bbb } the scaling coordinates and the off-diagonal elements {Bbc , b 6= c} the cross-talking coordinates. If ∂C/∂Bbc 6= 0 for some b 6= c at W = A−1 , then the gradient descent algorithm based on the ME would increase crosstalking around this point. If ∂C/∂Bbc = 0 for all b 6= c, even if ∂C/∂Bbb 6= 0 for some b, the ME method still gives a correct solution for any nonlinear functions ga (y). We have the following lemma: Lemma 1.
At B = 0 or W = A−1 , for b 6= c, the gradient of C(W ) is
∂C =− ∂Bbc
µZ
¶ µZ dyc yc r(yc )
¶ dyb r0b (yb )rb (yb ) log kb (yb ) .
(3.6)
The proof is given in appendix A. The diagonal terms ∂C/∂Bbb do not vanish in general. It is easy to see from equation 3.6 that at W = A−1 the derivatives {∂C/∂Bbc , b 6= c} vanish when all the sources have a zero mean, Z dsa sa r(sa ) = 0, or all the sigmoid functions ga are equal to the cdf’s of the sources, that is, g0 (y ) = r(y ). Similarly, reparameterizing the function C(W ) around around W = ΛP A−1 using
W = (I + B )ΛP A−1 , we calculate the gradient ∂C/∂ B at B = 0. It turns out that equation 3.6 still holds after permutating indexes, and the derivatives {∂C/∂Bbc , b 6= c}
1464
Howard Hua Yang and Shun-ichi Amari
vanish when all sources have a zero mean or g0 (y ) = r˜(y ), which is the joint pdf of scaled and permutated sources. Hence, we have the following theorem regarding the relation between the ME and the MMI: Theorem 1. When all sources are zero mean signals, at all solution points W = ΛP A−1 determined by the contrast function MI, the ME algorithms will not update the demixing matrix in the directions of increasing the cross-talking. When one of sources has a nonzero mean, the entropy is not maximized in general at W = ΛP A−1 . Note the entropy is maximized at W = ΛP A−1 when the sigmoid functions are equal to the cdf’s of the scaled and permutated sources. But usually we cannot choose the unknown cdf’s as the sigmoid functions. In some blind separation problems such as the separation of visual signals, the means of mixture are positive. In such cases, we need a preprocessing step to centralize the mixture:
xt − xt = A(st − st ),
(3.7)
P where xt = 1t tu=1 xu and st is similarly defined. The mean subtraction can also be implemented by online thresholds for the sigmoid functions applied to the outputs. Applying a blind separation algorithm, we find a matrix W close to one of solution points ΛP A−1 such that W (xt − xt ) ≈ ΛP (st − st ), and then obtain the sources by
W xt = W (xt − xt ) + W xt ≈ ΛP st . Since every linear mixture model can be reformulated as equation 3.7, without losing generality, we assume that all sources have a zero mean in the rest of this article. 4 Gradient Descent Algorithms Based on ME and MMI The gradient descent algorithms based on ME and MMI are the following: dW ∂H(Z ; W ) =η , dt ∂W
(4.1)
∂I(W ) dW = −η . dt ∂W
(4.2)
However, the algorithms work well if we put a positive-definite operator G
Adaptive Online Learning Algorithms
1465
on matrices: ∂H(Z ; W ) dW = ηG ? dt ∂W
(4.3)
∂I(W ) dW = −ηG ? . dt ∂W
(4.4)
The gradients themselves are not obtainable in general. In practice, they are replaced by the stochastic ones whose expectations give the true gradients. This method, called the stochastic gradient descent method, has been known in the neural network community for a long time (e.g., Amari, 1967). Some remarks on the the stochastic gradient and backpropagation are given in Amari (1993). 4.1 Stochastic Gradient Descent Method Based on ME. The entropy H(Z ; W ) can be written as H(Z ; W ) = H(X ) + log |W | +
n X
E[log g0a (Ya )],
a=1
where |W | = | det(W )|. It is easy to show ∂ log |W | = W −T ∂W where W −T = (W −1 )T . Moreover, P ∂ a=1 log g0a (ya ) = −Φ(y )xT , ∂W where ¶T µ 00 g (y1 ) g00 (yn ) · · · − n0 Φ(y ) = − 10 . g1 (y1 ) gn (yn )
(4.5)
Hence, the expectation of the instantaneous values of the stochastic gradient c ∂H = W −T − Φ(y )xT ∂W
(4.6)
gives the gradient ∂H/∂ W . Based on this, the following algorithm proposed by Bell & Sejnowski (1995a), is obtained from equations 4.3 and 4.6: dW = η(W −T − Φ(y )xT ). dt
(4.7)
Here, the learning equation is in the form of equation 4.3 with the operator G equal to the identity matrix. We show a more adequate G later. When the
1466
Howard Hua Yang and Shun-ichi Amari
Rx transforms are tanh(x) and −∞ exp(− 14 u4 )du, the corresponding activation functions ga are 2tanh(x) and x3 , respectively. We show in appendix B that the natural choice of the operator G should be
G?
∂H ∂H W TW . = ∂W ∂W
This is given by the Riemannian structure of the parameter space of matrices W (Amari, 1997). We then obtain the following algorithm from equation 4.7, dW = η(I − Φ(y )y T )W , dt
(A)
which is not only computationally easy but also very efficient. The learning rule of form (A) was proposed in Cichocki, Unbehaven, Moszczynski, ´ and Rummert (1994) and later justified in Amari, Cichocki, & Yang, (1996). The learning rule (A) has two important properties: the equivariant property (Cardoso & Laheld, 1996) and the property of keeping W (t) from becoming singular, whereas the learning rule (see equation 4.7) does not have these properties. To prove the second property, we define hX , Y i = tr(X T Y ) and calculate ¿ À D E ∂|W | dW d|W | = = |W |W −T , η(I − 8(y )y T )W , dt ∂W dt à ! n X T (1 − φi (yi )yi ) |W |. = ηtr(I − 8(y )y )|W | = η i=1
Then, we obtain an expression for |W (t)| ( Z ) n tX (1 − φi (yi (τ ))yi (τ ))dτ , |W (t)| = |W (0)| exp η
(4.8)
0 i=1
from which we know that |W (t)| 6= 0 if |W (0)| 6= 0. This means that {X ∈ Rn×n : |X | 6= 0} is an invariant set of the flow W (t) driven by the learning rule (A). It is worth pointing out that algorithm 4.7 is equivalent to the sequential maximum likelihood algorithm when ga are taken to be equal to ra . For r(s) = r1 (s1 ) . . . rn (sn ), the likelihood function is log p(x; W ) = log(r(W x)|W |) =
n X a=1
log ra (ya ) + log |W |,
Adaptive Online Learning Algorithms
1467
from which we have ∂ log p(x; W ) r0 (ya ) xk + (W −T )ak , = a ∂wak ra (ya ) and in the matrix form, ∂ log p(x; W ) = W −T − 9(y )xT , ∂W where
¶T µ 0 r (y1 ) r0 (yn ) ,...,− n 9(y ) = − 1 r1 (y1 ) rn (yn )
and wak is the (a, k) elements of W . Using the natural (Riemannian) gradient descent method to maximize log p(x; W ), we have the sequential maximum likelihood algorithm of the type in equation (A): dW = µ(I − 9(y )y T )W . dt
(4.9)
From the point of view of asymptotic statistics, this gives the Fisher-efficient estimator. However, we do not know ra or 9(y ), so we need to use the adaptive method of estimating 9(y ) in order to implement equation 4.9. 4.2 Approximation of MI. To implement the MMI, we need some formulas to approximate the MI. Generally it is difficult to obtain the explicit form of the MI since the pdf’s of the outputs are unknown. The main difficulty is calculating the marginal entropy H(Ya ; W ) explicitly. In order to estimate the marginal entropy, the Edgeworth expansion and the GramCharlier expansion were used in Comon (1994) and Amari et al. (1996), respectively, to approximate the pdf’s of the outputs. The two expansions are formally identical except for the order of summation, but the truncated expansions are different. We show later by computer simulation that the Gram-Charlier expansion is superior to the Edgeworth expansion for blind separation. In order to calculate each H(Ya ; W ) in equation 3.3, we shall apply the Gram-Charlier expansion to approximate the pdf pa (ya ). Since we assume E[s] = 0, we have E[y] = E[W As] = 0 and E[ya ] = 0. To simplify the calculations for the entropy H(ya ; W ), to be carried out later, we assume ma2 = E[y2a ] = 1 for all a. Under this assumption, we approximate each marginal entropy first and then obtain a formula for the MI. The zero mean and unit variance assumption makes it easier to calculate the MI. However, the formula obtained for the MI can be used in more general cases. When the components of Y have nonzero means and different variances, we can shift and scale Y to obtain
Y˜ = Σ−1 (Y − m),
1468
Howard Hua Yang and Shun-ichi Amari
where Σ = diag(σ1 , . . . , σn ), σa is the variance of Ya , and m = E[Y ]. The components of Y˜ have the zero mean and the unit variance. Due to the invariant property of the MI, the formula for computing the MI of Y˜ is applicable to compute the MI of Y . We use the following truncated Gram-Charlier expansion (Stuart & Ord, 1994) to approximate the pdf pa (ya ): ½ ¾ κa κa pa (ya ) ≈ α(ya ) 1 + 3 H3 (ya ) + 4 H4 (ya ) , 3! 4!
(4.10)
where κ3a = ma3 and κ4a = ma4 − 3 are the third- and fourth-order cumulants of Ya , respectively, and mak = E[yka ] is the kth order moment of Ya , α(y) = √1 e− 2π
y2 2
, and Hk (y), k = 1, 2, . . . , are the Chebyshev-Hermite polynomials defined by the identity (−1)k
dk α(y) = Hk (y)α(y). dyk
The Gram-Charlier expansion clearly shows how κ3a and κ4a affect the approximation of the pdf. The last two terms in equation 4.10 characterize the deviations from the gaussian distributions. To apply equation 4.10 to calculate H(Ya ), we need the following integrals: Z α(y)(H3 (y))2 H4 (y)dy = 3!3
(4.11)
α(y)(H4 (y))3 dy = 123 .
(4.12)
Z
These integrals can be obtained easily from the following results for the moments of a gaussian random variable N(0, 1): Z
Z y2k+1 α(y)dy = 0,
y2k α(y)dy = 1 · 3 · · · (2k − 1).
(4.13)
By using the expansion log(1 + y) ≈ y −
y2 + O(y3 ) 2
and taking account of the orthogonality relations of the Chebyshev-Hermite polynomials and equations 4.11 and 4.12, the entropy H(Ya ; W ) is approximated by H(Ya ; W ) ≈
(κ a )2 (κ a )2 3 1 1 log(2πe) − 3 − 4 + (κ3a )2 κ4a + (κ4a )3 . (4.14) 2 2 · 3! 2 · 4! 8 16
Adaptive Online Learning Algorithms
1469
Let F(κ3a , κ4a ) denote the right-hand side of the approximation in equation 4.14. From Y = W X , we have H(Y ) = H(X ) + log |W |. Applying this expression and equation 4.14 to equation 3.3, we have I(Y ; W) ≈ −H(X ) − log |W | +
n X n log(2π e) + F(κ3a , κ4a ). 2 a=1
(4.15)
On the other hand, the following approximation of the marginal entropy (Comon, 1994) is obtained by using the Edgeworth expansion of pa (ya ): H(Ya ; W ) ≈
1 1 1 log(2πe) − (κ3a )2 − (κ a )2 2 2 · 3! 2 · 4! 4 −
7 a 4 1 a 2 a (κ ) + (κ3 ) κ4 . 48 3 8
(4.16)
The terms in the Edgeworth expansion are arranged in a decreasing order by assuming that κia is of order n(2−i)/2 . This is true when Ya is a sum of n independent random variables and the number n is large. To obtain the formula in equation 4.16, the truncated Edgeworth expansion is used by neglecting those high-order terms higher than 1/n2 . This is a standard method of asymptotic statistics but is not valid in the present context because of the fixed n. Moreover, at around W = A−1 , Ya is close to Sa , which is not a sum of independent random variables, so that the Edgeworth expansion is not justified in this case. The learning algorithm derived from equation 4.16 does not work well in our simulations. The reason is that the cubic of the fourth-order cumulant (κ4a )3 plays an important role but is omitted in the equation. It is a small term from the point of view of the Edgeworth expansion but not a small term in the present context. In our simulations, we deal with nearly symmetric source signals, so (κ3a )4 is small. In this case, we should use the following entropy approximation instead of equation 4.16: H(Ya ; W ) ≈
1 1 1 log(2πe) − (κ3a )2 − (κ a )2 2 2 · 3! 2 · 4! 4 1 1 + (κ3a )2 κ4a + (κ4a )3 . 8 48
(4.17)
The learning algorithm derived from the above formula works almost as well as equation 4.14. Note that the expansion formula can be more general. Instead of the gaussian kernel, we can use another standard distribution function as the kernel to expand pa (ya ) as: ( ) X µi Ki (y) , pa (y) = β(y) 1 + i
1470
Howard Hua Yang and Shun-ichi Amari
where Ki (y) are the orthogonal polynomials with respect to the kernel β(y). For example, the expansion corresponding to the kernel ( e−y , y ≥ 0 β(y) = 0, y>0 will be better than the Gram-Charlier expansion in approximating the pdf of a positive random variable. The orthogonal polynomials corresponding to this kernel are Laguerre polynomials. 4.3 Stochastic Gradient Method Based on MMI. To obtain the stochastic gradient descent algorithm to update W recursively, we need to calculate the gradient of I(W ) with respect to W . Since the exact function form of I(W ) is unknown, we calculate the derivative using the approximated MI. Since ∂κ3a = 3E[y2a xk ] ∂wak and ∂κ4a = 4E[y3a xk ], ∂wak we obtain the following from equation 4.15: ∂I(W ) ≈ −(W −T )ak + f (κ3a , κ4a )E[y2a xk ] + g(κ3a , κ4a )E[y3a xk ], ∂wak
(4.18)
where 9 1 f (y, z) = − y + yz, 2 4
1 3 3 g(y, z) = − z + y2 + z2 . 6 2 4
(4.19)
Removing the expectation symbol E(·) in equation 4.18 and writing it in a matrix form, we obtain the stochastic gradient: b ∂I ∂W
= −W −T + (f (κ3 , κ4 ) ◦ y 2 )xT + (g (κ3 , κ4 ) ◦ y 3 )xT ,
(4.20)
whose expectation gives ∂I/∂ W , where ◦ denotes the Hadamard product of two vectors
f ◦ y = ( f1 y1 , . . . , fn yn )T , and y k = [(y1 )k , . . . , (yn )k ]T for k = 2, 3, f (κ3 , κ4 ) = [ f (κ31 , κ41 ), . . . , f (κ3n , κ4n )]T , g (κ3 , κ4 ) = [g(κ31 , κ41 ), . . . , g(κ3n , κ4n )]T .
Adaptive Online Learning Algorithms
1471
We need to evaluate κ3 and κ4 for implementing this idea. If κ3 and κ4 are known, using the natural gradient descent method to minimize I(W ), from equation 4.20 we obtain the algorithm based on MMI: dW = η(t){I − Φκ (y )y T }W , dt
(A0 )
where
Φκ (y ) = h(y ; κ3 , κ4 ) = f (κ3 , κ4 ) ◦ y 2 + g (κ3 , κ4 ) ◦ y 3 .
(4.21)
Note that each component of Φκ (y ) is a third-order polynomial. In partic11 3 y . ular, if κ3i = 0 and κ4i = −1, the nonlinearity Φκ (y ) becomes 12 0 The algorithm (A ) is based on the entropy formula (see equation 4.14) derived from the Gram-Charlier expansion. Using the entropy formula (see equation 4.17) based on the Edgeworth expansion rather than the formula in equation 4.14, we obtain an algorithm, to be referred to as (A00 ). This algorithm has the same form as the algorithm (A0 ) except for the definition of the functions f and g. For the Edgeworth expansion-based algorithm (A00 ), instead of equation 4.19 we use the following definition for f and g: 7 3 1 f (y, z) = − y + y3 + yz, 2 4 4
1 1 g(y, z) = − z + y2 . 6 2
(4.22)
So the algorithm of type (A) is the common form for both the ME algorithm and the MMI algorithm. In the ME approach, the function Φ(y ) in type (A) is determined by the sigmoid transforms {ga (ya )}. If we use the cdf’s of the source distributions, we need to evaluate the unknown ra (sa ). In the MMI approach, the function Φκ (y ) = h(y ; κ3 , κ4 ) depends on the cumulants κ3a and κ4a , and the approximation procedures should be taken for computing the entropy. It is better to choose g0a equal to unknown ra or to choose κ3a and κ4a equal to the true values. However, it is verified by numerous simulations that even if the g0a or κ3a and κ4a are misspecified, the algorithms (A) and (A0 ) may still converge to the true separation matrix in many cases. 5 Adaptive Online Learning Algorithms In order to implement an efficient learning algorithm, we propose the following adaptive algorithm to trace κ3 and κ4 together with (A0 ): dκ3a = −µ(t)(κ3a − (ya )3 ), dt dκ4a = −µ(t)(κ4a − (ya )4 + 3), a = 1, . . . , n, dt where µ(t) is a learning rate function.
(5.1)
1472
Howard Hua Yang and Shun-ichi Amari
It is possible to use the same adaptive scheme for implementing the efficient ME algorithm. In this case, we use the Gram-Charlier expansion (see equation 4.10) to evaluate ra (ya ) or 9(y ). The performances of the algorithms (A0 ) with equation 5.1 will be compared with that of the algorithms (A) without adaptation in the next section by simulation. The simulation results demonstrate that the adaptive algorithm (A0 ) needs fewer samples to reach the separation than the fixed algorithm (A). 6 Simulation In order to show the effect of the online learning algorithms (A0 ) with equation 5.1 and compare its performance with that of the fixed algorithm (A), we apply these algorithms to separate sources from the mixture of modulating signals or the mixture of visual signals. For the algorithm A, we can use various fixed Φ(y ). For example, we can use one of the following functions 1. f (y) = y3 . 2. f (y) = 2 tanh(y). 3. f (y) = 34 y11 +
15 9 4 y
−
14 7 3 y
−
29 5 4 y
+
29 3 4 y
to define Φ(y ) = ( f (y1 ), . . . , f (yn ))T . Function 3 was used in Amari et al. (1996) by replacing κ3a and κ4a by their instantaneous values: κ3a ∼ (ya )3 ,
κ4a ∼ (ya )4 − 3.
We also compare the adaptive algorithm obtained from the Gram-Charlier expansion (A0 ) with the adaptive algorithm obtained from the Edgeworth expansion (A00 ). 6.1 Modulating Source Signals. Assume that the following five unknown sources are mixed by a mixing matrix A randomly chosen:
s(t) = [sign(cos(2π155t)), sin(2π 800t), sin(2π 300t + 6 cos(2π 60t)), sin(2π 90t), n(t)]T ,
(6.1)
where four components of s(t) are modulating data signals, and one component n(t) is a noise source uniformly distributed in [−1, +1]. The elements of the mixing matrix A are randomly chosen subject to the uniform distribution in [−1, +1] such that A is nonsingular. We use the cross-talking error defined below to measure the performance of the algorithms: ! Ã n n n n X X X X |pij | |pij | − 1 + −1 E= maxk |pik | maxk |pkj | i=1 i=1 j=1 j=1
Adaptive Online Learning Algorithms
1473
where P = (pij ) = WA. The mixed signals are sampled at the sampling rate of 10K Hz. Taking 2000 samples, we simulate the algorithm (A) with fixed Φ defined by the functions 1 and 2, and the algorithms (A0 ) and (A00 ) with the adaptive Φκ defined by equation 4.21 for which the functions f and g are defined by equations 4.19 and 4.22, respectively. The learning rate is a constant. For each algorithm, we select a nearly optimal learning rate. The initial matrix is chosen as W (0) = 0.5I in all simulations. The sources, mixtures, and outputs obtained by using the different algorithms are displayed in Figure 1. In each part, four out of five components are shifted upward from the zero level for a better illustration. The sources, mixtures, and all the outputs shown are within the same time window [0.1, 0.2]. It is shown in Figure 1 that the algorithms with adaptive Φκ need fewer samples than those with fixed Φ to achieve separation. The cross-talking errors are plotted in Figure 2, which shows that the adaptive MMI algorithms (A0 ) and (A00 ) outperform the ME algorithm (A) with either fixed function 1 or 2. The cross-talking error for each algorithm can be decreased further if more observations are taken and an exponentially decreasing learning rate is chosen. 6.2 Image Data. In this article, the two basic algorithms (A) and (A0 ) are derived by ME and MMI. The assumption that sources are independent is needed as a sufficient condition for identifiability. To check whether the performance of these algorithms is sensitive to the independence assumption, we test these algorithms on correlated data such as images. Images are usually correlated and nonstationary in the spatial coordinate. In the top row in Figure 3, we have six image sources {si (x, y), i = 1, . . . , 6} consisting of five natural images and one uniformly distributed noise. All images have the same size with N pixels in each of them. From each image, a pixel sequence is taken by scanning the image in a certain order, for example, row by row or column by column. These sources have the following correlation coefficient matrix:
R_s = (c_ij) =

     1.0000   0.0819  −0.1027  −0.0616   0.2154   0.0106
     0.0819   1.0000   0.3158   0.0899  −0.0536  −0.0041
    −0.1027   0.3158   1.0000   0.3194  −0.4410  −0.0102
    −0.0616   0.0899   0.3194   1.0000  −0.0692  −0.0058
     0.2154  −0.0536  −0.4410  −0.0692   1.0000   0.0215
     0.0106  −0.0041  −0.0102  −0.0058   0.0215   1.0000 ,

where the correlation coefficients c_ij are defined by

c_ij = Σ_{x,y} (s_i(x, y) − s̄_i)(s_j(x, y) − s̄_j) / (N σ_i σ_j).
Figure 1: The comparison of the separation by the algorithms (A), (A′), and (A″). (a) The sources. (b) The mixtures. (c) The separation by (A′) using learning rate µ = 60. (d) The separation by (A″) using learning rate µ = 60. (e) The separation by (A) using x^3 and µ = 65. (f) The separation by (A) using 2 tanh(x) and µ = 30.
Figure 2: Comparison of the performances of (A) (using x^3 or 2 tanh(x)), (A′), and (A″).
Here s̄_i and σ_i² denote the mean and variance of the ith source:

s̄_i = (1/N) Σ_{x,y} s_i(x, y),   σ_i² = (1/N) Σ_{x,y} (s_i(x, y) − s̄_i)².
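The coefficients can be computed with a short sketch (illustrative Python, not from the article; S holds one scanned pixel sequence per row):

    import numpy as np

    def correlation_matrix(S):
        # c_ij as defined above; S has shape (num_images, N).
        N = S.shape[1]
        Z = S - S.mean(axis=1, keepdims=True)                   # subtract s_bar_i
        Z = Z / np.sqrt((Z ** 2).mean(axis=1, keepdims=True))   # divide by sigma_i
        return (Z @ Z.T) / N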
It is shown by the above source correlation matrix that each natural image is correlated with one or more other natural images. Mixing the image sources by the following matrix,

A =
    1.0439  1.0943  0.9927  1.0955  1.0373  0.9722
    1.0656  0.9655  0.9406  1.0478  1.0349  1.0965
    1.0913  0.9555  0.9498  0.9268  0.9377  0.9059
    1.0232  1.0143  0.9684  1.0440  1.0926  1.0569
    0.9745  0.9533  1.0165  1.0891  1.0751  1.0321
    0.9114  0.9026  1.0283  0.9928  1.0711  1.0380 ,

we obtain six mixed pixel sequences; their images are shown in the second
Figure 3: Separation of mixed images. The images in the first row are sources; those in the second row are the mixed images; those in the third row are separated images.
row in Figure 3. The mixed images are highly correlated, with the correlation coefficient matrix

R_m =
    1.0000  0.9966  0.9988  0.9982  0.9985  0.9968
    0.9966  1.0000  0.9963  0.9995  0.9989  0.9989
    0.9988  0.9963  1.0000  0.9972  0.9968  0.9950
    0.9982  0.9995  0.9972  1.0000  0.9996  0.9993
    0.9985  0.9989  0.9968  0.9996  1.0000  0.9996
    0.9968  0.9989  0.9950  0.9993  0.9996  1.0000 .
To compute the demixing matrix, we apply the algorithm (A′) and use only 20 percent of the mixed data. The separation result is shown in the third row in Figure 3. If the outputs are arranged in the same order as the sources, the
correlation coefficient matrix of the output is

R_o =
     1.0000  −0.0041  −0.1124  −0.1774   0.0663   0.1785
    −0.0041   1.0000   0.1469   0.0344  −0.1446  −0.0271
    −0.1124   0.1469   1.0000   0.1160  −0.1683  −0.0523
    −0.1774   0.0344   0.1160   1.0000  −0.1258   0.0946
     0.0663  −0.1446  −0.1683  −0.1258   1.0000  −0.1274
     0.1785  −0.0271  −0.0523   0.0946  −0.1274   1.0000 .
Applying algorithm (A″), we obtain a separation result similar to that in Figure 3. It is not unusual that dependent sources can sometimes be separated by the algorithms (A′) and (A″), since the MI of the mixtures is usually much higher than the MI of the sources. Applying algorithm (A′) or (A″) decreases the MI of the transformed mixture; when it reaches a level similar to that of the sources, the dependent sources can be extracted. In this case, some prior knowledge about the sources (e.g., human face and speech) is needed to select the separation results. 7 Conclusion The blind separation algorithm based on the ME approach is concise and often very effective in practice, but it is not yet fully justified. The MMI approach, on the other hand, is based on the contrast function MI, which is well justified. We have studied the relation between the ME and MMI approaches and proved that the ME will not update the demixing matrix in directions that increase the cross-talking at the solution points when all sources are zero-mean signals. When one of the sources has a nonzero mean, the entropy is in general not maximized at the solution points. This result suggests that the mixture model should be reformulated so that all sources have zero mean in order to use the ME approach. The two basic MMI algorithms (A′) and (A″) have been derived based on the minimization of the MI of the outputs. The contrast function MI is impractical unless we approximate it; the difficulty is evaluating the marginal entropy of the outputs. The Gram-Charlier expansion and the Edgeworth expansion are applied to estimate the MI. The natural gradient method is used to minimize the estimated MI to obtain the basic algorithm. These algorithms have been tested for separating unknown source signals mixed by a randomly chosen mixing matrix, and the validity of the algorithms has been verified. Because of the approximation error, the MMI algorithms have limitations; they cannot replace the ME. There are cases, such as the experiment in Bell and Sejnowski (1995a) using speech signals, in which the ME algorithm performs better. Although the ME and the MMI result in very similar algorithms, it is unknown whether the two approaches are equivalent globally. The activation functions in algorithm (A) and the cumulants in algorithm (A′) should be determined adequately.
We proposed an adaptive algorithm to estimate these cumulants. It is observed from many simulations that the adaptive online algorithms (A′) and (A″) need fewer samples than the online algorithm (A) (with nonlinear functions such as 2 tanh(x) and x^3) to reach the separation. The test of algorithms (A′) and (A″) using the image data suggests that even dependent sources can sometimes be separated if some prior knowledge is used to select the separation results. The reason is that the MI of the mixtures is usually higher than that of the sources, and the algorithms (A′) and (A″) can decrease the MI from a high level to a lower one. Appendix A: Proof of Lemma 1 Let ‖B‖ ≤ ε < 1. Then |I + B|^{-1} = 1 − tr(B) + O(ε^2) and (I + B)^{-1} = I − B + O(ε^2). So

p(y; W) = |I + B|^{-1} r((I + B)^{-1} y)
        = (1 − tr(B)) ∏_{a=1}^{n} r_a( y_a − Σ_c B_ac y_c + O(ε^2) )
        = (1 − tr(B)) [ ∏_{a=1}^{n} r_a(y_a) − Σ_a r′_a(y_a) ( ∏_{b≠a} r_b(y_b) ) Σ_c B_ac y_c ] + O(ε^2)
        = r(y) − [ tr(B) + Σ_{a,c} l′_a(y_a) B_ac y_c ] r(y) + O(ε^2),   (A.1)
where l′_a(y_a) = r′_a(y_a) / r_a(y_a).
From equation A.1, we have the linear expansion of C(W) = C((I + B)A^{-1}) at B = 0,

C((I + B)A^{-1}) ≈ −Σ_a ∫ dy [ tr(B) + Σ_{b,c} l′_b(y_b) B_bc y_c ] r(y) log k_a(y_a),
from which we calculate ∂C/∂B at B = 0. When b ≠ c,

∂C/∂B_bc = −Σ_a ∫ dy y_c r(y) l′_b(y_b) log k_a(y_a)
         = −Σ_{a≠c, a≠b} ( ∫ dy_c y_c r_c(y_c) ) ( ∫ dy_b r′_b(y_b) ) D[r_a(y_a) ‖ g′_a(y_a)]
           − ∫ dy y_c r_c(y_c) ( ∏_{i≠c} r_i(y_i) ) l′_b(y_b) log k_c(y_c)
           − ( ∫ dy_c y_c r_c(y_c) ) ( ∫ dy_b l′_b(y_b) r_b(y_b) log k_b(y_b) )
         = −( ∫ dy_c y_c r_c(y_c) ) ( ∫ dy_b r′_b(y_b) log k_b(y_b) ),   (A.2)

since ∫ dy r′_b(y) = 0 and ( ∏_{i≠c} r_i(y_i) ) l′_b(y_b) = ( ∏_{i≠c, i≠b} r_i(y_i) ) r′_b(y_b).
Appendix B: Natural Gradient Denote Gl(n) = {X ∈ R^{n×n} : det(X) ≠ 0}. Let W ∈ Gl(n) and φ(W) be a scalar function. Before we define the natural gradient of φ(W), we recapitulate the gradient in a Riemannian manifold S in the tensorial form. Let x = (x^i), i = 1, …, m, be its (local) coordinates. The gradient of φ(x) is written as

∇φ = ( ∂φ/∂x^i ) = (a_i),

which is a linear operator (or a covariant vector) mapping a vector (contravariant vector) X = (X^i) to real values in R,

∇φ ∘ X = Σ_i a_i X^i.

The manifold S has the Riemannian metric g_ij(x), by which the inner product of X and Y is written as

⟨X, Y⟩ = Σ_{i,j} g_ij X^i Y^j.

Let ∇̃φ = ã be the contravariant version of a = ∇φ, such that

∇φ ∘ X = ⟨∇̃φ, X⟩,

or

Σ_i a_i X^i = Σ_{i,j} g_ij ã^i X^j.

So we have

ã^i = Σ_j g^{ij} a_j,

where (g^{ij}) is the inverse matrix of (g_ij). It is possible to give a more direct meaning to ã^i. We search for the steepest direction of φ(x) at point x. Let ε > 0 be a small constant, and we study the direction d such that

d̃ = argmax_d φ(x + εd),

under the constraint that d is a unit vector,

Σ_{i,j} g_ij d^i d^j = 1.

Then it is easy to calculate that d̃^i = ã^i. It is natural to define the gradient flow in S by

ẋ^i = −η (∇̃φ)^i = −η Σ_j g^{ij} ∂φ(x)/∂x^j.
Now we return to our manifold Gl(n) of matrices. We define the natural gradient of φ(W) by imposing a Riemannian structure on Gl(n). The manifold Gl(n) of matrices has the Lie group structure: any A ∈ Gl(n) maps Gl(n) to Gl(n) by W → WA, where A = E (the unit matrix) is the unit. We impose that the Riemannian structure should be invariant under this operation by A. More definitely, at W, we consider a small deviation of W, W + εZ, where ε is infinitesimally small. Here, Z is the tangent vector at W (or an element of the Lie algebra). We introduce an inner product ⟨Z, Z⟩_W at W. Now, W is mapped to the unit matrix E by the operation of A = W^{-1}. Then W and W + εZ are mapped to WA = WW^{-1} = E and (W + εZ)W^{-1} = E + εZW^{-1}, respectively. So the tangent vector Z at W corresponds to the tangent vector ZW^{-1} at E. They should have the same length (the Lie group invariance: the same as the invariance under the basis change of vectors on which a matrix acts). So

⟨Z, Z⟩_W = ⟨ZW^{-1}, ZW^{-1}⟩_E.

It is very natural to define the inner product at E by

⟨Y, Y⟩_E = tr(Y^T Y) = Σ_{i,j} (Y_ij)^2,

because we have no specific preference among the components at E. Then we have

⟨Z, Z⟩_W = ⟨ZW^{-1}, ZW^{-1}⟩_E = tr(W^{-T} Z^T Z W^{-1}).

On the other hand, the gradient operator ∂φ is defined by

φ(W + εZ) = φ(W) + ε ∂φ ∘ Z,   ∂φ ∘ Z = tr(∂φ^T Z) = Σ_{i,j} (∂φ/∂W_ij) Z_ij.

Hence, the contravariant version ∂̃φ is given by

∂φ ∘ Z = ⟨∂̃φ, Z⟩_W = tr(W^{-T} ∂̃φ^T Z W^{-1}) = tr(W^{-1} W^{-T} ∂̃φ^T Z),

giving

∂φ^T = W^{-1} W^{-T} ∂̃φ^T,

that is,

∂̃φ = ∂φ W^T W.

Hence, the natural gradient descent algorithm should be

dW/dt = −η (∇φ) W^T W.

The natural gradient (∇φ)W^T W is equivalent to the relative gradient introduced in Cardoso and Laheld (1996). Using the natural gradient or the relative gradient leads to blind separation algorithms with the equivariant property; that is, the performance of the algorithms is independent of the scaling of the sources.
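As an illustration (Python sketch, not from the original article), here is one Euler step of this flow, together with the equivariant blind-separation form obtained when the relative gradient is used; Phi stands for the activation vector of the algorithms above, and the function names are our own:

    import numpy as np

    def natural_gradient_step(W, grad_phi, eta):
        # One Euler step of dW/dt = -eta * (grad phi) W^T W.
        return W - eta * grad_phi @ W.T @ W

    def separation_step(W, x, Phi, eta):
        # Equivariant relative/natural-gradient update for blind separation,
        # W <- W + eta * (I - Phi(y) y^T) W  (Cardoso & Laheld, 1996).
        y = W @ x
        return W + eta * (np.eye(W.shape[0]) - np.outer(Phi(y), y)) @ W

Because the update multiplies the gradient by W^T W, rescaling the sources rescales the trajectory consistently, which is the equivariant property noted above.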
Acknowledgments We thank the reviewers for their constructive comments on the manuscript and acknowledge the useful discussions with J.-F. Cardoso and A. Cichocki. References

Amari, S. (1967). A theory of adaptive pattern classifiers. IEEE Trans. on Electronic Computers, EC-16(3), 299–307.

Amari, S. (1993). Backpropagation and stochastic gradient descent method. Neurocomputing, 5, 185–196.

Amari, S. (1997). Neural learning in structured parameter spaces—natural Riemannian gradient. In M. C. Mozer, M. I. Jordan, & T. Petsche (Eds.), Advances in neural information processing systems, 9. Cambridge, MA: MIT Press.

Amari, S., Cichocki, A., & Yang, H. H. (1996). A new learning algorithm for blind signal separation. In D. S. Touretzky, M. C. Mozer, & M. E. Hasselmo (Eds.), Advances in neural information processing systems, 8 (pp. 757–763). Cambridge, MA: MIT Press.
Barlow, H. B., & Földiák, P. (1989). Adaptation and decorrelation in the cortex. In C. Miall, R. M. Durbin, & G. J. Mitchison (Eds.), The computing neuron (pp. 54–72). Reading, MA: Addison-Wesley.

Bell, A. J., & Sejnowski, T. J. (1995a). An information-maximisation approach to blind separation and blind deconvolution. Neural Computation, 7, 1129–1159.

Bell, A. J., & Sejnowski, T. J. (1995b). Fast blind separation based on information theory. In Proceedings 1995 International Symposium on Nonlinear Theory and Applications (Vol. 1, pp. 43–47). Las Vegas.

Cardoso, J.-F., & Laheld, B. (1996). Equivariant adaptive source separation. IEEE Trans. on Signal Processing, 44(12), 3017–3030.

Cichocki, A., Unbehauen, R., Moszczyński, L., & Rummert, E. (1994). A new online adaptive learning algorithm for blind separation of source signals. In ISANN94 (pp. 406–411). Taiwan.

Comon, P. (1994). Independent component analysis, a new concept? Signal Processing, 36, 287–314.

Deco, G., & Brauer, W. (1996). Nonlinear higher-order statistical decorrelation by volume-conserving neural architectures. Neural Networks, 8(4), 525–535.

Nadal, J. P., & Parga, N. (1994). Nonlinear neurons in the low noise limit: A factorial code maximizes information transfer. Network, 5, 561–581.

Stuart, A., & Ord, J. K. (1994). Kendall's advanced theory of statistics. London: Edward Arnold.
Received August 19, 1996, accepted February 21, 1997.
Communicated by Anthony Bell
A Fast Fixed-Point Algorithm for Independent Component Analysis Aapo Hyvärinen, Erkki Oja Helsinki University of Technology, Laboratory of Computer and Information Science, Espoo, Finland
We introduce a novel fast algorithm for independent component analysis, which can be used for blind source separation and feature extraction. We show how a neural network learning rule can be transformed into a fixed-point iteration, which provides an algorithm that is very simple, does not depend on any user-defined parameters, and is fast to converge to the most accurate solution allowed by the data. The algorithm finds, one at a time, all nongaussian independent components, regardless of their probability distributions. The computations can be performed in either batch mode or a semiadaptive manner. The convergence of the algorithm is rigorously proved, and the convergence speed is shown to be cubic. Some comparisons to gradient-based algorithms are made, showing that the new algorithm is usually 10 to 100 times faster, sometimes giving the solution in just a few iterations. 1 Introduction Independent component analysis (ICA) (Comon, 1994; Jutten & Herault, 1991) is a signal processing technique whose goal is to express a set of random variables as linear combinations of statistically independent component variables. Two interesting applications of ICA are blind source separation and feature extraction. In the simplest form of ICA (Comon, 1994), we observe m scalar random variables v_1, v_2, …, v_m, which are assumed to be linear combinations of n unknown independent components s_1, s_2, …, s_n that are mutually statistically independent and zero mean. In addition, we must assume n ≤ m. Let us arrange the observed variables v_i into a vector v = (v_1, v_2, …, v_m)^T and the component variables s_i into a vector s, respectively; then the linear relationship is given by v = As.
(1.1)
Here, A is an unknown m × n matrix of full rank, called the mixing matrix. The basic problem of ICA is then to estimate the original components s_i from the mixtures v_j or, equivalently, to estimate the mixing matrix A.

Neural Computation 9, 1483–1492 (1997)
© 1997 Massachusetts Institute of Technology
The fundamental restriction of the model is that we can estimate only nongaussian independent components (except if just one of the independent components is gaussian). Moreover, neither the energies nor the signs of the independent components can be estimated, because any constant multiplying an independent component in equation 1.1 could be cancelled by dividing the corresponding column of the mixing matrix A by the same constant. For mathematical convenience, we define here that the independent components s_i have unit variance. This makes the (nongaussian) independent components unique, up to their signs. Note that no order is defined between the independent components. In blind source separation (Cardoso, 1990; Jutten & Herault, 1991), the observed values of v correspond to a realization of an m-dimensional discrete-time signal v(t), t = 1, 2, …. Then the components s_i(t) are called source signals, which are usually original, uncorrupted signals or noise sources. Another possible application of ICA is feature extraction (Bell & Sejnowski, 1996, in press; Hurri, Hyvärinen, Karhunen, & Oja, 1996). Then the columns of A represent features, and s_i signals the presence and the amplitude of the ith feature in the observed data v. The problem of estimating the matrix A in equation 1.1 can be somewhat simplified by performing a preliminary sphering or prewhitening of the data v (Cardoso, 1990; Comon, 1994; Oja & Karhunen, 1995). The observed vector v is linearly transformed to a vector x = Mv such that its elements x_i are mutually uncorrelated and all have unit variance. Thus the correlation matrix of x equals unity: E{xx^T} = I. This transformation is always possible and can be accomplished by classical principal component analysis. At the same time, the dimensionality of the data should be reduced so that the dimension of the transformed data vector x equals n, the number of independent components. This also has the effect of reducing noise. After the transformation we have x = Mv = MAs = Bs,
(1.2)
where B = MA is an orthogonal matrix due to our assumptions on the components s_i: it holds that E{xx^T} = B E{ss^T} B^T = BB^T = I. Thus we have reduced the problem of finding an arbitrary full-rank matrix A to the simpler problem of finding an orthogonal matrix B, which then gives s = B^T x. If the ith column of B is denoted b_i, then the ith independent component can be computed from the observed x as s_i = (b_i)^T x.
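A sphering transform of this kind can be sketched as follows (illustrative Python, not from the article; the data layout, one column per observation, and the helper name are our choices):

    import numpy as np

    def whiten(v, n):
        # PCA whitening: return x = M v with E{x x^T} = I, with the
        # dimension reduced to n, the number of independent components.
        v = v - v.mean(axis=1, keepdims=True)
        eigval, E = np.linalg.eigh(np.cov(v))
        top = np.argsort(eigval)[::-1][:n]
        M = np.diag(eigval[top] ** -0.5) @ E[:, top].T
        return M @ v, M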
The current algorithms for ICA can be roughly divided into two categories. The algorithms in the first category (Cardoso, 1992; Comon, 1994) rely on batch computations minimizing or maximizing some relevant criterion functions. The problem with these algorithms is that they require very complex matrix or tensorial operations. The second category contains adaptive algorithms often based on stochastic gradient methods, which may have implementations in neural networks (Amari, Cichocki, & Yang, 1996; Bell & Sejnowski, 1995; Delfosse & Loubaton, 1995; Hyvärinen & Oja, 1996; Jutten & Herault, 1991; Moreau & Macchi, 1993; Oja & Karhunen, 1995). The main problem with this category is the slow convergence and the fact that the convergence depends crucially on the correct choice of the learning rate parameters. In this article we introduce a novel approach for performing the computations needed in ICA.1 We introduce an algorithm using a very simple yet highly efficient fixed-point iteration scheme for finding the local extrema of the kurtosis of a linear combination of the observed variables. It is well known (Delfosse & Loubaton, 1995) that finding the local extrema of kurtosis is equivalent to estimating the nongaussian independent components. However, the convergence of our algorithm will be proved independent of these well-known results. The computations can be performed either in batch mode or semiadaptively. The new algorithm is introduced and analyzed in section 3, after presenting in section 2 a short review of kurtosis minimization-maximization and its relation to neural network type learning rules. 2 ICA by Kurtosis Minimization and Maximization Most suggested solutions to the ICA problem use the fourth-order cumulant or kurtosis of the signals, defined for a zero-mean random variable v as

kurt(v) = E{v^4} − 3(E{v^2})^2.
(2.1)
For a gaussian random variable, kurtosis is zero; for densities peaked at zero, it is positive; and for flatter densities, negative. Note that for two independent random variables v_1 and v_2 and for a scalar α, it holds that kurt(v_1 + v_2) = kurt(v_1) + kurt(v_2) and kurt(αv_1) = α^4 kurt(v_1). Let us search for a linear combination of the sphered observations x_i, say w^T x, such that it has maximal or minimal kurtosis. Obviously, this is meaningful only if the norm of w is somehow bounded; let us assume ‖w‖ = 1. Using the orthogonal mixing matrix B, let us define z = B^T w. Then also ‖z‖ = 1. Using equation 1.2 and the properties of the kurtosis, we have

kurt(w^T x) = kurt(w^T Bs) = kurt(z^T s) = Σ_{i=1}^{n} z_i^4 kurt(s_i).   (2.2)
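A quick numerical check of equation 2.1 and of the properties just stated (illustrative Python; the sample sizes and seed are arbitrary choices of ours):

    import numpy as np

    def kurt(v):
        # Sample version of equation 2.1 for a zero-mean signal.
        return np.mean(v ** 4) - 3 * np.mean(v ** 2) ** 2

    rng = np.random.default_rng(0)
    g = rng.normal(size=100_000)          # gaussian: kurtosis near 0
    u = rng.uniform(-1, 1, size=100_000)  # flat density: negative kurtosis
    print(kurt(g), kurt(u), kurt(2 * u) / kurt(u))  # last ratio near 2**4 = 16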
Under the constraint ‖w‖ = ‖z‖ = 1, the function in equation 2.2 has a number of local minima and maxima. For simplicity, let us assume for the moment that the mixture contains at least one independent component

1 It was brought to our attention that a similar algorithm for blind deconvolution was proposed by Shalvi and Weinstein (1993).
whose kurtosis is negative and at least one whose kurtosis is positive. Then, as was shown by Delfosse and Loubaton (1995), the extremal points of equation 2.2 are the canonical base vectors z = ±e_j, that is, vectors whose components are all zero except one component that equals ±1. The corresponding weight vectors are w = Bz = Be_j = b_j (perhaps with a minus sign), i.e., the columns of the orthogonal mixing matrix B. So by minimizing or maximizing the kurtosis in equation 2.2 under the given constraint, the columns of the mixing matrix are obtained as solutions for w, and the linear combination itself will be one of the independent components: w^T x = (b_i)^T x = s_i. Equation 2.2 also shows that gaussian components cannot be estimated in this way, because for them kurt(s_i) is zero. To minimize or maximize kurt(w^T x), a neural algorithm based on gradient descent or ascent can be used (Delfosse & Loubaton, 1995; Hyvärinen & Oja, 1996). Then w is interpreted as the weight vector of a neuron with input vector x. The objective function can be simplified because the inputs have been sphered. It holds that

kurt(w^T x) = E{(w^T x)^4} − 3[E{(w^T x)^2}]^2 = E{(w^T x)^4} − 3‖w‖^4.   (2.3)

Also the constraint ‖w‖ = 1 must be taken into account, for example, by a penalty term (Hyvärinen & Oja, 1996). Then the final objective function is J(w) = E{(w^T x)^4} − 3‖w‖^4 + F(‖w‖^2),
(2.4)
where F is a penalty term due to the constraint. Several forms for the penalty term have been suggested by Hyvärinen and Oja (1996).2 In the following, the exact form of F is not important. Denoting by x(t) the sequence of observations, by µ(t) the learning rate sequence, and by f the derivative of F/2, the online learning algorithm then has the form

w(t+1) = w(t) ± µ(t)[ x(t)(w(t)^T x(t))^3 − 3‖w(t)‖^2 w(t) + f(‖w(t)‖^2) w(t) ].
(2.5)
The first two terms in brackets are obtained from the gradient of kurt(w^T x) when instantaneous values are used instead of the expectation. The third term in brackets is obtained from the gradient of F(‖w‖^2); as long as this is a function of ‖w‖^2 only, its gradient has the form scalar × w. A positive sign before the brackets means finding the local maxima; a negative sign corresponds to local minima. The convergence of this kind of algorithm can be proved using the principles of stochastic approximation. The advantage of such neural learning

2 Note that in Hyvärinen and Oja (1996), the second term in J was also included in the penalty term.
rules is that the inputs x(t) can be used in the algorithm at once, thus enabling fast adaptation in a nonstationary environment. A resulting trade-off is that the convergence is slow and depends on a good choice of the learning rate sequence µ(t). A bad choice of the learning rate can, in practice, destroy convergence. Therefore, some ways to make the learning radically faster and more reliable may be needed. The fixed-point iteration algorithms are such an alternative. The fixed points w of the learning rule (see equation 2.5) are obtained by taking the expectations and equating the change in the weight to 0:

E{x(w^T x)^3} − 3‖w‖^2 w + f(‖w‖^2) w = 0.
(2.6)
The time index t has been dropped. A deterministic iteration could be formed from equation 2.6 in a number of ways, for example, by standard numerical algorithms for solving such equations. A very fast iteration is obtained, as shown in the next section, if we write equation 2.6 in the form

w = scalar × (E{x(w^T x)^3} − 3‖w‖^2 w).
(2.7)
Actually, because the norm of w is irrelevant, it is the direction of the right-hand side that is important. Therefore the scalar in equation 2.7 is not significant, and its effect can be replaced by explicit normalization, or the projection of w onto the unit sphere. 3 Fixed-Point Algorithm 3.1 The Algorithm. 3.1.1 Estimating One Independent Component. Assume that we have collected a sample of the sphered (or prewhitened) random vector x, which in the case of blind source separation is a collection of linear mixtures of independent source signals according to equation 1.2. Using the derivation of the preceding section, we get the following fixed-point algorithm for ICA:

1. Take a random initial vector w(0) of norm 1. Let k = 1.

2. Let w(k) = E{x(w(k − 1)^T x)^3} − 3w(k − 1). The expectation can be estimated using a large sample of x vectors (say, 1000 points).

3. Divide w(k) by its norm.

4. If |w(k)^T w(k − 1)| is not close enough to 1, let k = k + 1, and go back to step 2. Otherwise, output the vector w(k).

The final vector w(k) given by the algorithm equals one of the columns of the (orthogonal) mixing matrix B. In the case of blind source separation, this means that w(k) separates one of the nongaussian source signals in the sense that w(k)^T x(t), t = 1, 2, …, equals one of the source signals.
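The four steps translate directly into a sketch (illustrative Python; the tolerance, iteration cap, and sample-average estimate of the expectation are our choices):

    import numpy as np

    def fastica_one_unit(x, tol=1e-6, max_iter=100, seed=0):
        # x: sphered data of shape (n, T). Returns one column of B (up to sign).
        rng = np.random.default_rng(seed)
        w = rng.normal(size=x.shape[0])
        w /= np.linalg.norm(w)                            # step 1: random unit vector
        for _ in range(max_iter):
            w_new = (x * (w @ x) ** 3).mean(axis=1) - 3 * w   # step 2
            w_new /= np.linalg.norm(w_new)                # step 3: normalize
            if abs(w_new @ w) > 1 - tol:                  # step 4: convergence test
                return w_new
            w = w_new
        return w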
A remarkable property of our algorithm is that a very small number of iterations, usually 5 to 10, seems to be enough to obtain the maximal accuracy allowed by the sample data. This is due to the cubic convergence shown below. 3.1.2 Estimating Several Independent Components. To estimate n independent components, we run this algorithm n times. To ensure that we estimate each time a different independent component, we need only to add a simple orthogonalizing projection inside the loop. Recall that the columns of the mixing matrix B are orthonormal because of the sphering. Thus we can estimate the independent components one by one by projecting the current solution w(k) on the space orthogonal to the columns of the mixing matrix B previously found. Define the matrix B̄ as the matrix whose columns are the previously found columns of B. Then add the projection operation in the beginning of step 3:

Let w(k) = w(k) − B̄ B̄^T w(k). Divide w(k) by its norm.

Also the initial random vector should be projected this way before starting the iterations. To prevent estimation errors in B̄ from deteriorating the estimate w(k), this projection step can be omitted after the first few iterations. Once the solution w(k) has entered the basin of attraction of one of the fixed points, it will stay there and converge to that fixed point. In addition to the hierarchical (or sequential) orthogonalization described above, any other method of orthogonalizing the weight vectors could also be used. In some applications, a symmetric orthogonalization might be useful. This means that the fixed-point step is first performed for all the n weight vectors, and then the matrix W(k) = (w_1(k), …, w_n(k)) of the weight vectors is orthogonalized, for example, using the well-known formula, W(k) = W(k)(W(k)^T W(k))^{-1/2},
(3.1)
where (W(k)^T W(k))^{-1/2} is obtained from the eigenvalue decomposition of W(k)^T W(k) = EDE^T as (W(k)^T W(k))^{-1/2} = ED^{-1/2}E^T. However, the convergence proof below applies only to the hierarchical orthogonalization. 3.1.3 A Semiadaptive Version. A disadvantage of many batch algorithms is that large amounts of data must be stored simultaneously in working memory. Our fixed-point algorithm, however, can be used in a semiadaptive manner so as to avoid this problem. This can be accomplished simply by computing the expectation E{x(w(k − 1)^T x)^3} by an online algorithm for N consecutive sample points, keeping w(k − 1) fixed, and updating the vector w(k) after the average over all the N sample points has been computed. This semiadaptive version also makes adaptation to nonstationary data possible. Thus, the semiadaptive algorithm combines many of the advantages usually attributed to either online or batch algorithms.
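Both orthogonalization options admit short sketches (illustrative Python; B_found stands for the matrix B̄ of previously found columns):

    import numpy as np

    def deflate(w, B_found):
        # Projection step of section 3.1.2: w <- w - B_bar B_bar^T w, renormalized.
        w = w - B_found @ (B_found.T @ w)
        return w / np.linalg.norm(w)

    def symmetric_orthogonalize(W):
        # Equation 3.1: W <- W (W^T W)^{-1/2}, via W^T W = E D E^T.
        d, E = np.linalg.eigh(W.T @ W)
        return W @ E @ np.diag(d ** -0.5) @ E.T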
3.2 Convergence Proof. Now we prove the convergence of our algorithm. To begin, make the change of variables z(k) = BT w(k). Note that the effect of the projection step is to set to zero all components zi (k) of z(k) such that the ith column of B has already been estimated. Therefore, we can simply consider in the following the algorithm without the projection step, taking into account that the zi (k) corresponding to the independent components previously estimated are zero. First, using equation 2.2, we get the following form for step 2 of the algorithm: z(k) = E{s(z(k − 1)T s)3 } − 3z(k − 1).
(3.2)
Expanding the first term, we can calculate the expectation explicitly and obtain for the ith component of the vector z(k)

z_i(k) = E{s_i^4} z_i(k − 1)^3 + 3 Σ_{j≠i} z_j(k − 1)^2 z_i(k − 1) − 3 z_i(k − 1),   (3.3)
where we have used the fact that by the statistical independence of the s_i, we have E{s_i^2 s_j^2} = 1, and E{s_i^3 s_j} = E{s_i^2 s_j s_l} = E{s_i s_j s_l s_m} = 0 for four different indices i, j, l, and m. Using ‖z(k)‖ = ‖w(k)‖ = 1, equation 3.3 simplifies to

z_i(k) = kurt(s_i) z_i(k − 1)^3,
(3.4)
where kurt(s_i) = E{s_i^4} − 3 is the kurtosis of the ith independent component. Note that the subtraction of 3w(k − 1) from the right side cancelled the term due to the cross-variances, enabling direct access to the fourth-order cumulants. Choosing j so that kurt(s_j) ≠ 0 and z_j(k − 1) ≠ 0, we further obtain

|z_i(k)| / |z_j(k)| = ( |kurt(s_i)| / |kurt(s_j)| ) ( |z_i(k − 1)| / |z_j(k − 1)| )^3.   (3.5)
Note that the assumption z_j(k − 1) ≠ 0 implies that the jth column of B is not among those already found. Next note that |z_i(k)|/|z_j(k)| is not changed by the normalization step 3. It is therefore possible to solve explicitly for |z_i(k)|/|z_j(k)| from this recursive formula, which yields

|z_i(k)| / |z_j(k)| = ( √|kurt(s_j)| / √|kurt(s_i)| ) ( √|kurt(s_i)| |z_i(0)| / ( √|kurt(s_j)| |z_j(0)| ) )^{3^k}   (3.6)

for all k > 0. For j = arg max_p √|kurt(s_p)| |z_p(0)|, we see that all the other components z_i(k), i ≠ j, quickly become small compared to z_j(k). Taking the normalization ‖z(k)‖ = ‖w(k)‖ = 1 into account, this means that z_j(k) → 1
and z_i(k) → 0 for all i ≠ j. This implies that w(k) = Bz(k) converges to the column b_j of the mixing matrix B, for which the kurtosis of the corresponding independent component s_j is not zero and which has not yet been found. This proves the convergence of our algorithm. 4 Simulation Results The fixed-point algorithm was applied to blind separation of four source signals from four observed mixtures. Two of the source signals had a uniform distribution, and the other two were obtained as cubes of gaussian variables. Thus, the source signals included both subgaussian and supergaussian signals. Using different random mixing matrices and initial values w(0), five iterations were usually sufficient for estimation of one column of the orthogonal mixing matrix to an accuracy of four decimal places. Next, the convergence speed of our algorithm was compared with the speed of the corresponding neural stochastic gradient algorithm. In the neural algorithm, an empirically optimized learning rate sequence was used. The computational overhead of the fixed-point algorithm was optimized by initially using a small sample size (200 points) in step 2 of the algorithm, and increasing it at every iteration for greater accuracy. The number of floating-point operations was calculated for both methods. The fixed-point algorithm needed only 10 percent of the floating-point operations required by the neural algorithm. This result was achieved with an empirically optimized learning rate. If we had been obliged to choose the learning rate without preliminary testing, the speed-up factor would have been of the order of 100. In fact, the neural stochastic gradient algorithm might not have converged at all. These simulation results confirm the theoretical implications of very fast convergence. 5 Discussion We introduced a batch version of a neural learning algorithm for ICA. This algorithm, which is based on the fixed-point method, has several advantages as compared to other suggested ICA methods. 1. Equation 3.6 shows that the convergence of our algorithm is cubic. This means very fast convergence and is rather unique among the ICA algorithms. It is also in contrast to other similar fixed-point algorithms, like the power method, which often have only linear convergence. In fact, our algorithm can be considered a higher-order generalization of the power method for tensors. 2. Contrary to gradient-based algorithms, there is no learning rate or other adjustable parameter in the algorithm, which makes it easy to use and more reliable.
3. The algorithm, in its hierarchical version, finds the independent components one at a time instead of working in parallel like most of the suggested ICA algorithms that solve the entire mixing matrix. This makes it possible to estimate only certain desired independent components, provided we have sufficient prior information about the weight vectors corresponding to those components. For example, if the initial weight vector of the algorithm is the first principal component, the algorithm probably finds the most important independent component, that is, the one for which the norm of the corresponding column in the original mixing matrix A is the largest. 4. Components of both negative kurtosis (i.e., subgaussian components) and positive kurtosis (i.e., supergaussian components) can be found by starting the algorithm from different initial points and possibly removing the effect of the already found independent components by a projection onto an orthogonal subspace. If just one of the independent components is gaussian (or otherwise has zero kurtosis), it can be estimated as the residual that is left over after extracting all other independent components. Recall that more than one gaussian independent component cannot be estimated in the ICA model. 5. Although the algorithm was motivated as a short-cut method to make neural learning for kurtosis minimization and maximization faster, its convergence was proved independently of the neural algorithm and the well-known results on the connection between ICA and kurtosis. Indeed, our proof is based on principles different from those used so far in ICA and thus opens new lines for research. Recent developments of the theory presented in this article can be found in Hyvärinen (1997a,b). References

Amari, S., Cichocki, A., & Yang, H. (1996). A new learning algorithm for blind source separation. In Advances in Neural Information Processing 8 (Proc. NIPS'95). Cambridge, MA: MIT Press.

Bell, A., & Sejnowski, T. (1995). An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7, 1129–1159.

Bell, A., & Sejnowski, T. (1996). Learning higher-order structure of a natural sound. Network, 7, 261–266.

Bell, A., & Sejnowski, T. (in press). The "independent components" of natural scenes are edge filters. Vision Research.

Cardoso, J.-F. (1990). Eigen-structure of the fourth-order cumulant tensor with application to the blind source separation problem. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (pp. 2655–2658). Albuquerque, NM.

Cardoso, J.-F. (1992). Iterative techniques for blind source separation using only fourth-order cumulants. In Proc. EUSIPCO (pp. 739–742). Brussels.
Comon, P. (1994). Independent component analysis—a new concept? Signal Processing, 36, 287–314.

Delfosse, N., & Loubaton, P. (1995). Adaptive blind separation of independent sources: A deflation approach. Signal Processing, 45, 59–83.

Hurri, J., Hyvärinen, A., Karhunen, J., & Oja, E. (1996). Image feature extraction using independent component analysis. In Proc. NORSIG'96. Espoo, Finland.

Hyvärinen, A. (1997a). A family of fixed-point algorithms for independent component analysis. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP '97) (pp. 3917–3920). Munich, Germany.

Hyvärinen, A. (1997b). Independent component analysis by minimization of mutual information (Technical Report). Helsinki: Helsinki University of Technology, Laboratory of Computer and Information Science.

Hyvärinen, A., & Oja, E. (1996). A neuron that learns to separate one independent component from linear mixtures. In Proc. IEEE Int. Conf. on Neural Networks (pp. 62–67). Washington, D.C.

Jutten, C., & Herault, J. (1991). Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24, 1–10.

Moreau, E., & Macchi, O. (1993). New self-adaptive algorithms for source separation based on contrast functions. In Proc. IEEE Signal Processing Workshop on Higher Order Statistics (pp. 215–219). Lake Tahoe, NV.

Oja, E., & Karhunen, J. (1995). Signal separation by nonlinear Hebbian learning. In M. Palaniswami, Y. Attikiouzel, R. Marks, D. Fogel, & T. Fukuda (Eds.), Computational intelligence—a dynamic system perspective (pp. 83–97). New York: IEEE Press.

Shalvi, O., & Weinstein, E. (1993). Super-exponential methods for blind deconvolution. IEEE Trans. on Information Theory, 39(2), 504–519.
Received April 8, 1996, accepted December 17, 1996.
Communicated by Garrison Cottrell
Dimension Reduction by Local Principal Component Analysis Nandakishore Kambhatla Todd K. Leen Department of Computer Science and Engineering, Oregon Graduate Institute of Science and Technology, Portland, Oregon 97291-1000, U.S.A.
Reducing or eliminating statistical redundancy between the components of high-dimensional vector data enables a lower-dimensional representation without significant loss of information. Recognizing the limitations of principal component analysis (PCA), researchers in the statistics and neural network communities have developed nonlinear extensions of PCA. This article develops a local linear approach to dimension reduction that provides accurate representations and is fast to compute. We exercise the algorithms on speech and image data, and compare performance with PCA and with neural network implementations of nonlinear PCA. We find that both nonlinear techniques can provide more accurate representations than PCA and show that the local linear techniques outperform neural network implementations. 1 Introduction The objective of dimension reduction algorithms is to obtain a parsimonious description of multivariate data. The goal is to obtain a compact, accurate representation of the data that reduces or eliminates statistically redundant components. Dimension reduction is central to an array of data processing goals. Input selection for classification and regression problems is a task-specific form of dimension reduction. Visualization of high-dimensional data requires mapping to a lower dimension—usually three or fewer. Transform coding (Gersho & Gray, 1992) typically involves dimension reduction. The initial high-dimensional signal (e.g., image blocks) is first transformed in order to reduce statistical dependence, and hence redundancy, between the components. The transformed components are then scalar quantized. Dimension reduction can be imposed explicitly, by eliminating a subset of the transformed components. Alternatively, the allocation of quantization bits among the transformed components (e.g., in increasing measure according to their variance) can result in eliminating the low-variance components by assigning them zero bits. Recently several authors have used neural network implementations of dimension reduction to signal equipment failures by novelty, or outlier,

Neural Computation 9, 1493–1516 (1997)
© 1997 Massachusetts Institute of Technology
detection (Petsche et al., 1996; Japkowicz, Myers, & Gluck, 1995). In these schemes, high-dimensional sensor signals are projected onto a subspace that best describes the signals obtained during normal operation of the monitored system. New signals are categorized as normal or abnormal according to the distance between the signal and its projection.1 The classic technique for linear dimension reduction is principal component analysis (PCA). In PCA, one performs an orthogonal transformation to the basis of correlation eigenvectors and projects onto the subspace spanned by those eigenvectors corresponding to the largest eigenvalues. This transformation decorrelates the signal components, and the projection along the high-variance directions maximizes variance and minimizes the average squared residual between the original signal and its dimension-reduced approximation. A neural network implementation of one-dimensional PCA implemented by Hebb learning was introduced by Oja (1982) and expanded to hierarchical, multidimensional PCA by Sanger (1989), Kung and Diamantaras (1990), and Rubner and Tavan (1989). A fully parallel (nonhierarchical) design that extracts orthogonal vectors spanning an m-dimensional PCA subspace was given by Oja (1989). Concurrently, Baldi and Hornik (1989) showed that the error surface for linear, three-layer autoassociators with hidden layers of width m has global minima corresponding to input weights that span the m-dimensional PCA subspace. Despite its widespread use, the PCA transformation is crippled by its reliance on second-order statistics. Though uncorrelated, the principal components can be highly statistically dependent. When this is the case, PCA fails to find the most compact description of the data. Geometrically, PCA models the data as a hyperplane embedded in the ambient space. If the data components have nonlinear dependencies, PCA will require a larger-dimensional representation than would be found by a nonlinear technique. This simple realization has prompted the development of nonlinear alternatives to PCA. Hastie (1984) and Hastie and Stuetzle (1989) introduce their principal curves as a nonlinear alternative to one-dimensional PCA. Their parameterized curves f(λ): R → R^n are constructed to satisfy a self-consistency requirement. Each data point x is projected to the closest point on the curve, λ_f(x) = argmin_µ ‖x − f(µ)‖, and the expectation of all data points that project to the same parameter value λ is required to be on the curve. Thus f(Λ) = E_x[ x | λ_f(x) = Λ ]. This mathematical statement reflects the desire that the principal curves pass through the middle of the data.
1 These schemes also use a sigmoidal contraction map following the projection so that new signals that are close to the subspace, yet far from the training data used to construct the subspace, can be properly tagged as outliers.
Hastie and Stuetzle (1989) prove that the curve f(λ) is a principal curve iff it is a critical point (with respect to variations in f(λ)) of the mean squared distance between the data and their projection onto the curve. They also show that if the principal curves are lines, they correspond to PCA. Finally, they generalize their definitions from principal curves to principal surfaces. Neural network approximators for principal surfaces are realized by five-layer, autoassociative networks. Independent of Hastie and Stuetzle's work, several researchers (Kramer, 1991; Oja, 1991; DeMers & Cottrell, 1993; Usui, Nakauchi, & Nakano, 1991) have suggested such networks for nonlinear dimension reduction. These networks have linear first (input) and fifth (output) layers, and sigmoidal nonlinearities on the second- and fourth-layer nodes. The input and output layers have width n. The third layer, which carries the dimension-reduced representation, has width m < n. We will refer to this layer as the representation layer. Researchers have used both linear and sigmoidal response functions for the representation layer. Here we consider only linear response in the representation layer. The networks are trained to minimize the mean squared distance between the input and output and, because of the middle-layer bottleneck, build a dimension-reduced representation of the data. In view of Hastie and Stuetzle's critical point theorem, and the mean square error (MSE) training criteria for five-layer nets, these networks can be viewed as approximators of principal surfaces.2 Recently Hecht-Nielsen (1995) extended the application of five-layer autoassociators from dimension reduction to encoding. The third layer of his replicator networks has a staircase nonlinearity. In the limit of infinitely steep steps, one obtains a quantization of the middle-layer activity, and hence a discrete encoding of the input signal.3 In this article, we propose a locally linear approach to nonlinear dimension reduction that is much faster to train than five-layer autoassociators and, in our experience, provides superior solutions. Like five-layer autoassociators, the algorithm attempts to minimize the MSE between the original
2 There is, as several researchers have pointed out, a fundamental difference between the representation constructed by Hastie’s principal surfaces and the representation constructed by five-layer autoassociators. Specifically, autoassociators provide a continuous parameterization of the embedded manifold, whereas the principal surfaces algorithm does not constrain the parameterization to be continuous. 3 Perfectly sharp staircase hidden-layer activations are, of course, not trainable by gradient methods, and the plateaus of a rounded staircase will diminish the gradient signal available for training. However, with parameterized hidden unit activations h(x; a) with h(x; 0) = x and h(x; 1) a sigmoid staircase, one can envision starting training with linear activations and gradually shift toward a sharp (but finite sloped) staircase, thereby obtaining an approximation to Hecht-Nielsen’s replicator networks. The changing activation function will induce both smooth changes and bifurcations in the set of cost function minima. Practical and theoretical issues in such homotopy methods are discussed in Yang and Yu (1993) and Coetzee and Stonick (1995).
data and its reconstruction from a low-dimensional representation—what we refer to as the reconstruction error. However, while five-layer networks attempt to find a global, smooth manifold that lies close to the data, our algorithm finds a set of hyperplanes that lie close to the data. In a loose sense, these hyperplanes locally approximate the manifold found by five-layer networks. Our algorithm first partitions the data space into disjoint regions by vector quantization (VQ) and then performs local PCA about each cluster center. For brevity, we refer to the hybrid algorithms as VQPCA. We introduce a novel VQ distortion function that optimizes the clustering for the reconstruction error. After training, dimension reduction proceeds by first assigning a datum to a cluster and then projecting onto the m-dimensional principal subspace belonging to that cluster. The encoding thus consists of the index of the cluster and the local, dimension-reduced coordinates within the cluster. The resulting algorithm directly minimizes (up to local optima) the reconstruction error. In this sense, it is optimal for dimension reduction. The computation of the partition is a generalized Lloyd algorithm (Gray, 1984) for a vector quantizer that uses the reconstruction error as the VQ distortion function. The Lloyd algorithm iteratively computes the partition based on two criteria that ensure minimum average distortion: (1) VQ centers lie at the generalized centroids of the quantizer regions and (2) the VQ region boundaries are surfaces that are equidistant (in terms of the distortion function) from adjacent VQ centers. The application of these criteria is considered in detail in section 3.2. The primary point is that constructing the partition according to these criteria provides a (local) minimum for the reconstruction error. Training the local linear algorithm is far faster than training five-layer autoassociators. The clustering operation is the computational bottleneck, though it can be accelerated using the usual tree-structured or multistage VQ. When encoding new data, the clustering operation is again the main consumer of computation, and the local linear algorithm is somewhat slower for encoding than five-layer autoassociators. However, decoding is much faster than for the autoassociator and is comparable to PCA. Local PCA has been previously used for exploratory data analysis (Fukunaga & Olsen, 1971) and to identify the intrinsic dimension of chaotic signals (Broomhead, 1991; Hediger, Passamante, & Farrell, 1990). Bregler and Omohundro (1995) use local PCA to find image features and interpolate image sequences. Hinton, Revow, and Dayan (1995) use an algorithm based on local PCA for handwritten character recognition. Independent of our work, Dony and Haykin (1995) explored a local linear approach to transform coding, though the distortion function used in their clustering is not optimized for the projection, as it is here. Finally, local linear methods for regression have been explored quite thoroughly. See, for example, the LOESS algorithm in Cleveland and Devlin (1988).
In the remainder of the article, we review the structure and function of autoassociators, introduce the local linear algorithm, and present experimental results applying the algorithms to speech and image data. 2 Dimension Reduction and Autoassociators As considered here, dimension reduction algorithms consist of a pair of maps g: R^n → R^m with m < n and f: R^m → R^n. The function g(x) maps the original n-dimensional vector x to the dimension-reduced vector y ∈ R^m. The function f(y) maps back to the high-dimensional space. The two maps, g and f, correspond to the encoding and decoding operation, respectively. If these maps are smooth on open sets, then they are diffeomorphic to a canonical projection and immersion, respectively. In general, the projection loses some of the information in the original representation, and f(g(x)) ≠ x. The quality of the algorithm, and adequacy of the chosen target dimension m, are measured by the algorithm's ability to reconstruct the original data. A convenient measure, and the one we employ here, is the average squared residual, or reconstruction error:
E = E_x[ ‖x − f(g(x))‖^2 ].
The variance in the original vectors provides a useful normalization scale for the squared residuals, so our experimental results are quoted as normalized reconstruction error:

E_norm = E_x[ ‖x − f(g(x))‖^2 ] / E_x[ ‖x − E_x x‖^2 ].   (2.1)

This may be regarded as the noise-to-signal ratio for the dimension reduction, with the signal strength defined as its variance.
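A sketch of this measure (illustrative Python; storing one sample per row is our assumption):

    import numpy as np

    def normalized_reconstruction_error(x, x_hat):
        # Equation 2.1: squared residual normalized by the data variance.
        residual = np.mean(np.sum((x - x_hat) ** 2, axis=1))
        variance = np.mean(np.sum((x - x.mean(axis=0)) ** 2, axis=1))
        return residual / variance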
Neural network implementations of these maps are provided by autoassociators: layered, feedforward networks with equal input and output dimension n. During training, the output targets are set to the inputs; thus, autoassociators are sometimes called self-supervised networks. When used to perform dimension reduction, the networks have a hidden layer of width m < n. This hidden, or representation, layer is where the low-dimensional representation is extracted. In terms of the maps defined above, processing from the input to the representation layer corresponds to the projection g, and processing from the representation to the output layer corresponds to the immersion f. 2.1 Three-Layer Autoassociators. Three-layer autoassociators with n input and output units and m < n hidden units, trained to perform the identity transformation over the input data, were used for dimension reduction by several researchers (Cottrell, Munro, & Zipser, 1987; Cottrell &
Metcalfe, 1991; Golomb, Lawrence, & Sejnowski, 1991). In these networks, the first layer of weights performs the projection g and the second the immersion f . The output f (g(x)) is trained to match the input x in the mean square sense. Since the transformation from the hidden to the output layer is linear, the network outputs lie in an m-dimensional linear subspace of Rn . The hidden unit activations define coordinates on this hyperplane. If the hidden unit activation functions are linear, it is obvious that the best one can do to minimize the reconstruction error is to have the hyperplane correspond to the m-dimensional principal subspace. Even if the hidden-layer activation functions are nonlinear, the output is still an embedded hyperplane. The nonlinearities introduce nonlinear scaling of the coordinate intervals on the hyperplane. Again, the solution that minimizes the reconstruction error is the m-dimensional principal subspace. These observations were formalized in two early articles. Baldi and Hornik (1989) show that optimally trained three-layer linear autoassociators perform an orthogonal projection of the data onto the m-dimensional principal subspace. Bourlard and Kamp (1988) show that adding nonlinearities in the hidden layer cannot reduce the reconstruction error. The optimal solution remains the PCA projection.
2.2 Five-Layer Autoassociators. To overcome the PCA limitation of three-layer autoassociators and provide the capability for genuine nonlinear dimension reduction, several authors have proposed five-layer autoassociators (e.g., Oja, 1991; Usui et al., 1991; Kramer, 1991; DeMers & Cottrell, 1993; Kambhatla & Leen, 1994; Hecht-Nielsen, 1995). These networks have sigmoidal second and fourth layers and linear first and fifth layers. The third (representation) layer may have linear or sigmoidal response. Here we use linear response functions. The first two layers of weights carry out a nonlinear projection g: Rn → Rm , and the last two layers of weights carry out a nonlinear immersion f : Rm → Rn (see Figure 1). By the universal approximation theorem for single hidden-layer sigmoidal nets (Funahashi, 1989; Hornik, Stinchcombe, & White, 1989), any continuous composition of immersion and projection (on compact domain) can be approximated arbitrarily closely by the structure. The activities of the nodes in the third, or representation layer form global curvilinear coordinates on a submanifold of the input space (see Figure 1b). We thus refer to five-layer autoassociative networks as a global, nonlinear dimension reduction technique. Several authors report successful implementation of nonlinear PCA using these networks for image (DeMers & Cottrell, 1993; Hecht-Nielsen, 1995; Kambhatla & Leen, 1994) and speech dimension reduction, for characterizing chemical dynamics (Kramer, 1991), and for obtaining concise representations of color (Usui et al., 1991).
Figure 1: (a) Five-layer feedforward autoassociative network with inputs x ∈ R^n and representation layer of dimension m. Outputs x′ ∈ R^n are trained to match the inputs, that is, to minimize E[ ‖x − x′‖^2 ]. (b) Global curvilinear coordinates built by a five-layer network for data distributed on the surface of a hemisphere. When the activations of the representation layer are swept, the outputs trace out the curvilinear coordinates shown by the solid lines.
3 Local Linear Transforms Although five-layer autoassociators are convenient and elegant approximators for principal surfaces, they suffer from practical drawbacks. Networks with multiple sigmoidal hidden layers tend to have poorly conditioned Hessians (see Rognvaldsson, 1994, for a nice exposition) and are therefore difficult to train. In our experience, the variance of the solution with respect to weight initialization can be quite large (see section 4), indicating that the networks are prone to trapping in poor local optima. We propose an alternative that does not suffer from these problems. Our proposal is to construct local models, each pertaining to a different disjoint region of the data space. Within each region, the model complexity is limited; we construct linear models by PCA. If the local regions are small enough, the data manifold will not curve much over the extent of the region, and the linear model will be a good fit (low bias). Schematically, the training algorithm is as follows:

1. Partition the input space R^n into Q disjoint regions {R^(1), …, R^(Q)}.

2. Compute the local covariance matrices

Σ^(i) = E[ (x − Ex)(x − Ex)^T | x ∈ R^(i) ];   i = 1, …, Q,

and their eigenvectors e_j^(i), j = 1, …, n.
Figure 2: Coordinates built by local PCA for data distributed on the surface of a hemisphere. The solid lines are the two principal eigendirections for the data in each region. One of the regions is shown shaded for emphasis.
that the corresponding eigenvalues are in descending order $\lambda_1^{(i)} > \lambda_2^{(i)} > \cdots > \lambda_n^{(i)}$.

3. Choose a target dimension $m$ and retain the leading $m$ eigendirections for the encoding.

The partition is formed by VQ on the training data. The distortion measure for the VQ strongly affects the partition, and therefore the reconstruction error for the algorithm. We discuss two alternative distortion measures. The local PCA defines local coordinate patches for the data, with the orientation of the local patches determined by the PCA within each region $R^{(i)}$. Figure 2 shows a set of local two-dimensional coordinate frames induced on the hemisphere data from Figure 1, using the standard Euclidean distance as the VQ distortion measure.

3.1 Euclidean Partition. The simplest way to construct the partition is to build a VQ based on Euclidean distance. This can be accomplished by either an online competitive learning algorithm or by the generalized Lloyd algorithm (Gersho & Gray, 1992), which is the batch counterpart of competitive learning. In either case, the trained quantizer consists of a set of $Q$ reference vectors $r^{(i)}$, $i = 1, \ldots, Q$, and corresponding regions $R^{(i)}$. The placement of the reference vectors and the definition of the regions satisfy Lloyd's optimality conditions:

1. Each region $R^{(i)}$ corresponds to all $x \in R^n$ that lie closer to $r^{(i)}$ than to any other reference vector. Mathematically, $R^{(i)} = \{x \mid d_E(x, r^{(i)}) \le d_E(x, r^{(j)})\ \forall j\}$.
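A minimal sketch of steps 1 through 3 follows (our illustration; it assumes the Euclidean partition has already been computed, and all function and variable names are our own):

```python
import numpy as np

def local_pca_fit(X, labels, m):
    """Fit one PCA model per region. X: (T, n) data; labels: (T,) region
    indices from the Euclidean VQ; m: target dimension. Assumes each
    region holds enough samples for a covariance estimate."""
    models = {}
    for i in np.unique(labels):
        Xi = X[labels == i]
        mu = Xi.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(Xi, rowvar=False))
        order = np.argsort(vals)[::-1]           # descending eigenvalues
        models[i] = (mu, vecs[:, order[:m]])     # leading m eigendirections
    return models

def local_pca_code(x, i, models):
    """Encode/decode a sample assigned to region i."""
    mu, basis = models[i]
    z = basis.T @ (x - mu)       # m-dimensional local coordinates
    return z, mu + basis @ z     # coordinates and linear reconstruction
```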
$\rho$; otherwise, the category is reset. If the match criterion is satisfied, the category's net input signal, $g_j$, is determined by modulating its match value by $n_j$, which is proportional to the category's a priori probability, and by $\left(\prod_{i=1}^M \sigma_{ji}\right)^{-1}$, which normalizes its gaussian distribution:

$$g_j = \frac{n_j}{\prod_{i=1}^M \sigma_{ji}}\, G_j \quad \text{if } G_j > \rho; \qquad g_j = 0 \text{ otherwise.} \tag{2.2}$$

The category's activation, $y_j$, represents its conditional probability for being the "source" of the input vector, $P(j \mid \vec{x})$. This is obtained by normalizing the category's input strength, $g_j$:

$$y_j = \frac{g_j}{\sum_{l=1}^N g_l}. \tag{2.3}$$
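The activation stage can be sketched as follows (our illustration; the match value $G_j$ is reconstructed from the gaussian match expression in equation 2.7, and all names are our own):

```python
import numpy as np

def gam_activation(x, mu, sigma, n, rho):
    """Category activations per equations 2.1-2.3.

    x: (M,) input; mu, sigma: (N, M) category means/deviations;
    n: (N,) credit counts; rho: vigilance.
    """
    G = np.exp(-0.5 * np.sum(((x - mu) / sigma) ** 2, axis=1))  # match values
    g = np.where(G > rho, n / np.prod(sigma, axis=1) * G, 0.0)  # eq 2.2
    total = g.sum()
    y = g / total if total > 0 else g                           # eq 2.3
    return G, g, y
```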
As originally proposed, GAM used a choice activation rule at $F_2$: $y_j = 1$ if $g_j > g_l\ \forall\, l \ne j$; $y_j = 0$ otherwise (Williamson, 1996a). In this version, only a single, chosen category learned on each trial. Here, we describe a distributed-learning version of GAM, which uses the distributed $F_2$ activation equation 2.3. Distributed GAM was introduced in Williamson (1996b), where it was shown to obtain a more efficient representation than GAM with choice learning. Distributed GAM has also been applied as part of an image classification system, where it outperformed existing state-of-the-art
image classification systems that use rule-based, multilayer perceptron, and k-nearest neighbor classifiers (Grossberg & Williamson, 1997).

2.2 Prediction and Match Tracking. Equations 2.1 through 2.3 describe activation of category nodes in an unsupervised-learning gaussian ART module. The following equations describe GAM's supervised-learning mechanism, which incorporates feedback from class predictions made by the $F_2$ category nodes and thus turns gaussian ART into gaussian ARTMAP. When a category, $j$, is first chosen, it learns a permanent mapping to the output class, $k$, associated with the current training sample. All categories that map to the same class prediction belong to the same ensemble: $j \in E(k)$. Each time an input is presented, the categories in each ensemble sum their activations to generate a net probability estimate, $z_k$, of the class prediction $k$ that they share:

$$z_k = \sum_{j \in E(k)} y_j. \tag{2.4}$$

The system prediction, $K$, is obtained from the maximum probability estimate,

$$K = \arg\max_k (z_k), \tag{2.5}$$
which also determines the chosen ensemble. On real-world problems, the probability estimate $z_K$ has been found to predict accurately the probability that prediction $K$ is correct (Grossberg & Williamson, 1997). Note that category $j$'s initial activation, $y_j$, represents $P(j \mid \vec{x})$. Once the class prediction $K$ is chosen, we obtain the category's "chosen-ensemble" activation, $y_j^*$, which represents $P(j \mid \vec{x}, K)$:

$$y_j^* = \frac{y_j}{\sum_{l \in E(K)} y_l} \quad \text{if } j \in E(K); \qquad y_j^* = 0 \text{ otherwise.} \tag{2.6}$$
If $K$ is the correct prediction, then the network resonates and learns the current input. If $K$ is incorrect, then match tracking is invoked. As originally conceived, match tracking involves raising $\rho$ continuously, causing categories $j$, such that $G_j \le \rho$, to be reset until the correct prediction is finally selected (Carpenter, Grossberg, & Rosen, 1991). Because GAM uses a distributed representation at $F_2$, each $z_k$ may be determined by multiple categories, according to equation 2.4. Therefore, it is difficult to determine numerically how much $\rho$ needs to be raised in order to select a different prediction. It is inefficient (on a conventional computer) to determine the exact amount to raise $\rho$ by repeatedly resetting the active category with the lowest match value $G_j$, each time reevaluating equations 2.3, 2.4, and 2.5, until a new prediction is finally selected.
Instead, a one-shot match tracking algorithm has been developed for GAM and used successfully on several classification problems (Williamson, 1996b; Grossberg & Williamson, 1997). This algorithm involves raising $\rho$ to the average match value of the chosen ensemble:

$$\rho = \sum_{j \in E(K)} y_j^* \exp\left( -\frac{1}{2} \sum_{i=1}^M \left( \frac{x_i - \mu_{ji}}{\sigma_{ji}} \right)^2 \right). \tag{2.7}$$
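A sketch of this one-shot computation (our illustration; names are assumptions):

```python
import numpy as np

def one_shot_match_track(G, y_star):
    """One-shot match tracking (eq 2.7): the raised vigilance is the
    chosen ensemble's activation-weighted average match value. y_star
    is already zero outside the chosen ensemble E(K), so a plain dot
    product implements the sum over j in E(K)."""
    return float(np.dot(y_star, G))
```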
In addition, all categories in the chosen ensemble are reset: $g_j = 0\ \forall\, j \in E(K)$. Equations 2.2 through 2.5 are then reevaluated. Based on the remaining nonreset categories, a new prediction $K$ in equation 2.5, and its corresponding ensemble, is chosen. This automatic search cycle continues until the correct prediction is made or until all committed categories are reset, $G_j \le \rho\ \forall\, j \in \{1, \ldots, N\}$, and an uncommitted category is chosen. Match tracking ensures that the correct prediction comes from an ensemble with a better match to the training sample than all reset ensembles. Upon presentation of the next training sample, $\rho$ is reassigned its baseline value: $\rho = \bar{\rho}$.

2.3 Learning. The $F_2$ parameters $\vec{\mu}_j$ and $\vec{\sigma}_j$ are updated to represent the sample statistics of the input using local learning rules that are related to the instar, or gated steepest descent, learning rule (Grossberg, 1976). Instar learning is an associative rule in which the postsynaptic activity $y_j^*$ modulates the rate at which the weight $w_{ji}$ tracks the presynaptic signal $f(x_i)$:

$$\epsilon \frac{d}{dt} w_{ji} = y_j^* [\, f(x_i) - w_{ji}\,]. \tag{2.8}$$
The discrete-time version of equation 2.8 is

$$w_{ji} := (1 - \epsilon^{-1} y_j^*)\, w_{ji} + \epsilon^{-1} y_j^* f(x_i). \tag{2.9}$$
GAM’s learning equations are obtained by modifying equation 2.9. The rate constant ² is replaced by nj , which is incremented to represent the cumulative chosen-ensemble activation of node j and thus the amount of training data the node has been assigned credit for: nj := nj + yj∗ .
(2.10)
Modulation of learning by $n_j$ causes the inputs to be weighted equally over time, so that their sample statistics are learned. The presynaptic term $f(x_i)$ is set to $x_i$ and $x_i^2$, respectively, for learning the first and second moments of the input. The standard deviation is then derived from these statistics:

$$\mu_{ji} := (1 - y_j^* n_j^{-1})\, \mu_{ji} + y_j^* n_j^{-1} x_i, \tag{2.11}$$
$$\nu_{ji} := (1 - y_j^* n_j^{-1})\, \nu_{ji} + y_j^* n_j^{-1} x_i^2, \tag{2.12}$$

$$\sigma_{ji} = \sqrt{\nu_{ji} - \mu_{ji}^2}. \tag{2.13}$$
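A sketch of the resulting update for a single category (our illustration of equations 2.10 through 2.13; names are our own):

```python
import numpy as np

def gam_learn(x, j, y_star_j, n, mu, nu):
    """Update category j's statistics in place (eqs 2.10-2.13).

    x: (M,) input; n: (N,) credit counts; mu, nu: (N, M) first and
    second moments. Returns the derived standard deviations.
    """
    n[j] += y_star_j                           # eq 2.10: accumulate credit
    lr = y_star_j / n[j]                       # decaying effective rate
    mu[j] = (1 - lr) * mu[j] + lr * x          # eq 2.11
    nu[j] = (1 - lr) * nu[j] + lr * x ** 2     # eq 2.12
    return np.sqrt(nu[j] - mu[j] ** 2)         # eq 2.13
```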
In Williamson (1996a, 1996b) and Grossberg and Williamson (1997), $\sigma_{ji}$, rather than $\nu_{ji}$, is incrementally updated via

$$\sigma_{ji} := \sqrt{(1 - y_j n_j^{-1})\, \sigma_{ji}^2 + y_j n_j^{-1} (x_i - \mu_{ji})^2}. \tag{2.14}$$
Unlike equations 2.12 and 2.13, equation 2.14 biases the estimate of $\sigma_{ji}$ because the incremental updates are based on current estimates of $\mu_{ji}$, which vary over time. This bias appears to be insignificant, however, as our simulations have not revealed a significant advantage for either method. Equations 2.12 and 2.13 are used here solely because they describe a simpler learning rule.

GAM is initialized with $N = 0$. When an uncommitted category is chosen, $N$ is incremented, and the new category, indexed by $N$, is initialized with $y_N^* = 1$ and $n_N = 0$, and with a permanent mapping to the correct output class. Learning then proceeds via equations 2.10 through 2.13, with one modification: a constant, $\gamma^2$, is added to $\nu_{Ni}$ in equation 2.12, which yields $\sigma_{Ni} = \gamma$ in equation 2.13. Initializing categories with this nonzero standard deviation is necessary to make equations 2.1 and 2.2 well defined. Varying $\gamma$ has a marked effect on learning: as $\gamma$ is raised, learning becomes slower, but fewer categories are created. Generally, $\gamma$ is much larger than the final standard deviation that a category converges to. Intuitively, a large $\gamma$ represents a low level of certainty for, and commitment to, the location in the input space coded by a new category. As $\gamma$ is raised, the network settles into its input space representation in a slower and more graceful way. Note that best results have generally been obtained by preprocessing the set of input vectors to have the same standard deviation in each dimension, so that $\gamma$ has the same meaning in all the dimensions.

3 Expectation-Maximization

Now we show the relationship between GAM and the EM approach to mixture modeling. EM is a general iterative optimization technique for obtaining maximum likelihood estimates of observed data that are in some way incomplete (Dempster et al., 1977). Each iteration of EM consists of an expectation step (E-step) followed by a maximization step (M-step). We start with an "incomplete-data" likelihood function of the model given the data and then posit a "complete-data" likelihood function, which is much easier to maximize but depends on unknown, missing data. The E-step finds the expectation of the complete-data likelihood function, yielding a deterministic function. The M-step then updates the system parameters to
maximize this function. Dempster et al. (1977) proved that each iteration of EM yields an increase in the incomplete-data likelihood until a local maximum is reached.

3.1 Gaussian Mixture Modeling. First, let us consider density estimation of the input space using (separable) gaussian mixtures. We model the training set, $X = \{\vec{x}_t\}_{t=1}^T$, as comprising independent, identically distributed samples generated from a mixture density, which is parameterized by $\Theta = \{\alpha_j, \vec{\theta}_j\}_{j=1}^N$. The incomplete-data density of $X$ given $\Theta$ is

$$P(X \mid \Theta) = \prod_{t=1}^T P(\vec{x}_t \mid \Theta) = \prod_{t=1}^T \sum_{j=1}^N \alpha_j P(\vec{x}_t \mid \vec{\theta}_j), \tag{3.1}$$
where $\vec{\theta}_j$ parameterizes the distribution of the $j$th component, and $\alpha_j$ represents its a priori probability, or mixture proportion: $\alpha_j \ge 0$ and $\sum_{j=1}^N \alpha_j = 1$. The incomplete-data log likelihood of $\Theta$ given $X$ is

$$l(\Theta \mid X) = \sum_{t=1}^T \log \sum_{j=1}^N \alpha_j P(\vec{x}_t \mid \vec{\theta}_j), \tag{3.2}$$
which is difficult to maximize because it includes the log of a sum. Intuitively, equation 3.2 contains a credit assignment problem, because it is not clear which component generated each data sample. To get around this problem, we introduce "missing data" in the form of a set of indicator variables, $Z = \{\vec{z}_t\}_{t=1}^T$, such that $z_{tj} = 1$ if component $j$ generated sample $t$ and $z_{tj} = 0$ otherwise. Now, using the complete data, $\{X, Z\}$, we can explicitly assign credit and thus decouple the overall maximization problem into a set of simple maximizations by defining a complete-data density function,

$$P(X, Z \mid \Theta) = \prod_{t=1}^T \prod_{j=1}^N \left[ \alpha_j P(\vec{x}_t \mid \vec{\theta}_j) \right]^{z_{tj}}, \tag{3.3}$$
from which we obtain a complete-data log likelihood,

$$l_c(\Theta \mid X, Z) = \sum_{t=1}^T \sum_{j=1}^N z_{tj} \log\left[ \alpha_j P(\vec{x}_t \mid \vec{\theta}_j) \right], \tag{3.4}$$
which does not include a log of a sum. However, note that $l_c(\Theta \mid X, Z)$ is a random variable because the missing variables $Z$ are unknown. Therefore, the EM algorithm finds the expected value of $l_c(\Theta \mid X, Z)$ in the E-step:

$$Q(\Theta \mid \Theta^{(p)}) = E[\, l_c(\Theta \mid X, Z) \mid X, \Theta^{(p)} \,], \tag{3.5}$$
where $\Theta^{(p)}$ is the set of parameters at the $p$th iteration. The E-step yields a deterministic function $Q(\Theta \mid \Theta^{(p)})$, and the M-step then maximizes this function with respect to $\Theta$ to obtain new parameters, $\Theta^{(p+1)}$:

$$\Theta^{(p+1)} = \arg\max_{\Theta} Q(\Theta \mid \Theta^{(p)}). \tag{3.6}$$
Dempster et al. (1977) proved that each iteration of EM yields an increase in the incomplete-data likelihood until a local maximum is reached:

$$l(\Theta^{(p+1)} \mid X) \ge l(\Theta^{(p)} \mid X). \tag{3.7}$$
The E-step in equation 3.5 simplifies to computing (for all $t$, $j$) $y_{tj}^{(p)} = E[z_{tj} \mid \vec{x}_t, \Theta^{(p)}]$, the probability that component $j$ generated sample $t$. For comparison with GAM, we define the density $P(\vec{x}_t \mid \vec{\theta}_j)$ as a separable gaussian distribution, yielding

$$y_{tj}^{(p)} = \frac{\alpha_j^{(p)} \left( \prod_{i=1}^M \sigma_{ji}^{(p)} \right)^{-1} \exp\left( -\frac{1}{2} \sum_{i=1}^M \left( \frac{x_{ti} - \mu_{ji}^{(p)}}{\sigma_{ji}^{(p)}} \right)^{2} \right)}{\sum_{l=1}^N \alpha_l^{(p)} \left( \prod_{i=1}^M \sigma_{li}^{(p)} \right)^{-1} \exp\left( -\frac{1}{2} \sum_{i=1}^M \left( \frac{x_{ti} - \mu_{li}^{(p)}}{\sigma_{li}^{(p)}} \right)^{2} \right)}. \tag{3.8}$$
For distributions in the exponential family, the M-step simply updates the model parameters based on their reestimated sufficient statistics, which are computed in a batch procedure that weights each sample by its probability, $y_{tj}^{(p)}$:

$$\alpha_j^{(p+1)} = \frac{1}{T} \sum_{t=1}^T y_{tj}^{(p)}, \tag{3.9}$$

$$\mu_{ji}^{(p+1)} = \frac{\sum_{t=1}^T y_{tj}^{(p)} x_{ti}}{\sum_{t=1}^T y_{tj}^{(p)}}, \tag{3.10}$$

$$\sigma_{ji}^{(p+1)} = \sqrt{\frac{\sum_{t=1}^T y_{tj}^{(p)} x_{ti}^2}{\sum_{t=1}^T y_{tj}^{(p)}} - \left( \mu_{ji}^{(p+1)} \right)^2}. \tag{3.11}$$
Note that $y_{tj}^{(p)}$ in equation 3.8 is equivalent to GAM's category activation term $y_j$ in equation 2.3 provided that vigilance is zero ($\rho = 0$). Also, the EM parameter reestimation equations 3.9 through 3.11 are essentially the same as GAM's learning equations 2.10 through 2.13, except that EM uses batch learning with a constant number of components, while GAM uses incremental learning, updating the parameters after each input sample, and recruiting new categories as needed.
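For concreteness, one batch EM iteration for the separable gaussian mixture (equations 3.8 through 3.11) might be sketched as follows; this is our illustration, with a standard log-domain normalization added for numerical stability:

```python
import numpy as np

def em_step(X, alpha, mu, sigma):
    """One batch EM iteration for a separable gaussian mixture.

    X: (T, M) data; alpha: (N,) mixture proportions;
    mu, sigma: (N, M) component means and standard deviations.
    """
    # E-step (eq 3.8): responsibilities y[t, j], normalized in log domain
    d = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]
    log_p = (np.log(alpha) - np.log(sigma).sum(axis=1))[None, :] \
            - 0.5 * (d ** 2).sum(axis=2)
    y = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    y /= y.sum(axis=1, keepdims=True)
    # M-step (eqs 3.9-3.11): reestimate from responsibility-weighted sums
    w = y.sum(axis=0)
    alpha = w / X.shape[0]                                   # eq 3.9
    mu = (y.T @ X) / w[:, None]                              # eq 3.10
    sigma = np.sqrt((y.T @ X ** 2) / w[:, None] - mu ** 2)   # eq 3.11
    return alpha, mu, sigma, y
```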
3.2 Extension to Classification. The EM mixture modeling algorithm is extended to classification problems by modeling the class label as a multinomial variable (Ghahramani & Jordan, 1994). Therefore, each mixture component consists of a gaussian distribution for the "input" features and a multinomial distribution for the "output" class labels. Thus, the classification problem is cast as a density estimation problem, in which the mixture components represent the joint density of the input-output mapping. Specifically, the joint probability that the $t$th sample has input features $\vec{x}_t$ and output class $k(t)$ is denoted by

$$P(\vec{x}_t, K = k(t) \mid \Theta) = \sum_{j=1}^N \alpha_j P(\vec{x}_t, K = k(t) \mid \vec{\theta}_j, \vec{\lambda}_j) = \sum_{j=1}^N \lambda_{jk(t)}\, \alpha_j P(\vec{x}_t \mid \vec{\theta}_j), \tag{3.12}$$
where the multinomial distribution is parameterized by $\lambda_{jk} = P(K = k \mid j; \vec{\theta}_j)$ and $\sum_k \lambda_{jk} = 1$. This classification algorithm is trained the same way as the gaussian mixture algorithm, except that equation 3.8 becomes

$$y_{tj}^{(p)} = \frac{\lambda_{jk(t)}^{(p)}\, \alpha_j^{(p)} \left( \prod_{i=1}^M \sigma_{ji}^{(p)} \right)^{-1} \exp\left( -\frac{1}{2} \sum_{i=1}^M \left( \frac{x_{ti} - \mu_{ji}^{(p)}}{\sigma_{ji}^{(p)}} \right)^{2} \right)}{\sum_{l=1}^N \lambda_{lk(t)}^{(p)}\, \alpha_l^{(p)} \left( \prod_{i=1}^M \sigma_{li}^{(p)} \right)^{-1} \exp\left( -\frac{1}{2} \sum_{i=1}^M \left( \frac{x_{ti} - \mu_{li}^{(p)}}{\sigma_{li}^{(p)}} \right)^{2} \right)}, \tag{3.13}$$
and the multinomial parameters are updated via

$$\lambda_{jk}^{(p+1)} = \frac{\sum_{t=1}^T y_{tj}^{(p)}\, \delta[k - k(t)]}{\sum_{t=1}^T y_{tj}^{(p)}}, \tag{3.14}$$
where $\delta[\omega] = 1$ if $\omega = 0$ and $\delta[\omega] = 0$ if $\omega \ne 0$. During testing, the class label is missing, and its expected value is "filled in" to determine the system prediction:

$$K = \arg\max_k \sum_{j=1}^N y_{tj} \lambda_{jk}, \tag{3.15}$$
where $y_{tj}$ is computed via equation 3.8. The parameter $\lambda_{jk}$ plays an analogous role to GAM's membership function, $j \in E(k)$. GAM's equation 2.5 performs the same computation as equation 3.15, provided that $\lambda_{jk} = 1$ if $j \in E(k)$ and $\lambda_{jk} = 0$ otherwise.
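A sketch of the supervised E-step of equation 3.13 and the prediction rule of equation 3.15 (our illustration; the unnormalized log-activations `log_p` are assumed to come from a routine implementing equation 3.8):

```python
import numpy as np

def supervised_e_step(log_p, lam, k):
    """Supervised E-step (eq 3.13): weight each component's responsibility
    by its multinomial probability for the observed class labels.
    log_p: (T, N) gaussian log-activations; lam: (N, K); k: (T,) labels."""
    logw = log_p + np.log(lam[:, k].T + 1e-300)   # add log lambda_{j,k(t)}
    y = np.exp(logw - logw.max(axis=1, keepdims=True))
    return y / y.sum(axis=1, keepdims=True)

def predict(y, lam):
    """Test-time prediction (eq 3.15): K = argmax_k sum_j y_tj * lambda_jk."""
    return np.argmax(y @ lam, axis=1)
```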
Note that if each EM component is initialized so that $\lambda_{jk} = 1$ for some $k$, then $\lambda_{jk}$ will never change according to update equation 3.14. Therefore, with this restriction, along with the restriction that vigilance is always zero ($\rho \equiv 0$), EM becomes a batch-learning version of GAM.

Thus, GAM and EM use similar learning equations and obtain a similar final representation: a gaussian mixture model, with mappings from the mixture components to class labels. However, the learning dynamics of the two algorithms are quite different due to GAM's match tracking operation. The EM algorithm is a variable metric gradient ascent algorithm, in which each step in parameter space is related to the gradient of the log likelihood of the mixture model (Xu & Jordan, 1996). With each step, the likelihood of the I/O density estimate increases until a local maximum is reached. In other words, the parameterization at each step is represented by a point in the parameter space, which has a constant dimensionality. The system is initialized at some point in this parameter space, and the point moves with each training epoch, based on the gradient of the log likelihood, until a local maximum of the likelihood is reached.

GAM's parameters are updated using an incremental approximation to the batch-learning EM algorithm. However, the most important respect in which GAM's learning procedure differs from that of EM is that the former uses predictive feedback via the match tracking process. When errors occur in the I → O mapping, match tracking reduces, by varying amounts, the number of categories that learn and thus restricts the movement of GAM's parameterization to a parameter subspace. Match tracking also causes uncommitted categories to be chosen, which expands the dimensionality of the parameter space. Newly committed categories have small a priori probabilities and large standard deviations, and thus a weak but ubiquitous influence on the gradient.

3.3 Incremental Variants of EM. One of the practical advantages of GAM over the standard EM algorithm described in sections 3.1 and 3.2 is that the former learns incrementally whereas the latter learns using a batch method. However, incremental variants of EM have also been developed. Most notably, Neal and Hinton (1993) showed that EM can incrementally update the model parameters if a separate set of sufficient statistics is stored for each input sample. That is, a separate set of the statistics computed in equations 3.9 through 3.11 and 3.14, corresponding to each input sample, can be saved. In this way, the E-step and M-step can be computed following the presentation of each sample, with the maximization affecting only the statistics associated with the current sample. Because this incremental EM recomputes expectations and maximizations following each input sample, it incorporates new information immediately into its parameter updates and thereby converges more quickly than the standard EM batch algorithm.
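The bookkeeping can be sketched as follows (our illustration, not code from Neal and Hinton, 1993; a practical version would maintain running totals instead of resumming, and all names are our own):

```python
import numpy as np

class IncrementalEMStats:
    """Per-sample sufficient statistics (after Neal & Hinton, 1993)."""
    def __init__(self, T, N, M):
        self.s0 = np.zeros((T, N))       # responsibilities y_tj
        self.s1 = np.zeros((T, N, M))    # y_tj * x_ti
        self.s2 = np.zeros((T, N, M))    # y_tj * x_ti^2

    def update(self, t, x, y_t):
        """Replace sample t's statistics, then rederive the parameters.
        We resum here only for clarity and guard against empty components."""
        self.s0[t] = y_t
        self.s1[t] = y_t[:, None] * x[None, :]
        self.s2[t] = y_t[:, None] * x[None, :] ** 2
        w = np.maximum(self.s0.sum(axis=0), 1e-12)
        alpha = w / self.s0.shape[0]                  # cf. eq 3.9
        mu = self.s1.sum(axis=0) / w[:, None]         # cf. eq 3.10
        sigma = np.sqrt(self.s2.sum(axis=0) / w[:, None] - mu ** 2)  # eq 3.11
        return alpha, mu, sigma
```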
Incremental EM illustrates the statistics that need to be maintained in order to ensure monotonic convergence of the model likelihood in an online setting. However, the need to store separate statistics for each input sample makes incremental EM extremely nonlocal and, moreover, quite impractical for use on large data sets. On the other hand, there also exist incremental approximations of EM that use local learning but do not guarantee monotonic convergence of the model likelihood. For example, Hinton and Nowlan (1990) used an incremental equation for updating variance estimates that is identical to equation 2.14, except that a constant learning rate coefficient was used rather than the decaying term, $n_j^{-1}$.

GAM differs from standard EM due to both GAM's match tracking procedure and its incremental approximation to EM's learning equations. To make comparisons of GAM and EM on real-world classification problems more informative, the effects of each of these differences should be isolated. Therefore, in the following section GAM is compared to an incremental EM approximation as well as to the standard EM algorithm. Furthermore, in order to isolate the role played by match tracking, we use GAM's learning equations as the incremental EM approximation.

4 Simulations: Comparisons of GAM and EM

4.1 Methodology. All three classification tasks are evaluated using the same procedure. The data sets are normalized to have unit variance in each dimension. EM is tested with several different $N$. For each setting of $N$, EM is trained five times with different initializations, and the five test results are averaged. EM's performance often peaks quickly and then declines due to overfitting, particularly when $N$ is large. Therefore, EM's performance is plotted following two training epochs, when its best performance is generally obtained (for large $N$), and also following equilibration. GAM uses $\bar{\rho} \approx 0$ (precisely, $\bar{\rho} = 10^{-7M}$) for all simulations, and $\gamma$ is varied. For each setting of $\gamma$, GAM is trained five times with different random orderings of the data, and the data order is also scrambled between each training epoch. GAM is trained for 100 epochs, and the test results are plotted after each epoch. As the results below illustrate, GAM often begins with relatively poor performance for the first few training epochs (particularly when $\gamma$ is large because it biases the initial category standard deviations), after which performance improves. Performance sometimes peaks and then declines due to overfitting. To convey a full picture of GAM's behavior, GAM's average performance is plotted for each of its 100 training epochs and for each setting of $\gamma$.

Several initialization procedures for EM were evaluated and found to produce widely different results. The most successful of these is reported here. As it happens, this procedure initializes EM components in essentially the same way that GAM categories are initialized, except that the former are all initialized prior to training. Specifically, each mixture component is assigned to one of $N$ randomly selected samples, denoted by $\{\vec{x}_t, k(t)\}_{t=1}^N$,
and initialized as follows: $\alpha_j = 1/N$, $\mu_{ji} = x_{ti}$, $\sigma_{ji} = \gamma$, and $\lambda_{jk} = \delta[k - k(t)]$. In addition, it is guaranteed that at least one component maps to each of the output classes. Because each EM component maps only to a single output class, EM and GAM use the same representation and thus have the same storage requirement for a given $N$. It is fortuitous that EM's best initialization procedure corresponds so closely to that of GAM. This makes the comparison between the two algorithms more revealing because it isolates the role played by match tracking. Apart from their batch–incremental-learning distinction, this EM algorithm operates identically to a GAM network that has a constant $\rho \equiv 0$. This is because EM equation 3.13, in which "feedback" from the class label directly "activates" the mixture components, is functionally equivalent to the GAM process of resetting all ensemble categories that make a wrong prediction, and finally basing learning on the chosen-ensemble activations in equation 2.6 that correspond to the correct prediction. Thus, GAM is set apart only by match tracking, which raises $\rho$ according to equation 2.7 when an incorrect prediction is made.

The contribution of match tracking can be further isolated by removing the batch–incremental-learning distinction. This is done by using a static-GAM (S-GAM) network, which is identical to GAM except that it has a fixed vigilance ($\rho \equiv \bar{\rho}$), which prevents S-GAM from committing new categories during training because the baseline vigilance, $\bar{\rho} = 10^{-7M}$, is too small to reset any of the committed categories. Therefore, S-GAM needs to be initialized with a set of $N$ categories prior to training. S-GAM is initialized the same way as EM (described above), with each of the $N$ categories assigned to one of $N$ randomly selected training samples and with an ensemble that maps to each of the output classes. If S-GAM makes an incorrect prediction during training, then the chosen ensemble is reset, but vigilance is not raised. Because vigilance is never raised, match tracking has no effect on learning other than to ensure that the correct prediction is made before learning occurs. Therefore, S-GAM's procedure is functionally identical to the EM procedure of directly activating categories based on both the input and the supervised class label. By including comparisons to S-GAM, therefore, the effects of match tracking alone (GAM versus S-GAM), the effects of incremental learning alone (S-GAM versus EM), and the effects of match tracking and incremental learning together (GAM versus EM) are revealed.

4.2 Letter Image Recognition. EM, GAM, and S-GAM are first evaluated on a letter image recognition task developed in Frey and Slate (1991). The data set, which is archived in the UCI machine learning repository (King, 1992), consists of 16-dimensional vectors derived from machine-generated images of alphabetical characters (A to Z). The classification problem is to predict the correct letter from the 16 features. Classification difficulty stems from the fact that the characters are generated from 20 different fonts, are randomly warped, and only simple features such as the
Table 1: Letter Image Classification.

Algorithm           Error Rate (%)
k-NN                4.4
HAC (a)             19.2
HAC (b)             18.4
HAC (c)             17.3
GAM-CL (γ = 2)      6.0
GAM-CL (γ = 4)      6.3
FAM (α = 1.0)       8.1
FAM (α = 0.1)       13.5

Source: Results are adapted from Frey and Slate (1991) and Williamson (1996a). Notes: k-NN = nearest-neighbor classifier (k = 1). HAC = Holland-style adaptive classifier. Results of three HAC variations are shown here (see Frey & Slate, 1991). GAM-CL = Gaussian ARTMAP with choice learning. FAM = Fuzzy ARTMAP.
total number of "on" pixels and the size and position of a box around the "on" pixels are used. The data set consists of 20,000 samples, the first 16,000 of which are used for training and the last 4000 for testing. For comparison, the results of several other classifiers are shown in Table 1 (see Frey & Slate, 1991; Williamson, 1996a).

Figure 1 shows the classification results of EM and GAM on the letter image recognition problem. EM's average error rate is plotted as a function of N (N = 100, 200, . . . , 1000). For each N, the error rate is shown after two training epochs (solid line) and after EM has equilibrated (dashed line). Note that with N < 600, EM performs better following equilibration, but with N > 600, EM performs better following two epochs, and then performance declines, presumably due to overfitting. EM's best performance (7.4 ± 0.2 percent error) is obtained with N = 1000 following two training epochs.

GAM's average error rate is plotted for γ = 1, 2, 4. Each point along one of GAM's error curves corresponds to a different training epoch, with the first epoch plotted at the left-most point of a curve. As training proceeds, the number of categories increases, and the error rate decreases. After a certain point, the error rate may increase again due to overfitting. As γ is raised, the error curves shift to the left. The initial performance becomes progressively worse, and it takes longer for GAM's performance to peak. However, fewer categories are created, and there is less degradation due to
Figure 1: The average error rates of EM and GAM on the letter image classification problem plotted as a function of the number of categories, N. EM's error rates are shown after two training epochs and after equilibration. EM is trained with N = 100, 200, . . . , 1000. GAM is trained with γ = 1, 2, 4. GAM's error rates are plotted after each training epoch. Each of GAM's error curves corresponds to a different value of γ, with the left-most point on a curve corresponding to the first epoch. From left to right along a curve, each successive point corresponds to a successive training epoch.
overfitting. GAM’s best performance (5.4 ± 0.3 percent error) is obtained with γ = 1 following six training epochs. For all settings of γ , GAM achieves lower error rates than EM for all N. As γ is raised, GAM requires more training epochs to surpass EM. However, EM does achieve reasonably low error rates with N much smaller than that created by GAM for any value of γ . Figure 1 also suggests a general pattern
Table 2: The Trade-Offs Entailed by the Choice of γ for GAM for Three Classification Problems.

a. Letter image classification

γ   Number of Epochs   Error Rate (%)   Number of Categories   Storage Rate* (%)
1   1                  7.7              812.8                  10.5
1   6                  5.4              941.2                  12.1
2   38                 6.0              807.2                  10.4
4   100                6.6              712.4                  9.2
8   100                8.8              593.8                  7.7

b. Satellite image classification

γ   Number of Epochs   Error Rate (%)   Number of Categories   Storage Rate* (%)
1   1                  12.4             125.0                  5.7
1   100                10.0             255.2                  11.7
2   100                11.2             212.0                  9.7
4   100                12.2             149.2                  6.8
8   100                13.9             93.4                   4.3

c. Spoken vowel classification

γ   Number of Epochs   Error Rate (%)   Number of Categories   Storage Rate* (%)
1   1                  49.7             72.4                   28.8
2   5                  48.7             62.2                   24.7
4   6                  44.0             53.8                   21.4
8   19                 43.9             46.6                   18.5

Notes: The lowest error rates (averaged over five runs) obtained for each of four settings of γ (γ = 1, 2, 4, 8) are shown. The error rate obtained with γ = 1 after only one training epoch is also shown to illustrate GAM's fast-learning capability. *The amount of storage used by GAM divided by the amount used by the training set.
in GAM's performance: in a plot of error rate as a function of number of categories, there exists (roughly) a U-shaped envelope. For different values of γ, GAM approaches and then follows a different portion of that envelope. As the remaining simulations illustrate, however, the relationship between γ and the placement of this envelope varies between different tasks. To illustrate further the trade-offs entailed by the choice of γ, Table 2a lists the lowest error rate obtained on this problem for different values of γ, along with the number of training epochs used to reach that point and the number of categories created. Finally, the storage rate, which is the amount of storage used by GAM relative to that used by the training set (and hence, by a nearest-neighbor classifier), is listed. For an M-dimensional input, each GAM category stores $2M + 1$ values, so the storage rate is calculated as $N(2M + 1)/(TM)$.
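As a worked check of this formula (our illustration):

```python
def storage_rate(N, M, T):
    """Storage rate N(2M + 1) / (TM)."""
    return N * (2 * M + 1) / (T * M)

# The gamma = 1 row of Table 2a: N = 941.2 categories (averaged over runs),
# M = 16 features, T = 16000 training samples -> about 0.121, i.e., 12.1%.
print(storage_rate(941.2, 16, 16000))
```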
Figure 2: Average error rates of EM, GAM, and S-GAM plotted as a function of the number of training epochs.
In Figure 2, the error rates are plotted as a function of the number of training epochs. GAM's results are shown given its best parameter setting (γ = 1), and the results of EM and S-GAM are shown given similar parameters (γ = 1, N = 1000). GAM quickly achieves its best performance at six epochs, after which performance slowly degrades due to overfitting. EM quickly achieves its best performance at two epochs, after which performance quickly degrades and then stabilizes. S-GAM's performance improves more slowly than EM's, but S-GAM eventually obtains lower error rates than EM. For other values of N, the relative performance of EM and S-GAM is similar to that shown in Figure 2. Therefore, on this problem S-GAM's incremental approximation to EM's batch learning seems to confer a small advantage. Much of this advantage is probably due to the fact that S-GAM's standard deviations decrease less than EM's due to the lingering effect of γ. Additional simulations have shown that EM suffers less from
overfitting if the shrinkage of its standard deviations is attenuated during learning, although this variation does not appear to improve its best results.

Figures 1 and 2 show that match tracking causes GAM to construct an appropriate number of categories to support the I → O mapping and to learn the mapping more accurately than EM or S-GAM. However, these results do not indicate why this is the case. There are two possible explanations for why match tracking gives GAM an advantage: (1) match tracking causes GAM to obtain a better estimate of the I/O density (i.e., to obtain a mixture model with a higher likelihood); or (2) match tracking biases GAM's density estimate so as to reduce its predictive error in the I → O mapping. The results shown in Figures 3 and 4 indicate that the latter explanation is correct.

Figure 3 shows the error rate over 25 training epochs for a single run of GAM, on which GAM created 985 categories: the error rate on the training set (top) and the test set (bottom). Figure 3 also shows the corresponding error rates for EM and S-GAM initialized with the same number (N = 985) of categories as were created by GAM. On both the training set and the test set, GAM obtains much lower error rates than EM and S-GAM. Next, Figure 4 shows the log likelihoods of the mixture models formed by EM, GAM, and S-GAM. EM obtains a much higher log likelihood than GAM and S-GAM on both the training set and the test set, whereas the log likelihoods obtained by GAM and S-GAM are similar. Therefore, GAM outperforms EM and S-GAM because match tracking biases its density estimate such that the predictive error in its I → O mapping is reduced, despite the fact that GAM obtains an estimate of the I/O density with a lower likelihood than that of EM.

4.3 Landsat Satellite Image Segmentation. EM, GAM, and S-GAM are evaluated on a real-world task: segmentation of a Landsat satellite image (Feng, Sutherland, King, Muggleton, & Henery, 1993). The data set, which is archived in the UCI machine learning repository (King, 1992), consists of multispectral values within nonoverlapping 3 × 3 pixel neighborhoods in an image obtained from a Landsat satellite multispectral scanner. At each pixel are four values, corresponding to four spectral bands. Two of these are in the visible region (corresponding approximately to green and red regions of the visible spectrum), and two are in the (near) infrared. The input space is thus 36-dimensional (9 pixels and 4 values per pixel). The spatial resolution of a pixel is about 80 m × 80 m. The center pixel of each neighborhood is associated with one of six vegetation classes: red soil, cotton crop, gray soil, damp gray soil, soil with vegetation stubble, and very damp gray soil. The data set is partitioned into 4435 samples for training and 2000 samples for testing. For comparison, the results of several other classifiers are shown in Table 3 (see Feng et al., 1993; Asfour, Carpenter, & Grossberg, 1995).

Figure 5 shows the classification results of EM and GAM on the satellite image segmentation problem. EM's error rate is plotted for N = 25, 50, . . . ,
Figure 3: Error rates of EM, GAM, and S-GAM on the training set (top) and the test set (bottom) plotted as a function of the number of training epochs.
Figure 4: Log likelihoods of EM, GAM, and S-GAM on the training set (top) and test set (bottom) plotted as a function of the number of training epochs.
Table 3: Satellite Image Classification.

Algorithm   Error Rate (%)
k-NN        10.6
FAM         11.0
RBF         12.1
Alloc80     13.2
INDCART     13.7
CART        13.8
MLP         13.9
NewID       15.0
C4.5        15.1
CN2         15.2
Quadra      15.3
SMART       15.9
LogReg      16.9
Discrim     17.1
CASTLE      19.4

Source: Results adapted from Feng et al. (1993) and Asfour et al. (1995). Note: The k-NN result reported here, which we obtained, is different from the k-NN result reported in Feng et al. (1993).
275, following two epochs and following equilibration. For N < 100, EM performs better after equilibration, whereas for N > 100, EM performs better after two epochs. EM's best results (10.6 ± 0.4 percent error) are obtained with N = 275. GAM's error rate is plotted for γ = 1, 2, 4. Once again, GAM achieves the lowest overall error rate (10.0 ± 0.4 percent error), although on this problem GAM outperforms EM only with γ = 1. It is difficult to distinguish GAM's error curves because they overlap each other so much. Unlike the letter image recognition results, none of GAM's error curves tails up due to overtraining. Therefore, all the curves appear to be on the left side of our hypothetical U-shaped envelope. Table 2b reports the lowest error rate for different values of γ, along with the relevant statistics: number of training epochs, number of categories, and storage rate.

Figure 6 plots the error rates as a function of the number of training epochs for GAM (γ = 1), EM (γ = 1, N = 250), and S-GAM (γ = 1, N = 250). GAM's performance generally increases throughout, whereas EM's performance again peaks at two epochs and then quickly degrades and stabilizes. On this problem, EM outperforms S-GAM.
Figure 5: Average error rates of EM and GAM plotted as a function of the number of categories.
4.4 Speaker-Independent Vowel Recognition. Finally, EM, GAM, and S-GAM are evaluated on another real-world task: speaker-independent vowel recognition (Deterding, 1989). The data set is archived in the CMU connectionist benchmark collection (Fahlman, 1993). The data were collected by Deterding (1989), who recorded examples of the 11 steady-state vowels of English spoken by 15 speakers. A word containing each vowel was spoken once by each of the 15 speakers (7 females and 8 males). The speech signals were low pass filtered at 4.7 kHz and then digitized to 12 bits with a 10-kHz sampling rate. Twelfth-order linear predictive analysis was carried out on six 512-sample Hamming windowed segments from the steady part of the vowel. The reflection coefficients were used to calculate 10 log area parameters, giving a 10-dimensional input space. Each speaker thus
Figure 6: Average error rates of EM, GAM, and S-GAM plotted as a function of the number of training epochs.
yielded six samples of speech from the 11 vowels, resulting in 990 samples from the 15 speakers. The data are partitioned into 528 samples for training, from four male and four female speakers, and 462 samples for testing, from the remaining four male and three female speakers. For comparison, the results of several other classifiers are shown in Table 4 (see Robinson, 1989; Fritzke, 1994; Williamson, 1996a). Figure 7 shows the classification results of EM and GAM on the vowel recognition problem. EM’s error rate is plotted for N = 15, 20, . . . , 70, following two epochs and following equilibration. For all N, EM obtains better results after two epochs than after equilibration. EM’s best results (45.4 ± 1.1 percent error) are obtained with N = 40. GAM’s error rate is plotted for γ = 2, 4, 8. By a small margin, GAM achieves the lowest overall
Table 4: Spoken Vowel Classification.

Algorithm          Error Rate (%)
k-NN               43.7
MLP                49.4
MKM                57.4
RBF                52.4
GNN                46.5
SNN                45.2
3-D GCS            36.8
5-D GCS            33.5
GAM-CL (γ = 2)     43.3
GAM-CL (γ = 4)     41.8
FAM (α = 1.0)      48.9
FAM (α = 0.1)      50.4

Source: Results adapted from Robinson (1989), Fritzke (1994), and Williamson (1996a). Notes: Gradient descent networks (using 88 internal nodes) are: MLP (multilayer perceptron), MKM (modified Kanerva model), RBF (radial basis function), GNN (gaussian node network), and SNN (square node network). Constructive networks are: GCS (growing cell structures), GAM-CL, and FAM.
error rate (43.9 ± 1.4 percent error) but outperforms EM only with γ = 4 and γ = 8. The data set is quite small, and hence both algorithms have a strong tendency to overfit the data. Table 2c reports the lowest error rate for different values of γ, along with the relevant statistics: the number of training epochs, number of categories, and storage rate.

Figure 8 plots the error rates as a function of the number of training epochs for GAM (γ = 3), EM (γ = 3, N = 40), and S-GAM (γ = 3, N = 40). GAM's performance peaks at 19 epochs and then slowly degrades. EM's performance peaks at 2 epochs and then degrades quickly and severely before stabilizing. S-GAM's error rate, which stabilizes near 50 percent, never reaches EM's best performance.

5 Conclusions

GAM learns mappings from a real-valued space of input features to a discrete-valued space of output classes by learning a gaussian mixture model of the input space as well as connections from the mixture components to the output classes. The mixture components correspond to nodes in GAM's internal category layer. GAM is a simple neural architecture that em-
Figure 7: Average error rates of EM and GAM plotted as a function of the number of categories.
ploys constructive, incremental, local learning rules. These learning rules allow GAM to create a representation of appropriate size as it is trained online. We have shown a close relationship between GAM and an EM algorithm that estimates the joint I/O density by optimizing a gaussian-multinomial mixture model. The major difference between GAM and EM is GAM’s match tracking procedure, which raises a match criterion following incorrect predictions and prevents nodes from learning if they do not satisfy the raised match criterion. Match tracking biases GAM’s estimate of the joint I/O density such that GAM’s predictive error in the I → O map is reduced. With this biased density estimate, GAM outperforms EM on classification bench-
Figure 8: Average error rates of EM, GAM, and S-GAM plotted as a function of the number of training epochs.
marks despite the fact that GAM uses a suboptimal incremental approximation to EM's batch learning rules and learns mixture models that have lower likelihoods than those optimized by EM.

Acknowledgments

This work was supported in part by the Office of Naval Research (ONR N00014-95-1-0409).

References

Asfour, Y., Carpenter, G. A., & Grossberg, S. (1995). Landsat satellite image segmentation using the fuzzy ARTMAP neural network (Tech. Rep. No. CAS/CNS
TR-95-004). Boston: Boston University.
Carpenter, G. A., & Grossberg, S. (1987). A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing, 37, 54–115.
Carpenter, G. A., Grossberg, S., & Reynolds, J. (1991). ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network. Neural Networks, 4, 565–588.
Carpenter, G. A., Grossberg, S., & Rosen, D. B. (1991). Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system. Neural Networks, 4, 759–771.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society Series B, 39, 1–38.
Deterding, D. H. (1989). Speaker normalisation for automatic speech recognition. Unpublished doctoral dissertation, University of Cambridge.
Fahlman, S. E. (1993). CMU benchmark collection for neural net learning algorithms [Machine-readable data repository]. Pittsburgh: Carnegie Mellon University, School of Computer Science.
Feng, C., Sutherland, A., King, S., Muggleton, S., & Henery, R. (1993). Symbolic classifiers: Conditions to have good accuracy performance. In Proceedings of the Fourth International Workshop on Artificial Intelligence and Statistics (pp. 371–380). Waterloo, Ontario: University of Waterloo.
Frey, P. W., & Slate, D. J. (1991). Letter recognition using Holland-style adaptive classifiers. Machine Learning, 6, 161–182.
Fritzke, B. (1994). Growing cell structures—A self-organizing network for unsupervised and supervised learning. Neural Networks, 7, 1441–1460.
Ghahramani, Z., & Jordan, M. I. (1994). Supervised learning from incomplete data via an EM approach. In J. D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in Neural Information Processing Systems, 6. San Mateo, CA: Morgan Kaufmann.
Grossberg, S. (1976). Adaptive pattern classification and universal recoding. I: Parallel development and coding of neural feature detectors. Biological Cybernetics, 23, 121–134.
Grossberg, S., & Williamson, J. (1997). A self-organizing neural system for learning to recognize textured scenes (Tech. Rep. No. CAS/CNS TR-97-001). Boston, MA: Boston University.
Hinton, G. E., & Nowlan, S. J. (1990). The bootstrap Widrow-Hoff rule as a cluster-formation algorithm. Neural Computation, 2, 355–362.
King, R. (1992). Statlog databases. UCI repository of machine learning databases [Machine-readable repository at ics.uci.edu:/pub/machine-learning-databases].
Neal, R. M., & Hinton, G. E. (1993). A new view of the EM algorithm that justifies incremental and other variants. Unpublished manuscript.
Poggio, T., & Girosi, F. (1989). A theory of networks for approximation and learning (A.I. Memo No. 1140). Cambridge, MA: MIT.
Robinson, A. J. (1989). Dynamic error propagation networks. Unpublished doctoral dissertation, Cambridge University.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In Parallel Distributed Processing (pp. 318–362). Cambridge, MA: MIT Press.
Williamson, J. R. (1996a). Gaussian ARTMAP: A neural network for fast incremental learning of noisy multidimensional maps. Neural Networks, 9, 881–897.
Williamson, J. R. (1996b). Neural networks for image processing, classification, and understanding. Unpublished doctoral dissertation, Boston University.
Xu, L., & Jordan, M. I. (1996). On convergence properties of the EM algorithm for gaussian mixtures. Neural Computation, 8, 129–151.
Received July 9, 1996; accepted January 29, 1997.
Communicated by Shimon Ullman
Shape Quantization and Recognition with Randomized Trees

Yali Amit
Department of Statistics, University of Chicago, Chicago, IL 60637, U.S.A.
Donald Geman
Department of Mathematics and Statistics, University of Massachusetts, Amherst, MA 01003, U.S.A.
We explore a new approach to shape recognition based on a virtually infinite family of binary features (queries) of the image data, designed to accommodate prior information about shape invariance and regularity. Each query corresponds to a spatial arrangement of several local topographic codes (or tags), which are in themselves too primitive and common to be informative about shape. All the discriminating power derives from relative angles and distances among the tags. The important attributes of the queries are a natural partial ordering corresponding to increasing structure and complexity; semi-invariance, meaning that most shapes of a given class will answer the same way to two queries that are successive in the ordering; and stability, since the queries are not based on distinguished points and substructures. No classifier based on the full feature set can be evaluated, and it is impossible to determine a priori which arrangements are informative. Our approach is to select informative features and build tree classifiers at the same time by inductive learning. In effect, each tree provides an approximation to the full posterior where the features chosen depend on the branch that is traversed. Due to the number and nature of the queries, standard decision tree construction based on a fixed-length feature vector is not feasible. Instead we entertain only a small random sample of queries at each node, constrain their complexity to increase with tree depth, and grow multiple trees. The terminal nodes are labeled by estimates of the corresponding posterior distribution over shape classes. An image is classified by sending it down every tree and aggregating the resulting distributions. The method is applied to classifying handwritten digits and synthetic linear and nonlinear deformations of three hundred LaTeX symbols. State-of-the-art error rates are achieved on the National Institute of Standards and Technology database of digits. The principal goal of the experiments on LaTeX symbols is to analyze invariance, generalization error and related issues, and a comparison with artificial neural networks methods is presented in this context.

Neural Computation 9, 1545–1588 (1997)
© 1997 Massachusetts Institute of Technology
Figure 1: LaTeX symbols.
1 Introduction

We explore a new approach to shape recognition based on the joint induction of shape features and tree classifiers. The data are binary images of two-dimensional shapes of varying sizes. The number of shape classes may reach into the hundreds (see Figure 1), and there may be considerable within-class variation, as with handwritten digits. The fundamental problem is how to design a practical classification algorithm that incorporates the prior knowledge that the shape classes remain invariant under certain transformations. The proposed framework is analyzed within the context of invariance, generalization error, and other methods based on inductive learning, principally artificial neural networks (ANN).

Classification is based on a large, in fact virtually infinite, family of binary
features of the image data that are constructed from local topographic codes ("tags"). A large sample of small subimages of fixed size is recursively partitioned based on individual pixel values. The tags are simply labels for the cells of each successive partition, and each pixel in the image is assigned all the labels of the subimage centered there. As a result, the tags do not involve detecting distinguished points along curves, special topological structures, or any other complex attributes whose very definition can be problematic due to locally ambiguous data. In fact, the tags are too primitive and numerous to classify the shapes.

Although the mere existence of a tag conveys very little information, one can begin discriminating among shape classes by investigating just a few spatial relationships among the tags, for example, asking whether there is a tag of one type "north" of a tag of another type. Relationships are specified by coarse constraints on the angles of the vectors connecting pairs of tags and on the relative distances among triples of tags. No absolute location or scale constraints are involved. An image may contain one or more instances of an arrangement, with significant variations in location, distances, angles, and so forth. There is one binary feature ("query") for each such spatial arrangement; the response is positive if a collection of tags consistent with the associated constraints is present anywhere in the image. Hence a query involves an extensive disjunction (ORing) operation. Two images that answer the same to every query must have very similar shapes. In fact, it is reasonable to assume that the shape class is determined by the full feature set; that is, the theoretical Bayes error rate is zero. But no classifier based on the full feature set can be evaluated, and it is impossible to determine a priori which arrangements are informative.

Our approach is to select informative features and build tree classifiers (Breiman, Friedman, Olshen, & Stone, 1984; Casey & Nagy, 1984; Quinlan, 1986) at the same time by inductive learning. In effect, each tree provides an approximation to the full posterior where the features chosen depend on the branch that is traversed. There is a natural partial ordering on the queries that results from regarding each tag arrangement as a labeled graph, with vertex labels corresponding to the tag types and edge labels to angle and distance constraints (see Figures 6 and 7). In this way, the features are ordered according to increasing structure and complexity. A related attribute is semi-invariance, which means that a large fraction of those images of a given class that answer the same way to a given query will also answer the same way to any query immediately succeeding it in the ordering. This leads to nearly invariant classification with respect to many of the transformations that preserve shape, such as scaling, translation, skew and small, nonlinear deformations of the type shown in Figure 2.

Due to the partial ordering, tree construction with an infinite-dimensional feature set is computationally efficient. During training, multiple trees (Breiman, 1994; Dietterich & Bakiri, 1995; Shlien, 1990) are grown, and a
Figure 2: (Top) Perturbed LaTeX symbols. (Bottom) Training data for one symbol.
form of randomization is used to reduce the statistical dependence from tree to tree; weak dependence is verified experimentally. Simple queries are used at the top of the trees, and the complexity of the queries increases with tree depth. In this way semi-invariance is exploited, and the space of shapes is systematically explored by calculating only a tiny fraction of the answers.

Each tree is regarded as a random variable on image space whose values are the terminal nodes. In order to recognize shapes, each terminal node of each tree is labeled by an estimate of the conditional distribution over the shape classes given that an image reaches that terminal node. The estimates are simply relative frequencies based on training data and require no optimization. A new data point is classified by dropping it down each of the trees, averaging over the resulting terminal distributions, and taking the mode of this aggregate distribution. Due to averaging and weak dependence, considerable errors in these estimates can be tolerated.
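This aggregation step is simple to state in code (our sketch; the `leaf` method returning a normalized class histogram is a hypothetical interface, not the authors' implementation):

```python
import numpy as np

def classify(x, trees):
    """Drop x down every tree, average the terminal-node class
    distributions, and return the mode of the aggregate distribution."""
    dist = np.mean([tree.leaf(x) for tree in trees], axis=0)
    return int(np.argmax(dist))
```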
Moreover, since tree-growing (i.e., question selection) and parameter estimation can be separated, the estimates can be refined indefinitely without reconstructing the trees, simply by updating a counter in each tree for each new data point. The separation between tree making and parameter estimation, and the possibility of using different training samples for each phase, opens the way to selecting the queries based on either unlabeled samples (i.e., unsupervised learning) or samples from only some of the shape classes. Both of these perform surprisingly well compared with ordinary supervised learning.

Our recognition strategy differs from those based on true invariants (algebraic, differential, etc.) or structural features (holes, endings, etc.). These methods certainly introduce prior knowledge about shape and structure, and we share that emphasis. However, invariant features usually require image normalization or boundary extraction, or both, and are generally sensitive to shape distortion and image degradation. Similarly, structural features can be difficult to express as well-defined functions of the image (as opposed to model) data. In contrast, our queries are stable and primitive, precisely because they are not truly invariant and are not based on distinguished points or substructures.

A popular approach to multiclass learning problems in pattern recognition is based on ANNs, such as feedforward, multilayer perceptrons (Dietterich & Bakiri, 1995; Fukushima & Miyake, 1982; Knerr, Personnaz, & Dreyfus, 1992; Martin & Pitman, 1991). For example, the best rates on handwritten digits are reported in LeCun et al. (1990). Classification trees and neural networks certainly have aspects in common; for example, both rely on training data, are fast online, and require little storage (see Brown, Corruble, & Pittard, 1993; Gelfand & Delp, 1991). However, our approach to invariance and generalization is, by comparison, more direct in that certain properties are acquired by hardwiring rather than depending on learning or image normalization. With ANNs, the emphasis is on parallel and local processing and a limited degree of disjunction, in large part due to assumptions regarding the operation of the visual system. However, only a limited degree of invariance can be achieved with such models. In contrast, the features here involve extensive disjunction and more global processing, thus achieving a greater degree of invariance. This comparison is pursued in section 12.

The article is organized as follows. Other approaches to invariant shape recognition are reviewed in section 2; synthesized random deformations of 293 basic LaTeX symbols (see Figures 1 and 2) provide a controlled experimental setting for an empirical analysis of invariance in a high-dimensional shape space. The basic building blocks of the algorithm, namely the tags and the tag arrangements, are described in section 3. In section 4 we address the fundamental question of how to exploit the discriminating power of the feature set; we attempt to motivate the use of multiple decision trees in the context of the ideal Bayes classifier and the trade-off between approximation error and estimation error. In section 5 we explain the roles
of the partial ordering and randomization for both supervised and unsupervised tree construction; we also discuss and quantify semi-invariance. Multiple decision trees and the full classification algorithm are presented in section 6, together with an analysis of the dependence on the training set. In section 7 we calculate some rough performance bounds, for both individual and multiple trees. Generalization experiments, where the training and test samples represent different populations, are presented in section 8, and incremental learning is addressed in section 9. Fast indexing, another possible role for shape quantization, is considered in section 10. We then apply the method in section 11 to a real problem—classifying handwritten digits—using the National Institute of Standards and Technology (NIST) database for training and testing, achieving state-of-the-art error rates. In section 12 we develop the comparison with ANNs in terms of invariance, generalization error, and connections to observed functions in the visual system. We conclude in section 13 by assessing extensions to other visual recognition problems.
2 Invariant Recognition Invariance is perhaps the fundamental issue in shape recognition, at least for isolated shapes. Some basic approaches are reviewed within the following framework. Let X denote a space of digital images, and let C denote a set of shape classes. Let us assume that each image x ∈ X has a true class label Y(x) ∈ C = {1, 2, . . . , K}. Of course, we cannot directly observe Y. In addition, there is a probability distribution P on X. Our goal is to construct a classifier Ŷ: X → C such that P(Ŷ ≠ Y) is small. In the literature on statistical pattern recognition, it is common to address some variation by preprocessing or normalization. Given x, and before estimating the shape class, one estimates a transformation ψ such that ψ(x) represents a standardized image. Finding ψ involves a sequence of procedures that brings all images to the same size and then corrects for translation, slant, and rotation by one of a variety of methods. There may also be some morphological operations to standardize stroke thickness (Bottou et al., 1994; Hastie, Buja, & Tibshirani, 1995). The resulting image is then classified by one of the standard procedures (discriminant analysis, multilayer neural network, nearest neighbors, etc.), in some cases essentially ignoring the global spatial properties of shape classes. Difficulties in generalization are often encountered because the normalization is not robust or does not accommodate nonlinear deformations. This deficiency can be ameliorated only with very large training sets (see the discussions in Hussain & Kabuka, 1994; Raudys & Jain, 1991; Simard, LeCun, & Denker, 1994; Werbos, 1991, in the context of neural networks). Still, it is clear that robust normalization
methods which reduce variability and yet preserve information can lead to improved performance of any classifier; we shall see an example of this in regard to slant correction for handwritten digits. Template matching is another approach. One estimates a transformation from x for each of the prototypes in the library. Classification is then based on the collection of estimated transformations. This requires explicit modeling of the prototypes and extensive computation at the estimation stage (usually involving relaxation methods) and appears impractical with large numbers of shape classes. A third approach, closer in spirit to ours, is to search for invariant functions Φ(x), meaning that P(Φ(x) = φc |Y = c) = 1 for some constants φc , c = 1, . . . , K. The discriminating power of Φ depends on the extent to which the values φc are distinct. Many invariants for planar objects (based on single views) and nonplanar objects (based on multiple views) have been discovered and proposed for recognition (see Reiss, 1993, and the references therein). Some invariants are based on Fourier descriptors and image moments; for example, the magnitude of Zernike moments (Khotanzad & Lu, 1991) is invariant to rotation. Most invariants require computing tangents from estimates of the shape boundaries (Forsyth et al., 1991; Sabourin & Mitiche, 1992). Examples of such invariants include inflexions and discontinuities in curvature. In general, the mathematical level of this work is advanced, borrowing ideas from projective, algebraic, and differential geometry (Mundy & Zisserman, 1992). Other successful treatments of invariance include geometric hashing (Lamdan, Schwartz, & Wolfson, 1988) and nearest-neighbor classifiers based on affine invariant metrics (Simard et al., 1994). Similarly, structural features involving topological shape attributes (such as junctions, endings, and loops) or distinguished boundary points (such as points of high curvature) have some invariance properties, and many authors (e.g., Lee, Srihari, & Gaborski, 1991) report much better results with such features than with standardized raw data. In our view, true invariant features of the form above might not be sufficiently stable for intensity-based recognition because the data structures are often too crude to analyze with continuum-based methods. In particular, such features are not invariant to nonlinear deformations and depend heavily on preprocessing steps such as normalization and boundary extraction. Unless the data are of very high quality, these steps may result in a lack of robustness to distortions of the shapes, due, for example, to digitization, noise, blur, and other degrading factors (see the discussion in Reiss, 1993). Structural features are difficult to model and to extract from the data in a stable fashion. Indeed, it may be more difficult to recognize a “hole” than to recognize an “8.” (Similar doubts about hand-crafted features and distinguished points are expressed in Jung & Nagy, 1995.) In addition, if one could recognize the components of objects without recognizing the objects themselves, then the choice of classifier would likely be secondary.
Our features are not invariant. However, they are semi-invariant in an appropriate sense and might be regarded as coarse substitutes for some of the true geometric, point-based invariants in the literature already cited. In this sense, we share at least the outlook expressed in recent, model-based work on quasi-invariants (Binford & Levitt, 1993; Burns, Weiss, & Riseman, 1993), where strict invariance is relaxed; however, the functionals we compute are entirely different. The invariance properties of the queries are related to the partial ordering and the manner in which they are selected during recursive partitioning. Roughly speaking, the complexity of the queries is proportional to the depth in the tree, that is, to the number of questions asked. For elementary queries at the bottom of the ordering, we would expect that for each class c, either P(Q = 1|Y = c) ≫ 0.5 or P(Q = 0|Y = c) ≫ 0.5; however, this collection of elementary queries would have low discriminatory power. (These statements will be amplified later on.) Queries higher up in the ordering have much higher discriminatory power and maintain semi-invariance relative to subpopulations determined by the answers to queries preceding them in the ordering. Thus if Q̃ is a query immediately preceding Q in the ordering, then P(Q = 1|Q̃ = 1, Y = c) ≫ 0.5 or P(Q = 0|Q̃ = 1, Y = c) ≫ 0.5 for each class c. This will be defined more precisely in section 5 and verified empirically. Experiments on invariant recognition are scattered throughout the article. Some involve real data: handwritten digits. Most employ synthetic data, in which case the data model involves a prototype x∗c for each shape class c ∈ C (see Figure 1) together with a space Θ of image-to-image transformations. We assume that the class label of the prototype is preserved under all transformations in Θ, namely, c = Y(θ(x∗c)) for all θ ∈ Θ, and that no two distinct prototypes can be transformed to the same image. We use “transformations” in a rather broad sense, referring to both affine maps, which alter the pose of the shapes, and to nonlinear maps, which deform the shapes. (We shall use degradation for noise, blur, etc.) Basically, Θ consists of perturbations of the identity. In particular, we are not considering the entire pose space but rather only perturbations of a reference pose, corresponding to the identity. The probability measure P on X is derived from a probability measure ν(dθ) on the space of transformations as follows: for any D ⊂ X,

P(D) = \sum_{c} P(D \mid Y = c)\,\pi(c) = \sum_{c} \nu\{\theta : \theta(x^{*}_{c}) \in D\}\,\pi(c),
where π is a prior distribution on C, which we will always take to be uniform. Thus, P is concentrated on the space of images {θ(x∗c)}θ,c . Needless to say, the situation is more complex in many actual visual recognition problems, for example, in unrestricted 3D object recognition under standard projection models. Still, invariance is already challenging in the above context. It is important to emphasize that this model is not used explicitly in the
classification algorithm. Knowledge of the prototypes is not assumed, nor is θ estimated as in template approaches. The purpose of the model is to generate samples for training and testing. The images in Figure 2 were made by random sampling from a particular distribution ν on a space Θ containing both linear (scale, rotation, skew) and nonlinear transformations. Specifically, the log scale is drawn uniformly between −1/6 and 1/6; the rotation angle is drawn uniformly from ±10 degrees; and the log ratio of the axes in the skew is drawn uniformly from −1/3 to +1/3. The nonlinear part is a smooth, random deformation field constructed by creating independent, random horizontal and vertical displacements, each of which is generated by random trigonometric series with only low-frequency terms and gaussian coefficients. All images are 32 × 32, but the actual size of the object in the image varies significantly, both from symbol to symbol and within symbol classes due to random scaling. 3 Shape Queries We first illustrate a shape query in the context of curves and tangents in an idealized, continuum setting. The example is purely motivational. In practice we are not dealing with one-dimensional curves in the continuum but rather with a finite pixel lattice, strokes of variable width, corrupted data, and so forth. The types of queries we use are described in sections 3.1 and 3.2. Observe the three versions of the digit “3” in Figure 3 (left); they are obtained by spline interpolation of the center points of the segments shown in Figure 3 (middle) in such a way that the segments represent the direction of the tangent at those points. All three segment arrangements satisfy the geometric relations indicated in Figure 3 (right): there is a vertical tangent northeast of a horizontal tangent, which is south of another horizontal tangent, and so forth. The directional relations between the points are satisfied to within rather coarse tolerances. Not all curves of a “3” contain five points whose tangents satisfy all these relations. Put differently, some “3”s answer “no” to the query, “Is there a vertical tangent northeast of a . . . ?” However, rather substantial transformations of each of the versions below will answer “yes.” Moreover, among “3”s that answer “no,” it is possible to choose a small number of alternative arrangements in such a way that the entire space of “3”s is covered. 3.1 Tags. We employ primitive local features called tags, which provide a coarse description of the local topography of the intensity surface in the neighborhood of a pixel. Instead of trying to manually characterize local configurations of interest—for example, trying to define local operators to identify gradients in the various directions—we adopt an information-theoretic approach and “code” a microworld of subimages by a process very similar to tree-structured vector quantization. In this way we sidestep the
issues of boundary detection and gradients in the discrete world and allow for other forms of local topographies. This approach has been extended to gray-level images in Jedynak and Fleuret (1996).

Figure 3: (Left) Three curves corresponding to the digit “3.” (Middle) Three tangent configurations determining these shapes via spline interpolation. (Right) Graphical description of relations between locations of derivatives consistent with all three configurations.

The basic idea is to reassign symbolic values to each pixel based on examining a few pixels in its immediate vicinity; the symbolic values are the tag types and represent labels for the local topography. The neighborhood we choose is the 4 × 4 subimage containing the pixel at the upper left corner. We cluster the subimages with binary splits corresponding to adaptively choosing the five most informative locations of the sixteen sites of the subimage. Note that the size of the subimages used must depend on the resolution at which the shapes are imaged. The 4 × 4 subimages are appropriate for a certain range of resolutions—roughly 10 × 10 through 70 × 70 in our experience. The size must be adjusted for higher-resolution data, and the ultimate performance of the classifier will suffer if the resolution of the test data is not approximately the same as that of the training data. The best approach would be one that is multiresolution, something we have not done in this article (except for some preliminary experiments in section 11) but which is carried out in Jedynak and Fleuret (1996) in the context of gray-level images and 3D objects. A large sample of 4 × 4 subimages is randomly extracted from the training data. The corresponding shape classes are irrelevant and are not retained. The reason is that the purpose of the sample is to provide a representative database of microimages and to discover the biases at that scale; the statistics of that world is largely independent of global image attributes, such as symbolic labels. This family of subimages is then recursively partitioned with binary splits. There are 4 × 4 = 16 possible questions: “Is site (i, j) black?” for i, j = 1, 2, 3, 4. The criterion for choosing a question at a node
t is dividing the subimages Ut at the node as equally as possible into two groups. This corresponds to reducing as much as possible the entropy of the empirical distribution on the 2^16 possible binary configurations for the sample Ut. There is a tag type for each node of the resulting tree, except for the root. Thus, if three questions are asked, there are 2 + 4 + 8 = 14 tags, and if five questions are asked, there are 62 tags. Depth 5 tags correspond to a more detailed description of the local topography than depth 3 tags, although eleven of the sixteen pixels still remain unexamined. Observe also that tags corresponding to internal nodes of the tree represent unions of those associated with deeper ones. At each pixel, we assign all the tags encountered by the corresponding 4 × 4 subimage as it proceeds down the tree. Unless otherwise stated, all experiments below use 62 tags. At the first level, every site splits the population with nearly the same frequencies. However, at the second level, some sites are more informative than others, and by levels 4 and 5, there is usually one site that partitions the remaining subpopulation much better than all others. In this way, the world of microimages is efficiently coded. For efficiency, the population is restricted to subimages containing at least one black and one white site within the center four, which then obviously concentrates the processing in the neighborhood of boundaries. In the gray-level context it is also useful to consider more general tags, allowing, for example, for variations on the concept of local homogeneity. The first three levels of the tree are shown in Figure 4, together with the most common configuration found at each of the eight level 3 nodes. Notice that the level 1 tag alone (i.e., the first bit in the code) determines the original image, so this “transform” is invertible and redundant. In Figure 5 we show all the two-bit tags and three-bit tags appearing in an image.

Figure 4: First three tag levels with most common configurations.

Figure 5: (Top) All instances of the four two-bit tags. (Bottom) All instances of the eight three-bit tags.

3.2 Tag Arrangements. The queries involve geometric arrangements of the tags. A query QA asks whether a specific geometric arrangement A of tags of certain types is present (QA(x) = 1) or is not present (QA(x) = 0) in the image. Figure 6 shows several LATEX symbols that contain a specific geometric arrangement of tags: tag 16 northeast of tag 53, which is northwest of tag 19. Notice that there are no fixed locations in this description, whereas the tags in any specific image do carry locations. “Present in the image” means there is at least one set of tags in x of the prescribed types whose locations satisfy the indicated relationships. In Figure 6, notice, for example, how different instances of the digit “0” still contain the arrangement. Tag 16 is a depth 4 tag; the corresponding four questions in the subimage are indicated by the following mask:

n n n 1
0 n n n
n 0 0 n
n n n n
where 0 corresponds to background, 1 to object, and n to “not asked.” These neighborhoods are loosely described by “background to lower left, object to upper right.” Similar interpretations can be made for tags 53 and 19. Restricted to the first ten symbol classes (the ten digits), the conditional distribution P(Y = c|QA = 1) on classes given the existence of this arrangement in the image is given in Table 1. Already this simple query contains significant information about shape.
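To make the tag-coding step concrete, here is a minimal sketch of the quantization just described, assuming a sample of binary 4 × 4 subimages stored as a numpy array; the function and variable names are ours, not the authors'. Each node asks one of the sixteen site questions, chosen to split the current subimage population as evenly as possible, and every node below the root defines a tag.

```python
# A sketch of the tag-tree construction: recursively split a sample of
# binary 4x4 subimages, at each node asking the site question that divides
# the current population as evenly as possible. Every node except the root
# defines a tag; five levels give 2 + 4 + 8 + 16 + 32 = 62 tags (assuming
# enough data reaches each node). Names and data layout are our own.
import numpy as np

def grow_tag_tree(subimages, depth=5, quests=(), tags=None):
    """subimages: (n, 4, 4) binary array; returns a list of tags, each a
    tuple of (row, col, value) questions leading to the corresponding node."""
    if tags is None:
        tags = []
    if depth == 0 or len(subimages) == 0:
        return tags
    asked = {(i, j) for i, j, _ in quests}
    best, best_gap = None, 2.0
    for i in range(4):
        for j in range(4):
            if (i, j) in asked:
                continue
            gap = abs(subimages[:, i, j].mean() - 0.5)  # most balanced split
            if gap < best_gap:
                best, best_gap = (i, j), gap
    i, j = best
    for v in (0, 1):
        child = quests + ((i, j, v),)
        tags.append(child)                              # one tag per node
        grow_tag_tree(subimages[subimages[:, i, j] == v], depth - 1, child, tags)
    return tags

def pixel_tags(image, x, y, tags):
    """All tags whose questions are satisfied by the 4x4 window at (x, y)."""
    win = image[x:x + 4, y:y + 4]
    if win.shape != (4, 4):
        return []
    return [t for t, qs in enumerate(tags)
            if all(win[i, j] == v for i, j, v in qs)]
```

Restricting the input sample to subimages with at least one black and one white site among the center four, as described above, concentrates the resulting tags near boundaries.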
Figure 6: (Top) Instances of a geometric arrangement in several “0”s. (Bottom) Several instances of the geometric arrangement in one “6.”

Table 1: Conditional Distribution on Digit Classes Given the Arrangement of Figure 6.

Class        0     1      2     3     4     5     6     7     8     9
Probability  .13   .003   .03   .08   .04   .07   .23   0     .26   .16
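The following sketch shows how a query QA might be evaluated on an image once its tags and their locations have been extracted; it anticipates the eight compass relations defined in the next paragraph. The data structures (a map from tag type to locations, an arrangement given as vertices plus labeled edges) are our own illustrative choices, not the authors' implementation.

```python
# Evaluating a query Q_A: is an arrangement A of tags present in the image?
# "instances" maps each tag type to the list of (x, y) pixel locations
# carrying it; an arrangement is a labeled graph of tag types and relations.
from itertools import product
import numpy as np

def relation(u, v, k):
    """True if the angle of u - v is within pi/4 of k*pi/4 (k = 1, ..., 8)."""
    ang = np.arctan2(u[1] - v[1], u[0] - v[0]) % (2 * np.pi)
    return abs((ang - k * np.pi / 4 + np.pi) % (2 * np.pi) - np.pi) < np.pi / 4

def query(instances, vertices, edges):
    """vertices: list of tag types; edges: list of (a, b, k) relations.
    Returns 1 if some choice of tag locations satisfies every relation."""
    candidates = [instances.get(t, []) for t in vertices]
    for locs in product(*candidates):
        if all(relation(locs[a], locs[b], k) for a, b, k in edges):
            return 1
    return 0
```

The brute-force scan over all combinations of locations is exponential in the number of vertices; as section 5.3 explains, the actual algorithm sidesteps this by growing arrangements one minimal extension at a time and carrying the pending instances along with each data point.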
To complete the construction of the feature set, we need to define a set of allowable relationships among image locations. These are binary functions of pairs, triples, and so forth of planar points, which depend on only their relative coordinates. An arrangement A is then a labeled (hyper)graph. Each vertex is labeled with a type of tag, and each edge (or superedge) is labeled with a type of relation. The graph in Figure 6, for example, has only binary relations. In fact, all the experiments on the LATEX symbols are restricted to this setting. The experiments on handwritten digits also use a ternary relationship of the metric type. There are eight binary relations between any two locations u and v corresponding to the eight compass headings (north, northeast, east, etc.). For example, u is “north” of v if the angle of the vector u − v is between π/4 and 3π/4. More generally, the two points satisfy relation k (k = 1, . . . , 8) if the
angle of the vector u − v is within π/4 of kπ/4. Let A denote the set of all possible arrangements, and let Q = {QA : A ∈ A}, our feature set. There are many other binary and ternary relations that have discriminating power. For example, there is an entire family of “metric” relationships that are, like the directional relationships above, completely scale and translation invariant. Given points u, v, w, z, one example of a ternary relation is ‖u − v‖ < ‖u − w‖, which inquires whether u is closer to v than to w. With four points we might ask if ‖u − v‖ < ‖w − z‖. 4 The Posterior Distribution and Tree-Based Approximations For simplicity, and in order to facilitate comparisons with other methods, we restrict ourselves to queries QA of bounded complexity. For example, consider arrangements A with at most twenty tags and twenty relations; this limit is never exceeded in any of the experiments. Enumerating these arrangements in some fashion, let Q = (Q1 , . . . , QM ) be the corresponding feature vector assuming values in {0, 1}^M. Each image x then generates a bit string of length M, which contains all the information available for estimating Y(x). Of course, M is enormous. Nonetheless, it is not evident how we might determine a priori which features are informative and thereby reduce M to manageable size. Evidently these bit strings partition X. Two images that generate the same bit string or “atom” need not be identical. Indeed, due to the invariance properties of the queries, the two corresponding symbols may vary considerably in scale, location, and skew and are not even affine equivalent in general. Nonetheless, two such images will have very similar shapes. As a result, it is reasonable to expect that H(Y|Q) (the conditional entropy of Y given Q) is very small, in which case we can in principle obtain high classification rates using Q. To simplify things further, at least conceptually, we will assume that H(Y|Q) = 0; this is not an unreasonable assumption for large M. An equivalent assumption is that the shape class Y is determined by Q and the error rate of the Bayes classifier ŶB = arg maxc P(Y = c|Q)
is zero. Needless to say, perfect classification cannot actually be realized. Due to the size of M, the full posterior cannot be computed, and the classifier Yˆ B is only hypothetical. Suppose we examine some of the features by constructing a single binary tree T based on entropy-driven recursive partitioning and randomization and that T is uniformly of depth D so that D of the M features are examined for each image x. (The exact procedure is described in the following section; the details are not important for the moment.) Suffice it to say that a feature Qm is assigned to each interior node of T and the set of features Qπ1 , . . . , QπD
along each branch from root to leaf is chosen sequentially and based on the current information content given the observed values of the previously chosen features. The classifier based on T is then

\hat{Y}_T = \arg\max_c P(Y = c \mid T) = \arg\max_c P(Y = c \mid Q_{\pi_1}, \ldots, Q_{\pi_D}).

Since D ≪ M, ŶT is not the Bayes classifier. However, even for values of D on the order of hundreds or thousands, we can expect that P(Y = c|T) ≈ P(Y = c|Q). We shall refer to the difference between these distributions (in some appropriate norm) as the approximation error (AE). This is one of the sources of error in replacing Q by a subset of features. Of course, we cannot actually compute a tree of such depth since at least several hundred features are needed to achieve good classification; we shall return to this point shortly. Regardless of the depth D, in reality we do not actually know the posterior distribution P(Y = c|T). Rather, it must be estimated from a training set L = {(x1 , Y(x1 )), . . . , (xm , Y(xm ))}, where x1 , . . . , xm is a random sample from P. (The training set is also used to estimate the entropy values during recursive partitioning.) Let P̂L(Y = c|T) denote the estimated distribution, obtained by simply counting the number of training images of each class c that land at each terminal node of T. If L is sufficiently large, then P̂L(Y = c|T) ≈ P(Y = c|T). We call the difference estimation error (EE), which of course vanishes only as |L| → ∞. The purpose of multiple trees (see section 6) is to solve the approximation error problem and the estimation error problem at the same time. Even if we could compute and store a very deep tree, there would still be too many probabilities (specifically K · 2^D) to estimate with a practical training set L. Our approach is to build multiple trees T1 , . . . , TN of modest depth. In this way tree construction is practical and P̂L(Y = c|Tn) ≈ P(Y = c|Tn), n = 1, . . . , N. Moreover, the total number of features examined is sufficiently large to control the approximation error. The classifier we propose is

\hat{Y}_S = \arg\max_c \frac{1}{N} \sum_{n=1}^{N} \hat{P}_L(Y = c \mid T_n).
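A minimal sketch of this classifier, under assumptions of our own about how trees are stored (a routing function plus per-leaf class distributions):

```python
# Drop a data point down each tree, average the estimated terminal
# distributions, and take the mode of the aggregate.
import numpy as np

def classify(x, trees, K):
    """trees: list of (route, leaf_dist) pairs, where route(x) returns a
    terminal-node index and leaf_dist[leaf] is an estimated distribution
    over the K shape classes at that leaf."""
    mu_bar = np.zeros(K)
    for route, leaf_dist in trees:
        mu_bar += leaf_dist[route(x)]
    mu_bar /= len(trees)
    return int(np.argmax(mu_bar))   # arg max_c of the aggregate distribution
```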
An explanation for this particular way of aggregating the information from multiple trees is provided in section 6.1. In principle, a better way to combine the trees would be to classify based on the mode of P(Y = c|T1 , . . . , TN ).
However, this is impractical for reasonably sized training sets for the same reasons that a single deep tree is impractical (see section 6.4 for some numerical experiments). The trade-off between AE and EE is related to the trade-off between bias and variance, which is discussed in section 6.2, and the relative error rates among all these classifiers is analyzed in more detail in section 6.4 in the context of parameter estimation. 5 Tree-Structured Shape Quantization Standard decision tree construction (Breiman et al., 1984; Quinlan, 1986) is based on a scalar-valued feature or attribute vector z = (z1 , . . . , zk ) where k is generally about 10 − 100. Of course, in pattern recognition, the raw data are images, and finding the right attributes is widely regarded as the main issue. Standard splitting rules are based on functions of this vector, usually involving a single component zj (e.g., applying a threshold) but occasionally involving multivariate functions or “transgenerated features” (Friedman, 1973; Gelfand & Delp, 1991; Guo & Gelfand, 1992; Sethi, 1991). In our case, the queries {QA } are the candidates for splitting rules. We now describe the manner in which the queries are used to construct a tree. 5.1 Exploring Shape Space. Since the set of queries Q is indexed by graphs, there is a natural partial ordering under which a graph precedes any of its extensions. The partial ordering corresponds to a hierarchy of structure. Small arrangements with few tags produce coarse splits of shape space. As the arrangements increase in size (say, the number of tags plus relations), they contain more and more information about the images that contain them. However, fewer and fewer images contain such an instance— that is, P(Q = 1) ≈ 0 for a query Q based on a complex arrangement. One straightforward way to exploit this hierarchy is to build a decision tree using the collection Q as candidates for splitting rules, with the complexity of the queries increasing with tree depth (distance from the root). In order to begin to make this computationally feasible, we define a minimal extension of an arrangement A to mean the addition of exactly one relation between existing tags, or the addition of exactly one tag and one relation binding the new tag to an existing one. By a binary arrangement, we mean one with two tags and one relation; the collection of associated queries is denoted B ⊂ Q. Now build a tree as follows. At the root, search through B and choose the query Q ∈ B, which leads to the greatest reduction in the mean uncertainty about Y given Q. This is the standard criterion for recursive partitioning in machine learning and other fields. Denote the chosen query QA0 . Those data points for which QA0 = 0 are in the “no” child node, and we search again through B. Those data points for which QA0 = 1 are in the “yes” child node and have one or more instances of A0 , the “pending arrangement.” Now search among minimal extensions of A0 and choose the one that leads
to the greatest reduction in uncertainty about Y given the existence of A0.

Figure 7: Examples of node splitting. All six images lie in the same node and have a pending arrangement with three vertices. The “0”s are separated from the “3”s and “5”s by asking for the presence of a new tag, and then the “3”s and “5”s are separated by asking a question about the relative angle between two existing vertices. The particular tags associated with these vertices are not indicated.

The digits in Figure 6 were taken from a depth 2 (“yes”/“yes”) node of such a tree. We measure uncertainty by Shannon entropy. The expected uncertainty in Y given a random variable Z is

H(Y \mid Z) = -\sum_{z} P(Z = z) \sum_{c} P(Y = c \mid Z = z) \log_2 P(Y = c \mid Z = z).
Define H(Y|Z, B) for an event B ⊂ X in the same way, except that P is replaced by the conditional probability measure P(·|B). Given we are at a node t of depth k > 0 in the tree, let the “history” be Bt = {QA0 = q0 , . . . , QAk−1 = qk−1 }, meaning that QA1 is the second query chosen given that q0 ∈ {0, 1} is the answer to the first; QA2 is the third query chosen given the answers to the first two are q0 and q1 ; and so forth. The pending arrangement, say Aj , is the deepest arrangement along the path from root to t for which qj = 1, so that qi = 0, i = j + 1, . . . , k − 1. Then QAk minimizes H(Y|QA , Bt ) among minimal extensions of Aj . An example of node splitting is shown in Figure 7. Continue in this fashion until a stopping criterion is satisfied, for example, the number of data points at every terminal node falls below a threshold. Each tree may then be regarded as a discrete random variable T on X; each terminal node corresponds to a different value of T. In practice, we cannot compute these expected entropies; we can only estimate them from a training set L. Then P is replaced by the empirical distribution P̂L on {x1 , . . . , xm } in computing the entropy values. 5.2 Randomization. Despite the growth restrictions, the procedure above is still not practical; the number of binary arrangements is very large, and
there are too many minimal extensions of more complex arrangements. In addition, if more than one tree is made, even with a fresh sample of data points per tree, there might be very little difference among the trees. The solution is simple: instead of searching among all the admissible queries at each node, we restrict the search to a small random subset. 5.3 A Structural Description. Notice that only connected arrangements can be selected, meaning every two tags are neighbors (participate in a relation) or are connected by a sequence of neighboring tags. As a result, training is more complex than standard recursive partitioning. At each node, a list must be assigned to each data point consisting of all instances of the pending arrangement, including the coordinates of each participating tag. If a data point passes to the “yes” child, then only those instances that can be incremented are maintained and updated; the rest are deleted. The more data points there are, the more bookkeeping. A far simpler possibility is sampling exclusively from B, the binary arrangements (i.e., two vertices and one relation) listed in some order. In fact, we can imagine evaluating all the queries in B for each data point. This vector could then be used with a variety of standard classifiers, including decision trees built in the standard fashion. In the latter case, the pending arrangements are unions of binary graphs, each one disconnected from all the others. This approach is much simpler and faster to implement and preserves the semi-invariance. However, the price is dear: losing the common, global characterization of shape in terms of a large, connected graph. Here we are referring to the pending arrangements at the terminal nodes (except at the end of the all “no” branch); by definition, this graph is found in all the shapes at the node. This is what we mean by a structural description. The difference between one connected graph and a union of binary graphs can be illustrated as follows. Relative to the entire population X, a random selection in B is quite likely to carry some information about Y, measured, say, by the mutual information I(Y, Q) = H(Y)−H(Y|Q). On the other hand, a random choice among all queries with, say, five tags will most likely have no information because nearly all data points x will answer “no.” In other words, it makes sense at least to start with binary arrangements. Assume, however, that we are restricted to a subset {QA = 1} ⊂ X determined by an arrangement A of moderate complexity. (In general, the subsets at the nodes are determined by the “no” answers as well as the “yes” answers, but the situation is virtually the same.) On this small subset, a randomly sampled binary arrangement will be less likely to yield a significant drop in uncertainty than a randomly sampled query among minimal extensions of A. These observations have been verified experimentally, and we omit the details. This distinction becomes more pronounced if the images are noisy (see the top panel of Figure 8) or contain structured backgrounds (see the bottom panel of Figure 11) because there will be many false positives for ar-
rangements with only two tags. However, the chance of finding complex arrangements utilizing noise tags or background tags is much smaller. Put differently, a structural description is more robust than a list of attributes. The situation is the same for more complex shapes; see, for example, the middle panel of Figure 8, where the shapes were created by duplicating each symbol four times with some shifts. Again, a random choice among minimal extensions carries much more information than a random choice in B.

Figure 8: Samples from data sets. (Top) Spot noise. (Middle) Duplication. (Bottom) Severe perturbations.

5.4 Semi-Invariance. Another benefit of the structural description is what we refer to as semi-invariance. Given a node t, let Bt be the history and Aj the pending arrangement. For any minimal extension A of Aj , and for any shape class c, we want

\max\bigl(P(Q_A = 0 \mid Y = c, B_t),\; P(Q_A = 1 \mid Y = c, B_t)\bigr) \gg .5.

In other words, most of the images in Bt of the same class should answer the same way to query QA . In terms of entropy, semi-invariance is equivalent to relatively small values of H(QA |Y = c, Bt ) for all c. Averaging over classes, this in turn is equivalent to small values of H(QA |Y, Bt ) at each node t. In order to verify this property we created ten trees of depth 5 using the data set described in section 2 with thirty-two samples per symbol class.
At each nonterminal node t of each tree, the average value of H(QA |Y, Bt ) was calculated over twenty randomly sampled minimal extensions. Over all nodes, the mean entropy was m = .33; this is the entropy of the distribution (.06, .94). The standard deviation over all nodes and queries was σ = .08. Moreover, there was a clear decrease in average entropy (i.e., increase in the degree of invariance) as the depth of the node increases. We also estimated the entropy for more severe deformations. On a more variable data set with approximately double the range of rotations, log scale, and log skew (relative to the values in section 2), and the same nonlinear deformations, the corresponding numbers were m = .38, σ = .09. Finally, for rotations sampled from (−30, 30) degrees, log scale from (−.5, .5), log skew from (−1, 1), and doubling the variance of the random nonlinear deformation (see the bottom panel of Figure 8), the corresponding mean entropy was m = .44 (σ = .11), corresponding to a (.1, .9) split. In other words, on average, 90 percent of the images in the same shape class still answer the same way to a new query. Notice that the invariance property is independent of the discriminating power of the query, that is, the extent to which the distribution P(Y = c|Bt , QA ) is more peaked than the distribution P(Y = c|Bt ). Due to the symmetry of mutual information, H(Y|Bt ) − H(Y|QA , Bt ) = H(QA |Bt ) − H(QA |Y, Bt ). This means that if we seek a question that maximizes the reduction in the conditional entropy of Y and assume the second term on the right is small due to semi-invariance, then we need only find a query that maximizes H(QA |Bt ). This, however, does not involve the class variable and hence points to the possibility of unsupervised learning, which is discussed in the following section. 5.5 Unsupervised Learning. We outline two ways to construct trees in an unsupervised mode, that is, without using the class labels Y(xj ) of the samples xj in L. Clearly each query Qm decreases uncertainty about Q, and hence about Y. Indeed, H(Y|Qm ) ≤ H(Q|Qm ) since we are assuming Y is determined by Q. More generally, if T is a tree based on some of the components of Q and if H(Q|T) ≪ H(Q), then T should contain considerable information about the shape class. Recall that in the supervised mode, the query Qm chosen at node t minimizes H(Y|Bt , Qm ) (among a random sample of admissible queries), where Bt is the event in X corresponding to the answers to the previous queries. Notice that typically this is not equivalent to simply maximizing the information content of Qm because H(Y|Bt , Qm ) = H(Y, Qm |Bt ) − H(Qm |Bt ), and both terms depend on m. However, in the light of the discussion in the preceding section about semi-invariance, the first term can be ignored, and we can focus on maximizing the second term.
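Both node-splitting criteria can be sketched in a few lines. The supervised rule estimates H(Y | Q, Bt) from the labeled data at the node over a small random sample of candidate queries (the randomization of section 5.2); the unsupervised rule reduces, for a binary query, to choosing the most even split, as the next paragraph makes precise. All helper names are ours, under assumed data layouts.

```python
# Sketch of node splitting over a random sample of candidate queries.
# Supervised: minimize the empirical H(Y | Q, B_t); unsupervised: maximize
# H(Q | B_t), i.e., minimize |p - 0.5| for the "yes" fraction p.
import numpy as np

def cond_entropy(answers, labels, K):
    """Empirical H(Y | Q) over the data points currently at the node;
    answers is a 0/1 array, labels an integer array of the same length."""
    h = 0.0
    for a in (0, 1):
        idx = answers == a
        if idx.sum() == 0:
            continue
        p = np.bincount(labels[idx], minlength=K) / idx.sum()
        p = p[p > 0]
        h -= idx.mean() * (p * np.log2(p)).sum()
    return h

def best_query(candidates, data, labels, K, rng, sample_size=20,
               supervised=True):
    """candidates: query functions x -> {0, 1}; only a random subset
    of them is scored, which also decorrelates trees grown later."""
    subset = rng.choice(len(candidates), min(sample_size, len(candidates)),
                        replace=False)
    def score(m):
        ans = np.array([candidates[m](x) for x in data])
        if supervised:
            return cond_entropy(ans, labels, K)
        return abs(ans.mean() - 0.5)   # balanced split: no labels needed
    return min(subset, key=score)
```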
Another way to motivate this criterion is to replace Y by Q, in which case H(Q|Bt , Qm ) = H(Q, Qm |Bt ) − H(Qm |Bt ) = H(Q|Bt ) − H(Qm |Bt ). Since the first term is independent of m, the query of choice will again be the one maximizing H(Qm |Bt ). Recall that the entropy values are estimated from training data and that Qm is binary. It follows that growing a tree aimed at reducing uncertainty about Q is equivalent to finding at each node that query which best splits the data at the node into two equal parts. This results from the fact that maximizing H(p) = −p log2(p) − (1 − p) log2(1 − p) reduces to minimizing |p − .5|. In this way we generate shape quantiles or clusters ignoring the class labels. Still, the tree variable T is highly correlated with the class variable Y. This would be the case even if the tree were grown from samples representing only some of the shape classes. In other words, these clustering trees produce a generic quantization of shape space. In fact, the same trees can be used to classify new shapes (see section 9). We have experimented with such trees, using the splitting criterion described above as well as another unsupervised one based on the “question metric,”

d_Q(x, x') = \frac{1}{M} \sum_{m=1}^{M} \delta\bigl(Q_m(x) \neq Q_m(x')\bigr), \quad x, x' \in X,
where δ(. . .) = 1 if the statement is true and δ(. . .) = 0 otherwise. Since Q leads to Y, it makes sense to divide the data so that each child is as homogeneous as possible with respect to dQ ; we omit the details. Both clustering methods lead to classification rates that are inferior to those obtained with splits determined by separating classes but still surprisingly high; one such experiment is reported in section 6.1. 6 Multiple Trees We have seen that small, random subsets of the admissible queries at any node invariably contain at least one query that is informative about the shape class. What happens if many such trees are constructed using the same training set L? Because the family Q of queries is so large and because different queries—tag arrangements—address different aspects of shape, separate trees should provide separate structural descriptions, characterizing the shapes from different “points of view.” This is illustrated in Figure 9, where the same image is shown with an instance of the pending graph at the terminal node in five different trees. Hence, aggregating the information provided by a family of trees (see section 6.1) should yield more accurate and more robust classification. This will be demonstrated in experiments throughout the remainder of the article.
Figure 9: Graphs found in an image at terminal nodes of five different trees.
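A compact sketch of the whole training loop may help fix ideas. Here grow_tree stands in for the randomized recursive partitioning of section 5 and is assumed to return an object that routes an image to one of route.n_leaves terminal nodes; it is a placeholder of ours, not the authors' code. The terminal distributions are plain relative frequencies, so they can be refined later simply by further counting.

```python
# Train N weakly dependent trees on the same training set; the dependence
# is weak because candidate queries are sampled at random at every node.
import numpy as np

def train_forest(grow_tree, data, labels, K, n_trees=100, seed=0):
    rng = np.random.default_rng(seed)
    forest = []
    for _ in range(n_trees):
        route = grow_tree(data, labels, rng)   # randomized splits (sec. 5.2)
        counts = np.zeros((route.n_leaves, K))
        for x, y in zip(data, labels):         # estimation is just counting
            counts[route(x), y] += 1
        dist = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
        forest.append((route, dist))
    return forest
```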
Generating multiple trees by randomization was proposed in Geman, Amit, & Wilder (1996). Previously, other authors had advanced other methods for generating multiple trees. One of the earliest was weighted voting trees (Casey & Jih, 1983); Shlien (1990) uses different splitting criteria; Breiman (1994) uses bootstrap replicates of L; and Dietterich and Bakiri (1995) introduce the novel idea of replacing the multiclass learning problem by a family of two-class problems, dedicating a tree to each of these. Most of these articles deal with fixed-size feature vectors and coordinate-based questions. All authors report gains in accuracy and stability. 6.1 Aggregation. Suppose we are given a family of trees T1 , . . . , TN . The best classifier based on these is

\hat{Y}_A = \arg\max_c P(Y = c \mid T_1, \ldots, T_N),
but this is not feasible (see section 6.4). Another option would be to regard the trees as high-dimensional inputs to standard classifiers. We tried that with classification trees, linear and nonlinear discriminant analysis, K-means clustering, and nearest neighbors, all without improvement over simple averaging for the amount of training data we used. By averaging, we mean the following. Let µn,τ (c) denote the posterior distribution P(Y = c|Tn = τ ), n = 1, . . . , N, c = 1, . . . , K, where τ denotes a terminal node. We write µTn for the random variable µn,Tn . These probabilities are the parameters of the system, and the problem of estimating them will be discussed in section 6.4. Define

\bar{\mu}(x) = \frac{1}{N} \sum_{n=1}^{N} \mu_{T_n}(x),

the arithmetic average of the distributions at the leaves reached by x. The mode of µ̄(x) is the class assigned to the data point x, that is,

\hat{Y}_S = \arg\max_c \bar{\mu}(c).
Using a training database of thirty-two samples per symbol from the distribution described in section 2, we grew N = 100 trees of average depth d = 10 and tested the performance on a test set of five samples per symbol. The classification rate was 96 percent. This experiment was repeated several times with very similar results. On the other hand, growing one hundred unsupervised trees of average depth 11 and using the labeled data only to estimate the terminal distributions, we achieved a classification rate of 94.5 percent. 6.2 Dependence on the Training Set. The performance of classifiers constructed from training samples can be adversely affected by overdependence on the particular sample. One way to measure this is to consider the population of all training sets L of a particular size and to compute, for each data point x, the average EL eL (x), where eL denotes the error at x for the classifier made with L. (These averages may then be further averaged over X.) The average error decomposes into two terms, one corresponding to bias and the other to variance (Geman, Bienenstock, & Doursat, 1992). Roughly speaking, the bias term captures the systematic errors of the classifier design, and the variance term measures the error component due to random fluctuations from L to L. Generally parsimonious designs (e.g., those based on relatively few unknown parameters) yield low variance but highly biased decision boundaries, whereas complex nonparametric classifiers (e.g., neural networks with many parameters) suffer from high variance, at least without enormous training sets. Good generalization requires striking a balance. (See Geman et al., 1992, for a comprehensive treatment of the bias/variance dilemma; see also the discussions in Breiman, 1994; Kong & Dietterich, 1995; and Raudys & Jain, 1991.) One simple experiment was carried out to measure the dependence of our classifier ŶS on the training sample; we did not systematically explore the decomposition mentioned above. We made ten sets of twenty trees from ten different training sets, each consisting of thirty-two samples per symbol. The average classification rate was 85.3 percent; the standard deviation was 0.8 percent. Table 2 shows the number of images in the test set correctly labeled by j of the classifiers, j = 0, 1, . . . , 10. For example, we see that 88 percent of the test points are correctly labeled at least six out of ten times. Taking the plurality of the ten classifiers improves the classification rate to 95.5 percent, so there is some pointwise variability among the classifiers. However, the decision boundaries and overall performance are fairly stable with respect to L. We attribute the relatively small variance component to the aggregation of many weakly dependent trees, which in turn results from randomization. The bias issue is more complex, and we have definitely noticed certain types of structural errors in our experiments with handwritten digits from the NIST database; for example, certain styles of writing are systematically misclassified despite the randomization effects.
Table 2: Number of Points as a Function of the Number of Correct Classifiers.

Number of correct classifiers   0    1    2    3    4    5    6    7    8     9     10
Number of points                9    11   20   29   58   42   59   88   149   237   763
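As a quick check, the "88 percent" figure quoted above can be recomputed directly from the Table 2 counts:

```python
# Fraction of test points correctly labeled by at least six of the ten
# classifiers, from the counts in Table 2.
counts = [9, 11, 20, 29, 58, 42, 59, 88, 149, 237, 763]   # j = 0, ..., 10
total = sum(counts)                       # 1465 test points
at_least_six = sum(counts[6:]) / total    # (59+88+149+237+763) / 1465
print(round(100 * at_least_six))          # -> 88
```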
6.3 Relative Error Rates. Due to estimation error, we favor many trees of modest depth over a few deep ones, even at the expense of theoretically higher error rates where perfect estimation is possible. In this section, we analyze those error rates for some of the alternative classifiers discussed above in the asymptotic case of infinite data and assuming the total number of features examined is held fixed, presumably large enough to guarantee low approximation error. The implications for finite data are outlined in section 6.4. Instead of making N trees T1 , . . . , TN of depth D, suppose we made just one tree T* of depth ND; in both cases we are asking ND questions. Of course this is not practical for the values of D and N mentioned above (e.g., D = 10, N = 20), but it is still illuminating to compare the hypothetical performance of the two methods. Suppose further that the criterion for selecting T* is to minimize the error rate over all trees of depth ND:

T^{*} = \arg\max_{T} E\bigl[\max_c P(Y = c \mid T)\bigr],

where the maximum is over all trees of depth ND. The error rate of the corresponding classifier Ŷ* = arg maxc P(Y = c|T*) is then e(Ŷ*) = 1 − E[maxc P(Y = c|T*)]. Notice that finding T* would require the solution of a global optimization problem that is generally intractable, accounting for the nearly universal adoption of greedy tree-growing algorithms based on entropy reduction, such as the one we are using. Notice also that minimizing the entropy H(Y|T) or the error rate P(Y ≠ Ŷ(T)) amounts to basically the same thing. Let e(ŶA) and e(ŶS) be the error rates of ŶA and ŶS (defined in section 6.1), respectively. Then it is easy to show that e(Ŷ*) ≤ e(ŶA) ≤ e(ŶS). The first inequality results from the observation that the N trees of depth D could be combined into one tree of depth ND simply by grafting T2 onto each terminal node of T1 , then grafting T3 onto each terminal node of the new tree, and so forth. The error rate of the tree so constructed is just e(ŶA). However, the error rate of T* is minimal among all trees of depth ND, and hence is lower than e(ŶA). Since ŶS is a function of T1 , . . . , TN , the second inequality follows from a standard argument:
P(Y \neq \hat{Y}_S) = E\bigl[P(Y \neq \hat{Y}_S \mid T_1, \ldots, T_N)\bigr]
\geq E\bigl[P(Y \neq \arg\max_c P(Y = c \mid T_1, \ldots, T_N) \mid T_1, \ldots, T_N)\bigr]
= P(Y \neq \hat{Y}_A).
6.4 Parameter Estimation. In terms of tree depth, the limiting factor is parameter estimation, not computation or storage. The probabilities P(Y = c|T*), P(Y = c|T1 , . . . , TN ), and P(Y = c|Tn ) are unknown and must be estimated from training data. In each of the cases Ŷ* and ŶA, there are K × 2^{ND} parameters to estimate (recall K is the number of shape classes), whereas for ŶS there are K × N × 2^D parameters. Moreover, the number of data points in L available per parameter is |L|/(K · 2^{ND}) in the first two cases and |L|/(K · 2^D) with aggregation. For example, consider the family of N = 100 trees described in section 6.1, which were used to classify the K = 293 LATEX symbols. Since the average depth is D = 8, there are approximately 100 × 2^8 × 293 ≈ 7.5 × 10^6 parameters, although most of these are nearly zero. Indeed, in all experiments reported below, only the largest five elements of µn,τ are estimated; the rest are set to zero. It should be emphasized, however, that the parameter estimates can be refined indefinitely using additional samples from X, a form of incremental learning (see section 9). For ŶA = arg maxc P(Y = c|T1 , . . . , TN ) the estimation problem is overwhelming, at least without assuming conditional independence or some other model for dependence. This was illustrated when we tried to compare the magnitudes of e(ŶA) with e(ŶS) in a simple case. We created N = 4 trees of depth D = 5 to classify just the first K = 10 symbols, which are the ten digits. The trees were constructed using a training set L with 1000 samples per symbol. Using ŶS, the error rate on L was just under 6 percent; on a test set V of 100 samples per symbol, the error rate was 7 percent. Unfortunately L was not large enough to estimate the full posterior given the four trees. Consequently, we tried using 1000, 2000, 4000, 10,000, and 20,000 samples per symbol for estimation. With two trees, the error rate was consistent from L to V, even with 2000 samples per symbol, and it was slightly lower than e(ŶS). With three trees, there was a significant gap between the (estimated) e(ŶA) on L and V, even with 20,000 samples per symbol; the estimated value of e(ŶA) on V was 6 percent compared with 8 percent for e(ŶS). With four trees and using 20,000 samples per symbol, the estimate of e(ŶA) on V was about 6 percent, and about 1 percent on L. It was only 1 percent better than e(ŶS), which was 7 percent and required only 1000 samples per symbol. We did not go beyond 20,000 samples per symbol. Ultimately ŶA will do better, but the amount of data needed to demonstrate this is prohibitive, even for four trees. Evidently the same problems would be encountered in trying to estimate the error rate for a very deep tree.
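The parameter counts above are easy to reproduce; the following lines simply redo the arithmetic (the numbers, but not the code, come from the text):

```python
# Parameter counts for aggregation versus the full posterior.
K, N, D = 293, 100, 8
aggregated = K * N * 2**D                # K x N x 2^D
print(f"{aggregated:.2e}")               # -> 7.50e+06, as quoted above
full_posterior_params = K * 2**(N * D)   # K x 2^(ND): hopelessly many
data_per_param = lambda n_samples: n_samples / (K * 2**D)  # aggregated case
```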
7 Performance Bounds We divide this into two cases: individual trees and multiple trees. Most of the analysis for individual trees concerns a rather ideal case (twenty questions) in which the shape classes are atomic; there is then a natural metric on shape classes, and one can obtain bounds on the expected uncertainty after a given number of queries in terms of this metric and an initial distribution over classes. The key issue for multiple trees is weak dependence, and the analysis there is focused on the dependence structure among the trees. 7.1 Individual Trees: Twenty Questions. Suppose first that each shape class or hypothesis c is atomic, that is, it consists of a single atom of Q (as defined in section 4). In other words each “hypothesis” c has a unique code word, which we denote by Q(c) = (Q1 (c), . . . , QM (c)), so that Q is determined by Y. This setting corresponds exactly to a mathematical version of the twenty questions game. There is also an initial distribution ν(c) = P(Y = c). For each m = 1, . . . , M, the binary sequence (Qm (1), . . . , Qm (K)) determines a subset of hypotheses—those that answer yes to query Qm . Since the code words are distinct, asking enough questions will eventually determine Y. The mathematical problem is to find the ordering of the queries that minimizes the mean number of queries needed to determine Y or the mean uncertainty about Y after a fixed number of queries. The best-known example is when there is a query for every subset of {1, . . . , K}, so that M = 2^K. The optimal strategy is given by the Huffman code, in which case the mean number of queries required to determine Y lies in the interval [H(Y), H(Y) + 1) (see Cover & Thomas, 1991). Suppose π1 , . . . , πk represent the indices of the first k queries. The mean residual uncertainty about Y after k queries is then

H(Y \mid Q_{\pi_1}, \ldots, Q_{\pi_k}) = H(Y, Q_{\pi_1}, \ldots, Q_{\pi_k}) - H(Q_{\pi_1}, \ldots, Q_{\pi_k})
= H(Y) - H(Q_{\pi_1}, \ldots, Q_{\pi_k})
= H(Y) - \bigl( H(Q_{\pi_1}) + H(Q_{\pi_2} \mid Q_{\pi_1}) + \cdots + H(Q_{\pi_k} \mid Q_{\pi_1}, \ldots, Q_{\pi_{k-1}}) \bigr).

Consequently, if at each stage there is a query that divides the active hypotheses into two groups such that the mass of the smaller group is at least β (0 < β ≤ .5), then H(Y|Qπ1 , . . . , Qπk ) ≤ H(Y) − kH(β). The mean decision time is roughly H(Y)/H(β). In all unsupervised trees we produced, we found H(Qπk |Qπ1 , . . . , Qπk−1 ) to be greater than .99 (corresponding to β ≈ .5) at 95 percent of the nodes. If assumptions are made about the degree of separation among the code words, one can obtain bounds on mean decision times and the expected uncertainty after a fixed number of queries, in terms of the prior distribution ν. For these types of calculations, it is easier to work with the Hellinger
measure of uncertainty than with Shannon entropy. Given a probability vector p = (p1 , . . . , pJ ), define

G(p) = \sum_{j \neq i} \sqrt{p_j p_i},

and define G(Y), G(Y|Bt ), and G(Y|Bt , Qm ) the same way as with the entropy function H. (G and H have similar properties; for example, G is minimized on a point mass, maximized on the uniform distribution, and it follows from Jensen’s inequality that H(p) ≤ log2 [G(p) + 1].) The initial amount of uncertainty is

G(Y) = \sum_{c \neq c'} \nu^{1/2}(c)\, \nu^{1/2}(c').

For any subset {m1 , . . . , mk } ⊂ {1, . . . , M}, using Bayes rule and the fact that P(Q|Y) is either 0 or 1, we obtain

G(Y \mid Q_{m_1}, \ldots, Q_{m_k}) = \sum_{c \neq c'} \prod_{i=1}^{k} \delta\bigl(Q_{m_i}(c) = Q_{m_i}(c')\bigr)\, \nu^{1/2}(c)\, \nu^{1/2}(c').

Now suppose we average G(Y|Qm1 , . . . , Qmk ) over all subsets {m1 , . . . , mk } (allowing repetition). The average is

M^{-k} \sum_{(m_1, \ldots, m_k)} G(Y \mid Q_{m_1}, \ldots, Q_{m_k})
= \sum_{c \neq c'} M^{-k} \sum_{(m_1, \ldots, m_k)} \prod_{i=1}^{k} \delta\bigl(Q_{m_i}(c) = Q_{m_i}(c')\bigr)\, \nu^{1/2}(c)\, \nu^{1/2}(c')
= \sum_{c \neq c'} \bigl(1 - d_Q(c, c')\bigr)^{k}\, \nu^{1/2}(c)\, \nu^{1/2}(c').

Consequently, any better-than-average subset of queries satisfies

G(Y \mid Q_{m_1}, \ldots, Q_{m_k}) \leq \sum_{c \neq c'} \bigl(1 - d_Q(c, c')\bigr)^{k}\, \nu^{1/2}(c)\, \nu^{1/2}(c').

If γ = minc,c' dQ (c, c'), then the residual uncertainty is at most (1 − γ)^k G(Y). In order to disambiguate K hypotheses under a uniform starting distribution (in which case G(Y) = K − 1) we would need approximately

k \approx -\frac{\log K}{\log(1 - \gamma)}

queries, or k ≈ log K/γ for small γ. (This is clear without the general inequality above, since we eliminate a fraction γ of the remaining hypotheses with each new query.) This value of k is too large to be practical for realistic values of γ (due to storage, etc.) but does express the divide-and-conquer nature
of recursive partitioning in the logarithmic dependence on the number of hypotheses. Needless to say, the compound case is the only realistic one, where the number of atoms in a shape class is a measure of its complexity. (For example, we would expect many more atoms per handwritten digit class than per printed font class.) In the compound case, one can obtain results similar to those mentioned above by considering the degree of homogeneity within classes as well as the degree of separation between classes. For example, the index γ must be replaced by one based on both the maximum distance Dmax between code words of the same class and the minimum distance Dmin between code words from different classes. Again, the bounds obtained call for trees that are too deep actually to be made, and much deeper than those that are empirically demonstrated to obtain good discrimination. We achieve this in practice due to semi-invariance, guaranteeing that Dmax is small, and the extraordinary richness of the world of spatial relationships, guaranteeing that Dmin is large. 7.2 Multiple Trees: Weak Dependence. From a statistical perspective, randomization leads to weak conditional dependence among the trees. For example, given Y = c, the correlation between two trees T1 and T2 is small. In other words, given the class of an image, knowing the leaf of T1 that is reached would not aid us in predicting the leaf reached in T2 . In this section, we analyze the dependence structure among the trees and obtain a crude lower bound on the performance of the classifier ŶS for a fixed family of trees T1 , . . . , TN constructed from a fixed training set L. Thus we are not investigating the asymptotic performance of ŶS as either N → ∞ or |L| → ∞. With infinite training data, a tree could be made arbitrarily deep, leading to arbitrarily high classification rates since nonparametric classifiers are generally strongly consistent. Let Ec µ̄ = (Ec µ̄(1), . . . , Ec µ̄(K)) denote the mean of µ̄ conditioned on Y = c:

E_c \bar{\mu}(d) = \frac{1}{N} \sum_{n=1}^{N} E(\mu_{T_n}(d) \mid Y = c).

We make three assumptions about the mean vector, all of which turn out to be true in practice:

1. arg max_d Ec µ̄(d) = c.
2. Ec µ̄(c) = αc ≫ 1/K.
3. Ec µ̄(d) ∼ (1 − αc)/(K − 1) for d ≠ c.

The validity of the first two is clear from Table 3. The last assumption says that the amount of mass in the mean aggregate distribution that is off the true class tends to be uniformly distributed over the other classes. Let SK denote the K-dimensional simplex (probability vectors in R^K), and let Uc = {µ : arg max_d µ(d) = c}, an open convex subset of SK . Define φc to be the (Euclidean) distance from Ec µ̄ to ∂Uc , the boundary of Uc . Clearly ‖µ − Ec µ̄‖ < φc implies that arg max_d µ(d) = c, where ‖·‖ denotes Euclidean norm. This is used below to bound the misclassification rate. First, however,
Table 3: Estimates of αc, γc, and ec for Ten Classes.

Class   0     1     2     3     4     5     6     7     8     9
αc      0.66  0.86  0.80  0.74  0.74  0.64  0.56  0.86  0.49  0.68
γc      0.03  0.01  0.01  0.01  0.03  0.02  0.04  0.01  0.02  0.01
ec      0.14  0.04  0.03  0.04  0.11  0.13  0.32  0.02  0.23  0.05
we need to compute $\phi_c$. Clearly, $\partial U_c = \cup_{d\neq c}\{\mu \in S_K : \mu(c) = \mu(d)\}$. From symmetry arguments, a point in $\partial U_c$ that achieves the minimum distance to $E_c\bar\mu$ will lie in each of the sets in the union above. A straightforward computation involving orthogonal projections then yields $\phi_c = (\alpha_c K - 1)/\sqrt{2}(K-1)$. Using Chebyshev's inequality, a crude upper bound on the misclassification rate for class $c$ is obtained as follows:
$$P(\hat{Y}_S \neq c\,|\,Y = c) = P(x : \arg\max_d \bar\mu(x,d) \neq c\,|\,Y = c)$$
$$\le P(\|\bar\mu - E_c\bar\mu\| > \phi_c\,|\,Y = c) \le \frac{1}{\phi_c^2}\, E\|\bar\mu - E_c\bar\mu\|^2$$
$$= \frac{1}{\phi_c^2 N^2}\sum_{d=1}^{K}\left[\sum_{n=1}^{N}\mathrm{Var}(\mu_{T_n}(d)\,|\,Y = c) + \sum_{n\neq m}\mathrm{Cov}(\mu_{T_n}(d),\mu_{T_m}(d)\,|\,Y = c)\right].$$
Let $\eta_c$ denote the sum of the conditional variances, and let $\gamma_c$ denote the sum of the conditional covariances, both averaged over the trees:
$$\eta_c = \frac{1}{N}\sum_{n=1}^{N}\sum_{d=1}^{K}\mathrm{Var}(\mu_{T_n}(d)\,|\,Y = c), \qquad \gamma_c = \frac{1}{N^2}\sum_{n\neq m}\sum_{d=1}^{K}\mathrm{Cov}(\mu_{T_n}(d),\mu_{T_m}(d)\,|\,Y = c).$$
We see that
$$P(\hat{Y}_S \neq c\,|\,Y = c) \le \frac{\gamma_c + \eta_c/N}{\phi_c^2} = \frac{2(\gamma_c + \eta_c/N)(K-1)^2}{(\alpha_c K - 1)^2}.$$
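As a quick sanity check, this bound can be evaluated directly from the table entries. The following short script is a sketch (the variable names are ours, and the small $\eta_c/N$ term is dropped); it reproduces the order of magnitude of the $e_c$ column of Table 3 for K = 10:

```python
# Chebyshev bound e_c <= 2*(gamma_c + eta_c/N)*(K-1)^2 / (alpha_c*K - 1)^2,
# evaluated with the eta_c/N term dropped; alpha_c and gamma_c are from Table 3.
K = 10
alpha = [0.66, 0.86, 0.80, 0.74, 0.74, 0.64, 0.56, 0.86, 0.49, 0.68]
gamma = [0.03, 0.01, 0.01, 0.01, 0.03, 0.02, 0.04, 0.01, 0.02, 0.01]

for c, (a, g) in enumerate(zip(alpha, gamma)):
    bound = 2 * g * (K - 1) ** 2 / (a * K - 1) ** 2
    print(f"class {c}: P(error | Y = {c}) bounded by about {bound:.2f}")
# Class 6, for example, gives 2*0.04*81/4.6**2 ~ 0.31, matching e_6 ~ 0.32 in Table 3.
```

The residual differences against the printed $e_c$ values come from the neglected $\eta_c/N$ term and from rounding of the table entries.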
Since $\eta_c/N$ will be small compared with $\gamma_c$, the key parameters are $\alpha_c$ and $\gamma_c$. This inequality yields only coarse bounds. However, it is clear that under the assumptions above, high classification rates are feasible as long as $\gamma_c$ is sufficiently small and $\alpha_c$ is sufficiently large, even if the estimates $\mu_{T_n}$ are poor.

Observe that the $N$ trees form a simple random sample from some large population $\mathcal{T}$ of trees under a suitable distribution on $\mathcal{T}$. This is due to the randomization aspect of tree construction. (Recall that at each node, the splitting rule is chosen from a small random sample of queries.) Both $E_c\bar\mu$ and the sum of variances are sample means of functionals on $\mathcal{T}$. The sum of the covariances has the form of a U-statistic. Since the trees are drawn independently and the range of the corresponding variables is very small (typically less than 1), standard statistical arguments imply that these sample means are close to the corresponding population means for a moderate number $N$ of trees, say, tens or hundreds. In other words, $\alpha_c \sim E_{\mathcal{T}} E_X(\mu_T(c)\,|\,Y = c)$ and $\gamma_c \sim E_{\mathcal{T}\times\mathcal{T}}\sum_{d=1}^{K}\mathrm{Cov}_X(\mu_{T_1}(d),\mu_{T_2}(d)\,|\,Y = c)$. Thus the conditions on $\alpha_c$ and $\gamma_c$ translate into conditions on the corresponding expectations over $\mathcal{T}$, and the performance variability among the trees can be ignored.

Table 3 shows some estimates of $\alpha_c$ and $\gamma_c$ and the resulting bound $e_c$ on the misclassification rate $P(\hat{Y}_S \neq c\,|\,Y = c)$. Ten pairs of random trees were made on ten classes to estimate $\gamma_c$ and $\alpha_c$. Again, the bounds are crude; they could be refined by considering higher-order joint moments of the trees.

8 Generalization

For convenience, we will consider two types of generalization, referred to as interpolation and extrapolation. Our use of these terms may not be standard and is decidedly ad hoc. Interpolation is the easier case; both the training and testing samples are randomly drawn from $(X, P)$, and the number of training samples is sufficiently large to cover the space $X$. Consequently, for most test points, the classifier is being asked to interpolate among nearby training points. By extrapolation we mean situations in which the training samples do not represent the space from which the test samples are drawn—for example, training on a very small number of samples per symbol (e.g., one); using different perturbation models to generate the training and test sets, perhaps adding more severe scaling or skewing; or degrading the test images with correlated noise or lowering the resolution. Another example of this occurred at the first NIST competition (Wilkinson et al., 1992); the hand-printed digits in the test set were written by a different population from those in the distributed training set. (Not surprisingly, the distinguishing feature of the winning algorithm was the size and diversity of the actual samples used to train the classifier.) One way to characterize such situations is to regard $P$ as a mixture distribution $P = \sum_i \alpha_i P_i$, where the $P_i$ might correspond to writer
Table 4: Classification Rates for Various Training Sample Sizes Compared with Nearest-Neighbor Methods.

Sample Size   Trees   NN(B)   NN(raw)
1             44%     11%     5%
8             87      57      31
32            96      74      55
populations, perturbation models, or levels of degradation, for instance. In complex visual recognition problems, the number of terms might be very large, but the training samples might be drawn from relatively few of the $P_i$ and hence represent a biased sample from $P$.

In order to gauge the difficulty of the problem, we shall consider the performance of two other classifiers, based on k-nearest-neighbor classification with k = 5, which was more or less optimal in our setting. (Using nearest neighbors as a benchmark is common; see, for example, Geman et al., 1992; Khotanzad & Lu, 1991.) Let NN(raw) refer to nearest-neighbor classification based on Hamming distance in (binary) image space, that is, between bitmaps. This is clearly the wrong metric, but it helps to calibrate the difficulty of the problem. Of course, this metric is entirely blind to invariance but is not entirely unreasonable when the symbols nearly fill the bounding box and the degree of perturbation is limited. Let NN(B) refer to nearest-neighbor classification based on the binary tag arrangements. Thus, two images x and x′ are compared by evaluating Q(x) and Q(x′) for all Q ∈ B₀ ⊂ B and computing the Hamming distance between the corresponding binary sequences. B₀ was chosen as the subset of binary tag arrangements that split X to within 5 percent of fifty-fifty. There were 1510 such queries out of the 15,376 binary tag arrangements. Due to invariance and other properties, we would expect this metric to work better than Hamming distance in image space, and of course it does (see below).

8.1 Interpolation. One hundred (randomized) trees were constructed from a training data set with thirty-two samples for each of the K = 293 symbols. The average classification rate per tree on a test set V consisting of 100 samples per symbol is 27 percent. However, the performance of the classifier Ŷ_S based on 100 trees is 96 percent. This clearly demonstrates the weak dependence among randomized trees (as well as the discriminating power of the queries). With the NN(B)-classifier, the classification rate was 74 percent; with NN(raw), the rate is 55 percent (see Table 4). All of these rates are on the test set.

When the only random perturbations are nonlinear (i.e., no scaling, rotation, or skew), there is not much standardization that can be done to the
Figure 10: LATEX symbols perturbed with only nonlinear deformations.
raw image (see Figure 10). With thirty-two samples per symbol, NN(raw) climbs to 76 percent, whereas the trees reach 98.5 percent.

8.2 Extrapolation. We also grew trees using only the original prototypes $x^*_c$, c = 1, ..., 293, recursively dividing this group until pure leaves were obtained. Of course, the trees are relatively shallow. In this case, only about half the symbols in X could then be recognized (see Table 4).

The 100 trees grown with thirty-two samples per symbol were tested on samples that exhibit a greater level of distortion or variability than described up to this point. The results appear in Table 5. "Upscaling" (resp. "downscaling") refers to uniform sampling between the original scale and twice (resp. half) the original scale, as in the top (resp. middle) panel of Figure 11; "spot noise" refers to adding correlated noise (see the top panel of Figure 8). Clutter (see the bottom panel of Figure 11) refers to the addition of pieces of other symbols in the image. All of these distortions came in addition to the random nonlinear deformations, skew, and rotations. Downscaling creates more confusions due to extreme thinning of the stroke. Notice that the NN(B) classifier falls apart with spot noise. The reason is the number of false positives: tags due to the noise induce random occurrences of simple arrangements. In contrast, complex arrangements A are far less likely to be found in the image by pure chance; therefore, chance occurrences are weeded out deeper in the tree.

8.3 Note. The purpose of all the experiments in this article is to illustrate various attributes of the recognition strategy. No effort was made to optimize the classification rates. In particular, the same tags and tree-making
Table 5: Classification Rates for Various Perturbations.

Type of Perturbation   Trees   NN(B)   NN(raw)
Original               96%     74%     55%
Upscaling              88      57      0
Downscaling            80      52      0
Spot noise             71      28      57
Clutter                74      27      59
Figure 11: (Top) Upscaling. (Middle) Downscaling. (Bottom) Clutter.
protocol were used in every experiment. Experiments were repeated several times; the variability was negligible. One direction that appears promising is explicitly introducing different protocols from tree to tree in order to decrease the dependence. One small experiment was carried out in this direction. All the images were subsampled to half the resolution; for example, 32 × 32 images become 16 × 16. A tag tree was made with 4 × 4 subimages from the subsampled data set, and one hundred trees were grown using the subsampled training set. The
output of these trees was combined with the output of the original trees on the test data. No change in the classification rate was observed for the original test set. For the test set with spot noise, the two sets of trees each had a classification rate of about 72 percent. Combined, however, they yield a rate of 86 percent. Clearly there is a significant potential for improvement in this direction.
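A sketch of this combination step follows, under the assumption (not spelled out above) that the two tree families are merged by averaging their aggregate distributions before taking the mode; all names are ours, not the authors' implementation:

```python
import numpy as np

def aggregate(tree_outputs):
    """Average the terminal distributions mu_Tn over one family of trees;
    tree_outputs has shape (number of trees, K)."""
    return np.mean(tree_outputs, axis=0)

def classify_combined(full_res_outputs, half_res_outputs):
    """Merge the original trees with the trees grown on subsampled images by
    averaging the two aggregate distributions, then take the mode."""
    mu_bar = 0.5 * (aggregate(full_res_outputs) + aggregate(half_res_outputs))
    return int(np.argmax(mu_bar))
```

Averaging is the simplest choice consistent with the aggregation already used for a single family; other pooling rules (e.g., weighting by per-family confidence) would fit the same interface.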
9 Incremental Learning and Universal Trees

The parameters $\mu_{n,\tau}(c) = P(Y = c\,|\,T_n = \tau)$ can be incrementally updated with new training samples. Given a set of trees, the actual counts from the training set (instead of the normalized distributions) are kept in the terminal nodes $\tau$. When a new labeled sample is obtained, it can be dropped down each of the trees and the corresponding counters incremented, as sketched in the code at the end of this section. There is no need to keep the image itself.

This separation between tree construction and parameter estimation is crucial. It provides a mechanism for gradually learning to recognize an increasing number of shapes. Trees originally constructed with training samples from a small number of classes can eventually be updated to accommodate new classes; the parameters can be reestimated. In addition, as more data points are observed, the estimates of the terminal distributions can be perpetually refined. Finally, the trees can be further deepened as more data become available. Each terminal node is assigned a randomly chosen list of minimal extensions of the pending arrangement. The answers to these queries are then calculated and stored for each new labeled sample that reaches that node; again there is no need to keep the sample itself. When sufficiently many samples are accumulated, the best query on the list is determined by a simple calculation based on the stored information, and the node can then be split.

The adaptivity to additional classes is illustrated in the following experiment. A set of one hundred trees was grown with training samples from 50 classes randomly chosen from the full set of 293 classes. The trees were grown to depth 10 just as before (see section 8). Using the original training set of thirty-two samples per class for all 293 classes, the terminal distributions were estimated and recorded for each tree. The aggregate classification rate on all 293 classes was about 90 percent, as compared with about 96 percent when the full training set is used for both quantization and parameter estimation. Clearly fifty shapes are sufficient to produce a reasonably sharp quantization of the entire shape space. As for improving the parameter estimates, recall that the one hundred trees grown with the pure symbols reached 44 percent on the test set. The terminal distributions of these trees were then updated using the original training set of thirty-two samples per symbol. The classification rate on the same test set climbed from 44 percent to 90 percent.
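The bookkeeping this requires is minimal, as the following sketch illustrates. The tree interface leaf_of (dropping an image down a tree and returning the terminal node reached) is a hypothetical name, as is the rest of the scaffolding:

```python
from collections import defaultdict

class IncrementalForest:
    """Keep raw class counts at the terminal nodes of fixed trees; the counts,
    not the training images, are the only state that needs to be stored."""
    def __init__(self, trees, num_classes):
        self.trees = trees
        # counts[n][tau][c] = number of training samples of class c
        # that reached terminal node tau of tree n
        self.counts = [defaultdict(lambda: [0] * num_classes) for _ in trees]

    def update(self, image, label):
        # Drop the new labeled sample down each tree and increment the counter.
        for n, tree in enumerate(self.trees):
            tau = tree.leaf_of(image)      # hypothetical tree interface
            self.counts[n][tau][label] += 1

    def posterior(self, n, tau):
        # Normalized terminal distribution, estimating P(Y = c | T_n = tau).
        cnt = self.counts[n][tau]
        total = sum(cnt) or 1
        return [x / total for x in cnt]
```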
10 Fast Indexing

One problem with recognition paradigms such as "hypothesize and test" is determining which particular hypothesis to test. Indexing into the shape library is therefore a central issue, especially with methods based on matching image data to model data and involving large numbers of shape classes. The standard approach in model-based vision is to flag plausible interpretations by searching for key features or discriminating parts in hierarchical representations.

Indexing efficiency seems to be inversely related to stability with respect to image degradation. Deformable templates are highly robust because they provide a global interpretation for much of the image data. However, a good deal of searching may be necessary to find the right template. The method of invariant features lies at the other extreme of this axis: the indexing is one shot, but there is not much tolerance to distortions of the data. We have not attempted to formulate this trade-off in a manner susceptible to experimentation. We have noticed, however, that multiple trees appear to offer a reliable mechanism for fast indexing, at least within the framework of this article and in terms of narrowing down the number of possible classes. For example, in the original experiment with 96 percent classification rate, the five highest-ranking classes in the aggregate distribution µ̄ contained the true class in all but four images in a test set of size 1465 (five samples per class). Even with upscaling, for example, the true label was among the top five in 98 percent of the cases. These experiments suggest that very high recognition rates could be obtained with final tests dedicated to ambiguous cases, as determined, for example, by the mode of µ̄.

11 Handwritten Digit Recognition

The optical character recognition (OCR) problem has many variations, and the literature is immense; one survey is Mori, Suen, and Yamamoto (1992). In the area of handwritten character recognition, perhaps the most difficult problem is the recognition of unconstrained script; zip codes and hand-drawn checks also present a formidable challenge. The problem we consider is off-line recognition of isolated binary digits. Even this special case has attracted enormous attention, including a competition sponsored by the NIST (Wilkinson et al., 1992), and there is still no solution that matches human performance, or even one that is commercially viable except in restricted situations. (For comparisons among methods, see Bottou et al., 1994, and the lucid discussion in Brown et al., 1993.) The best reported rates seem to be those obtained by the AT&T Bell Laboratories: up to 99.3 percent by training and testing on composites of the NIST training and test sets (Bottou et al., 1994).

We present a brief summary of the results of experiments applying the tree-based shape quantization method to the NIST database. (For a more detailed
Figure 12: Random sample of test images before (top) and after (bottom) preprocessing.
description, see Geman et al., 1996.) Our experiments were based on portions of the NIST database, which consists of approximately 223,000 binary images of isolated digits written by more than 2000 writers. The images vary widely in dimensions, ranging from about twenty to one hundred rows, and they also vary in stroke thickness and other attributes. We used 100,000 for training and 50,000 for testing. A random sample from the test set is shown in Figure 12. All results reported in the literature utilize rather sophisticated methods of preprocessing, such as thinning, slant correction, and size normalization. For the sake of comparison, we did several experiments using a crude form of slant correction and scaling, and no thinning. Twenty-five trees were made. We stopped splitting when the number of data points in the second largest class fell below ten. The depth of the terminal nodes (i.e., number of questions asked per tree) varied widely, the average over trees being 8.8. The average number of terminal nodes was about 600, and the average classification rate (determined by taking the mode of the terminal distribution) was about 91 percent. The best error rate we achieved with a single tree was about 7 percent. The classifier was tested in two ways. First, we preprocessed (scaled and
Figure 13: Classification rate versus number of trees.
slant corrected) the test set in the same manner as the training set. The resulting classification rate is 99.2 percent (with no rejection). Figure 13 shows how the classification rate grows with the number of trees. Recall from section 6.1 that the estimated class of an image x is the mode of the aggregate distribution µ̄(x). A good measure of the confidence in this estimate is the value of µ̄(x) at the mode; call it M(x). It provides a natural mechanism for rejection by classifying only those images x for which M(x) > m; no rejection corresponds to m = 0. For example, the classification rate is 99.5 percent with 1 percent rejection and 99.8 percent with 3 percent rejection. Finally, doubling the number of trees makes the classification rates 99.3 percent, 99.6 percent, and 99.8 percent at 0, 1, and 2 percent rejection, respectively.

We performed a second experiment in which the test data were not preprocessed in the manner of the training data; in fact, the test images were classified without utilizing the size of the bounding box. This is especially important in the presence of noise and clutter when it is essentially impossi-
ble to determine the size of the bounding box. Instead, each test image was classified with the same set of trees at two resolutions (original and halved) and three (fixed) slants. The highest of the resulting six modes determines the classification. The classification rate was 98.9 percent.

We classify approximately fifteen digits per second on a single-processor SUN Sparcstation 20 (without special efforts to optimize the code); the time is approximately equally divided between transforming to tags and answering questions. Test data can be dropped down the trees in parallel, in which case classification would become approximately twenty-five times faster.

12 Comparison with ANNs

The comparison with ANNs is natural in view of their widespread use in pattern recognition (Werbos, 1991) and several common attributes. In particular, neither approach is model based in the sense of utilizing explicit shape models. In addition, both are computationally very efficient in comparison with model-based methods as well as with memory-based methods (e.g., nearest neighbors). Finally, in both cases performance is diminished by overdedication to the training data (overfitting), and problems result from deficient approximation capacity or parameter estimation, or both. Two key differences are described in the following two subsections.

12.1 The Posterior Distribution and Generalization Error. It is not clear how a feature vector such as Q could be accommodated in the ANN framework. Direct use is not likely to be successful since ANNs based on very high-dimensional input suffer from poor generalization (Baum & Haussler, 1989), and very large training sets are then necessary in order to approximate complex decision boundaries (Raudys & Jain, 1991).

The role of the posterior distribution P(Y = c|Q) is more explicit in our approach, leading to a somewhat different explanation of the sources of error. In our case, the approximation error results from replacing the entire feature set by an adaptive subset (or really many subsets, one per tree). The difference between the full posterior and the tree-based approximations can be thought of as the analog of approximation error in the analysis of the learning capacity of ANNs (see, e.g., Niyogi & Girosi, 1996). In that case one is interested in the set of functions that can be generated by the family of networks of a fixed architecture. Some work has centered on approximation of the particular function Q → P(Y = c|Q). For example, Lee et al. (1991) consider least-squares approximation to the Bayes rule under conditional independence assumptions on the features; the error vanishes as the number of hidden units goes to infinity.

Estimation error in our case results from an inadequate amount of data to estimate the conditional distributions at the leaves of the trees. The analogous situation for ANNs is well known: even for a network of unlimited
capacity, there is still the problem of estimating the weights from limited data (Niyogi & Girosi, 1996).

Finally, classifiers based on ANNs address classification in a less nonparametric fashion. In contrast to our approach, the classifier has an explicit functional (input-output) representation from the feature vector to Ŷ. Of course, a great many parameters may need to be estimated in order to achieve low approximation error, at least for very complex classification problems such as shape recognition. This leads to a relatively large variance component in the bias/variance decomposition. In view of these difficulties, the small error rates achieved by ANNs on handwritten digits and other shape recognition problems are noteworthy.

12.2 Invariance and the Visual System. The structure of ANNs for shape classification is partially motivated by observations about processing in the visual cortex. Thus, the emphasis has been on parallel processing and the local integration ("funneling") of information from one layer to the next. Since this local integration is done in a spatially stationary manner, translation invariance is achieved. The most successful example in the context of handwritten digits is the convolutional neural net LeNet (LeCun et al., 1990). Scale, deformation, and other forms of invariance are achieved to a lesser degree, partially from using gray-level data and soft thresholds, but mainly by utilizing very large training sets and sophisticated image normalization procedures.

The neocognitron (Fukushima & Miyake, 1982; Fukushima & Wake, 1991) is an interesting mechanism to achieve a degree of robustness to shape deformations and scale changes in addition to translation invariance. This is done using hidden layers, which carry out a local ORing operation mimicking the operation of the complex neurons in V1. These layers are referred to as C-type layers in Fukushima and Miyake (1982). A dual-resolution version (Gochin, 1994) is aimed at achieving scale invariance over a larger range. Spatial stationarity of the connection weights, the C-layers, and the use of multiple resolutions are all forms of ORing aimed at achieving invariance.

The complex cell in V1 is indeed the most basic evidence of such an operation in the visual system. There is additional evidence for disjunction at a very global scale in the cells of the inferotemporal cortex with very large receptive fields. These respond to certain shapes at all locations in the receptive field, at a variety of scales but also for a variety of nonlinear deformations such as changes in proportions of certain parts (see Ito, Fujita, Tamura, & Tanaka, 1994; Ito, Tamura, Fujita, & Tanaka, 1995). It would be extremely inefficient to obtain such invariance through ORing of responses to each of the variations of these shapes. It would be preferable to carry out the global ORing at the level of primitive generic features, which can serve to identify and discriminate between a large class of shapes. We have demonstrated that global features consisting of geometric arrange-
ments of local responses (tags) can do just that. The detection of any of the tag arrangements described in the preceding sections can be viewed as an extensive and global ORing operation. The ideal arrangement, such as a vertical edge northwest of a horizontal edge, is tested at all locations, all scales, and a large range of angles around the northwest vector. A positive response to any of these produces a positive answer to the associated query. This leads to a property we have called semi-invariance and allows our method to perform well without preprocessing and without very large training sets. Good performance extends to transformations or perturbations not encountered during training (e.g., scale changes, spot noise, and clutter).

The local responses we employ are very similar to ones employed at the first level of the convolution neural nets, for example, in Fukushima and Wake (1991). However, at the next level, our geometric arrangements are defined explicitly and designed to accommodate a priori assumptions about shape regularity and variability. In contrast, the features in convolution neural nets are all implicitly derived from local inputs to each layer and from the slack obtained by local ORing, or soft thresholding, carried out layer by layer. It is not clear that this is sufficient to obtain the required level of invariance, nor is it clear how portable they are from one problem to another.

Evidence for correlated activities of neurons with distant receptive fields is accumulating, not only in V1 but in the lateral geniculate nucleus and in the retina (Neuenschwander and Singer, 1996). Gilbert, Das, Ito, Kapadia, and Westheimer (1996) report increasing evidence for more extensive horizontal connections. Large optical point spreads are observed, where subthreshold neural activation appears in an area covering the size of several receptive fields. When an orientation-sensitive cell fires in response to a stimulus in its receptive field, subthreshold activation is observed in cells with the same orientation preference in a wide area outside the receptive field. It appears therefore that the integration mechanisms in V1 are more global and definitely more complex than previously thought. Perhaps the features described in this article can contribute to bridging the gap between existing computational models and new experimental findings regarding the visual system.

13 Conclusion

We have studied a new approach to shape classification and illustrated its performance in high dimensions, with respect to both the number of shape classes and the degree of variability within classes. The basic premise is that shapes can be distinguished from one another by a sufficiently robust and invariant form of recursive partitioning. This quantization of shape space is based on growing binary classification trees using geometric arrangements among local topographic codes as splitting rules. The arrangements are semi-invariant to linear and nonlinear image transformations. As a result,
the method generalizes well to samples not encountered during training. In addition, due to the separation between quantization and estimation, the framework accommodates unsupervised and incremental learning. The codes are primitive and redundant, and the arrangements involve only simple spatial relationships based on relative angles and distances. It is not necessary to isolate distinguished points along shape boundaries or any other special differential or topological structures, or to perform any form of grouping, matching, or functional optimization. Consequently, a virtually infinite collection of discriminating shape features is generated with elementary computations at the pixel level. Since no classifier based on the full feature set is realizable and since it is impossible to know a priori which features are informative, we have selected features and built trees at the same time by inductive learning.

Another data-driven nonparametric method for shape classification is based on ANNs, and a comparison was drawn in terms of invariance and generalization error, leading to the conclusion that prior information plays a relatively greater role in our approach. We have experimented with the NIST database of handwritten digits and with a synthetic database constructed from linear and nonlinear deformations of about three hundred LATEX symbols. Despite a large degree of within-class variability, the setting is evidently simplified since the images are binary and the shapes are isolated.

Looking ahead, the central question is whether our recognition strategy can be extended to more unconstrained scenarios, involving, for example, multiple or solid objects, general poses, and a variety of image formation models. We are aware that our approach differs from the standard one in computer vision, which emphasizes viewing and object models, 3D matching, and advanced geometry. Nonetheless, we are currently modifying the strategy in this article in order to recognize 3D objects against structured backgrounds (e.g., cluttered desktops) based on gray-level images acquired from ordinary video cameras (see Jedynak & Fleuret, 1996). We are also looking at applications to face detection and recognition.

Acknowledgments

We thank Ken Wilder for many valuable suggestions and for carrying out the experiments on handwritten digit recognition. Yali Amit has been supported in part by the ARO under grant DAAL-03-92-0322. Donald Geman has been supported in part by the NSF under grant DMS-9217655, ONR under contract N00014-91-J-1021, and ARPA under contract MDA972-93-10012.

References

Baum, E. B., & Haussler, D. (1989). What size net gives valid generalization? Neural Comp., 1, 151–160.
Binford, T. O., & Levitt, T. S. (1993). Quasi-invariants: Theory and exploitation. In Proceedings of the Image Understanding Workshop (pp. 819–828). Washington, D.C.
Bottou, L., Cortes, C., Denker, J. S., Drucker, H., Guyon, I., Jackel, L. D., LeCun, Y., Muller, U. A., Sackinger, E., Simard, P., & Vapnik, V. (1994). Comparison of classifier methods: A case study in handwritten digit recognition. In Proc. 12th Inter. Conf. on Pattern Recognition (Vol. 2, pp. 77–82). Los Alamitos, CA: IEEE Computer Society Press.
Breiman, L. (1994). Bagging predictors (Tech. Rep. No. 451). Berkeley: Department of Statistics, University of California, Berkeley.
Breiman, L., Friedman, J., Olshen, R., & Stone, C. (1984). Classification and regression trees. Belmont, CA: Wadsworth.
Brown, D., Corruble, V., & Pittard, C. L. (1993). A comparison of decision tree classifiers with backpropagation neural networks for multimodal classification problems. Pattern Recognition, 26, 953–961.
Burns, J. B., Weiss, R. S., & Riseman, E. M. (1993). View variation of point set and line segment features. IEEE Trans. PAMI, 15, 51–68.
Casey, R. G., & Jih, C. R. (1983). A processor-based OCR system. IBM Journal of Research and Development, 27, 386–399.
Casey, R. G., & Nagy, G. (1984). Decision tree design using a probabilistic model. IEEE Trans. Information Theory, 30, 93–99.
Cover, T. M., & Thomas, J. A. (1991). Elements of information theory. New York: Wiley.
Dietterich, T. G., & Bakiri, G. (1995). Solving multiclass learning problems via error-correcting output codes. J. Artificial Intell. Res., 2, 263–286.
Forsyth, D., Mundy, J. L., Zisserman, A., Coelho, C., Heller, A., & Rothwell, C. (1991). Invariant descriptors for 3-D object recognition and pose. IEEE Trans. PAMI, 13, 971–991.
Friedman, J. H. (1973). A recursive partitioning decision rule for nonparametric classification. IEEE Trans. Comput., 26, 404–408.
Fukushima, K., & Miyake, S. (1982). Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position. Pattern Recognition, 15, 455–469.
Fukushima, K., & Wake, N. (1991). Handwritten alphanumeric character recognition by the neocognitron. IEEE Trans. Neural Networks, 2, 355–365.
Gelfand, S. B., & Delp, E. J. (1991). On tree structured classifiers. In I. K. Sethi & A. K. Jain (Eds.), Artificial neural networks and statistical pattern recognition (pp. 51–70). Amsterdam: North-Holland.
Geman, D., Amit, Y., & Wilder, K. (1996). Joint induction of shape features and tree classifiers (Tech. Rep.). Amherst: University of Massachusetts.
Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4, 1–58.
Gilbert, C. D., Das, A., Ito, M., Kapadia, M., & Westheimer, G. (1996). Spatial integration and cortical dynamics. Proc. Natl. Acad. Sci., 93, 615–622.
Gochin, P. M. (1994). Properties of simulated neurons from a model of primate inferior temporal cortex. Cerebral Cortex, 5, 532–543.
Guo, H., & Gelfand, S. B. (1992). Classification trees with neural network feature
extraction. IEEE Trans. Neural Networks, 3, 923–933.
Hastie, T., Buja, A., & Tibshirani, R. (1995). Penalized discriminant analysis. Annals of Statistics, 23, 73–103.
Hussain, B., & Kabuka, M. R. (1994). A novel feature recognition neural network and its application to character recognition. IEEE Trans. PAMI, 16, 99–106.
Ito, M., Fujita, I., Tamura, H., & Tanaka, K. (1994). Processing of contrast polarity of visual images of inferotemporal cortex of Macaque monkey. Cerebral Cortex, 5, 499–508.
Ito, M., Tamura, H., Fujita, I., & Tanaka, K. (1995). Size and position invariance of neuronal response in monkey inferotemporal cortex. J. Neuroscience, 73(1), 218–226.
Jedynak, B., & Fleuret, F. (1996). Reconnaissance d'objets 3D à l'aide d'arbres de classification. In Proc. Image Com 96. Bordeaux, France.
Jung, D.-M., & Nagy, G. (1995). Joint feature and classifier design for OCR. In Proc. 3rd Inter. Conf. Document Analysis and Processing (Vol. 2, pp. 1115–1118). Montreal. Los Alamitos, CA: IEEE Computer Society Press.
Khotanzad, A., & Lu, J.-H. (1991). Shape and texture recognition by a neural network. In I. K. Sethi & A. K. Jain (Eds.), Artificial Neural Networks and Statistical Pattern Recognition. Amsterdam: North-Holland.
Knerr, S., Personnaz, L., & Dreyfus, G. (1992). Handwritten digit recognition by neural networks with single-layer training. IEEE Trans. Neural Networks, 3, 962–968.
Kong, E. B., & Dietterich, T. G. (1995). Error-correcting output coding corrects bias and variance. In Proc. of the 12th International Conference on Machine Learning (pp. 313–321). San Mateo, CA: Morgan Kaufmann.
Lamdan, Y., Schwartz, J. T., & Wolfson, H. J. (1988). Object recognition by affine invariant matching. In IEEE Int. Conf. Computer Vision and Pattern Rec. (pp. 335–344). Los Alamitos, CA: IEEE Computer Society Press.
LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1990). Handwritten digit recognition with a back-propagation network. In D. S. Touretzky (Ed.), Advances in Neural Information (Vol. 2). San Mateo, CA: Morgan Kaufmann.
Lee, D.-S., Srihari, S. N., & Gaborski, R. (1991). Bayesian and neural network pattern recognition: A theoretical connection and empirical results with handwritten characters. In I. K. Sethi & A. K. Jain (Eds.), Artificial Neural Networks and Statistical Pattern Recognition. Amsterdam: North-Holland.
Martin, G. L., & Pitman, J. A. (1991). Recognizing hand-printed letters and digits using backpropagation learning. Neural Computation, 3, 258–267.
Mori, S., Suen, C. Y., & Yamamoto, K. (1992). Historical review of OCR research and development. In Proc. IEEE (Vol. 80, pp. 1029–1057). New York: IEEE.
Mundy, J. L., & Zisserman, A. (1992). Geometric invariance in computer vision. Cambridge, MA: MIT Press.
Neuenschwander, S., & Singer, W. (1996). Long-range synchronization of oscillatory light responses in cat retina and lateral geniculate nucleus. Nature, 379(22), 728–733.
Niyogi, P., & Girosi, F. (1996). On the relationship between generalization error, hypothesis complexity, and sample complexity for radial basis functions. Neural Comp., 8, 819–842.
Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81–106.
Raudys, S., & Jain, A. K. (1991). Small sample size problems in designing artificial neural networks. In I. K. Sethi & A. K. Jain (Eds.), Artificial Neural Networks and Statistical Pattern Recognition. Amsterdam: North-Holland.
Reiss, T. H. (1993). Recognizing planar objects using invariant image features. Lecture Notes in Computer Science no. 676. Berlin: Springer-Verlag.
Sabourin, M., & Mitiche, A. (1992). Optical character recognition by a neural network. Neural Networks, 5, 843–852.
Sethi, I. K. (1991). Decision tree performance enhancement using an artificial neural network implementation. In I. K. Sethi & A. K. Jain (Eds.), Artificial Neural Networks and Statistical Pattern Recognition. Amsterdam: North-Holland.
Shlien, S. (1990). Multiple binary decision tree classifiers. Pattern Recognition, 23, 757–763.
Simard, P. Y., LeCun, Y. L., & Denker, J. S. (1994). Memory-based character recognition using a transformation invariant metric. In Proc. 12th Inter. Conf. on Pattern Recognition (Vol. 2, pp. 262–267). Los Alamitos, CA: IEEE Computer Society Press.
Werbos, P. (1991). Links between artificial neural networks (ANN) and statistical pattern recognition. In I. K. Sethi & A. K. Jain (Eds.), Artificial Neural Networks and Statistical Pattern Recognition. Amsterdam: North-Holland.
Wilkinson, R. A., Geist, J., Janet, S., Grother, P., Gurges, C., Creecy, R., Hammond, B., Hull, J., Larsen, N., Vogl, T., & Wilson, C. (1992). The first census optical character recognition system conference (Tech. Rep. No. NISTIR 4912). Gaithersburg, MD: National Institute of Standards and Technology.
Received April 18, 1996; accepted November 1, 1996.
Communicated by Eric Mjolsness
Airline Crew Scheduling with Potts Neurons
Martin Lagerholm, Carsten Peterson, Bo Söderberg
Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-223 62 Lund, Sweden
A Potts feedback neural network approach for finding good solutions to resource allocation problems with a nonfixed topology is presented. As a target application, the airline crew scheduling problem is chosen. The topological complication is handled by means of a propagator defined in terms of Potts neurons. The approach is tested on artificial random problems tuned to resemble real-world conditions. Very good results are obtained for a variety of problem sizes. The computer time demand for the approach only grows like (number of flights)³. A realistic problem typically is solved within minutes, partly due to a prior reduction of the problem size, based on an analysis of the local arrival and departure structure at the single airports.

1 Introduction

In the past decade, feedback neural networks have emerged as a useful method to obtain good approximate solutions to various resource allocation problems (Hopfield & Tank, 1985; Peterson & Söderberg, 1989; Gislén, Söderberg, & Peterson, 1989, 1992). Most applications have concerned fairly academic problems like the traveling salesman problem (TSP) and various graph partition problems (Hopfield & Tank, 1985; Peterson & Söderberg, 1989). Gislén, Söderberg, and Peterson (1989, 1992) explored high school scheduling. The typical approach proceeds in two steps: (1) map the problem onto a neural network (spin) system with a problem-specific energy function, and (2) minimize the energy by means of deterministic mean field (MF) equations, which allow for a probabilistic interpretation. Two basic mapping variants are common: a hybrid (template) approach (Durbin & Willshaw, 1987) and a "purely" neural one. The template approach is advantageous for low-dimensional geometrical problems like the TSP, whereas for generic resource allocation problems, a purely neural Potts encoding is preferable.

A challenging resource allocation problem is airline crew scheduling, where a given flight schedule is to be covered by a set of crew rotations, each consisting of a connected sequence of flights (legs), starting and ending at a
given home base (hub). The total crew waiting time is then to be minimized, subject to a number of restrictions on the rotations. This application differs strongly from high school scheduling (Gislén, Söderberg, & Peterson, 1989, 1992) in the existence of nontrivial topological restrictions. A similar structure occurs in multitask telephone routing.

A common approach to this problem is to convert it into a set covering problem, by generating a large pool of legal rotation templates and seeking a subset of the templates that precisely covers the entire flight schedule. Solutions to the set covering problem are then found with linear programming techniques or feedback neural network methods (Ohlsson, Peterson, & Söderberg, 1993). A disadvantage with this method is that the rotation generation for computational reasons has to be nonexhaustive for a large problem; thus, only a fraction of the solution space is available.

The approach to the crew scheduling problem developed in this article is quite different and proceeds in two steps. First, the full solution space is narrowed down using a reduction technique that removes a large part of the suboptimal solutions. Then, a mean field annealing approach based on Potts neurons is applied; a novel key ingredient is the use of a propagator formalism for handling topology, leg counting, and so forth. The method, which is explored on random artificial problems resembling real-world situations, performs well with respect to quality, with a computational requirement that grows like $N_f^3$, where $N_f$ is the number of flights.

2 Nature of the Problem

Typically, a real-world flight schedule has a basic period of one week. Given such a schedule in terms of a set of $N_f$ weekly flights, with specified times and airports of departure and arrival, a crew is to be assigned to each flight such that the total crew waiting time is minimized, subject to the constraints:

• Each crew must follow a connected path (a rotation) starting and ending at the hub (see Figure 1).

• The number of flight legs in a rotation must not exceed a given upper bound $L_{max}$.

• The total duration (flight plus waiting time) of a rotation is similarly bounded by $T_{max}$.

These are the crucial and difficult constraints. In a real-world problem, there are often some twenty additional ones, which we neglect for simplicity; they constitute no additional challenge from an algorithmic point of view. Without the above constraints, the problem would reduce to that of minimizing waiting times independently at each airport (the local problems). These can be solved exactly in polynomial time, for example, by pairwise
Figure 1: Schematic view of the three crew rotations starting and ending in a hub.
connecting arrival flights with subsequent departure flights. It is the global structural requirements that make the crew scheduling problem a challenge.

3 Reduction of Problem Size: Airport Fragmentation

Prior to developing our artificial neural network method, we will describe a technique to reduce the size of the problem, based on the local flight structure at each airport. With the waiting time between an arriving flight i and a departing flight j defined as
$$t^{(w)}_{ij} = \left(t^{(dep)}_j - t^{(arr)}_i\right) \bmod \text{period}, \tag{3.1}$$
the total waiting time for a given problem can change only by an integer times the period. By demanding a minimal waiting time, the local problem at an airport typically can be split up into independent subproblems, where each contains a subset of the arrivals and an equally large subset of the departures. For example, with A/D denoting arrival/departure, the time ordering [AADDAD] can, if minimal waiting time is demanded, be divided into two subproblems: [AADD] and [AD]. Some of these are trivial ([AD]), forcing the crew of an arrival to continue to a particular departure. The minimal total wait time for the local problems is straightforward to compute and will be denoted by $T^{wait}_{min}$.

Similarly, by demanding a solution (assuming one exists) with $T^{wait}_{min}$ also for the constrained global problem, this can be reduced as follows:

• Airport fragmentation. Divide each airport into effective airports corresponding to the nontrivial local subproblems.

• Flight clustering. Join every forced sequence of flights into one effective
Martin Lagerholm, Carsten Peterson, and Bo Soderberg ¨
composite flight, which will thus represent more than one leg and have a formal duration defined as the sum of the durations of its legs and the waiting times between them. The reduced problem thus obtained differs from the original problem only in an essential reduction of the suboptimal part of the solution space; the part with minimal waiting time is unaffected by the reduction. The resulting information gain, taken as the natural logarithm of the decrease in the number of possible configurations, empirically seems to scale approximately like 1.5 times (number of flights), and ranges from 100 to 2000 for the problems probed. The reduced problem may in most cases be further separated into a set of independent subproblems, which can be solved one by one. Some of the composite flights will formally arrive at the same effective airport they started from. This does not pose a problem. Indeed, if the airport in question is the hub, such a single flight constitutes a separate (trivial) subproblem, representing an entire forced rotation. Typically, one of the subproblems will be much larger than the rest and will be referred to as the kernel problem; the remaining subproblems will be essentially trivial. In the formalism below, we allow for the possibility that the problem to be solved has been reduced as described above, which means that flights may be composite. 4 Potts Encoding A naive way to encode the crew scheduling problem would be to introduce Potts spins in analogy with what was done by Gisl´en, Soderberg, ¨ and Peterson (1989, 1992), where each event (lecture) is mapped onto a resource unit (lecture room, time slot). This would require a Potts spin for each flight to handle the mapping onto crews. Since the problem consists of linking together sequences of (composite) flights such that proper rotations are formed, starting and ending at the hub, it appears more natural to choose an encoding where each flight i is mapped, via a Potts spin, onto the flight j to follow it in the rotation: ½ 1 if flight i precedes flight j in a rotation sij = 0 otherwise where it is understood that j is restricted to depart from the (effective) airport where i arrives. In order to ensure that proper rotations are formed, each flight has to be mapped onto precisely one other flight. This restriction is inherent in the Potts spin, defined to have precisely one component “on”: X sij = 1. (4.1) j
In order to start and terminate rotations, two auxiliary flights are intro-
Figure 2: Expansion of the propagator Pij in terms of sij. A line represents a flight, and (•) a landing and take-off.
duced at the hub: a dummy arrival a, forced to be mapped onto every proper hub departure, and a dummy departure b, to which every proper hub arrival must be mapped. No other mappings are allowed at the hub. Thus, a acts as a source and b as a sink of rotations; every rotation starts with a and terminates with b. Both have zero duration and leg count, as well as associated wait times.

Global topological properties, leg counts, and durations of rotations cannot be described in a simple way by polynomial functions of the spins. Instead, they are conveniently handled by means of a propagator matrix, P, derived from the Potts spin matrix s (including a, b components) as
$$P_{ij} = \left((1-s)^{-1}\right)_{ij} = \delta_{ij} + s_{ij} + \sum_k s_{ik}s_{kj} + \sum_{kl} s_{ik}s_{kl}s_{lj} + \sum_{klm} s_{ik}s_{kl}s_{lm}s_{mj} + \cdots \tag{4.2}$$
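As a quick numerical check of equation 4.2, the propagator of a small hand-built spin matrix indeed counts connecting paths. The following sketch is ours (a made-up four-flight chain; numpy's direct inverse stands in for the incremental update described in section 5):

```python
import numpy as np

# Four flights forming the forced chain 0 -> 1 -> 2 -> 3 (s_ij = 1 if j follows i).
s = np.zeros((4, 4))
s[0, 1] = s[1, 2] = s[2, 3] = 1.0

P = np.linalg.inv(np.eye(4) - s)   # P = (1 - s)^(-1), equation 4.2
print(P[0, 3])   # 1.0: exactly one connecting path from flight 0 to flight 3
print(P[3, 0])   # 0.0: no path backward
```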
A pictorial expansion of the propagator is shown in Figure 2. The interpretation is obvious: $P_{ij}$ counts the number of connecting paths from flight i to j. Similarly, an element of the matrix square of P,
$$\sum_k P_{ik}P_{kj} = \delta_{ij} + 2s_{ij} + 3\sum_k s_{ik}s_{kj} + \cdots, \tag{4.3}$$
counts the total number of (composite) legs in the connecting paths, while the number of proper legs is given by
$$\sum_k P_{ik}L_k P_{kj} = L_i\delta_{ij} + s_{ij}\left(L_i + L_j\right) + \sum_k s_{ik}s_{kj}\left(L_i + L_k + L_j\right) + \cdots, \tag{4.4}$$
where $L_k$ is the intrinsic number of single legs in the composite flight k. Thus, $L_{ij} \equiv \sum_k P_{ik}L_k P_{kj}/P_{ij}$ gives the average leg count of the connecting paths. Similarly, the average duration (flight plus waiting time) of the paths from i to j amounts to
$$T_{ij} \equiv \frac{\sum_k P_{ik}\,t^{(f)}_k P_{kj} + \sum_{kl} P_{ik}\,t^{(w)}_{kl}\,s_{kl}\,P_{lj}}{P_{ij}}, \tag{4.5}$$
where $t^{(f)}_i$ denotes the duration of the composite flight i, including the embedded waiting time. Furthermore, any closed loop (such as obtained if two
flights are mapped onto each other) will make P singular; for a proper set of rotations, det P = 1.

The problem can now be formulated as follows. Minimize $T_{ab}$ subject to the constraints¹
$$L_{ib} \le L_{max} \tag{4.6}$$
$$T_{ib} \le T_{max} \tag{4.7}$$
for every departure i at the hub.

5 Mean Field Approach

We use a mean field (MF) annealing approach in the search for the global minimum. The discrete Potts variables, $s_{ij}$, are replaced by continuous MF Potts neurons, $v_{ij}$. They represent thermal averages $\langle s_{ij}\rangle_T$, with T an artificial temperature to be slowly decreased (annealed), and have an obvious interpretation as probabilities (for flight i to be followed by j). The corresponding probabilistic propagator P will be defined as the matrix inverse of $1 - v$. The neurons are updated by iterating the MF equations
$$v_{ij} = \frac{\exp(u_{ij}/T)}{\sum_k \exp(u_{ik}/T)} \tag{5.1}$$
for one flight i at a time, by first zeroing the ith row of v,² and then computing the relevant local fields $u_{ij}$ entering equation 5.1 as
$$u_{ij} = -c_1\, t^{(w)}_{ij} - c_2\, P_{ji} - c_3 \sum_k v_{kj} - c_4\,\Psi\!\left(T^{(ij)}_{rot} - T_{max}\right) - c_5\,\Psi\!\left(L^{(ij)}_{rot} - L_{max}\right), \tag{5.2}$$
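A minimal sketch of this update and of the surrounding annealing loop is given below. The function local_fields is a placeholder for equation 5.2, and the geometric cooling follows the schedule the authors describe in section 7 ($T_n = kT_{n-1}$ with k = 0.9); the naming and loop scaffolding are ours, not the original code:

```python
import numpy as np

def mf_update_row(u_i, T):
    """Potts mean-field update, equation 5.1: v_i = softmax(u_i / T)."""
    z = u_i / T
    z -= z.max()               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def anneal(v, local_fields, T0=1.0, k=0.9, sweeps=40):
    """Serial MF annealing skeleton: sweep over flights while cooling T.
    local_fields(v, i) is a placeholder for equation 5.2."""
    T = T0
    for _ in range(sweeps):
        for i in range(v.shape[0]):
            v[i, :] = 0.0      # zero the ith row first (no self-coupling)
            v[i, :] = mf_update_row(local_fields(v, i), T)
        T *= k
    return v
```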
where j is restricted to be a possible continuation flight to i. In the first term, $t^{(w)}_{ij}$ is the waiting time between flight i and j. The second term suppresses closed loops, and the third term is a soft exclusion term, penalizing solutions where two flights point to the same next flight. In the fourth and fifth terms, $L^{(ij)}_{rot}$ stands for the total leg count and $T^{(ij)}_{rot}$ for the duration of the rotation if i were to be mapped onto j; these amount to
$$L^{(ij)}_{rot} = L_{ai} + L_{jb} \tag{5.3}$$
$$T^{(ij)}_{rot} = T_{ai} + t^{(w)}_{ij} + T_{jb}. \tag{5.4}$$
¹ Minimizing this minimizes the total wait time, since the total flight time is fixed.
² To eliminate self-couplings.
The penalty function Ψ, used to enforce the inequality constraints (Ohlsson et al., 1993; Tank & Hopfield, 1986), is defined by Ψ(x) = xΘ(x), where Θ is the Heaviside step function. Normally, the local fields $u_{ij}$ are derived from a suitable energy function; however, for reasons of simplicity, some of the terms in equation 5.2 are chosen in a more pragmatic way.

After an initial computation of the propagator P from scratch, it is subsequently updated according to the Sherman-Morrison algorithm for incremental matrix inversion (Press, Flannery, Teukolsky, & Vetterling, 1986). An update of the ith row of v, $v_{ij} \to v_{ij} + \delta_j$, generates precisely the following change in the propagator P:
$$P_{kl} \to P_{kl} + \frac{P_{ki}\,z_l}{1 - z_i}, \tag{5.5}$$
$$\text{where } z_l = \sum_j \delta_j P_{jl}. \tag{5.6}$$
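Expressed in code, this is a rank-one (Sherman-Morrison) correction. The sketch below uses our own names and verifies the result against a full inverse:

```python
import numpy as np

def update_propagator(P, i, delta):
    """Apply v[i, :] += delta and update P = (1 - v)^(-1) in O(N^2),
    following equations 5.5 and 5.6."""
    z = delta @ P                                    # z_l = sum_j delta_j P_jl
    return P + np.outer(P[:, i], z) / (1.0 - z[i])   # equation 5.5

# Check against direct inversion:
rng = np.random.default_rng(0)
v = rng.uniform(0, 0.1, (5, 5))
P = np.linalg.inv(np.eye(5) - v)
delta = rng.uniform(0, 0.05, 5)
P_fast = update_propagator(P, 2, delta)
v[2, :] += delta
assert np.allclose(P_fast, np.linalg.inv(np.eye(5) - v))
```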
Inverting the matrix from scratch would take O(N³) operations, while the (exact) scheme devised above requires only O(N²) per row. As the temperature goes to zero, a solution crystallizes in a winner-takes-all dynamics: for each flight i, the largest $u_{ij}$ determines the continuation flight j to be chosen.

6 Test Problems

In choosing test problems, our aim has been to maintain a reasonable degree of realism, while avoiding unnecessary complication and at the same time not limiting ourselves to a few real-world problems, where one can always tune parameters and procedures to get good performance. In order to accomplish this, we have analyzed two typical real-world template problems obtained from a major airline: one consisting of long-distance (LD) and the other of short- and medium-distance (SMD) flights. As can be seen in Figure 3, LD flight time distributions are centered around long times, with a small hump for shorter times representing local continuations of long flights. The SMD flight times have a more compact distribution. For each template we have made a distinct problem generator producing random problems resembling the template.

A problem with a specified number of airports and flights is generated as follows. First, the distances (flight times) between airports are chosen randomly from a suitable distribution. Then a flight schedule is built up in the form of legal rotations starting and ending at the hub. For every new leg, the waiting time and the next airport are randomly chosen in a way designed to make the resulting problems statistically resemble the respective template problem.

Due to the excessive time consumption of the available exact methods (e.g., branch and bound), the performance of the Potts approach cannot be
Figure 3: Flight time distributions in minutes for (a) LD and (b) SMD template problems.
tested against these, except for (in this context) ridiculously small problems, for which the Potts solution quality matches that of an exact algorithm. For artificial problems of a more realistic size, we circumvent this obstacle. Since problems are generated by producing a legal set of rotations, we add in the generator a final check that the implied solution yields $T^{wait}_{min}$; if not, a new problem is generated. Theoretically, this might introduce a bias in the problem ensemble; empirically, however, no problems have had to be redone. Also the two real problems turn out to be solvable at $T^{wait}_{min}$.

Each problem is then reduced as described above (using a negligible amount of computer time), and the kernel problem is stored as a list of flights, with all traces of the generating rotations removed.

7 Results

We have tested the performance of the Potts MF approach for both LD and SMD kernel problems of varying sizes. As an annealing schedule for the serial updating of the MF equations (5.1 and 5.5), we have used $T_n = kT_{n-1}$ with k = 0.9. In principle, a proper value for the initial temperature $T_0$ can be estimated by linearizing the dynamics of the MF equations. We have chosen a more pragmatic approach: the initial temperature is assigned a tentative value of 1.0, which is dynamically adjusted based on the measured rate of change of the neurons until a proper $T_0$ is found. The values used for the coefficients $c_i$ in equation 5.2 are chosen in an equally simple and pragmatic way: $c_1 = 1/\text{period}$, $c_2 = c_3 = 1$, while $c_4 = 1/\langle T_{rot}\rangle$ and $c_5 = 1/\langle L_{rot}\rangle$, where $\langle T_{rot}\rangle$ is the average duration (based on $T^{wait}_{min}$) and $\langle L_{rot}\rangle$ the
Table 1: Average Performance of the Potts Algorithm for LD Problems.

Nf    Na    ⟨Nf^eff⟩   ⟨Na^eff⟩   ⟨R⟩    ⟨CPU time⟩
75    5     23         8          0.0    0.0 sec
100   5     50         17         0.0    0.2 sec
150   10    55         17         0.0    0.3 sec
200   10    99         29         0.0    1.3 sec
225   15    84         26         0.0    0.7 sec
300   15    154        46         0.0    3.4 sec
Notes: The superscript "eff" refers to the kernel problem; subscripts "f" and "a" refer to flight and airport, respectively. The averages are taken over ten different problems for each Nf. The performance is measured as the difference between the waiting time in the Potts and the local solutions, divided by the period. The CPU time refers to a DEC Alpha 2000.
average leg count per rotation, both of which can be computed beforehand. It is worth stressing that these parameter settings have been used for the entire range of problem sizes probed.

When evaluating a solution obtained with the Potts approach, a check is done as to whether it is legal (if not, a simple postprocessor restores legality; this is only occasionally needed); then the solution quality is probed by measuring the excess waiting time R,
$$R = \frac{T^{wait}_{Potts} - T^{wait}_{min}}{\text{period}}, \tag{7.1}$$
which is a nonnegative integer for a legal solution. For a given problem size, as given by the desired number of airports Na and flights N f , a set of ten distinct problems is generated. Each problem is subsequently reduced, and the Potts algorithm is applied to the resulting kernel problem. The solutions are evaluated, and the average R for the set is computed. The results for a set of problem sizes ranging from N f ' 75 to 1000 are shown in Tables 1 and 2; for the two real problems, see Table 3. The results are quite impressive. The Potts algorithm has solved all problems, and with a very modest CPU time consumption, of which the major 3 3 part goes into updating the P matrix. The sweep time scales like (Neff f ) ∝ Nf , with a small prefactor, are due to the fast method used (see equations 5.5 and 5.6). This should be multiplied by the number of sweeps needed— empirically between 30 and 40, independent of problem size.3 3
3. The minor apparent deviation from the expected scaling in Tables 1, 2, and 3 is due to an anomalous scaling of the Digital DXML library routines employed; the number of elementary operations does scale like N_f^3.
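For concreteness, the annealing procedure described at the start of this section can be sketched as follows. This is a minimal illustration rather than the authors' implementation: mf_sweep stands in for one serial update of all Potts mean field neurons v at temperature T (returning the measured rate of change of the neurons), and the thresholds used for adjusting T_0 and for detecting saturation are our own illustrative choices.

    import numpy as np

    def anneal(mf_sweep, v, k=0.9, T0=1.0, saturation=0.99):
        """Serial mean field annealing with the geometric schedule T_n = k * T_{n-1}."""
        T = T0
        # Dynamically adjust the tentative T0: while the neurons still change
        # rapidly, the tentative temperature is too low, so raise it.
        while mf_sweep(v, T) > 0.5:
            T *= 2.0
        # Anneal until the Potts neurons saturate toward sharp 0/1 decisions.
        while np.mean(np.max(v, axis=1)) < saturation:
            mf_sweep(v, T)
            T *= k
        return v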
Table 2: Average Performance of the Potts Algorithm for SMD Problems.

  N_f    N_a   ⟨N_f^eff⟩   ⟨N_a^eff⟩   ⟨R⟩   ⟨CPU time⟩
  600    40       280         64       0.0     19 sec
  675    45       327         72       0.0     35 sec
  700    35       370         83       0.0     56 sec
  750    50       414         87       0.0     90 sec
  800    40       441         91       0.0    164 sec
  900    45       535        101       0.0    390 sec
 1000    50       614        109       0.0    656 sec

Note: The averages are taken over ten different problems for each N_f. Same notation as in Table 1.
Table 3: Average Performance of the Potts Algorithm for Ten Runs on the Two Real Problems.

  N_f    N_a   ⟨N_f^eff⟩   ⟨N_a^eff⟩   ⟨R⟩   ⟨CPU time⟩   Type
  189    15        71         24       0.0     0.6 sec     LD
  948    64       383         98       0.0     184 sec     SMD

Note: Same notation as in Table 1.
8 Summary

We have developed a mean field Potts approach for solving resource allocation problems with a nontrivial topology. The method is applied to airline crew scheduling problems resembling real-world situations. A novel key feature is the handling of global entities, sensitive to the dynamically changing "fuzzy" topology, by means of a propagator formalism. Another important ingredient is the problem size reduction achieved by airport fragmentation and flight clustering, which narrows down the solution space by removing much of the suboptimal part. High-quality solutions are consistently found throughout a range of problem sizes, without fine-tuning of the parameters, and with a time consumption scaling as the cube of the problem size. The basic approach should be easy to adapt to other applications (e.g., communication routing).
Received May 20, 1996; accepted November 27, 1996.
Communicated by John Platt
Online Learning in Radial Basis Function Networks

Jason A. S. Freeman
Centre for Cognitive Science, University of Edinburgh, Edinburgh EH8 9LW, U.K.

David Saad
Department of Computer Science and Applied Mathematics, University of Aston, Birmingham B4 7ET, U.K.
An analytic investigation of the average case learning and generalization properties of radial basis function (RBF) networks is presented, utilizing online gradient descent as the learning rule. The analytic method employed allows both the calculation of generalization error and the examination of the internal dynamics of the network. The generalization error and internal dynamics are then used to examine the role of the learning rate and the specialization of the hidden units, which gives insight into decreasing the time required for training. The realizable and some overrealizable cases are studied in detail: the phase of learning in which the hidden units are unspecialized (symmetric phase) and the phase in which asymptotic convergence occurs are analyzed, and their typical properties are found. Finally, simulations are performed that strongly confirm the analytic results.

1 Introduction

Several tools facilitate the analytic investigation of learning and generalization in supervised neural networks, such as the statistical physics methods (see Watkin, Rau, & Biehl, 1993, for a review), the Bayesian framework (MacKay, 1992), and the "probably approximately correct" (PAC) method (Haussler, 1994). These tools have principally been applied to simple networks, such as linear and boolean perceptrons, and various simplifications of the committee machine (see, for instance, Schwarze, 1993, and references therein). It has proved very difficult to obtain general results for the commonly used multilayer networks, such as the sigmoid multilayer perceptron (MLP) and the radial basis function (RBF) network.

Another approach, based on studying the dynamics of online gradient descent training scenarios, has been used by several authors (Heskes & Kappen, 1991; Leen & Orr, 1994; Amari, 1993) to examine the evolution of system parameters, primarily in the asymptotic regime. A similar approach, based on examining the dynamics of overlaps between characteristic

Neural Computation 9, 1601–1622 (1997)
© 1997 Massachusetts Institute of Technology
system vectors in online training scenarios, has been suggested recently (Saad & Solla, 1995a, 1995b) for investigating the learning dynamics in the soft committee machine (SCM) (Biehl & Schwarze, 1995). This approach provides a complete description of the learning process, formulated in terms of the overlaps between vectors in the system, and it can be easily extended to include general two-layer networks (Riegler & Biehl, 1995).

For RBFs, some analytic studies focus primarily on generalization error. In Freeman and Saad (1995a, 1995b), average case analyses are performed employing a Bayesian framework to study RBFs under a stochastic training paradigm. In Niyogi and Girosi (1994), a bound on generalization error is derived under the assumption that the training algorithm finds a globally optimal solution. Details of studies of RBFs from the perspective of the PAC framework can be found in Holden and Rayner (1995) and its references. These methods focus on a training scenario in which a model is trained on a fixed set of examples using a stochastic training method.

This article presents a method for analyzing the behavior of RBFs in an online learning scenario, whereby network parameters are modified after each presentation of an example. This allows the calculation of generalization error as a function of a set of variables characterizing the properties of the adaptive parameters of the network. The dynamical evolution of these variables in the average case can be found, allowing not only the generalization ability but also the internal dynamics of the network, such as the specialization of hidden units, to be analyzed. This tool has previously been applied to MLPs (Saad & Solla, 1995a, 1995b; Riegler & Biehl, 1995).

2 Training Paradigms for RBF Networks

RBF networks have been successfully employed over the years in many real-world tasks, providing a useful alternative to MLPs. Furthermore, the RBF is a universal approximator for continuous functions given a sufficient number of hidden units (Hartman, Keeler, & Kowalski, 1990).

The RBF architecture consists of a two-layer fully connected network. The mapping performed by each hidden node represents a radially symmetric basis function; within this analysis, the basis functions are considered gaussian, and each is therefore parameterized by two quantities: a vector representing the position of the basis function center in input space and a scalar representing the width of the basis function. For simplicity, the output layer is taken to consist of a single node; this performs a linear combination of the hidden unit outputs.

There are two commonly utilized methods for training RBFs. One approach is to fix the parameters of the hidden layer (both the basis function centers and widths) using an unsupervised technique such as clustering, setting a center on each data point of the training set, or even picking random values (for a review, see Bishop, 1995). Only the hidden-to-output
weights are adaptable, which makes the problem linear in those weights. Although fast to train, this approach results in suboptimal networks, since the basis function centers are set to fixed, suboptimal values. The alternative is to adapt the hidden layer parameters—either just the center positions or both center positions and widths. This renders the problem nonlinear in the adaptable parameters and hence requires an optimization technique, such as gradient descent, to estimate these parameters. The second approach is computationally more expensive but usually leads to greater accuracy of approximation. This article investigates the nonlinear approach, in which basis function centers are continuously modified to allow convergence to more optimal models.

There are two methods in use for gradient descent. In batch learning, one attempts to minimize the additive training error over the entire data set; adjustments to parameters are performed once the full training set has been presented. The alternative approach, examined here, is online learning, in which the adaptive parameters of the network are adjusted after each presentation of a new data point.1 There has been a resurgence of analytic interest in the online method; technical difficulties caused by the variety of ways in which a training set of given size can be selected are avoided, so complicated techniques such as the replica method (Hertz, Krogh, & Palmer, 1989) are unnecessary.

3 Online Learning in RBF Networks

We examine a gradient descent online training scenario on a continuous error measure. The trained model (student) is an RBF network consisting of K basis functions. The center of student basis function (SBF) b is denoted by m_b, and the hidden-to-output weights of the student are represented by w. Training examples will consist of input-output pairs (ξ, ζ). The components of ξ are uncorrelated gaussian random variables of mean 0 and variance σ_ξ^2, while ζ is generated by applying ξ to a deterministic teacher RBF, but one in which the number M and the positions of the hidden units need not correspond to those of the student, which allows investigation of overrealizable and unrealizable cases.2 The mapping implemented by the teacher is denoted by f_T and that of the student by f_S. The hidden-to-output weights of the teacher are w', while the center of teacher basis function u is given by n_u. The vector of SBF responses to input vector ξ is represented by s(ξ), and those of the teacher are denoted by t(ξ). The overall functions computed by
1. Obviously one may employ a method that is a compromise between the two extremes.
2. This represents a general training scenario since, being universal approximators, RBF networks can approximate any continuous mapping to a desired degree.
the networks are therefore:3

f_S(ξ) = \sum_{b=1}^{K} w_b \exp\left( -\frac{\|ξ - m_b\|^2}{2σ_B^2} \right) = w · s(ξ)   (3.1)

f_T(ξ) = \sum_{u=1}^{M} w'_u \exp\left( -\frac{\|ξ - n_u\|^2}{2σ_B^2} \right) = w' · t(ξ)   (3.2)
N will denote the dimensionality of input space and P the number of examples presented. The centers of the basis functions (input-to-hidden weights) and the hidden-to-output weights are considered adjustable; for simplicity, the widths of the basis functions are fixed to a common value σ_B. The evolution of the centers of the basis functions is described in terms of the overlaps Q_bc ≡ m_b · m_c, R_bu ≡ m_b · n_u, and T_uv ≡ n_u · n_v, where T_uv is constant and describes characteristics of the task to be learned.

Previous work in this area (Biehl & Schwarze, 1995; Saad & Solla, 1995a, 1995b; Riegler & Biehl, 1995) has relied on the thermodynamic limit.4 This limit allows one to ignore fluctuations in the updates of the means of the overlaps due to the randomness of the training examples, and it permits the difference equations of gradient descent to be considered as differential equations. The thermodynamic limit is hugely artificial for local RBFs: as the activation is localized, the N → ∞ limit implies that a basis function responds only in the vanishingly unlikely event that an input point falls exactly on its center; there is no obvious reasonable rescaling of the basis functions.5 The price paid for not taking this limit is that one has no a priori justification for ignoring the fluctuations in the update of the adaptive parameters due to the randomness of the training example. In this work, we calculate both the means and variances of the adaptive parameters, showing that the fluctuations are practically negligible (see section 5).

3.1 Calculating the Generalization Error. Generalization error measures the average dissimilarity over input space between the desired mapping f_T

3. Indices b, c, d, and e will always represent SBFs; u and v will represent those of the teacher.
4. P → ∞, N → ∞, and P/N = α, where α is finite.
5. For instance, utilizing exp(−‖ξ − m_b‖^2 / 2Nσ_B^2) eliminates all directional information, as the cross-term ξ · m_b vanishes in the thermodynamic limit.
and that implemented by the learning model f_S. This dissimilarity is taken as the quadratic deviation:

E_G = \frac{1}{2} \left\langle (f_S - f_T)^2 \right\rangle,   (3.3)

where ⟨···⟩ denotes an average over input space with respect to the measure p(ξ). Substituting the definitions of equations 3.1 and 3.2 into this leads to:

E_G = \frac{1}{2} \left\{ \sum_{bc} w_b w_c ⟨s_b s_c⟩ + \sum_{uv} w'_u w'_v ⟨t_u t_v⟩ - 2 \sum_{bu} w_b w'_u ⟨s_b t_u⟩ \right\}.   (3.4)

Since the input distribution is gaussian, the averages are gaussian integrals and can be performed analytically; the resulting expression for generalization error is given in the appendix. Each average depends on combinations of Q, R, and T, according to whether the averaged basis functions belong to student or teacher.

3.2 System Dynamics. Expressions for the time evolution of the overlaps Q and R can be derived by employing the gradient descent rule,

m_b^{p+1} = m_b^p + \frac{η}{Nσ_B^2} δ_b (ξ - m_b),

where δ_b = (f_T - f_S) w_b s_b and η is the learning rate, which is explicitly scaled with 1/N:

⟨ΔQ_bc⟩ = \frac{η}{Nσ_B^2} \left\langle δ_b^p (ξ - m_b^p) · m_c^p + δ_c^p (ξ - m_c^p) · m_b^p \right\rangle + \left( \frac{η}{Nσ_B^2} \right)^2 \left\langle δ_b^p δ_c^p (ξ - m_b^p) · (ξ - m_c^p) \right\rangle   (3.5)

⟨ΔR_bu⟩ = \frac{η}{Nσ_B^2} \left\langle δ_b^p (ξ - m_b^p) · n_u \right\rangle.   (3.6)

The hidden-to-output weights can be treated similarly, but here the learning rate is scaled with 1/K, yielding:6

⟨Δw_b⟩ = \frac{η}{K} \left\langle (f_T - f_S) s_b \right\rangle.   (3.7)

These averages are again gaussian integrals, so they can be carried out analytically. The averaged expressions for ΔQ, ΔR, and Δw are given in the appendix.

6. For simplicity we use the same learning rate for both the centers and the hidden-to-output weights, although different learning rates may be employed.
By iterating equations 3.5, 3.6, and 3.7, the evolution of the learning process can be tracked. This allows one to examine facets of learning such as specialization of the hidden units. Since generalization error depends on Q, R, and w, one can also use these equations with equation 3.4 to track the evolution of generalization error.

4 Analysis of Learning Scenarios

4.1 The Evolution of the Learning Process. Solving the difference equations 3.5, 3.6, and 3.7 iteratively, one obtains solutions for the mean behavior of the overlaps and the weights. There are four distinct phases in the learning process, which are described with reference to an example of learning an exactly realizable task. The task consists of three SBFs learning a graded teacher of three teacher basis functions (TBFs), where graded implies that the square norms of the TBFs (the diagonals of T) differ from one another; for this task, T_00 = 0.5, T_11 = 1.0, and T_22 = 1.5. In this demonstration the teacher is chosen to be uncorrelated, so the off-diagonals of T are 0, and the teacher hidden-to-output weights w' are set to 1.

The learning process is illustrated in Figure 1. Figure 1a (solid curve) shows the evolution of generalization error, calculated from equation 3.4, while Figures 1b–d show the evolution of the equations for the means of R, Q, and w, respectively, calculated by iterating equations 3.5, 3.6, and 3.7 from random initial conditions sampled from the following uniform distributions: Q_bb and w_b are sampled from U[0, 0.1], while Q_bc,b≠c and R_bc are sampled from U[0, 10^{-6}]. These initial conditions will be used throughout the article and reflect the random correlations expected from arbitrary initialization of large systems. Input dimensionality N = 8, learning rate η = 0.9, input variance σ_ξ^2 = 1, and basis function width σ_B^2 = 1 will be employed unless stated otherwise.

Initially, there is a short transient phase in which the overlaps and hidden-to-output weights evolve from their initial conditions until they reach an approximately steady value (P = 0 to P = 1000). The symmetric phase then begins, which is characterized by a plateau in the evolution of the generalization error (see Figure 1a, solid curve; P = 1000 to P = 7000), corresponding to a lack of differentiation among the hidden units; they are unspecialized and learn an average of the hidden units of the teacher, so that the student center vectors and hidden-to-output weights are similar (see Figures 1b–d). The difference in value between the overlaps R between student center vectors and teacher center vectors (see Figure 1b) is due only to the difference in the lengths of the various teacher center vectors; if the overlaps were normalized, they would be identical. The symmetric phase is followed by a symmetry-breaking phase in which the SBFs learn to specialize and become differentiated from one another (P = 7000 to P = 20,000). Finally there is a long convergence phase, as the overlaps and hidden-to-output weights reach their asymptotic values. Since the task is realizable,
Figure 1: The exactly realizable scenario with positive TBFs. Three SBFs learn a graded, uncorrelated teacher of three TBFs with T00 = 0.5, T11 = 1.0, and T22 = 1.5. All teacher hidden-to-output weights are set to 1. (a) The evolution of the generalization error as a function of the number of examples for several different learning rates (η = 0.1, 0.9, 5.0). (b, c) The evolution of overlaps between student and teacher center vectors and among student center vectors, respectively. (d) The evolution of the mean hidden-to-output weights.
this phase is characterized by E_G → 0 (see Figure 1a, solid curve) and by the student center vectors and hidden-to-output weights approaching those of the teacher (i.e., Q_00 = R_00 = 0.5, Q_11 = R_11 = 1.0, Q_22 = R_22 = 1.5, with the off-diagonal elements of both Q and R being zero, and w_b = 1 for all b).7

These phases are generic in that they are observed, sometimes with some variation such as a series of symmetric and symmetry-breaking phases, in every online learning scenario for RBFs so far examined. They also correspond to the phases found for MLPs (Saad & Solla, 1995b; Riegler & Biehl, 1995).
7. The arbitrary labels of the SBFs were permuted to match those of the teacher.
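The random initial conditions specified in section 4.1 can be drawn directly; the following is our own minimal sketch (the explicit symmetrization step is an illustrative choice, since the overlap matrix Q is symmetric by definition):

    import numpy as np

    def initial_conditions(K, M, rng=None):
        """Sample the initial overlaps and weights quoted in section 4.1."""
        rng = rng if rng is not None else np.random.default_rng()
        Q = rng.uniform(0.0, 1e-6, (K, K))                 # Q_bc for b != c
        Q = (Q + Q.T) / 2.0                                # enforce symmetry
        Q[np.diag_indices(K)] = rng.uniform(0.0, 0.1, K)   # Q_bb ~ U[0, 0.1]
        R = rng.uniform(0.0, 1e-6, (K, M))                 # student-teacher overlaps
        w = rng.uniform(0.0, 0.1, K)                       # hidden-to-output weights
        return Q, R, w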
The formalism describes the evolution of the means (and the variances) from certain initial conditions. Convergence of the dynamics to suboptimal attractive fixed points (local minima) may occur if the starting point is within the corresponding basin of attraction. No local minima have been observed in our solutions, which may be an artifact of the system dimensionality.

4.2 The Role of the Learning Rate. With all the TBFs positive, analysis of the time evolution of the generalization error, overlaps, and hidden-to-output weights for various settings of the learning rate reveals the existence of three distinct behaviors.

If η is chosen to be too small (here, η = 0.1), there is a long period in which there is no specialization of the SBFs and no improvement in generalization ability. The process becomes trapped in a symmetric subspace of solutions; this is the symmetric phase. Given asymmetry in the student initial conditions (in R, Q, or w) or in the task itself, this subspace will always be escaped, but the time period required may be prohibitively large (see Figure 1a, dotted curve). The length of the symmetric phase increases with the symmetry of the initial conditions.

At the other extreme, if η is set too large, an initial transient takes place quickly, but there comes a point from which the student vector norms grow extremely rapidly, until, due to the finite variance of the input distribution and the local nature of the basis functions, the SBFs are no longer activated during training (see Figure 1a, dashed curve, with η = 5.0). In this case, the generalization error approaches a finite value as P → ∞, and the task is not solved.

Between these extremes lies a region in which the symmetric subspace is escaped quickly, and E_G → 0 as P → ∞ for the realizable case (see Figure 1a, solid curve, with η = 0.9). The SBFs become specialized, and, asymptotically, the teacher is emulated exactly. These results for the learning rate are qualitatively similar to those found for SCMs and MLPs (Biehl & Schwarze, 1995; Saad & Solla, 1995a, 1995b; Riegler & Biehl, 1995).

4.3 Task Dependence. The symmetric phase depends on the symmetry of the task as well as on that of the initial conditions. One would expect a shorter symmetric phase in inherently asymmetric tasks. To examine this, a task similar to that of section 4.1 was employed, with the single change being that the sign of one of the teacher hidden-to-output weights was flipped, thus providing two categories of targets: positive and negative. The initial conditions of the student remained the same as in the previous task, with η = 0.9.

The evolution of generalization error and the overlaps for this task are shown in Figure 2. The dividing of the targets into two categories effectively eliminates the symmetric phase; this can be seen by comparing the evolution of the generalization error for this task (see Figure 2a, dashed curve) with that for the previous task (see Figure 2a, solid curve). There is no longer a plateau in the generalization error. Correspondingly, the symmetries
Figure 2: The exactly realizable scenario defined by a teacher network with a mixture of positive and negative TBFs. Three SBFs learn a graded, uncorrelated teacher of three TBFs with T_00 = 0.5, T_11 = 1.0, and T_22 = 1.5; w'_0 = 1, w'_1 = −1, w'_2 = 1. (a) The evolution of the generalization error for this case and, for comparison, the evolution in the case of all positive TBFs. (b) The evolution of the overlaps between student and teacher centers R.
between SBFs break immediately, as can be seen by examining the overlaps between student and teacher center vectors (see Figure 2b); this should be compared with Figure 1b, which shows the evolution of the overlaps in the previous task. Note that the plateaus in the overlaps (see Figure 1b, P = 1000 to P = 7000) are not found for the antisymmetric task.

The elimination of the symmetric phase is an extreme result caused by the extremely asymmetric teacher. For networks with many hidden units, one can find a cascade of subsymmetric phases, each shorter than the single symmetric phase in the corresponding task with only positive targets, in which there is one symmetry between the hidden units seeking positive targets and another between those seeking negative targets. This suggests a simple and easily implemented strategy for increasing the speed of learning when targets are predominantly positive (negative): eliminate the bias of the training set by subtracting (adding) the mean target from each target point. This corresponds to an old heuristic among RBF practitioners. It follows that the hidden-to-output weights should be initialized from a zero-mean distribution.

4.4 The Overrealizable Case. In real-world problems, the exact form of the data-generating mechanism is rarely known. This leads to the possibility that the student may be overly powerful, in that it is capable of fitting surfaces more complicated than that of the true teacher. It is important to gain insight into how architectures will respond given such a scenario
Figure 3: The overrealizable scenario. (a) The evolution of the generalization error in two tasks; each task is learned by a well-matched student (exactly realizable) and an overly powerful student (overrealizable). (b, c) The evolution of the overlaps R and the hidden-to-output weights w for the overrealizable case in the second task, in which the teacher RBF includes a mixture of positive and negative hidden-to-output weights. In this scenario, five SBFs learn a graded, uncorrelated teacher of three TBFs with T_00 = 0.5, T_11 = 1.0, and T_22 = 1.5; w'_0 = 1, w'_1 = −1, w'_2 = 1.
in order to be confident that they can be used successfully when the true teacher is unknown. Intuitively, one might expect that a student that is well matched to the teacher will learn faster than one that is overly powerful. Figure 3a shows two tasks, each of which compares the overrealizable scenario with the well-matched case. The first task, consisting of three TBFs, is identical to that detailed in section 4.1, and hence has only positive targets. The performance of a well-matched student of three SBFs is compared with an overrealizable scenario in which five SBFs learn the three TBFs. Comparison of the evolution of generalization error between these learning scenarios
is shown in Figure 3a; the solid curve represents the well-matched scenario, and the dot-dash curve illustrates the overrealizable scenario. The length of the symmetric phase is significantly increased with the overly powerful student. The length of the convergence phase is also increased. An analytical treatment of these effects, as well as of the overrealizable scenario generally, is given elsewhere (Freeman & Saad, in press).

The second task deals with the alternative scenario in which one TBF has a negative hidden-to-output weight; the task is identical to that defined in section 4.3, and the student initial conditions are again as specified in section 4.1. In Figure 3a the evolution of generalization error is shown for both the overrealizable scenario (dashed curve), in which five SBFs learn three TBFs, and the corresponding well-matched case, in which three SBFs learn three TBFs (dotted curve). There is no well-defined symmetric phase, due to the inherent asymmetry of the task. The convergence phase is again greatly increased in length; this appears to be a general feature of the overrealizable scenario.

Given that the student is overly powerful, there appear to be, a priori, several remedies available to the student: eliminate the excess nodes, form cancellation pairs (in which two student units exactly cancel one another), or devise more complicated fitting schemes. To examine the actual responses of the student, the evolution of the overlaps between student and teacher and of the hidden-to-output weights for the scenario described by the second task is presented in Figures 3b and 3c, respectively. Looking first at Figure 3c, it is apparent that w_3 approaches zero (short-dashed curve), indicating that SBF 3 is entirely eliminated during training. Thus four SBFs remain to emulate three TBFs. The negative TBF 1 is exactly emulated by SBF 0, as T_11 = 1, w'_1 = −1, and R_01 = 1, w_0 = −1 (solid curve in both Figures 3b and 3c), while, similarly, SBF 2 exactly emulates TBF 2 (long-dashed curve, both figures). This leaves SBF 1 and SBF 4 to emulate TBF 0. Looking at Figure 3c, dotted and dot-dash curves, both student hidden-to-output weights approach 0.5, exactly half the hidden-to-output weight of TBF 0; looking at Figure 3b, both SBFs have an overlap of 0.5 with TBF 0. This indicates that the sum of the two student units emulates TBF 0. Thus, elimination and fitting involving a noncancelling combination of nodes were found; in these trials and many others, no pairwise cancellation was found. One presumes that this could be induced by very careful selection of the initial conditions but that it is not found under normal circumstances.

4.5 Analysis of the Symmetric Phase. The symmetric phase, in which there is no specialization of the hidden units, can be analyzed in the realizable case by employing a few simplifying assumptions. It is a phenomenon that is predominantly associated with small η, so terms of order η^2 are neglected. The hidden-to-output weights are clamped to +1. The teacher is taken to be isotropic: TBF centers have identical norms of 1, each
having no overlap with the others; therefore T_uv = δ_uv. This has the result that the student norms Q_bb are very similar in this phase, as are the student-student correlations, so Q_bb ≡ Q and Q_bc,b≠c ≡ C, where Q becomes the square norm of the SBFs and C is the overlap between any two different SBFs.

Following the geometric argument of Saad and Solla (1995b), in the symmetric phase the SBF centers are confined to the subspace spanned by the TBF centers. Since T_uv = δ_uv, the SBF centers can be written in the orthonormal basis defined by the TBF centers, with the components being the overlaps R: m_b = \sum_{u=1}^{M} R_{bu} n_u. Because the teacher is isotropic, the overlaps are independent of both b and u and thus can be written in terms of a single parameter R. Further, this reduction to a single overlap parameter leads to Q = C = MR^2, so the evolution of the overlaps can be described by a single difference equation for R.

The analytic solution of equations 3.5, 3.6, and 3.7 under these restrictions is still rather complicated. However, since we are primarily interested in large systems, that is, large K, we examine the dominant terms in the solution. Expanding in 1/K and discarding second-order terms renders the system simple enough to solve analytically for the symmetric fixed point:

R = \frac{1}{K \left( 1 + \frac{1}{σ_B^2} - σ_B^2 \exp\left[ \frac{1}{2σ_B^2} \cdot \frac{σ_B^2 + 1}{σ_B^2 + 2} \right] \right)}.   (4.1)
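A minimal sketch of evaluating this fixed point and the resulting symmetric-phase overlaps (our own illustration of equation 4.1 together with Q = C = MR^2 as derived above):

    import numpy as np

    def symmetric_fixed_point(K, M, sigma_B2=1.0):
        """Evaluate equation 4.1 and the symmetric-phase overlaps Q = C = M R^2."""
        R = 1.0 / (K * (1.0 + 1.0 / sigma_B2
                        - sigma_B2 * np.exp((1.0 / (2.0 * sigma_B2))
                                            * (sigma_B2 + 1.0) / (sigma_B2 + 2.0))))
        Q = C = M * R ** 2
        return R, Q, C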
The stability of the fixed point, and thus the breaking of the symmetric phase, can be examined by an eigenvalue analysis of the dynamics of the system near the fixed point. The method employed is similar to that detailed in Saad and Solla (1995b) and is presented elsewhere (Freeman & Saad, in press). The dominant eigenvalue (λ_1 > 0) scales with K and represents a perturbation that breaks the symmetries between the hidden units; the remaining modes λ_{i≠1} < 0, which also scale with K, are irrelevant because they preserve the symmetry.

This result is in contrast to that for the SCM (Saad & Solla, 1995b), in which the dominant eigenvalue scales with 1/K. This implies that for RBFs, the more hidden units in the network, the faster the symmetric phase is escaped, resulting in negligible symmetric phases for large systems, while in SCMs the opposite is true. This difference is caused by the contrast between the localized nature of the basis functions in the RBF network and the global nature of the sigmoidal hidden nodes in the SCM. In the SCM case, small perturbations around the symmetric fixed point result in relatively small changes in error, since the sigmoidal response changes very slowly as one modifies the weight vectors. On the other hand, the gaussian response decays exponentially as one moves away from the center, so small perturbations around the symmetric fixed point result in massive changes that drive the symmetry breaking. When K increases, the
error surface looks very rugged, emphasizing the peaks and increasing this effect, in contrast to the SCM case, where more sigmoids mean a smoother error surface.

This does not mean that the symmetric phase can be ignored for realistically sized networks, however. Even with a teacher that is not particularly symmetric, this phase can play a significant role in the learning dynamics. To demonstrate this, a teacher RBF of 10 hidden units with N = 5 was constructed, with the teacher centers generated from a gaussian distribution N(0, 0.5). Note that this teacher must be correlated because the number of centers is larger than the input dimension. A student network, also of 10 hidden units, was constructed with all weights initialized from N(0, 0.05). The networks were then mapped into the corresponding overlaps, and the learning process was run with η = 0.1. The evolution of generalization error is shown in Figure 4d: the symmetric phase, extending here from P = 2000 to P = 15,000, is a prominent phenomenon of the learning dynamics. It is not merely an artifact of a highly symmetric teacher configuration (the teacher was random and correlated) or of a specially chosen set of initial conditions, as the student was initialized with realistic initial conditions before being mapped into overlaps.
4.6 Analysis of the Convergence Phase. To gain insight into the convergence of the online gradient descent process in a realizable scenario, a simplified learning scenario similar to that used in the symmetric phase analysis was employed. The hidden-to-output weights are again fixed to +1, and the teacher is defined by T_uv = δ_uv. The scenario can be extended to adaptable hidden-to-output weights (this is presented in Freeman & Saad, in press, along with more mathematical detail).

As in the symmetric phase, the fact that T_uv = δ_uv allows the system to be reduced to four adaptive quantities: Q ≡ Q_bb, C ≡ Q_bc,b≠c, R ≡ R_bb, and S ≡ R_bc,b≠c. Linearizing this system about the known fixed point of the dynamics, Q = 1, C = 0, R = 1, S = 0, yields an equation of the form Δx = Ax, where x = {1 − R, 1 − Q, S, C} is the vector of deviations from the fixed point. The eigenvalues of the matrix A control the converging system; these are presented in Figure 4a for K = 10. In every case examined, there is a single critical eigenvalue λ_c that controls the stability and convergence rate of the system (shown in bold), a nonlinear subcritical eigenvalue, and two subcritical linear eigenvalues. The value of η at which λ_c = 0 determines the maximum learning rate for convergence to occur; for λ_c > 0 the fixed point is unstable. The convergence of the overlaps is controlled by the critical eigenvalue; therefore, the value of η at the single minimum of λ_c determines the optimal learning rate (η_opt) in terms of the fastest convergence of the system to the fixed point.

Examining η_c and η_opt as functions of K (see Figure 4b), one finds that both quantities scale as 1/K; the maximum and optimal learning rates are
Figure 4: Convergence and symmetric phases. (a) The eigenvalues controlling the dynamics of the system for the convergence phase (detailed in section 4.6), linearized about the asymptotic fixed point in the realizable case, as a function of η. The critical eigenvalue is shown in bold. (b) The maximum and optimal convergence phase learning rates, found from the critical eigenvalue; these quantities scale as 1/K. (c) The maximum convergence phase learning rate as a function of basis function width. (d) The evolution of generalization error for a realistically sized learning scenario (described in section 4.5), demonstrating that the symmetric phase can play a significant role, even with a correlated, asymmetric teacher.
inversely proportional to the number of hidden units of the student. Numerically, the ratio of η_opt to η_c is approximately two-thirds. Finally, the relationship between basis function width and η_c is plotted in Figure 4c. When the widths are small, η_c is very large, as it becomes unlikely that a training point will activate any of the basis functions. For σ_B^2 > σ_ξ^2, η_c ∼ 1/σ_B^2.
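Numerically, the same eigenvalue analysis can be sketched by building the Jacobian of the averaged update map by finite differences and scanning η. This is our own minimal illustration: update_map is a placeholder for the averaged difference equations of the reduced system x = (1 − R, 1 − Q, S, C) described above, which the text treats analytically.

    import numpy as np

    def critical_eigenvalue(update_map, eta, eps=1e-6):
        """Largest eigenvalue of the linearized dynamics at the fixed point x = 0."""
        n = 4                                     # deviations (1 - R, 1 - Q, S, C)
        A = np.zeros((n, n))
        for j in range(n):
            x = np.zeros(n)
            x[j] = eps
            A[:, j] = np.asarray(update_map(x, eta)) / eps  # finite-difference Jacobian
        return np.max(np.real(np.linalg.eigvals(A)))

    # Scanning eta locates eta_c (where the critical eigenvalue crosses zero)
    # and eta_opt (where the critical eigenvalue attains its single minimum):
    # etas = np.linspace(0.01, 4.0, 400)
    # lams = [critical_eigenvalue(update_map, eta) for eta in etas]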
5 Quantifying the Variances

Because the thermodynamic limit is not employed, it is necessary to quantify the variances in the adaptive parameters to justify considering only the mean updates.8 By making assumptions as to the form of these variances, it is possible to derive equations describing their evolution. Specifically, it is assumed that each update function and each parameter being updated can be written in terms of a mean and a fluctuation; for instance, applying this to Q_bc:

ΔQ_bc = ⟨ΔQ_bc⟩ + Δ\hat{Q}_{bc},   Q_bc = ⟨Q_bc⟩ + \sqrt{\frac{η}{N}} \hat{Q}_{bc},   (5.1)

where ⟨Q_bc⟩ denotes an average value and \hat{Q}_{bc} represents a fluctuation due to the randomness of the example. Combining these equations and averaging with respect to the input distribution results in a set of difference equations describing the evolution of the variances of the overlaps and hidden-to-output weights (similar to Riegler & Biehl, 1995) as training proceeds. Details of the method can be found in Heskes and Kappen (1991) and in Barber, Saad, and Sollich (1996) for the SCM. It has been shown that the variances vanish in the thermodynamic limit for realizable cases (Barber et al., 1996; Heskes & Kappen, 1991). (A detailed description of the calculation of the variances as applied to RBFs appears in Freeman & Saad, in press.)

Figure 5 shows the evolution of the variances, as error bars on the means, for the dominant overlaps and the hidden-to-output weights, using η = 0.9 and N = 10, on a task identical to that described in section 4.1. Examining the dominant overlaps R first (see Figure 5a), the variances follow the same pattern for each overlap but at different values of P. The variances begin at 0, then increase, peaking at the symmetry-breaking point at which the SBF begins to specialize on a particular TBF; then they decrease to 0 again as convergence occurs. Looking at each SBF in turn: for SBF 2 (dashed curve), the overlap begins to specialize at approximately P = 2000, where the variance peak occurs; for SBF 0 (solid curve), the symmetry lasts until P = 10,000, again where the variance peak occurs; and for SBF 1 (dotted curve), the symmetry breaks later, at approximately P = 20,000, again where the peak of the variance occurs. The variances then dwindle to 0 for each SBF in the convergence phase.

Essentially the same pattern occurs for the hidden-to-output weights (see Figure 5b). The variances increase rapidly until the hidden units begin

8. The hidden-to-output weights are not intrinsically self-averaging even in the thermodynamic limit, although they have been shown to be such for the MLP if the learning rate is scaled with N (Riegler & Biehl, 1995). If scaled differently, adiabatic elimination techniques may be employed to describe the evolution adequately (Riegler, personal communication, 1996).
Figure 5: The evolution of the variances of the overlaps R and the hidden-to-output weights w is shown in (a) and (b), respectively. The curves denote the evolution of the means; the error bars show the evolution of the fluctuations about the means. Input dimensionality N = 10, learning rate η = 0.9, input variance σ_ξ^2 = 1, and basis function width σ_B^2 = 1.0.
to specialize, at which point the variances peak. This is followed by the variances' decreasing to 0 as convergence occurs.

For both overlaps and hidden-to-output weights, the mean is an order of magnitude larger than the standard deviation at the variance peak and is much more dominant elsewhere; the ratio becomes greater as N is increased. The magnitude of the variances is influenced by the degree of symmetry of the initial conditions of the student and of the task, in that the greater this symmetry is, the larger the variances. Discussion of this phenomenon can be found in Barber et al. (1996); it will be explored at greater length for RBFs in a future publication.

6 Simulations

In order to confirm the validity of the analytic results, simulations were performed in which RBFs were trained using online gradient descent. The trajectories of the overlaps were calculated from the trajectories of the weight vectors of the network, and generalization error was estimated by finding the average error on a 1000-point test set. The procedure was performed 50 times and the results averaged, subject to permutation of the labels of the SBFs to ensure that the average was meaningful.

Typical results are shown in Figure 6. The example shown is for an exactly realizable system of three SBFs and three TBFs at N = 5, η = 0.9. Figure 6a shows the correspondence between empirical test error and theoretical generalization error. At all times, the theoretical result is within
Figure 6: Comparison of theoretical results with simulations. The simulation results are averaged over 50 trials; the labels of the student hidden units were permuted where necessary to make the averages meaningful. Empirical generalization error was approximated with the test error on a 1000-point test set (a). Error bars on the simulations are at most the size of the larger asterisks for the overlaps (b, c), and at most twice this size for the hidden-to-output weights (d). Input dimensionality N = 5, learning rate η = 0.9, input variance σξ2 = 1, and basis function width σB2 = 1.
one standard deviation of the empirical result. Figures 6b–d show the excellent correspondence between the trajectories of the theoretical overlaps and hidden-to-output weights and their empirical counterparts; the error bars on the simulation distributions are not shown because they are approximately the size of the symbols. The simulations demonstrate the validity of the theoretical results. In addition, we have found excellent correlation between the analytically calculated variances and those obtained from the simulations (this is explored further in Freeman & Saad, in press).
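The simulation protocol of this section admits a compact sketch. This is our own illustration: rbf_output and online_step are the helper routines sketched in section 3, the teacher parameters n and w0 are assumed given, and the initial center scale 10^{-3} is an illustrative stand-in for the small random initial conditions of section 4.1.

    import numpy as np

    def run_trial(n, w0, K, N, P, eta, rng, test_size=1000):
        """Train one student online; return final overlaps and empirical test error."""
        m = rng.normal(0.0, 1e-3, (K, N))     # small random initial student centers
        w = rng.uniform(0.0, 0.1, K)
        test = rng.normal(0.0, 1.0, (test_size, N))
        for _ in range(P):
            xi = rng.normal(0.0, 1.0, N)      # input with sigma_xi^2 = 1
            zeta, _ = rbf_output(xi, n, w0)   # teacher-generated target
            m, w = online_step(xi, zeta, m, w, eta)
        Q, R = m @ m.T, m @ n.T               # overlaps from the trained weight vectors
        err = np.mean([(rbf_output(x, m, w)[0] - rbf_output(x, n, w0)[0]) ** 2
                       for x in test]) / 2.0  # test-set estimate of E_G (eq. 3.3)
        return Q, R, w, err

Averaging the outputs of run_trial over repeated calls (with the SBF labels permuted to match the teacher) reproduces the averaging procedure described above.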
7 Conclusion

Online learning, in which the adaptive parameters of the network are updated at each presentation of a data point, was examined for the RBF using gradient descent learning. The analytic method presented allows the calculation of the evolution of generalization error and the specialization of the hidden units. This method was used to elucidate the stages of training and the role of the learning rate.

There are four stages of training: a short transitory phase in which the adaptive parameters move from the initial conditions to the symmetric phase; the symmetric phase itself, characterized by a lack of differentiation among hidden units; a symmetry-breaking phase in which the hidden units become specialized; and a convergence phase in which the adaptive parameters reach their final values asymptotically. Three regimes were found for the learning rate: small, giving unnecessarily slow learning; intermediate, leading to fast escape from the symmetric phase and convergence to the correct target; and too large, which results in a divergence of SBF norms and failure to converge to the correct target.

Examining the exactly realizable scenario, it was shown that employing both positive and negative targets leads to much faster symmetry breaking; this appears to be the underlying reason behind the neural network folklore that targets should be given zero mean. The overrealizable case was also studied, showing that overrealizability extends both the length of the symmetric phase and that of the convergence phase.

The symmetric phase for realizable scenarios was analyzed and the value of the overlaps at the symmetric fixed point found. It was discovered that there is a significant difference between the behaviors of the RBF and the SCM, in that increasing K speeds up the symmetry breaking in RBFs, while it slows the process for SCMs. The convergence phase was also studied; both maximum and optimal learning rates were calculated and shown to scale as 1/K. The dependence of the maximum learning rate on the width of the basis functions was also examined; for σ_B^2 > σ_ξ^2, the maximum learning rate scales approximately as 1/σ_B^2. Finally, simulations were performed that strongly confirm the theoretical results.

Future work includes the study of unrealizable cases, in which the learning rate must decay over time in order to find a stable solution; the study of the effects of noise and regularizers; the extension of the analysis of the convergence phase to fully adaptable hidden-to-output weights; and the use of the theory to aid real-world learning tasks by, for instance, deliberately breaking the symmetries between SBFs in order to reduce drastically or even eliminate the symmetric phase.
Appendix

Generalization Error:

E_G = \frac{1}{2} \left\{ \sum_{bc} w_b w_c I_2(b, c) + \sum_{uv} w'_u w'_v I_2(u, v) - 2 \sum_{bu} w_b w'_u I_2(b, u) \right\}   (A.1)

ΔQ, ΔR, and Δw:

⟨ΔQ_bc⟩ = \frac{η}{Nσ_B^2} \left\{ w_b \left[ J_2(b; c) - Q_{bc} I_2(b) \right] + w_c \left[ J_2(c; b) - Q_{bc} I_2(c) \right] \right\} + \left( \frac{η}{Nσ_B^2} \right)^2 w_b w_c \left\{ K_4(b, c) + Q_{bc} I_4(b, c) - J_4(b, c; b) - J_4(b, c; c) \right\}   (A.2)

⟨ΔR_bu⟩ = \frac{η}{Nσ_B^2} w_b \left\{ J_2(b; u) - R_{bu} I_2(b) \right\}   (A.3)

⟨Δw_b⟩ = \frac{η}{K} I_2(b)   (A.4)

I, J, and K:

I_2(b) = \sum_u w'_u I_2(b, u) - \sum_d w_d I_2(b, d)   (A.5)

J_2(b; c) = \sum_u w'_u J_2(b, u; c) - \sum_d w_d J_2(b, d; c)   (A.6)

I_4(b, c) = \sum_{de} w_d w_e I_4(b, c, d, e) + \sum_{uv} w'_u w'_v I_4(b, c, u, v) - 2 \sum_{du} w_d w'_u I_4(b, c, d, u)   (A.7)

J_4(b, c; f) = \sum_{de} w_d w_e J_4(b, c, d, e; f) + \sum_{uv} w'_u w'_v J_4(b, c, u, v; f) - 2 \sum_{du} w_d w'_u J_4(b, c, d, u; f)   (A.8)

K_4(b, c) = \sum_{de} w_d w_e K_4(b, c, d, e) + \sum_{uv} w'_u w'_v K_4(b, c, u, v) - 2 \sum_{du} w_d w'_u K_4(b, c, d, u)   (A.9)

I, J, and K. In each case, only the quantity corresponding to averaging over SBFs is presented. Each quantity has similar counterparts in which TBFs are substituted for SBFs. For instance, I_2(b, c) = ⟨s_b s_c⟩ is presented, and I_2(u, v) = ⟨t_u t_v⟩ and I_2(b, u) = ⟨s_b t_u⟩ are omitted.

I_2(b, c) = (2 l_2 σ_ξ^2)^{-N/2} \exp\left[ \frac{-Q_{bb} - Q_{cc} + (Q_{bb} + Q_{cc} + 2Q_{bc}) / (2σ_B^2 l_2)}{2σ_B^2} \right]   (A.10)

J_2(b, c; d) = \left( \frac{Q_{bd} + Q_{cd}}{2 l_2 σ_B^2} \right) I_2(b, c)   (A.11)

I_4(b, c, d, e) = (2 l_4 σ_ξ^2)^{-N/2} \exp\left[ \frac{-Q_{bb} - Q_{cc} - Q_{dd} - Q_{ee}}{2σ_B^2} \right] \exp\left[ \frac{Q_{bb} + Q_{cc} + Q_{dd} + Q_{ee} + 2(Q_{bc} + Q_{bd} + Q_{be} + Q_{cd} + Q_{ce} + Q_{de})}{4 l_4 σ_B^4} \right]   (A.12)

J_4(b, c, d, e; f) = \left( \frac{Q_{bf} + Q_{cf} + Q_{df} + Q_{ef}}{2 l_4 σ_B^2} \right) I_4(b, c, d, e)   (A.13)

K_4(b, c, d, e) = \left( \frac{2N l_4 σ_B^4 + Q_{bb} + Q_{cc} + Q_{dd} + Q_{ee}}{4 l_4 σ_B^4} + \frac{2(Q_{bc} + Q_{bd} + Q_{be} + Q_{cd} + Q_{ce} + Q_{de})}{4 l_4^2 σ_B^4} \right) I_4(b, c, d, e)   (A.14)

Other Quantities:

l_2 = \frac{2σ_ξ^2 + σ_B^2}{2σ_B^2 σ_ξ^2}   (A.15)

l_4 = \frac{4σ_ξ^2 + σ_B^2}{2σ_B^2 σ_ξ^2}   (A.16)
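As a worked illustration, equations A.1, A.10, and A.15 can be evaluated directly from the overlaps. In this minimal sketch (our own, not part of the original article), a single routine i2 serves the student-student, teacher-teacher, and student-teacher averages by being passed the appropriate inner products taken from Q, T, or R:

    import numpy as np

    def i2(kbb, kcc, kbc, N, sigma_x2=1.0, sigma_B2=1.0):
        """Equation A.10 with generic inner products k.. drawn from Q, R, or T."""
        l2 = (2 * sigma_x2 + sigma_B2) / (2 * sigma_B2 * sigma_x2)        # eq. A.15
        pref = (2 * l2 * sigma_x2) ** (-N / 2)
        expo = (-kbb - kcc + (kbb + kcc + 2 * kbc) / (2 * sigma_B2 * l2)) / (2 * sigma_B2)
        return pref * np.exp(expo)

    def generalization_error(Q, R, T, w, w0, N):
        """Equation A.1: sum the pairwise gaussian averages over all basis functions."""
        K, M = len(w), len(w0)
        EG = 0.0
        for b in range(K):
            for c in range(K):
                EG += w[b] * w[c] * i2(Q[b, b], Q[c, c], Q[b, c], N)      # <s_b s_c>
        for u in range(M):
            for v in range(M):
                EG += w0[u] * w0[v] * i2(T[u, u], T[v, v], T[u, v], N)    # <t_u t_v>
        for b in range(K):
            for u in range(M):
                EG -= 2 * w[b] * w0[u] * i2(Q[b, b], T[u, u], R[b, u], N) # <s_b t_u>
        return EG / 2.0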
Acknowledgments We thank Ansgar West and David Barber for useful discussions, and the anonymous referees for their comments. D.S. thanks the Leverhulme Trust for its support (F/250/K).
References

Amari, S. (1993). Backpropagation and stochastic gradient descent learning. Neurocomputing, 5, 185–196.
Barber, D., Saad, D., & Sollich, P. (1996). Finite size effects in online learning of multilayer neural networks. Euro. Phys. Lett., 34, 151–156.
Biehl, M., & Schwarze, H. (1995). Learning by online gradient descent. J. Phys. A: Math. Gen., 28, 643.
Bishop, C. (1995). Neural networks for pattern recognition. Oxford: Oxford University Press.
Freeman, J., & Saad, D. (1995a). Learning and generalisation in radial basis function networks. Neural Computation, 7, 1000–1020.
Freeman, J., & Saad, D. (1995b). Radial basis function networks: Generalization in overrealizable and unrealizable scenarios. Neural Networks, 9, 1521–1529.
Freeman, J., & Saad, D. (in press). The dynamics of on-line learning in radial basis function networks. Phys. Rev. A.
Hartman, E., Keeler, J., & Kowalski, J. (1990). Layered neural networks with gaussian hidden units as universal approximators. Neural Computation, 2, 210–215.
Haussler, D. (1994). The probably approximately correct (PAC) and other learning models. In A. Meyrowitz & S. Chipman (Eds.), Foundations of knowledge acquisition: Machine learning (chap. 9). Boston: Kluwer.
Hertz, J., Krogh, A., & Palmer, R. (1989). Introduction to the theory of neural computation. Reading, MA: Addison-Wesley.
Heskes, T., & Kappen, B. (1991). Learning processes in neural networks. Phys. Rev. A, 44, 2718–2726.
Holden, S., & Rayner, P. (1995). Generalization and PAC learning: Some new results for the class of generalized single-layer networks. IEEE Trans. on Neural Networks, 6(2), 368–380.
Leen, T., & Orr, G. (1994). Optimal stochastic search and adaptive momentum. In J. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in neural information processing systems (Vol. 6, pp. 477–484). San Mateo, CA: Morgan Kaufmann.
MacKay, D. (1992). Bayesian interpolation. Neural Computation, 4, 415–447.
Niyogi, P., & Girosi, F. (1994). On the relationship between generalization error, hypothesis complexity and sample complexity for radial basis functions (Tech. Rep.). Cambridge, MA: AI Laboratory, MIT.
Riegler, P., & Biehl, M. (1995). On-line backpropagation in two-layered neural networks. J. Phys. A: Math. Gen., 28, L507–L513.
Saad, D., & Solla, S. (1995a). Exact solution for online learning in multilayer neural networks. Phys. Rev. Lett., 74, 4337–4340.
Saad, D., & Solla, S. (1995b). On-line learning in soft committee machines. Phys. Rev. E, 52, 4225–4243.
Schwarze, H. (1993). Learning a rule in a multilayer neural network. J. Phys. A: Math. Gen., 26, 5781–5794.
Watkin, T., Rau, A., & Biehl, M. (1993). The statistical mechanics of learning a rule. Reviews of Modern Physics, 65, 499–556.
Received March 15, 1996; accepted January 3, 1997.
ARTICLE
Communicated by Pietro Perona
Minimax Entropy Principle and Its Application to Texture Modeling

Song Chun Zhu
Division of Applied Mathematics, Brown University, Providence, RI 02912, U.S.A.

Ying Nian Wu
Department of Statistics, University of Michigan, Ann Arbor, MI 48109, U.S.A.

David Mumford
Division of Applied Mathematics, Brown University, Providence, RI 02912, U.S.A.
This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a given set of observed feature statistics, a distribution can be built to bind these feature statistics together by maximizing the entropy over all distributions that reproduce them. The second part is the minimum entropy principle for feature selection: among all plausible sets of feature statistics, we choose the set whose maximum entropy distribution has the minimum entropy. Computational and inferential issues in both parts are addressed; in particular, a feature pursuit procedure is proposed for approximately selecting the optimal set of features. The minimax entropy principle is then corrected by considering the sample variation in the observed feature statistics, and an information criterion for feature pursuit is derived. The minimax entropy principle is applied to texture modeling, where a novel Markov random field (MRF) model, called FRAME (filter, random field, and minimax entropy), is derived, and encouraging results are obtained in experiments on a variety of texture images. The relationship between our theory and the mechanisms of neural computation is also discussed.
1 Introduction

This article proposes a general theory and methodology, the minimax entropy principle, for statistical modeling in a variety of applications. This section introduces the basic concepts of the minimax entropy principle after a discussion of the motivation of our theory and a brief review of some relevant theories and methods previously studied in the literature.

Neural Computation 9, 1627–1660 (1997)
© 1997 Massachusetts Institute of Technology
1.1 Motivation and Goal. In a variety of disciplines ranging from computational vision, pattern recognition, and image coding to psychophysics, an important theme is to pursue a probability model to characterize a set of images (or signals) I. This is often posed as a statistical inference problem: we assume that there exists a joint probability distribution (or density) f(I) over the image space; f(I) should concentrate on a subspace that corresponds to the ensemble of images in the application; and the objective is to estimate f(I) given a set of observed (or training) images. f(I) plays significant roles in the following areas:

1. Visual coding, where the goal is to take advantage of the regularity or redundancy in the input images to produce a compact coding scheme. This involves measuring the efficiency of coding schemes in terms of entropy (Watson, 1987; Barlow, Kaushal, & Mitchison, 1989), where the computation of the entropy, and thus the choice of the optimal coding scheme, depends on the estimation of f(I). For example, two kinds of coding schemes are compared in the recent work of Field (1994): compact coding and sparse coding. The former assumes gaussian distributions for f(I), whereas the latter assumes nongaussian ones.

2. Pattern recognition, neural networks, and statistical decision theory, where one often needs to find a probability model f(I) for each category of images of similar patterns. Thus, an accurate estimation of f(I) is a key factor for successful classification and recognition.

3. Computational vision, where f(I) is often adopted as a prior model in terms of Bayesian theory, and it provides a language for visual computation ranging from image segmentation to scene understanding (Zhu, 1996).

4. Texture modeling, where the objective is to estimate f(I) by a probability model p(I) for each set of texture images that have perceptually similar texture appearances. p(I) is important not only for texture analysis, such as texture segmentation and classification, but also for texture synthesis, since texture images can be synthesized by sampling p(I). Furthermore, the nature of the texture model helps us understand the mechanisms of human texture perception (Julesz, 1995).

However, making inferences about f(I) is much more challenging than many of the learning problems in neural modeling (Dayan, Hinton, Neal, & Zemel, 1995; Xu, 1995), for the following reasons. First, the dimension of the image space is overwhelmingly large compared with the number of available training examples. In texture modeling, for instance, the size of images is often about 200 × 200 pixels, and thus f(I) is a function of 40,000 variables, whereas we have access to only one or a few training images. This makes it inappropriate to use nonparametric inference methods, such
as kernel methods, radial basis functions (Ripley, 1996), and mixtures of gaussian models (Jordan & Jacobs, 1994). Second, f(I) is often far from being gaussian; therefore some popular dimension-reduction techniques, such as principal component analysis (Jolliffe, 1986) and spectral analysis (Priestley, 1981), do not appear to be directly applicable.

As an illustration of the nongaussian property, Figure 1a shows the empirical marginal distribution (or histogram) of the intensity differences of horizontally adjacent pixels of some natural images (Zhu & Mumford, 1997). As a comparison, the gaussian distribution with the same mean and variance is plotted as a dashed curve in Figure 1a. Similar nongaussian properties are also observed in Field (1994). Another example is shown in Figure 1b, where the solid curve is the histogram of F ∗ I, with I being a texton image shown in Figure 8a and F a filter with the same texton (see section 4.5 for details). It is clear that the solid curve is far from being gaussian; as a comparison, the dotted curve in Figure 1b is the histogram of F ∗ I, with I being a white noise image. The outliers in the histogram are perceptual features, not noise!

1.2 Previous Methods. A key issue in building a statistical model is the balance between generality and simplicity. The model should include rich structures to describe real-world images adequately and should be capable of modeling complexities due to high dimensionality and the nongaussian property; at the same time, it should be simple enough to be computationally feasible and to give a simple explanation of what we observe. To reduce complexity, it is often necessary to impose structures on the distribution. In the past, two main methods have been adopted in applications.

The first method adopts some parametric Markov random field (MRF) models in the forms of Gibbs distributions—for example, the general smoothness models in image restoration (Geman & Geman, 1984; Mumford & Shah, 1989) and the conditional autoregression models in texture modeling (Besag, 1973; Cross & Jain, 1983). This method involves only a small number of parameters and thus constructs concise distributions for images. However, these models do not achieve adequate generality, for the following reasons. First, these MRF models can afford only small cliques; otherwise the number of parameters will explode. But these small cliques can hardly capture image features at relatively large scales. Second, the potential functions are of very limited and prespecified forms, whereas in practice it is often desirable for the forms of the distributions to be determined or learned from the observed images.

The second method is widely used in visual coding and image reconstruction, where the high-dimensionality problem is avoided by representing the images with a relatively small set of feature statistics, the latter usually extracted by a set of well-selected filters. Examples of filters include the frequency and orientation selective Gabor filters (Daugman, 1985) and some wavelet pyramids based on various coding criteria (Mallat, 1989;
Figure 1: (a) The histogram of intensity differences at adjacent pixels and the gaussian curve (dashed) with the same mean and variance, on the domain [−15, 15]. (b) Histogram of the filtered texton image (solid curve) and of a filtered noise image (dotted curve).
Simoncelli, Freeman, Adelson, & Heeger, 1992; Coifman & Wickerhauser, 1992; Donoho & Johnstone, 1994). The feature statistics extracted by a given filter are usually the overall histogram of the filtered image. These histograms are used for pattern classification, recognition, and visual coding (Watson, 1987; Donoho & Johnstone, 1994). Despite the excellent performance of this method, two major problems are yet to be solved. The first is the feature binding or feature fusion problem: given a set of filters and their histograms, how to integrate them into a single probability distribution. This problem becomes much more difficult if the filters used are not all linear and are not independent of each other. The second problem is feature selection: for a given model complexity, how to choose the set of filters or features that best characterizes the images being modeled.

1.3 Our Theory and Methodology. In this article, a minimax entropy principle is proposed for building statistical models; it provides a new strategy for balancing model generality and model simplicity through two seemingly contrary criteria: maximizing entropy and minimizing entropy.

(I) The Maximum Entropy Principle (Jaynes, 1957). Without loss of generality, any image feature can be expressed as φ^(α)(I), where φ^(α)( ) can be a vector-valued function of the image intensities and α is the index of the feature. The statistic of the feature φ^(α)(I) is E_f[φ^(α)(I)], the expectation of φ^(α)(I) with respect to f(I), which is estimated by the sample mean computed from the training images. Then, given a set of features S = {φ^(α), α = 1, 2, ..., K}, a model p(I) is constructed such that it reproduces the feature statistics as observed; that is, E_p[φ^(α)(I)] = E_f[φ^(α)(I)] for α = 1, 2, ..., K. Among all models p(I) satisfying these constraints, the maximum entropy principle favors the simplest one in the sense that it has the maximum entropy. Since entropy is a measure of randomness, a maximum entropy (ME) model p(I) is considered the simplest fusion or binding of the features and their statistics.

(II) The Minimum Entropy Principle. The goodness of the p(I) constructed in (I) is measured by KL(f, p), the Kullback-Leibler divergence from f(I) to p(I) (Kullback & Leibler, 1951), and it depends on the feature set S that we selected. As we will show in the next section, KL(f, p) is, up to a constant, equal to the entropy of p(I). Thus, to estimate f(I) closely, we need to minimize the entropy of the ME distribution p(I) with respect to S, which often means that we should use as many features as possible to specify p(I). In this sense, a minimum entropy principle favors model generality. When the model complexity, or the number of features K, is limited, the minimum entropy principle provides a criterion for selecting the feature set S that best characterizes f(I).

Computational procedures are proposed for parameter estimation and
feature selection. The minimax entropy principle is further studied in the presence of sample variation of the feature statistics. As an example of application, the minimax entropy principle is applied to texture modeling, where the features are extracted by filters that are selected from a general filter bank, and the feature statistics are the empirical marginal distributions (usually further reduced to histograms) of the filtered images. The resulting model, called FRAME (filters, random fields, and minimax entropy), is a new class of MRF model. Compared with previous MRF models, the FRAME model employs a much more enriched vocabulary and hence enjoys much stronger descriptive ability, while at the same time the model complexity is kept in check. Texture images are synthesized by sampling the estimated models, and the correctness of the estimated models is verified by checking whether the synthesized texture images have visual appearances similar to the observed images.

The rest of the article is arranged as follows. Section 2 studies the minimax entropy principle, where algorithms are proposed for parameter estimation and feature selection. In Section 3 we study the minimax entropy principle in depth by correcting it in the presence of estimation error and addressing the issue of variance estimation in homogeneous random fields. Section 4 applies the minimax entropy principle to texture modeling. Section 5 concludes with a discussion of the texture model and the relationship between minimax entropy and neural modeling.

2 The Minimax Entropy Principle

To fix notation, let I be an image defined on a domain D; for example, D can be an N × N lattice. For each point v ∈ D, I(v) ∈ L, where L is the range of the image intensities. For a given application, we assume that there exists an underlying probability distribution (or density) f(I) defined on the image space L^|D|, where |D| is the size of the image domain. The objective is then to estimate f(I) based on a set of observed images {I_i^obs, i = 1, ..., M} sampled from f(I).

2.1 The Maximum Entropy Principle. At the initial stage of studying the regularity and variability of the observed images I_i^obs, i = 1, 2, ..., M, one often starts by exploring a finite set of essential features that are characteristic of the observations. Without loss of generality, such features are extracted by S = {φ^(α)( ), α = 1, 2, ..., K}, where φ^(α)(I) can be a vector-valued function of the image intensities. The statistics of these features are estimated by the sample means,
µ_obs^(α) = (1/M) Σ_{i=1}^M φ^(α)(I_i^obs),   for α = 1, ..., K.
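To make the notation concrete, here is a minimal sketch of computing the observed statistics, assuming the images are NumPy arrays and `phis` is a hypothetical list of feature functions, each mapping an image to a feature vector:

```python
def observed_statistics(phis, train_images):
    """mu_obs^(alpha) = (1/M) * sum_i phi^(alpha)(I_i^obs), one per feature."""
    M = len(train_images)
    return [sum(phi(I) for I in train_images) / M for phi in phis]
```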
If the large sample effect takes place (usually a necessary condition for modeling), then the sample averages {µ_obs^(α), α = 1, ..., K} make reasonable estimates for the expectations {E_f[φ^(α)(I)], α = 1, ..., K}, where E_f denotes the expectation with respect to f(I). We call {µ_obs^(α), α = 1, ..., K} the observed statistics and {E_f[φ^(α)(I)], α = 1, ..., K} the expected statistics of f(I). To approximate f(I), a probability model p(I) is restricted to reproduce the observed statistics, that is, E_p[φ^(α)(I)] = µ_obs^(α) for α = 1, ..., K. Let

Ω_S = {p(I) : E_p[φ^(α)(I)] = µ_obs^(α), ∀φ^(α) ∈ S}   (2.1)
be the set of distributions that reproduce the observed statistics of the feature set S; then we need to select a p(I) ∈ Ω_S, provided that Ω_S ≠ ∅. As far as the observed feature statistics {µ_obs^(α), α = 1, ..., K} are concerned, all the distributions in Ω_S explain them equally well, and they are not distinguishable from f(I). The ME principle (Jaynes, 1957) suggests that we should choose the p(I) that achieves the maximum entropy, to obtain the purest and simplest fusion of the observed features and their statistics. The underlying philosophy is that while p(I) satisfies the constraints along some dimensions, it should be made as random (or smooth) as possible in the other, unconstrained dimensions; that is, p(I) should represent no more information than is available, and in this sense the ME principle is often called the minimum prejudice principle. Thus we have the following constrained optimization problem:

p(I) = arg max { −∫ p(I) log p(I) dI },   (2.2)

subject to

E_p[φ^(α)(I)] = ∫ φ^(α)(I) p(I) dI = µ_obs^(α),   α = 1, ..., K,

and

∫ p(I) dI = 1.

By an application of Lagrange multipliers, it is well known that the solution for p(I) has the following Gibbs distribution form:

p(I; Λ, S) = (1/Z(Λ)) exp{ −Σ_{α=1}^K ⟨λ^(α), φ^(α)(I)⟩ },   (2.3)

where Λ = (λ^(1), λ^(2), ..., λ^(K)) is the parameter, λ^(α) is a vector of the same dimension as φ^(α)(I), ⟨· , ·⟩ denotes inner product, and

Z(Λ) = ∫ exp{ −Σ_{α=1}^K ⟨λ^(α), φ^(α)(I)⟩ } dI
is the partition function, which normalizes p(I; Λ, S) into a probability distribution.

2.2 Estimation and Computation. Equation 2.3 specifies an exponential family of distributions (Brown, 1986),

Θ_S = {p(I; Λ, S) : Λ ∈ R^d},   (2.4)

where d is the total number of parameters. Λ is solved at Λ̂, which satisfies the constraints p(I; Λ̂, S) ∈ Ω_S, that is,

E_{p(I;Λ̂,S)}[φ^(α)(I)] = µ_obs^(α),   α = 1, ..., K.   (2.5)

However, an analytical solution of equation 2.5 is in general unavailable; instead, we solve for p(I; Λ̂, S) iteratively within Θ_S by maximum likelihood. Let L(Λ, S) = (1/M) Σ_{i=1}^M log p(I_i^obs; Λ, S) be the log-likelihood function for any p(I; Λ, S) ∈ Θ_S; it has the following properties:

∂L(Λ, S)/∂λ^(α) = −(1/Z) ∂Z/∂λ^(α) − µ_obs^(α) = E_{p(I;Λ,S)}[φ^(α)] − µ_obs^(α),   ∀α,   (2.6)

∂²L(Λ, S)/∂λ^(α) ∂λ^(β)′ = −E_{p(I;Λ,S)}[(φ^(α)(I) − µ_obs^(α))(φ^(β)(I) − µ_obs^(β))′],   ∀α, β.   (2.7)

Following equation 2.6, maximizing the log likelihood by gradient ascent gives the following equation for solving Λ iteratively:

dλ^(α)/dt = E_{p(I;Λ,S)}[φ^(α)(I)] − µ_obs^(α),   α = 1, ..., K.   (2.8)

Obviously equation 2.8 converges to Λ = Λ̂. Moreover, equation 2.7 means that the negative Hessian matrix of L(Λ, S) is the covariance matrix of (φ^(1)(I), ..., φ^(K)(I)) and thus is positive definite under the condition that a^(0) + Σ_{α=1}^K a^(α) φ^(α)(I) ≡ 0 ⇒ a^(α) = 0 for α = 0, ..., K, which is usually satisfied. So L(Λ, S) is strictly concave with respect to Λ, and the solution for Λ exists uniquely. Following equations 2.5, 2.6, and 2.7, we have the following conclusion.

Proposition 1. Given a feature set S, Ω_S ∩ Θ_S = {p(I; Λ̂, S)}, where Λ̂ is both the maximum entropy estimator and the maximum likelihood estimator.

At each step t of equation 2.8, the computation of E_{p(I;Λ,S)}[φ^(α)(I)] is in general difficult, and we adopt the stochastic gradient method (Younes, 1988) for approximation. For a fixed Λ, we synthesize some typical images
{I_i^syn, i = 1, ..., M′} by sampling p(I; Λ, S) with the Gibbs sampler (Geman & Geman, 1984) or other Markov chain Monte Carlo (MCMC) methods (Winkler, 1995), and approximate E_{p(I;Λ,S)}[φ^(α)(I)] by the sample means; that is,

E_{p(I;Λ,S)}[φ^(α)(I)] ≈ µ_syn^(α)(Λ) = (1/M′) Σ_{i=1}^{M′} φ^(α)(I_i^syn),   α = 1, ..., K.   (2.9)
Therefore the iterative equation for computing Λ becomes

dλ^(α)/dt = Δ^(α)(Λ) = µ_syn^(α)(Λ) − µ_obs^(α),   α = 1, ..., K.   (2.10)
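The following is a minimal sketch of this iteration (equations 2.8–2.10). The routine `sample_model` is a hypothetical MCMC sampler (e.g., a Gibbs sampler) that draws M′ images from the current p(I; Λ, S); the step size and iteration counts are illustrative choices:

```python
import numpy as np

def fit_lambdas(phis, mu_obs, sample_model, step=0.01, n_iter=100, n_syn=16):
    """Stochastic gradient ascent on the log likelihood (equations 2.8-2.10):
    move each lambda^(alpha) along mu_syn^(alpha)(Lambda) - mu_obs^(alpha)."""
    lambdas = [np.zeros_like(mu) for mu in mu_obs]
    for _ in range(n_iter):
        syn = sample_model(lambdas, n_syn)  # M' images from p(I; Lambda, S)
        mu_syn = [sum(phi(I) for I in syn) / len(syn) for phi in phis]
        for lam, ms, mo in zip(lambdas, mu_syn, mu_obs):
            lam += step * (ms - mo)         # equation 2.10
    return lambdas
```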
For the accuracy of the approximation in equation 2.9, the sample size M′ should be large enough. The data flow for parameter estimation is shown in Figure 2, and the details of the algorithm can be found in Zhu, Wu, and Mumford (1996).

2.3 The Minimum Entropy Principle. For now, suppose that the sample size M is large enough that the expected feature statistics {E_f[φ^(α)(I)], α = 1, ..., K} can be estimated exactly, neglecting the estimation errors in the observed statistics {µ_obs^(α), α = 1, ..., K}. Then an ME distribution p(I; Λ*, S) is computed so that it reproduces the expected statistics of a feature set S = {φ^(α), α = 1, 2, ..., K}; that is, E_{p(I;Λ*,S)}[φ^(α)(I)] = E_f[φ^(α)(I)], α = 1, ..., K. Since our goal is to make an inference about the underlying distribution f(I), the goodness of this model can be measured by the Kullback-Leibler divergence (Kullback & Leibler, 1951) from f(I) to p(I; Λ*, S),

KL(f, p(I; Λ*, S)) = ∫ f(I) log [ f(I) / p(I; Λ*, S) ] dI = E_f[log f(I)] − E_f[log p(I; Λ*, S)].

For KL(f, p(I; Λ*, S)), we have the following conclusion:

Theorem 1. In the above notation, KL(f, p(I; Λ*, S)) = entropy(p(I; Λ*, S)) − entropy(f).

See the appendix for a proof. In the above result, entropy(f) is fixed, and entropy(p(I; Λ*, S)) depends on the set of features S included in the distribution p(I; Λ*, S). Thus minimizing KL(f, p(I; Λ*, S)) is equivalent to minimizing the entropy of p(I; Λ*, S). We call this the minimum entropy principle; it has the following intuitive interpretations. First, in information theory, p(I; Λ*, S) defines a coding scheme in which each I is assigned a coding length −log p(I; Λ*, S) (Shannon, 1948), and entropy(p(I; Λ*, S)) = E_p[−log p(I; Λ*, S)] stands for the expected coding length.
Figure 2: Data flow of the algorithm for model estimation and feature selection.
Therefore, a minimum entropy principle chooses the coding system with the shortest average coding length. By Proposition 2, the shortest average coding length in the actually estimated model p(I; Λ̂, S) is minus its log likelihood; hence minimizing entropy is the same as maximizing the likelihood of the data.

Proposition 2. Given a feature set S and p(I; Λ̂, S), then L(Λ̂, S) = −entropy(p(I; Λ̂, S)), where Λ̂ is the ML estimator.

Proof. Since E_{p(I;Λ̂,S)}[φ^(α)(I)] = µ_obs^(α), ∀φ^(α) ∈ S,

L(Λ̂, S) = (1/M) Σ_{i=1}^M { −Σ_{α=1}^K ⟨λ̂^(α), φ^(α)(I_i^obs)⟩ − log Z(Λ̂) }
 = −log Z(Λ̂) − Σ_{α=1}^K ⟨λ̂^(α), µ_obs^(α)⟩
 = −log Z(Λ̂) − Σ_{α=1}^K ⟨λ̂^(α), E_{p(I;Λ̂)}[φ^(α)(I)]⟩
 = −entropy(p(I; Λ̂, S)).

However, to keep the model complexity in check, one often needs to fix the number of features K. To be precise, let B be the set of all possible features and S ⊂ B an arbitrary set of K features. Therefore, entropy minimization provides a criterion for choosing the optimal set of features; that is,

S* = arg min_{|S|=K} entropy(p(I; Λ*, S)).   (2.11)
According to the maximum entropy principle,

p(I; Λ*, S) = arg max_{p ∈ Ω_S} entropy(p).   (2.12)

Combining equations 2.11 and 2.12, we have

S* = arg min_{|S|=K} { max_{p ∈ Ω_S} entropy(p) }.   (2.13)
We call equation 2.13 the minimax entropy principle. We have demonstrated that this principle is consistent with the goal of modeling, namely finding the best estimate for the underlying distribution f(I); the relationship between minimax entropy and the maximum likelihood estimator is addressed by Propositions 1 and 2.

2.4 Feature Pursuit. Enumerating all possible sets of features S ⊂ B and comparing their entropies is certainly impractical. Instead, we propose a greedy procedure to pursue the features in the following way.¹ Start from an empty feature set ∅ and p(I) a uniform distribution; add to the model one feature at a time, such that the added feature leads to the maximum decrease in the entropy of the ME model p(I; Λ*, S); and keep doing this until the entropy decrease is smaller than a certain value. To be precise, let S = {φ^(α), α = 1, ..., K} be the currently selected set of features, and let

p = p(I; Λ, S) = (1/Z(Λ)) exp{ −Σ_{α=1}^K ⟨λ^(α), φ^(α)(I)⟩ }   (2.14)
¹ We use the word pursuit to represent the stepwise method and to distinguish it from selection.
be the ME distribution fitted to f(I) (we omit the * from Λ for notational simplicity in this subsection). For any new feature φ^(β) ∈ B\S, let S_+ = S ∪ {φ^(β)} be the new feature set. The new ME distribution is

p_+ = p(I; Λ_+, S_+) = (1/Z(Λ_+)) exp{ −Σ_{α=1}^K ⟨λ_+^(α), φ^(α)(I)⟩ − ⟨λ_+^(β), φ^(β)(I)⟩ }.   (2.15)
Here E_{p_+}[φ^(α)(I)] = E_f[φ^(α)(I)] for α = 1, 2, ..., K, β, and in general λ_+^(α) ≠ λ^(α) for α = 1, ..., K. According to the above discussion, we choose the feature φ^(K+1) that maximizes the entropy decrease over the remaining features; that is,

φ^(K+1) = arg max_{φ^(β) ∈ B\S} d(φ^(β)),
where d(φ^(β)) = KL(f, p) − KL(f, p_+) = entropy(p) − entropy(p_+) = KL(p_+, p) is the entropy decrease. Let Φ(I) = (φ^(1)(I), ..., φ^(K)(I)); since E_{p_+}[Φ(I)] = E_p[Φ(I)] = E_f[Φ(I)], d(φ^(β)) is a function of the difference between p_+ and p on the feature φ^(β). By a second-order Taylor expansion, d(φ^(β)) can be expressed in a quadratic form.

Proposition 3. In the above notation,

d(φ^(β)) = (1/2) (E_p[φ^(β)(I)] − E_f[φ^(β)(I)])′ V_{p0}^{−1} (E_p[φ^(β)(I)] − E_f[φ^(β)(I)]),   (2.16)

where p0 is a distribution such that E_{p0}[Φ(I)] = E_f[Φ(I)], and E_{p0}[φ^(β)(I)] lies between E_p[φ^(β)(I)] and E_f[φ^(β)(I)]. Here V_{p0} = V22 − V21 V11^{−1} V12, where V11 = Var_{p0}[Φ(I)], V22 = Var_{p0}[φ^(β)(I)], V12 = Cov_{p0}[Φ(I), φ^(β)(I)], and V21 = V12′. See the appendix for proof.
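In matrix terms, V_{p0} is the Schur complement of V11 in the joint covariance of (Φ(I), φ^(β)(I)); a minimal sketch, with NumPy assumed:

```python
import numpy as np

def conditional_variance(V11, V12, V22):
    """Schur complement V_p0 = V22 - V21 V11^{-1} V12 (Proposition 3):
    the variance of phi^(beta) with its dependence on Phi removed."""
    V21 = V12.T
    return V22 - V21 @ np.linalg.solve(V11, V12)
```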
V_{p0} can be interpreted as follows. Let C = −V21 V11^{−1}, and let φ_⊥^(β)(I) = φ^(β)(I) + CΦ(I) be this linear combination of φ^(β)(I) and Φ(I); then under p0 it can be shown that φ_⊥^(β)(I) is uncorrelated with Φ(I); thus V_{p0} = Var_{p0}[φ_⊥^(β)(I)] is the variance of φ^(β)(I) with its dependence on Φ(I) eliminated.

In practice, E_f[φ^(β)(I)] is estimated by the observed statistic µ_obs^(β), and E_p[φ^(β)(I)] by µ_syn^(β), the sample mean computed from synthesized images
sampled from the current model p. If the intermediate distribution p0 is approximated by the current distribution p, the distance d(φ^(β)) is approximated by

d(φ^(β)) ≈ (1/2) (µ_obs^(β) − µ_syn^(β))′ V_p^{−1} (µ_obs^(β) − µ_syn^(β)),   (2.17)

where V_p = Var_p(φ_⊥^(β)) is the variance estimated from the synthesized images. We will further study the estimation of V_p in section 3.2.

The feature pursuit procedure governed by equation 2.17 has the following intuitive interpretation. Under the current model p, for any new feature φ^(β), µ_syn^(β) is the statistic we observe from the image samples following p. If µ_syn^(β) is close to µ_obs^(β), then adding this new feature to p(I; Λ) leads to little improvement in estimating f(I). So we should look for the most salient new feature φ^(β), one for which µ_syn^(β) is very different from µ_obs^(β). The saliency of the new feature is measured by d(φ^(β)), which is the discrepancy between µ_syn^(β) and µ_obs^(β) scaled by V_p, the variance of the new feature compensated for its dependence on the old features under the current model. As a summary, Figure 2 illustrates the data flow for both the computation of the model and the pursuit of features.

3 More on Minimax Entropy

3.1 Correcting the Minimum Entropy Principle. In the previous sections, for a set of features S = {φ^(α), α = 1, ..., K}, we have studied two ME distributions. One is p(I; Λ̂, S), which reproduces the observed feature statistics, that is,

E_{p(I;Λ̂,S)}[φ^(α)(I)] = µ_obs^(α),   for α = 1, ..., K,

and the other is p(I; Λ*, S), which reproduces the expected feature statistics, that is,

E_{p(I;Λ*,S)}[φ^(α)(I)] = E_f[φ^(α)(I)],   for α = 1, ..., K.

In the previous derivations, we assumed that {E_f[φ^(α)(I)], α = 1, ..., K} can be estimated exactly by the observed statistics {µ_obs^(α), α = 1, ..., K}, which is not true in practice since only a finite sample is observed. Taking the estimation errors into account, we need to correct the minimum entropy principle and the feature pursuit procedure.

First, let us consider the minimum entropy principle, which relates the Kullback-Leibler divergence KL(f, p(I; Λ, S)) to the entropy of the model p(I; Λ, S) for Λ = Λ*. Since in practice Λ is estimated at Λ̂, the goodness of the actual model should be measured by KL(f, p(I; Λ̂, S)) instead of KL(f, p(I; Λ*, S)), for which we have:
Proposition 4. In the above notation,

KL(f, p(I; Λ̂, S)) = KL(f, p(I; Λ*, S)) + KL(p(I; Λ*, S), p(I; Λ̂, S)).   (3.1)

See the appendix for proof.

That is, because of the estimation error, p(I; Λ̂, S) does not come as close to f(I) as p(I; Λ*, S) does, and the extra noise is measured by KL(p(I; Λ*, S), p(I; Λ̂, S)). In fact, Λ̂ in the model p(I; Λ̂, S) is a random variable depending on the random sample {I_i^obs, i = 1, ..., M}; so is KL(f, p(I; Λ̂, S)). Let E_obs stand for the expectation with respect to the training images. Applying E_obs to both sides of equation 3.1, we have

E_obs[KL(f, p(I; Λ̂, S))] = KL(f, p(I; Λ*, S)) + E_obs[KL(p(I; Λ*, S), p(I; Λ̂, S))]
 = entropy(p(I; Λ*, S)) − entropy(f) + E_obs[KL(p(I; Λ*, S), p(I; Λ̂, S))].   (3.2)
The following proposition relates entropy(p(I; Λ*, S)) to entropy(p(I; Λ̂, S)).

Proposition 5. In the above notation,

entropy(p(I; Λ*, S)) = E_obs[entropy(p(I; Λ̂, S))] + E_obs[KL(p(I; Λ̂, S), p(I; Λ*, S))].   (3.3)
See the appendix for proof.

According to Proposition 5, the entropy of p(I; Λ̂, S) is on average smaller than the entropy of p(I; Λ*, S); this is because Λ̂ is estimated from a specific set of training data, and hence p(I; Λ̂, S) does a better job than p(I; Λ*, S) in fitting the training data. Combining equations 3.2 and 3.3, we have

E_obs[KL(f, p(I; Λ̂, S))] = E_obs[entropy(p(I; Λ̂, S))] − entropy(f) + C1 + C2,   (3.4)

where the two correction terms are

C1 = E_obs[KL(p(I; Λ*, S), p(I; Λ̂, S))],
C2 = E_obs[KL(p(I; Λ̂, S), p(I; Λ*, S))].

Following Ripley (1996, sec. 2.2), both C1 and C2 can be approximated by

(1/2M) tr(Var_f[Φ(I)] Var_{p*}^{−1}[Φ(I)]) + O(M^{−3/2}),
where tr( ) is the trace of a matrix. Therefore, we arrive at the following form of the Akaike information criterion (Akaike, 1977):

E_obs[KL(f, p(I; Λ̂, S))] ≈ E_obs[entropy(p(I; Λ̂, S))] − entropy(f) + (1/M) tr(Var_f[Φ(I)] Var_{p*}^{−1}[Φ(I)]),

where we drop the higher-order term O(M^{−3/2}). The optimal set of features should be chosen to minimize E_obs[KL(f, p(I; Λ̂, S))], which leads to the following correction of the minimum entropy principle:

S* = arg min_{|S|=K} { entropy(p(I; Λ̂, S)) + (1/M) tr(Var_f[Φ(I)] Var_{p*}^{−1}[Φ(I)]) }.   (3.5)
In practice, Var_f[Φ(I)] and Var_{p*}[Φ(I)] can be estimated from the observed images and the synthesized images, respectively. If Var_f[Φ(I)] ≈ Var_{p*}[Φ(I)], then tr(Var_f[Φ(I)] Var_{p*}^{−1}[Φ(I)]) is approximately the number of free parameters in the model. This provides another reason for restricting the model complexity, besides scientific parsimony and computational efficiency. Another perspective on this issue is the minimum description length (MDL) principle (Rissanen, 1989).

Now let us consider correcting the feature pursuit procedure. Following the notation in section 2.4, at each step K + 1, suppose we choose a new feature φ^(β), and let Φ_+(I) = (Φ(I), φ^(β)(I)); the decrease of the expected Kullback-Leibler divergence is

E_obs[KL(f, p)] − E_obs[KL(f, p_+)] = d(φ^(β)) − (1/M) [tr(Var_f[Φ_+(I)] Var_{p*_+}^{−1}[Φ_+(I)]) − tr(Var_f[Φ(I)] Var_{p*}^{−1}[Φ(I)])].

By linear algebra, we can show that

tr(Var_f[Φ_+(I)] Var_{p*_+}^{−1}[Φ_+(I)]) − tr(Var_f[Φ(I)] Var_{p*}^{−1}[Φ(I)]) = tr(Var_f[φ_⊥^(β)(I)] Var_{p*_+}^{−1}[φ_⊥^(β)(I)]).   (3.6)
See the appendix for the proof of equation 3.6. Therefore, at every step of the (corrected) feature pursuit procedure, we should choose the φ^(β) that maximizes

d′(φ^(β)) = d(φ^(β)) − (1/M) tr( Var_f[φ_⊥^(β)] Var_{p*_+}^{−1}[φ_⊥^(β)] ).

In practice, we approximate Var_{p*_+}[φ_⊥^(β)] by Var_p[φ_⊥^(β)] and estimate the variances from the observed and synthesized images. Let µ_⊥obs^(β) and V̂_obs be
the sample mean and variance of {φ_⊥^(β)(I_i^obs), i = 1, 2, ..., M}, and let V̂_syn be the sample variance of {φ_⊥^(β)(I_i^syn), i = 1, 2, ..., M′}. Thus we have

d′(φ^(β)) ≈ (1/2) (µ_syn^(β) − µ_obs^(β))′ V̂_syn^{−1} (µ_syn^(β) − µ_obs^(β)) − (1/M) tr(V̂_obs V̂_syn^{−1}).   (3.7)
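A minimal numerical sketch of equation 3.7, taking the sample means and variances of the projected feature φ_⊥^(β) as given (all names are illustrative; NumPy assumed):

```python
import numpy as np

def d_prime(mu_syn, mu_obs, V_obs, V_syn, M):
    """Corrected saliency of a candidate feature (equation 3.7):
    the gain term minus the loss term."""
    diff = mu_syn - mu_obs
    gain = 0.5 * diff @ np.linalg.solve(V_syn, diff)
    loss = np.trace(V_obs @ np.linalg.inv(V_syn)) / M
    return gain - loss
```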
We note that in equation 3.7,

tr(V̂_obs V̂_syn^{−1}) = tr( (1/M) Σ_{i=1}^M (φ_⊥^(β)(I_i^obs) − µ_⊥obs^(β))(φ_⊥^(β)(I_i^obs) − µ_⊥obs^(β))′ V̂_syn^{−1} )
 = (1/M) Σ_{i=1}^M (φ_⊥^(β)(I_i^obs) − µ_⊥obs^(β))′ V̂_syn^{−1} (φ_⊥^(β)(I_i^obs) − µ_⊥obs^(β))
is a measure of fluctuation in the observed images. The intuitive meaning of equation 3.7 is the following. The first term is the distance between µ_syn^(β) and µ_obs^(β), and we call it the gain from introducing the new feature φ^(β). The second term measures the uncertainty in estimating E_f[φ^(β)(I)], and we call it the loss from adding φ^(β). If the loss term is large, the feature is less common to the observed images, and thus d′(φ^(β)) is small. When µ_syn^(β) comes very close to µ_obs^(β), d′(φ^(β)) becomes negative, which provides a criterion for stopping the iteration in computing Λ in equation 2.10.

3.2 Variance Estimation in Homogeneous Random Fields. In the previous sections, we assumed that we have M independent observations I_i^obs, i = 1, 2, ..., M, each of the same size as the image domain D (N × N pixels). A feature φ^(β)(I_i^obs) is computed from the intensities of an entire image, and the sample mean and variance are then computed from φ^(β)(I_i^obs), i = 1, 2, ..., M. The same is true for the synthesized images. However, in many applications, such as the texture modeling in the next section, it is assumed that the underlying distribution f(I) is ergodic and the images I are homogeneous; thus, φ^(β)(I) is often expressed as the average of local features ψ^(β)( ):

φ^(β)(I) = (1/|D|) Σ_{v∈D} ψ^(β)(I|_{W+v}),

where ψ^(β) is a function defined on locally supported (filter) windows W centered at v ∈ D. Therefore, by ergodicity, we can still estimate E_f[φ^(β)(I)] and Var_f[φ^(β)(I)] reasonably well even when only a single image is observed, provided that the observed image is large enough compared with the strength of the autocorrelation. In particular, we adopt the method recently proposed by Sherman (1996) for the estimation of Var_f[φ^(β)(I)].

To fix notation, suppose we observe one
image I^obs on an N_o × N_o lattice D_obs, and we define subdomains D_m^j ⊂ D_obs, j = 1, 2, ..., ℓ. For simplicity, we assume all subdomains are square image patches of the same size of m × m pixels; m is usually chosen at the scale of √N_o, and the subdomains may overlap each other. Then for each subdomain D_m^j, we compute φ^(β)(D_m^j) = (1/m²) Σ_{v ∈ D_m^j} ψ^(β)(I|_{W+v}), and the sample mean and variance are computed over the subdomains:

µ_obs^(β)(D_m) = (1/ℓ) Σ_{j=1}^ℓ φ^(β)(D_m^j),

Var_obs^(β)(D_m) = (1/ℓ) Σ_{j=1}^ℓ (φ^(β)(D_m^j) − µ_obs^(β)(D_m))(φ^(β)(D_m^j) − µ_obs^(β)(D_m))′.
j
the sample variance of φ⊥ (Dm ), j = 1, 2, . . . , `. Then from the above result, we can approximate Vˆ obs in equation 3.7 by Vˆ obs (D1 )/N2 . Similarly Vˆ syn in equation 3.7 is replaced by Vˆ syn (D1 )/N2 . Thus we have 0
d (φ
(β)
· )≈N
2
1 (β) (β) 0 −1 (β) (µ − µobs ) Vˆ syn (D1 )(µ(β) syn − µobs ) 2 syn ¸ 1 −1 (D1 )) , − tr(Vˆ obs (D1 )Vˆ syn c
(3.8)
where c is |D_obs| minus the number of pixels around the boundary. From equation 3.8, we notice that d′(φ^(β)) is proportional to N², the size of the domain D. A more rigorous study is often complicated by phase transition, and we shall not pursue it in this article.

4 Application to Texture Modeling

This section applies the minimax entropy principle to texture modeling.

4.1 The General Problem. Texture is an important characteristic of surface properties in visual scenes and a powerful cue in visual perception. A general model for textures has long been sought in both computational vision and psychology, but such a model is still far from being achieved because
of the vast diversity of the physical and chemical processes that generate textures and the large number of attributes that need to be considered. As an illustration of the diversity of textures, Figure 3 displays some typical texture images. Existing models for textures can be roughly classified into three categories: (1) dynamic equations or replacement rules, which simulate specific physical and chemical processes to generate textures (Witkin & Kass, 1991; Picard, 1996); (2) the kth-order statistics model for texture perception, that is, the famous Julesz conjecture (Julesz, 1962); and (3) MRF models. (For a discussion of previous models and methods, see Zhu et al., 1996.)

In our method, a texture is considered an ensemble of images of similar texture appearance governed by a probability distribution f(I). As discussed in section 2, we seek a model p(I; Λ, S) given a set of observed images. p(I; Λ, S) should be consistent with human texture perception in the sense that if p(I; Λ, S) estimates f(I) closely, the images sampled from p(I; Λ, S) should be perceptually similar to the training images.

4.2 Choosing Features and Their Statistics. As the first step of applying the minimax entropy principle, we need to choose image features and their statistics, that is, φ^(α)(I) and µ_obs^(α), α = 1, 2, ..., K.

First, we limit our model to homogeneous textures; thus f(I) is stationary with respect to location v. We assume that features of texture images can be extracted by "filters" F^(α), α = 1, 2, ..., K, where F^(α) can be a linear or nonlinear function of the intensities of the image I. Let I^(α)(v) denote the filter response at point v ∈ D; that is, I^(α)(v) = F^(α)(I|_{W+v}) is a function of the intensities inside a window W centered at v.

Second, recent psychophysical research on human texture perception suggests that two homogeneous textures are often difficult to discriminate when they produce similar marginal distributions (histograms) of responses from a bank of filters (Bergen & Adelson, 1991; Chubb & Landy, 1991). Motivated by this psychophysical research, we make the following assumptions, for computational reasons, to limit the number of filters and the window size of each filter, though these assumptions are not necessary conditions for our theory to hold true:

1. All features that concern texture perception can be captured by "locally" supported filters. By "locally" we mean that the sizes of the filters should be much smaller than the size of the image; for example, the size of the image is 256 × 256 pixels, and the window sizes of the filters are limited to be less than 33 × 33 pixels.

2. Only a finite set of filters is used.

Given a filter F^(α), we compute the histogram of the filtered image I^(α) as the features of I. Therefore, in texture modeling, the notation φ^(α)(I) is
Figure 3: Some typical texture images.
replaced by

H^(α)(I, z) = (1/|D|) Σ_{v∈D} δ(z − I^(α)(v)),   α = 1, 2, ..., K,   z ∈ R,

where δ( ) is the Dirac point mass function concentrated at 0. Correspondingly, the observed statistics µ_obs^(α) are defined as

µ_obs^(α)(z) = (1/M) Σ_{i=1}^M H^(α)(I_i^obs, z),   α = 1, 2, ..., K.
H^(α)(I, z) and µ_obs^(α)(z) are, in theory, continuous functions of z. In practice, they are approximated by piecewise constant functions over a finite number L of bins and are denoted by H^(α)(I) and µ_obs^(α), as L-dimensional (e.g., L = 32) vectors, in the rest of the article.²

When the sample size M is large, or the images I_i^obs are large so that the large sample effect takes place by ergodicity, µ_obs^(α)(z) will be a close estimate of the marginal distribution of f(I):
f^(α)(z) = E_f[H^(α)(I, z)].

Another motivation for choosing µ_obs^(α)(z) as the feature statistics comes from a mathematical theorem, which states that f(I) is determined by all its marginal distributions f^(α)(z). Thus, if a model p(I) reproduces f^(α)(z) for all α, then p(I) = f(I) (Zhu et al., 1996). Substituting H^(α)(I) for φ^(α)(I) in equation 2.3, we obtain

p(I; Λ, S) = (1/Z(Λ)) exp{ −Σ_{α=1}^K ⟨λ^(α), H^(α)(I)⟩ },   (4.1)
which we call the FRAME model. Here the angle brackets indicate a sum over the bins z; that is, ⟨λ^(α), H^(α)(I)⟩ = Σ_z λ_z^(α) H^(α)(I, z). The computation of the parameters Λ and the selection of the filters F^(α) proceed as described in the last section. For a detailed analysis of the texture modeling algorithm, see Zhu et al. (1996).

² Compared with the definitions of φ^(α)(I) and µ_obs^(α), H^(α)(I, z) and µ_obs^(α)(z) are considered vectors of infinite dimensions.
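Under the same assumptions as above, the unnormalized log probability of the FRAME model in equation 4.1 can be sketched as follows (reusing `filter_histogram` from the previous sketch):

```python
def frame_energy(image, kernels, lambdas, **kw):
    """U(I) = sum_alpha <lambda^(alpha), H^(alpha)(I)>, so that
    p(I; Lambda, S) is proportional to exp(-U(I)) (equation 4.1)."""
    return sum(lam @ filter_histogram(image, k, **kw)
               for k, lam in zip(kernels, lambdas))
```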
4.3 FRAME: A New Class of MRF Models. In this section, we derive a continuous form of the FRAME model in equation 4.1 and compare it with existing MRF models.

Since the histograms of an image are continuous functions, the constraint in the ME optimization problem is the following:

E_p[ (1/|D|) Σ_{v∈D} δ(z − I^(α)(v)) ] = µ_obs^(α)(z),   ∀z ∈ R, ∀α.   (4.2)

By an application of Lagrange multipliers, maximizing the entropy of p(I) under the above constraints gives

p(I; Λ, S) = (1/Z(Λ)) exp{ −Σ_{α=1}^K ∫ λ^(α)(z) Σ_{v∈D} δ(z − I^(α)(v)) dz }
 = (1/Z(Λ)) exp{ −Σ_{α=1}^K Σ_{v∈D} λ^(α)(I^(α)(v)) }.   (4.3)
Since z is a continuous variable, there is an infinite number of constraints. The Lagrange multipliers Λ = (λ^(1)( ), ..., λ^(K)( )) take the form of one-dimensional potential functions. More specifically, when the filters are linear, I^(α)(v) = F^(α) ∗ I(v), and we can rewrite equation 4.3 as

p(I; Λ, S) = (1/Z(Λ)) exp{ −Σ_{α=1}^K Σ_{v∈D} λ^(α)(F^(α) ∗ I(v)) }.   (4.4)
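For sampling from a distribution of this Gibbs form, a minimal single-site Gibbs-sampler sketch is given below; the `energy` argument is any routine returning the exponent Σ_α Σ_v λ^(α)(F^(α) ∗ I(v)). Recomputing the full energy per candidate gray level, as done here for clarity, is far slower than the incremental updates of filter responses one would use in practice:

```python
import numpy as np

def gibbs_sweep(image, energy, levels, rng):
    """One sweep of single-site Gibbs sampling for p(I) proportional to
    exp(-U(I)): every pixel is resampled from its conditional distribution."""
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            u = np.array([energy(with_pixel(image, i, j, g)) for g in levels])
            p = np.exp(-(u - u.min()))   # stabilized Boltzmann weights
            image[i, j] = rng.choice(levels, p=p / p.sum())
    return image

def with_pixel(image, i, j, g):
    out = image.copy()
    out[i, j] = g
    return out
```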
Clearly, equations 4.3 and 4.4 are MRF models or, equivalently, Gibbs distributions. But unlike the previous MRF models, the potentials are built directly on the filter responses instead of on cliques, and the forms of the potential functions λ^(α)( ) are learned from the training images, so they can incorporate high-order statistics and thus model nongaussian properties of images. The FRAME model has much stronger expressive power than traditional clique-based MRF models. Every filter introduces the same number, L, of parameters regardless of its window size, which enables us to explore structures at large scales (e.g., the 33 × 33 pixel filters in modeling the fabric texture in section 4.5). It is easy to show that existing MRF models for texture are special cases of the FRAME model with the filters and their potential functions specified. A detailed comparison between the FRAME model and the MRF models is given in Zhu et al. (1996).

4.4 Designing a Filter Bank. To describe a wide variety of textures, we need to specify a general filter bank, which serves as the "vocabulary" by analogy to language. We shall not discuss rules for constructing an optimal filter bank; instead, we use the following five kinds of filters, motivated
by the multichannel filtering mechanism discovered and generally accepted in neurophysiology (Silverman, Grosof, De Valois, & Elfar, 1989):

1. The intensity filter, δ( ), for capturing the DC component.

2. The Laplacian-of-gaussian filters, which are isotropic center-surround filters and are often used to model retinal ganglion cells. The impulse response functions are of the form

LG(x, y | T) = const · (x² + y² − T²) e^{−(x² + y²)/T²}.   (4.5)

We choose eight scales, with T = √2/2, 1, 2, 3, 4, 5, and 6. The filter with scale T is denoted by LG(T).

3. The Gabor filters, which are models for the frequency- and orientation-sensitive simple cells. The impulse response functions are of the form

Gabor(x, y | T, θ) = const · e^{−(1/(2T²)) (4(x cos θ + y sin θ)² + (−x sin θ + y cos θ)²)} · e^{−i (2π/T)(x cos θ + y sin θ)},   (4.6)

where T controls the scale and θ controls the orientation. We choose six scales, T = 2, 4, 6, 8, 10, and 12, and six orientations, θ = 0°, 30°, 60°, 90°, 120°, and 150°. Notice that these filters are not nearly orthogonal to each other, so there is overlap among the information captured by them. The sine and cosine components are denoted by G sin(T, θ) and G cos(T, θ), respectively. (Both kernels are sketched in code after this list.)

4. The nonlinear Gabor filters, which are models for the complex cells; their responses are the powers of the responses from a pair of Gabor filters, |Gabor(x, y | T, θ) ∗ I|². This filter, denoted by SP(T, θ), is in fact the local spectrum of I at (x, y) smoothed by a gaussian function.

5. Some specially designed filters for texton primitives (see section 4.5).
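Minimal sketches of the LG(T) kernel of equation 4.5 and the Gabor kernel of equation 4.6 (unnormalized; the window half-width is an illustrative choice, and theta is in radians):

```python
import numpy as np

def lg_kernel(T, half=8):
    """Laplacian-of-gaussian filter of equation 4.5 (unnormalized):
    LG(x, y | T) = (x^2 + y^2 - T^2) * exp(-(x^2 + y^2) / T^2)."""
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    r2 = x**2 + y**2
    return (r2 - T**2) * np.exp(-r2 / T**2)

def gabor_kernel(T, theta, half=8):
    """Complex Gabor filter of equation 4.6 (unnormalized); the real and
    imaginary parts give G cos(T, theta) and G sin(T, theta)."""
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(4 * u**2 + v**2) / (2 * T**2))
    return envelope * np.exp(-1j * 2 * np.pi * u / T)
```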
Table 1: The Entropy Decrease d′(φ^(β)) for Filter Pursuit.

Filter           Size    d′(φ^(1))  d′(φ^(2))  d′(φ^(3))  d′(φ^(4))  d′(φ^(5))  d′(φ^(8))
δ                1×1     1018.2     42.2       50.8       20.0       26.4       *−1.8
LG(√2/2)         3×3     4205.9     466.0      107.4      172.9      41.6       22.6
LG(1)            5×5     4492.3     —          —          —          —          *−1.4
LG(2)            9×9     20.2       465.7      159.3      24.5       6.3        18.5
G cos(2, 0°)     5×5     3140.8     188.3      140.4      137.0      135.4      *−3.2
G cos(2, 30°)    5×5     4240.3     668.0      307.6      317.8      —          *−1.9
G cos(2, 60°)    5×5     3548.8     124.6      25.1       21.9       14.2       7.5
G cos(2, 90°)    5×5     1063.3     62.1       38.1       90.3       40.7       1.1
G cos(2, 120°)   5×5     1910.7     26.2       2.0        2.5        47.6       16.4
G cos(2, 150°)   5×5     3717.2     220.7      189.2      161.7      9.3        −0.8
G cos(4, 0°)     7×7     958.2      25.7       17.9       5.3        8.2        6.4
G cos(4, 30°)    7×7     2205.8     125.5      61.0       75.2       35.0       0.9
G cos(4, 60°)    7×7     1199.5     32.7       35.4       12.2       10.9       6.9
G cos(4, 90°)    7×7     108.8      229.6      130.6      20.2       31.9       30.2
G cos(4, 120°)   7×7     19.2       1146.4     —          —          —          *−2.7
G cos(4, 150°)   7×7     157.5      247.1      10.4       101.9      56.0       3.9
G cos(6, 0°)     11×11   102.1      12.8       4.3        −1.2       19.0       1.8
G cos(6, 30°)    11×11   217.3      54.8       8.4        32.9       11.5       −1.7
G cos(6, 60°)    11×11   85.6       4.7        0.1        4.5        3.8        6.0
G cos(6, 90°)    11×11   13.6       134.8      192.4      −0.4       7.9        1.6
G cos(6, 120°)   11×11   321.7      706.8      640.3      —          —          *−2.8
G cos(6, 150°)   11×11   3.8        100.1      12.7       98.6       75.1       *−1.4
G cos(8, 0°)     15×15   −1.6       11.0       −0.2       4.6        9.7        14.3
G cos(8, 30°)    15×15   2.4        33.0       2.1        13.8       12.7       −0.1
G cos(8, 60°)    15×15   10.7       5.5        −1.2       4.1        6.8        1.3
G cos(8, 90°)    15×15   203.0      51.9       71.7       3.9        12.3       6.8
G cos(8, 120°)   15×15   586.8      276.6      361.8      58.2       58.2       3.7
G cos(8, 150°)   15×15   140.1      44.6       1.3        45.5       42.5       38.0

Notes: *This filter has already been chosen; its value is computed using the feature φ^(β), not φ_⊥^(β). The largest number in each column is the one chosen by the algorithm at that step.
4.5 Experiments of Texture Modeling. This section describes the modeling of natural textures using the algorithm studied in sections 2 and 3. The first texture image is described in detail to illustrate the filter pursuit procedure.

Suppose we are modeling f(I), where I is of 64 × 64 pixels. Figure 4a is an observed image of animal fur (128 × 128 pixels). We start from the filter set S = ∅ and p(I; Λ, S) a uniform distribution, from which a uniform white noise image is sampled; it is displayed in Figure 4b (128 × 128 pixels). The algorithm first computes d′(φ^(1)) according to equations 3.7 and 3.8 for each filter; the values of d′(φ^(1)) for some filters are listed in Table 1. Filter LG(1) has the largest entropy decrease and thus is chosen as the first filter, S = {LG(1)}. Then a model p(I; Λ, S) is computed, and a synthesized image is shown in Figure 4c. Comparing Figure 4c with Figure 4b, it is evident that this filter captures local smoothness features of the observed texture image. Continuing the algorithm, six more filters are sequentially added: (2) G cos(4, 120°); (3) G cos(6, 120°); (4) G cos(2, 30°); (5) G cos(2, 0°); (6) G cos(6, 150°); and (7) the intensity filter δ( ). The texture images synthesized using 3, 4, and 7 filters are displayed in Figures 4d–f. Obviously, with more filters added, the synthesized texture image gets closer to the observed one. After choosing seven filters, the entropy decrease for all filters becomes very small; some decreases are negative. Similar results are observed for the filters not listed in Table 1.
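Schematically, the experimental loop just described can be summarized as follows; `fit` and `saliency` stand for the (hypothetical) routines that re-estimate Λ by MCMC and evaluate d′ via equations 3.7 and 3.8:

```python
import numpy as np

def pursue_filters(train_images, filter_bank, K, fit, saliency):
    """Greedy filter pursuit (sections 2.4 and 3): at each step, add the
    candidate filter with the largest corrected entropy decrease d', and
    stop once no remaining candidate yields a positive decrease."""
    chosen, model = [], None
    for _ in range(K):
        candidates = [i for i in range(len(filter_bank)) if i not in chosen]
        scores = [saliency(filter_bank[i], model, train_images)
                  for i in candidates]
        best = int(np.argmax(scores))
        if scores[best] <= 0:
            break
        chosen.append(candidates[best])
        model = fit(train_images, [filter_bank[i] for i in chosen])
    return chosen, model
```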
Figure 4: Synthesis of the fur texture. (a) The observed image. (b–f) The synthesized images using 0, 1, 3, 4, 7 filters, respectively.
Figure 5: (a) The observed texture: mud. (b) The synthesized one using five filters.
Figure 6: (a) The observed texture image: cheetah blob. (b) The synthesized one using six filters.
This confirms our earlier assumption that the marginal distributions of a small number of filtered images should be adequate for capturing the essential features of the underlying probability distribution f(I).³

Figure 5a shows a scene of mud ground with scattered animal footprints, which are filled with water and thus appear brighter. This texture image shows sparse features. Figure 5b is the synthesized texture image using five filters.

Figure 6a is an image taken from the skin of a cheetah, and Figure 6b displays the synthesized texture using six filters.

³ The synthetic fur texture in these figures is better than that in Zhu et al. (1996), since the L1 criterion used for filter pursuit in that work has been replaced by the criterion of equation 3.7.
Figure 7: (a) The input image of fabric. (b) The synthesized image with two pairs of Gabor filters plus the Laplacian of gaussian filter. (c, d) Two more images sampled at different steps of the Gibbs sampler.
Notice that the original observed texture image is not homogeneous: the shapes of the blobs vary systematically with spatial location, and the upper left corner is darker than the lower right one. The synthesized texture, shown in Figure 6b, also has elongated blobs introduced by the different filters, but the bright pixels seem to spread uniformly across the image, owing to the effect of entropy maximization.

Figure 7a shows a fabric texture that has clear periods along both the horizontal and vertical directions. We choose two nonlinear filters, the spectrum analyzers SP(17, 0°) and SP(17, 90°), with their periods T tuned to the periods of the texture; the window sizes of these filters are 33 × 33 pixels. We also use the intensity filter δ( ) and the filter LG(√2/2) to take care of the
Figure 8: Two typical texton images of 256 × 256 pixels: (a) circle and (b) cross. (c, d) The two synthesized images of 128 × 128 pixels.
intensity histogram and the smoothness features. Three synthesized texture images are displayed in Figures 7b–d, taken at different sampling steps. This experiment shows that once the Markov chain becomes stationary, or gets close to stationary, the images sampled from p(I) will always have perceptually similar appearances but different details.

Figures 8a and 8b show two special binary texture images formed from identical textons (circles and crosses), which have been studied extensively by psychologists for the purpose of understanding human texture perception. Our interest here is to see whether this class of textures can still be modeled by FRAME. We use the linear filter whose impulse response function is a mask with the corresponding texton at the center. With this filter selected, Figure 1b plots the histograms of the filtered image F ∗ I, with I being the texton image observed in Figure 8a (solid curve) and a uniform noise image (dotted curve). Observe that there are many isolated peaks in the observed histogram, which stand for important image features. The computation of the model is complicated by the nature of such isolated peaks, and we proposed an annealing approach for computing Λ (for details see Zhu et al.,
1996). Figures 8c and 8d show two synthesized images.

5 Discussion

This article proposes a minimax entropy principle for building probability models in a variety of applications. Our theory answers two major questions. The first is feature binding or feature fusion: how to integrate image features and their statistics into a single joint probability distribution without limiting the forms of the features. The second is feature selection: how to choose a set of features to characterize best the observed images. Algorithms are proposed for parameter estimation and stochastic simulation. A greedy algorithm is developed for feature pursuit, and the minimax entropy principle is corrected for the presence of sample variations.

As an example of application, we apply the minimax entropy principle to modeling textures. There are various artificial categories for textures with respect to attributes such as Fourier and non-Fourier, deterministic and stochastic, and macro- and microtextures. FRAME erases these artificial boundaries and characterizes them in a unified model with different filters and parameter values. It has been well recognized that the traditional MRF models, as special cases of FRAME, can be used to model stochastic, non-Fourier microtextures. From the textures we synthesized, it is evident that FRAME is also capable of modeling periodic and deterministic textures (fabric), textures with large-scale elements (fur and cheetah blobs), and textures with distinguishable textons (circles and cross bars).

Our method for texture modeling was inspired by, and bears some similarities to, the recent work by Heeger and Bergen (1995) on texture synthesis, where many natural-looking texture images are successfully synthesized by matching the histograms of filter responses organized in the form of a pyramid. Compared with Heeger and Bergen's algorithm, the FRAME model is distinctive in the following aspects. First, we obtain a probability model p(I; Λ, S) instead of merely synthesizing texture images. Second, the Markov chain Monte Carlo process for model estimation and texture sampling is guaranteed to converge to a stationary process that follows the estimated distribution p(I; Λ, S) (Geman & Geman, 1984), and the observed histograms can be matched closely. However, the FRAME model is computationally expensive, and approaches for further facilitating the computation are yet to be developed. For more discussion on this aspect, see Zhu et al. (1996).

Many textures still seem difficult to model, such as the two human-made cloth textures shown in Figure 9. It appears that synthesizing such textures requires far more sophisticated features than those we have used in the texture modeling experiments, and these features may correspond to a high-level visual process, such as the geometrical properties of object shape. In this article, we choose filters from a fixed set of filters, but in general it is not understood how to design such a set of features or structures for an arbitrary application.
Figure 9: Two challenging texture images.
An important issue is whether the minimax entropy principle for model inference is “biologically plausible” and might be considered a model for the method used by natural intelligences in constructing models of classes of images. From a computational standpoint, the maximum entropy phase of the algorithm consists mainly of approximating the values of the Lagrange multipliers, which we have done by hill climbing with respect to log likelihood. Specifically, we have used Monte Carlo methods to sample our distributions and plugged the sampled statistics into the gradient of log likelihood. One of the authors has conjectured that feedback pathways in the cortex may serve the function of forming mental images on the basis of learned models of the distribution on images (Mumford, 1992). Such a mechanism might well sample by Monte Carlo as in the algorithm in this article. That theory further postulated that the cortex seeks out the “residuals,” the features of the observed image different from those of the mental image. The algorithm shows how such residuals can be used to drive a learning process in which the Lagrange multipliers are gradually improved to increase the log likelihood. We would conjecture that these Lagrange multipliers are stored as suitable synaptic weights in the higher visual area or in the top-down pathway. Given the massively parallel architecture, the apparent stochastic component in neural firing, and the huge amount of observed images processed every day, the computational load of our algorithm may not be excessive for cortical implementation. The minimum entropy phase of our algorithm has some direct experimental evidence in its favor. There has been extensive psychophysical experimentation on the phenomenon of preattentive texture discrimination.
We propose that textures that can be preattentively discriminated are exactly those for which suitable filters have been incorporated into a minimum entropy cortical model and that the process by which subjects can train themselves to discriminate new sets of textures preattentively is exactly that of incorporating a new filter feature into the model. Evidence that texture pairs that are not preattentively segmentable by naive subjects become segmentable after practice has been reported by many groups, most notably by Karni and Sagi (1991). The remarkable specificity of the reported texture discrimination learning suggests that very specific new filters are incorporated into the cortical texture model, as in our theory.

Appendix: Mathematical Details

Proof of Theorem 1. Let Λ* = (λ*^(1), λ*^(2), ..., λ*^(K)) be the parameter. By definition we have E_{p(I;Λ*,S)}[φ^(α)(I)] = E_f[φ^(α)(I)], α = 1, ..., K. Then

E_f[log p(I; Λ*, S)] = −E_f[log Z(Λ*)] − Σ_{α=1}^K E_f[⟨λ*^(α), φ^(α)(I)⟩]
 = −log Z(Λ*) − Σ_{α=1}^K ⟨λ*^(α), E_f[φ^(α)(I)]⟩
 = −log Z(Λ*) − Σ_{α=1}^K ⟨λ*^(α), E_{p(I;Λ*,S)}[φ^(α)(I)]⟩
 = E_{p(I;Λ*,S)}[log p(I; Λ*, S)]
 = −entropy(p(I; Λ*, S)),

and the result follows.

Proof of Proposition 3. Let Φ(I) = (φ^(1)(I), ..., φ^(K)(I)) and Φ_+(I) = (Φ(I), φ^(β)(I)). We have the entropy decrease

d(φ^(β)) = KL(p_+; p)
 = (1/2) (E_p[Φ_+(I)] − E_{p_+}[Φ_+(I)])′ Var_{p0}[Φ_+(I)]^{−1} (E_p[Φ_+(I)] − E_{p_+}[Φ_+(I)])   (A.1)
 = (1/2) (E_p[φ^(β)(I)] − E_f[φ^(β)(I)])′ V_{p0}^{−1} (E_p[φ^(β)(I)] − E_f[φ^(β)(I)]).   (A.2)
Equation A.1 follows a second-order Taylor expansion argument (corollary 4.4 of Kullback, 1959, p. 48), where p0 is a distribution whose expected feature statistics lie between those of p and p_+, and

Var_{p0}[Φ_+(I)] = ( Var_{p0}[Φ(I)]            Cov_{p0}[Φ(I), φ^(β)(I)]
                     Cov_{p0}[φ^(β)(I), Φ(I)]  Var_{p0}[φ^(β)(I)] )
 = ( V11  V12
     V21  V22 ).
Equation A.2 results from the fact that E_{p_+}[Φ(I)] = E_p[Φ(I)], and by the Schur formula it is well known that V_{p0} = V22 − V21 V11^{−1} V12.

Proof of Proposition 4. From the proof of Theorem 1, we know E_f[log p(I; Λ*, S)] = E_{p(I;Λ*,S)}[log p(I; Λ*, S)], and by a similar derivation we have E_f[log p(I; Λ, S)] = E_{p(I;Λ*,S)}[log p(I; Λ, S)] for any Λ. Hence

KL(f, p(I; Λ, S)) = E_f[log f(I)] − E_f[log p(I; Λ, S)]
 = E_f[log f(I)] − E_{p(I;Λ*,S)}[log p(I; Λ, S)]
 = E_f[log f(I)] − E_f[log p(I; Λ*, S)] + E_{p(I;Λ*,S)}[log p(I; Λ*, S)] − E_{p(I;Λ*,S)}[log p(I; Λ, S)]
 = KL(f, p(I; Λ*, S)) + KL(p(I; Λ*, S), p(I; Λ, S)).

The result follows by setting Λ = Λ̂.

Proof of Proposition 5. By Proposition 2 we have L(Λ̂, S) = −entropy(p(I; Λ̂, S)). By a similar derivation, we can prove that E_obs[L(Λ*, S)] = −entropy(p(I; Λ*, S)) and

L(Λ̂, S) − L(Λ*, S) = KL(p(I; Λ̂, S), p(I; Λ*, S)).   (A.3)
Applying E_obs to both sides of equation A.3, we have

−E_obs[entropy(p(I; Λ̂, S))] + entropy(p(I; Λ*, S)) = E_obs[KL(p(I; Λ̂, S), p(I; Λ*, S))],

and the result follows.

Proof of Equation 3.6.
To simplify the notation, we denote
Var_{p*_+}[Φ_+(I)] = ( Var_{p*_+}[Φ(I)]            Cov_{p*_+}[Φ(I), φ^(β)(I)]
                       Cov_{p*_+}[φ^(β)(I), Φ(I)]  Var_{p*_+}[φ^(β)(I)] )
 = ( X11  X12
     X21  X22 ),

A = ( I1             0
      −X21 X11^{−1}  I2 ),
and

B = ( Var_{p*_+}[Φ(I)]  0
      0                 Var_{p*_+}[φ_⊥^(β)(I)] ),

where I1 and I2 are identity matrices, and φ_⊥^(β)(I) is uncorrelated with Φ(I) under p*_+. So we have A Φ_+(I) = (Φ(I), φ_⊥^(β)(I))′ and A Var_{p*_+}[Φ_+(I)] A′ = B. Thus Var_{p*_+}^{−1}[Φ_+(I)] = A′ B^{−1} A, and since Var_{p*_+}[Φ(I)] = Var_{p*}[Φ(I)], therefore
−1 tr(Var f [8+ (I)]Varp−1 A) ∗ [8+ (I)]) = tr(Var f [8+ (I)](A )B +
0
= tr((AVar f [8+ (I)]A )B−1 ) = tr(Var f [8(I)]Varp−1 ∗ [8(I)]) (β)
(β)
+ tr(Var f [φ⊥ (I)]Varp−1 ∗ [φ⊥ (I)]) +
and equation 3.6 follows. Acknowledgments We are very grateful to two anonymous referees, whose insightful comments greatly improved the presentation of this article. This work is supported by the NSF grant DMS-91-21266 to David Mumford. The second author is supported by a grant to Donald B. Rubin. References Akaike, H. (1977). On entropy maximization principle. In P. R. Krishnaiah (Ed.), Applications of Statistics (pp. 27–42). Amsterdam: North-Holland. Barlow, H. B., Kaushal, T. P., & Mitchison, G. J. (1989). Finding minimum entropy codes. Neural Computation, 1, 412–423. Bergen, J. R., & Adelson, E. H. (1991). Theories of visual texture perception. In D. Regan (Ed.), Spatial Vision, Boca Raton, FL: CRC Press, 1991. Besag, J. (1973). Spatial interaction and the statistical analysis of lattice systems (with discussion). J. Royal Stat. Soc., B, 36, 192–236. Brown, L. D. (1986). Fundamentals of statistical exponential families: With applications in statistical decision theory. Hayward, CA: Institute of Mathematical Statistics. Chubb, C., & Landy, M. S. (1991). Orthogonal distribution analysis: A new approach to the study of texture perception. In M. S. Landy & J. A. Movshon (Eds.), Computational models of visual processing. Cambridge, MA: MIT Press. Coifman, R. R., & Wickerhauser, M. V. (1992). Entropy based algorithms for best basis selection. IEEE Trans. on Information Theory, 38, 713–718.
Cross, G. R., & Jain, A. K. (1983). Markov random field texture models. IEEE Trans. PAMI, 5, 25–39.
Daugman, J. (1985). Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. Journal of Optical Soc. Amer., 2(7).
Dayan, P., Hinton, G. E., Neal, R. N., & Zemel, R. S. (1995). The Helmholtz machine. Neural Computation, 7, 889–905.
Donoho, D. L., & Johnstone, I. M. (1994). Ideal de-noising in an orthonormal basis chosen from a library of bases. Acad. Sci. Paris, Ser. I, 319, 1317–1322.
Field, D. (1994). What is the goal of sensory coding? Neural Computation, 6, 559–601.
Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images. IEEE Trans. PAMI, 6, 721–741.
Heeger, D. J., & Bergen, J. R. (1995). Pyramid-based texture analysis/synthesis. In Computer Graphics Proceedings (pp. 229–238).
Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review, 106, 620–630.
Jolliffe, I. T. (1986). Principal components analysis. New York: Springer-Verlag.
Jordan, M. I., & Jacobs, R. A. (1994). Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6, 181–214.
Julesz, B. (1962). Visual pattern discrimination. IRE Transactions on Information Theory, IT-8, 84–92.
Julesz, B. (1995). Dialogues on perception. Cambridge, MA: MIT Press.
Karni, A., & Sagi, D. (1991). Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proc. Nat. Acad. Sci. U.S.A., 88, 4966–4970.
Kullback, S. (1959). Information theory and statistics. New York: Wiley.
Kullback, S., & Leibler, R. A. (1951). On information and sufficiency. Annual Math. Stat., 22, 79–86.
Mallat, S. (1989). Multi-resolution approximations and wavelet orthonormal bases of L²(R). Trans. Amer. Math. Soc., 315, 69–87.
Mumford, D. B. (1992). On the computational architecture of the neocortex II: The role of cortico-cortical loops. Biological Cybernetics, 66, 241–251.
Mumford, D. B., & Shah, J. (1989). Optimal approximations by piecewise smooth functions and associated variational problems. Comm. Pure Appl. Math., 42, 577–684.
Picard, R. W. (1996). A society of models for video and image libraries (Technical Rep. No. 360). Cambridge, MA: MIT Media Lab.
Priestley, M. B. (1981). Spectral analysis and time series. San Diego: Academic Press.
Ripley, B. (1996). Pattern recognition and neural networks. Cambridge: Cambridge University Press.
Rissanen, J. (1989). Stochastic complexity in statistical inquiry. Singapore: World Scientific.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27.
Sherman, M. (1996). Variance estimation for statistics computed from spatial lattice data. J. R. Statistics Soc., B, 58, 509–523.
Silverman, M. S., Grosof, D. H., De Valois, R. L., & Elfar, S. D. (1989). Spatial-frequency organization in primate striate cortex. Proc. Natl. Acad. Sci. U.S.A., 86.
Simoncelli, E. P., Freeman, W. T., Adelson, E. H., & Heeger, D. J. (1992). Shiftable multi-scale transforms. IEEE Trans. on Information Theory, 38, 587–607.
Watson, A. (1987). Efficiency of a model human image code. J. Opt. Soc. Am. A, 4(12), 2401–2417.
Winkler, G. (1995). Image analysis, random fields and dynamic Monte Carlo methods. Berlin: Springer-Verlag.
Witkin, A., & Kass, M. (1991). Reaction-diffusion textures. Computer Graphics, 25, 299–308.
Xu, L. (1995). Ying-Yang machine: A Bayesian-Kullback scheme for unified learnings and new results on vector quantization. Proc. Int'l Conf. on Neural Info. Proc., Hong Kong.
Younes, L. (1988). Estimation and annealing for Gibbsian fields. Annales de l'Institut Henri Poincaré, Section B, Calcul des Probabilités et Statistique, 24, 269–294.
Zhu, S. C. (1996). Statistical and computational theories for image segmentation, texture modeling and object recognition. Unpublished Ph.D. dissertation, Harvard University.
Zhu, S. C., & Mumford, D. B. (1997). Learning generic prior models for visual computation. In Proc. of Int'l Conf. on Computer Vision and Pattern Recognition. Puerto Rico.
Zhu, S. C., Wu, Y. N., & Mumford, D. B. (1996). FRAME: Filters, random fields and maximum entropy: Toward a unified theory for texture modeling. In Proc. of Int'l Conf. on Computer Vision and Pattern Recognition. San Francisco.

Received April 10, 1996; accepted March 27, 1997. Address correspondence to [email protected].
NOTES
Communicated by Shun-ichi Amari
A Local Learning Rule That Enables Information Maximization for Arbitrary Input Distributions

Ralph Linsker
IBM Research Division, T. J. Watson Research Center, Yorktown Heights, NY 10598, U.S.A.
This note presents a local learning rule that enables a network to maximize the mutual information between input and output vectors. The network's output units may be nonlinear, and the distribution of input vectors is arbitrary. The local algorithm also serves to compute the inverse $C^{-1}$ of an arbitrary square connection weight matrix.

1 Introduction

Unsupervised learning algorithms that maximize the information transmitted through a network (the infomax principle; Linsker, 1988) typically require the computation of a function that is, to first appearances, highly nonlocal. This occurs because the units perform gradient ascent in a quantity that is essentially the joint entropy of the output values. If the input distribution is multivariate gaussian and the network mapping is linear, then (Linsker, 1992) the output entropy is proportional to $\ln\det Q$, where Q is the covariance matrix of the output values. If the input distribution is arbitrary and the network mapping is nonlinear but constrained to be invertible (Bell & Sejnowski, 1995), then the output entropy contains the term $\ln|\det C|$, where C is the (assumed square) matrix of feedforward connection strengths. In the latter case, the inverse of the connection strength matrix, $(C^T)^{-1}$, must be computed. Unless a local infomax learning rule exists, one that makes use only of quantities available at the connection being updated, it is implausible that a biological system could implement this optimization principle.

Linsker (1992) showed how to perform gradient ascent in $\ln\det Q$ using a local algorithm. That result was embedded within the context of a linear network operating on gaussian-distributed input with additive noise. However, the essential features of that local algorithm can be applied to a broader class of information-maximizing networks, as shown here.

This note uses a familiar network architecture, comprising feedforward and recurrent lateral connections, in which the lateral connections change according to an anti-Hebbian rule. This network computes the gradient of $\ln\det Q$ and, in so doing, also computes the inverse $(C^T)^{-1}$ of an arbitrary square matrix of feedforward weights. The algorithm is entirely local; the
computation of each required quantity for each connection uses only information available at that connection. By using this local algorithm in conjunction with the method of Bell and Sejnowski (1995), one obtains a local learning rule for information maximization that applies to arbitrary input distributions, provided the network mapping is constrained to be invertible.

In an alternative method for entropy-gradient learning (Amari, Cichocki, & Yang, 1996), the need to compute the inverse of the connection weight matrix is avoided by defining the weight change to be proportional to the entropy gradient multiplied by $C^T C$. In that case, lateral connections are absent, but both a feedforward and a feedback pass through the connections C are required.

2 The Network

Consider a network with input vector x and feedforward connection matrix C that computes $u \equiv Cx$. Each component of u may be passed through a nonlinear function such as a sigmoid, but the nature of this "squashing" function is irrelevant here. See Bell and Sejnowski (1995) for a discussion of the role of this nonlinearity in information maximization. Lateral connections, between each pair of output units, are described by a matrix F. They are used recursively to compute the auxiliary vector $v(t) = Cx + F\,v(t-1)$ at time steps t = 1, 2, ..., where $v(0) = Cx$. For fixed x, v(t) converges to $v = Cx + Fv$, that is,
$$v = (I - F)^{-1} C x, \qquad (2.1)$$
provided all eigenvalues of F have absolute value < 1.

3 Local Learning Rule

We use this network to compute $(C^T)^{-1}$ for a square matrix C as follows. Define $q = \langle x x^T \rangle$ and $Q = \langle u u^T \rangle = C q C^T$, where $\langle \cdot \rangle$ denotes an ensemble average and superscript T denotes transpose. Assume that the ensemble of input vectors x spans the input space; then q is a positive definite matrix. (If x is defined so that $\langle x \rangle = 0$, then q and Q are the covariance matrices of the input and output vectors, respectively.) Suppose for the moment that the lateral weights F can be made to satisfy $F = I - \alpha Q$ (for given C).[1] Then $Q^{-1} = \alpha (I - F)^{-1}$, whence $I = \alpha (I - F)^{-1} C q C^T$, yielding
$$(C^T)^{-1} = \alpha (I - F)^{-1} C q = \alpha \langle v x^T \rangle, \qquad (3.1)$$

[1] α is a positive number chosen to ensure that the eigenvalue condition on F (above) is satisfied. In order to ensure convergence of the recursively computed vector v, it is necessary and sufficient to choose $0 < \alpha < 2/Q_+$, where $Q_+$ is the largest eigenvalue of Q. See Linsker (1992) for more detail, including how to choose α to optimize the convergence rate of v.
where equation 2.1 has been used at the last step. This is a local network computation, since the component of the matrix inverse at connection i → n, namely $(C^{-1})_{in} = \alpha \langle v_n x_i \rangle$, is computed using only quantities available at the two ends of that connection.

The lateral weights F achieve the required values by means of either of the following local algorithms: (1) For given C, one presents a set of input vectors x, computes the corresponding u vectors, and sets $F = I - \alpha \langle u u^T \rangle$ ($= I - \alpha Q$). However, since the C values will themselves be changing during a training process, it is computationally inefficient to hold C fixed while presenting an ensemble of input vectors at each C. (2) It is more practical to let C evolve slowly and incrementally update F according to $\Delta F = \beta(-\alpha u u^T + I - F)$, where β is a learning-rate parameter for lateral connections. This is an anti-Hebbian learning rule. It is a local computation, since the change at each lateral connection m → n, namely $\Delta F_{nm} = \beta(-\alpha u_n u_m + \delta_{nm} - F_{nm})$, depends only on information available at that connection.

The network operation is summarized as follows: Vector x is presented and held at the inputs. The linear output $u_n = \sum_i C_{ni} x_i$ is stored at each output unit n. For an incremental learning rule, each pair of outputs $u_n$ and $u_m$ is used to compute the change $\Delta F_{nm}$ in the weight of the lateral connection m → n. The outputs $v_n(t)$ are iteratively computed until asymptotic values $v_n$ are obtained. For each feedforward connection i → n, the input $x_i$ and output $v_n$ are used to compute the component $[(C^T)^{-1}]_{ni}$ for that connection. (A small numerical sketch of these steps is given below.)

One combines this calculation with the method of Bell and Sejnowski (1995) to yield a completely local learning rule for arbitrary input distributions (assuming invertible C) as follows: The already-stored value of $u_n$ and the input $x_i$ are used to compute the remaining term $h(u_n)\,x_i$ in Bell and Sejnowski's feedforward weight update expression, which has the form $\Delta C_{ni} \propto [(C^T)^{-1}]_{ni} + h(u_n)\,x_i$, where h is a nonlinear function of a unit's output value. Following the application of the learning rule for C, whether that of Bell and Sejnowski (1995) or some other rule requiring the computation of the feedforward matrix inverse, a new input vector x is then presented, and the previous steps are repeated until C converges.

Further comments relating the present method to that of Linsker (1992):

1. Relation between $(C^T)^{-1}$ and the gradient $\partial(\ln\det Q)/\partial C$: We have
$$\tfrac{1}{2}\,\partial(\ln\det Q)/\partial C = Q^{-1} C q = (C q C^T)^{-1} C q = (C^T)^{-1}. \qquad (3.2)$$
A derivation of the first equality is given in Linsker (1992); it requires that Q be positive definite, which is true for nonsingular C.
2. Relation between gradients of $\ln\det Q$ and of $\ln|\det C|$: We have $\ln\det Q = \ln\det(C q C^T) = 2\ln|\det C| + \ln\det q$. The last term is fixed by the input distribution. Therefore, $\partial(\ln\det Q)/\partial C = 2\,\partial(\ln|\det C|)/\partial C$.

3. For the noiseless case treated here, the value of $\partial(\ln\det Q)/\partial C$ is independent of the input statistics (i.e., the matrix q). In an information-maximizing process in which the function being maximized is $\ln\det(C q C^T) + S$, where S may incorporate other terms (e.g., related to a nonlinear "squashing" function or a linear gain-control term, discussed later), the input statistics enter solely through their effect on S. This statement is not true when noise is present, as in Linsker (1992); then $Q = C q C^T + (\text{noise variance term})$, and q does not vanish from the gradient term $\partial(\ln\det Q)/\partial C$.

4 Comparison of Two Instantiations of the Infomax Principle

The principle of maximum information preservation (infomax) states that an input-output mapping should be chosen from a set of admissible mappings so as to maximize the amount of Shannon information that the outputs jointly convey about the inputs (Linsker, 1988). The principle has been applied to linear maps with a gaussian input distribution (Linsker, 1989a, 1992; Atick, 1992), to nonlinear maps with arbitrary input distributions in which a single output node "fires" (Linsker, 1989b), and to nonlinear invertible maps with arbitrary input distributions and an equal number of inputs and outputs (Bell & Sejnowski, 1995). We have shown that the essential features of the local learning rule used in Linsker (1992) also provide a local rule for the Bell and Sejnowski (1995) model. What are the essential similarities and differences between the operation of the infomax principle in these two models?

In Linsker (1992), the presence of input and output noise leads to a two-phase procedure in which the network is trained by input signals (plus noise) and by noise alone. In each phase, the joint entropy of the outputs is computed. The variance of each output unit is held fixed by a gain control, in order to impose a bound on the unit's dynamic range. In the zero-noise limit, the algorithm maximizes the joint entropy of the output units. The solution is well defined (weights do not diverge) because of the gain control. The maximized quantity has the form $\frac12(\ln\det Q) + G_C$, where $\frac12\ln\det Q$ is the output entropy, because the inputs (hence the outputs) have a gaussian distribution, and $G_C$ is the term arising from the gain control.

In Bell and Sejnowski (1995), there is no noise, and the output entropy is maximized by maximizing $\ln|\det C| + \sum_n \langle \ln|g'_n| \rangle$, where $g'_n$ is the slope of the nonlinear squashing function at the nth output unit. As shown above, the first term is equal (apart from a constant term) to $\frac12\ln\det Q$ for zero noise. The second term is the nonlinear analog of the $G_C$ gain-control term.
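As promised above, here is a minimal NumPy sketch of the section 3 procedure (an editor's illustration, not code from this note): it builds F by the batch rule (1), runs the lateral recursion of equation 2.1 to convergence, and estimates $(C^T)^{-1} = \alpha\langle v x^T\rangle$ from samples. The ensemble size, the recursion tolerance, and the choice α = 1/Q₊ (which satisfies 0 < α < 2/Q₊) are assumptions.

```python
# Local estimation of (C^T)^{-1} = alpha * <v x^T> (eq. 3.1); illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 4
C = rng.normal(size=(n, n))               # feedforward weights (nonsingular with prob. 1)
q = np.eye(n)                             # input covariance: x ~ N(0, I)
Q = C @ q @ C.T                           # output covariance
alpha = 1.0 / np.linalg.eigvalsh(Q).max() # assumed choice; obeys 0 < alpha < 2/Q_+
F = np.eye(n) - alpha * Q                 # lateral weights, batch rule (1)

S, T = np.zeros((n, n)), 2000
for _ in range(T):
    x = rng.normal(size=n)
    u = C @ x
    v = u.copy()
    for _ in range(500):                  # lateral recursion v(t) = Cx + F v(t-1)
        v_next = u + F @ v
        if np.linalg.norm(v_next - v) < 1e-10:
            break
        v = v_next
    S += np.outer(v, x)                   # each entry uses only v_n and x_i: local

est = alpha * S / T                       # Monte Carlo estimate of (C^T)^{-1}
print(np.abs(est - np.linalg.inv(C.T)).max())  # error shrinks roughly like 1/sqrt(T)
```

Note that the estimate at connection i → n uses only $v_n$ and $x_i$, which is exactly the locality claim of section 3.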
The nonlinear function provides the important benefit of being responsive to higher-order (nongaussian) statistics.

Two points emerge from this comparison. First, the quantities being optimized in the two models are quite similar, although noise and nonlinearity each contributes distinctive and important terms to the learning rules. Second, it is the limitation of the output units' dynamic range, whether by linear gain control or a nonlinear function, that prevents the weights and the information capacity from increasing without bound. Neither the presence of noise in the Linsker (1992) model nor the fact that the squashing function is nonlinear in Bell and Sejnowski (1995) is needed for the output entropy maximization process to be well defined. Understanding how different model elements affect infomax learning will be of value as information-maximizing principles are applied to more complex and biologically realistic networks.

References

Amari, S., Cichocki, A., & Yang, H. H. (1996). A new learning algorithm for blind signal separation. In D. S. Touretzky, M. C. Mozer, & M. E. Hasselmo (Eds.), Advances in neural information processing systems, 8 (pp. 757–763). Cambridge, MA: MIT Press.
Atick, J. J. (1992). Could information theory provide an ecological theory of sensory processing? Network, 3, 213–251.
Bell, A. J., & Sejnowski, T. J. (1995). An information-maximisation approach to blind separation and blind deconvolution. Neural Comp., 7, 1129–1159.
Linsker, R. (1988). Self-organization in a perceptual network. Computer, 21(3), 105–117.
Linsker, R. (1989a). An application of the principle of maximum information preservation to linear systems. In D. S. Touretzky (Ed.), Advances in neural information processing systems, 1 (pp. 186–194). San Mateo, CA: Morgan Kaufmann.
Linsker, R. (1989b). How to generate ordered maps by maximizing the mutual information between input and output signals. Neural Comp., 1, 402–411.
Linsker, R. (1992). Local synaptic learning rules suffice to maximize mutual information in a linear network. Neural Comp., 4, 691–702.

Received October 14, 1996; accepted May 12, 1997.
Communicated by Helge Ritter
Convergence and Ordering of Kohonen's Batch Map

Yizong Cheng
Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH 45221, U.S.A.
The convergence and ordering of Kohonen's batch-mode self-organizing map with Heskes and Kappen's (1993) winner selection are proved. Selim and Ismail's (1984) objective function for k-means clustering is generalized in the convergence proof of the self-organizing map. It is shown that when the neighborhood relation is doubly decreasing, order in the map is preserved. An unordered map becomes ordered when a degenerate state of ordering is entered, where the number of distinct winners is one or two. One strategy to enter this state is to run the algorithm with a broad neighborhood relation.

1 Introduction

Kohonen's self-organizing map adds a neighborhood relation to the cluster centers in competitive learning, so the data space is mapped onto a regularly arranged grid of cells. The mapping may distort the data space, but much of the local order is preserved. The original formulation of the map consists of random drawing of examples from the data sample and incremental adjustments to the map parameters after each example is shown. When this process is modeled as a stochastic approximation, the step size $\alpha_i$ for parameter adjustment is said to have to satisfy the conditions $\sum_i \alpha_i = \infty$ and $\sum_i \alpha_i^2 < \infty$. However, because the map is useful as an algorithm running on a computer, and algorithms must terminate in a finite number of iterations, these conditions are difficult to observe in practice.

When the sample is finite or changes slowly, the average move of the parameter adjustment has been used to study the behavior of the self-organizing map. Such a study has been done by Erwin, Obermayer, and Schulten (1992), who conclude that "no strict proof of convergence is likely to be possible." They also show that ordering is a consequence of the occurrence of a specific permutation of the sample in the drawing sequence. During random drawing, all permutations of the sample will eventually occur, and thus ordering is achieved with probability one. However, the average time for a particular permutation to occur from this random drawing is exponential, and no execution of the algorithm can support such a running time.
Kohonen (1995) eventually proposed the batch version of his self-organizing map, using mean shift instead of the average move of the batch as the adjustment of the parameters. Some variations and generalizations of the batch map have been proposed, for instance, by Mulier and Cherkassky (1995). Heskes and Kappen (1993) suggest a less straightforward winner-take-all step so that an energy function can be proposed for the process.

In this article, two modifications are made to Kohonen's batch map: (1) an adoption of the winner-take-all step suggested by Heskes and Kappen, and (2) the abolition of neighborhood narrowing before the termination of the algorithm. Instead, the algorithm is run several times, with different but fixed neighborhood functions. Some of the results presented here also apply to the original Kohonen batch map.

In section 2, an energy function similar to the objective function in Selim and Ismail's (1984) treatment of k-means clustering is proposed. The algorithm is treated as alternating partial minimization of the energy function, and this allows an easy proof of its convergence. The general definition of the algorithm invites variations, although only a specific version is fully discussed here. In section 3, a condition called doubly decreasing is defined for the neighborhood function, and it is shown to be a sufficient condition for one-dimensional order preservation by the batch map. Section 4 discusses initialization of ordering with an unfocused map and refinement of the map by narrowing the neighborhood function on successive runs of the algorithm.

2 The Algorithm

The batch map consists of two finite sets: S, the map, and D, the sample. The self-organizing process is an evolution of the assignment of S members to D members. The goal of k-means clustering, which is a special case of the batch map, is to assign the same S member to nearby D members. The goal of the self-organizing map is to assign nearby S members to nearby clusters of D. The nearness of the members of D or S is represented by considering them as subsets of some Euclidean spaces. To measure the nearness of S members, a nonnegative neighborhood function $h\colon S \times S \to [0, \infty)$ is defined. A distance measure $v\colon X \times X \to [0, \infty)$ is also defined over the data space X that includes D. D, S, h, and v are given and fixed, and the parameters that evolve through self-organization are a membership function $u\colon S \times D \to [0, 1]$ with the constraint that
$$\sum_{s \in S} u(s, x) = 1 \quad \text{for all } x \in D, \qquad (2.1)$$
and a mapping $z\colon S \to X$. The self-organizing algorithm is the alternating
Kohonen’s Batch Map
1669
partial minimization of the energy function
$$f(u, z) = \sum_{s, t \in S} \sum_{x \in D} u(s, x)\, h(s, t)\, v(z(t), x). \qquad (2.2)$$
Given the current z, a u that minimizes f(u, z) is found, and based on this u, a new z that minimizes f(u, z) is found. This cycle is continued until f(u, z) is no longer decreasing.

Algorithm 1. Repeat steps 1 and 2 until f(u, z) stops decreasing.

Step 1. For each $x \in D$, select an $s \in S$ that minimizes $c(s, x) = \sum_{t \in S} h(s, t)\, v(z(t), x)$, and make u(s, x) = 1 and w(x) = s.

Step 2. For each $s \in S$, let $z(s) = \xi$, where ξ minimizes
$$G(\xi) = \sum_{t \in S} h(t, s) \sum_{w(x) = t} v(\xi, x). \qquad (2.3)$$
When $v(\xi, x) = \|\xi - x\|^2$, this step is
$$z(s) = \frac{\sum_{x \in D} h(w(x), s)\, x}{\sum_{x \in D} h(w(x), s)}. \qquad (2.4)$$
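To make the alternating minimization concrete, here is a minimal NumPy sketch of algorithm 1 with the squared-distance case of step 2 (an editor's illustration, not the author's code). The ring topology, the random data, the stopping test by comparing successive z's, and the guard for cells that win no data point are illustrative assumptions.

```python
# Batch SOM with the Heskes-Kappen winner selection (algorithm 1); sketch.
import numpy as np

def batch_map(X, Z, h, max_iter=100):
    """X: (n, d) sample D; Z: (m, d) initial map z; h: (m, m) fixed
    neighborhood matrix with entries h(s, t). Returns final Z and winners w."""
    for _ in range(max_iter):
        # Step 1: w(x) minimizes c(s, x) = sum_t h(s, t) ||z(t) - x||^2.
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # d2[x, t] = v(z(t), x)
        w = (d2 @ h.T).argmin(axis=1)                        # argmin over s of c[x, s]
        # Step 2 (eq. 2.4): z(s) is the h(w(x), s)-weighted mean of the data.
        Hw = h[w]                                            # Hw[x, s] = h(w(x), s)
        den = Hw.sum(axis=0)[:, None]
        Z_new = np.where(den > 0, (Hw.T @ X) / np.maximum(den, 1e-300), Z)
        if np.allclose(Z_new, Z):                            # f has stopped decreasing
            return Z_new, w
        Z = Z_new
    return Z, w

# Ring of 30 cells mapped onto 20 random points in the unit square, run with
# successively narrower gaussian neighborhoods (the strategy of section 4).
m_cells = 30
i = np.arange(m_cells)
hops = np.minimum(np.abs(i[:, None] - i[None, :]),
                  m_cells - np.abs(i[:, None] - i[None, :]))
rng = np.random.default_rng(0)
X = rng.random((20, 2))
Z = rng.random((m_cells, 2))
for sigma in (8.0, 1.0, 0.1):
    Z, w = batch_map(X, Z, np.exp(-(hops / sigma) ** 2))
```

The outer loop mirrors the separate runs with fixed, narrowing neighborhood functions described later; no narrowing occurs inside a run.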
There are two significant differences between algorithm 1 and the batch map proposed in Kohonen (1995). First, Kohonen's algorithm simply assigns the $s \in S$ with z(s) nearest to $x \in X$ as the "winner" w(x). Second, the neighborhood function h narrows during the execution of Kohonen's algorithm. The reason that Heskes and Kappen's winner-take-all step is used in algorithm 1 is based on the concern about convergence implied by Erwin et al. (1992). In section 4, it will be demonstrated that order can be initialized and refined using different neighborhood functions on different runs of algorithm 1, and thus neighborhood narrowing is factored out of the algorithm itself, to make the convergence proof more straightforward.

Theorem 1. Algorithm 1 terminates in finitely many steps.
Proof. First, we prove that step 1 is a partial minimization of the energy function f(u, z) with z fixed. The energy function is $f(u, z) = \sum_{s \in S} \sum_{x \in D} u(s, x)\, c(s, x)$. Suppose $s \in S$ minimizes c(s, x) for a particular $x \in D$, but u(s, x) < 1. Then there must be a $t \ne s$ such that u(t, x) > 0. Let u′ be the same as u except $u'(s, x) = u(s, x) + u(t, x)$ and $u'(t, x) = 0$. Then $f(u, z) - f(u', z) = u(s, x)c(s, x) + u(t, x)c(t, x) - u'(s, x)c(s, x) = u(t, x)[c(t, x) - c(s, x)] \ge 0$. Hence u′ is a choice at least as good as u.

Once a u is chosen in step 1, the energy function becomes $f(u, z) = \sum_{x \in D} \sum_{t \in S} h(w(x), t)\, v(z(t), x)$, and u is no longer a variable. We have to
determine z(t) for every $t \in S$ so that f(u, z) is minimized. In f(u, z), the terms involving each z(t) can be separated and summed up to $G(z(t)) = \sum_{x \in D} h(w(x), t)\, v(z(t), x)$. Hence for each $t \in S$, z(t) should be the ξ that minimizes $G(\xi) = \sum_{x \in D} h(w(x), t)\, v(\xi, x)$, and this is the same as step 2.

During each iteration of algorithm 1, a winner assignment is chosen in step 1. This choice determines a z from step 2 and a lower f(u, z). Since there are no more than $|S|^{|D|}$ different winner assignments, algorithm 1 terminates in finitely many steps.

3 Order Preservation

Algorithm 1 is the iteration of two partial minimization steps. The current z determines the next w, which in turn determines the next z. To show that once the map is ordered, the order is preserved, we must show that an ordered z leads to an ordered w, and an ordered w leads to an ordered z. Ordering is well defined only in one-dimensional cases. It is often assumed that the topology preservation of a multidimensional map on multidimensional data is a natural consequence of one-dimensional order preservation.

In the discussion of ordering, we assume that S is one-dimensional, with $s_i < s_j$ when i < j. We assume that D members are aligned in X, a Euclidean space. We assume that equation 2.4 is the adjustment formula for z in step 2 of algorithm 1. Because each $z(s_i)$ is a weighted average of D members, the $z(s_i)$ must be aligned too. We say that w is ordered if, in one of the two directions in the one-dimensional subspace spanned by D, x < y implies $w(x) \le w(y)$. We say that z is ordered if i < j implies $z(s_i) \le z(s_j)$, in a direction in X. The proofs of the order-preservation results in this section require a property of the neighborhood function, defined below.

Definition 1. A function $h\colon [0, \infty) \to [0, \infty)$ is said to be doubly decreasing if h(a) is decreasing and, for any $a, b, c \ge 0$ with h(a + b + c) > 0,
$$\frac{h(a)}{h(a + b)} \le \frac{h(a + c)}{h(a + b + c)}. \qquad (3.1)$$
When h is differentiable, the condition in equation 3.1 is equivalent to the condition that both h(a) and $d \log h(a)/da$ are decreasing. With 0 < r < 1 and $p \ge 1$, a group of exponential-decay-type functions is doubly decreasing:
$$h(a) = r^{a^p}. \qquad (3.2)$$
A special case of this group is the gaussian function,
$$h(a) = e^{-a^2/\sigma^2}. \qquad (3.3)$$
Kohonen’s Batch Map
1671
Also, the triangle function,
$$h(a) = \begin{cases} 1 - a/\sigma, & a < \sigma, \\ 0, & a \ge \sigma, \end{cases} \qquad (3.4)$$
for σ > 0, is doubly decreasing.
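As a quick check of these claims (an editor's addition, using the differentiable criterion above): for the family in equation 3.2,
$$\log h(a) = a^p \log r, \qquad \frac{d}{da}\log h(a) = p\,a^{p-1}\log r,$$
and since $\log r < 0$, this derivative is decreasing in $a$ exactly when $a^{p-1}$ is increasing, that is, when $p \ge 1$; this is why the exponent range is stated as $p \ge 1$ above. (For $0 < p < 1$, e.g. $h(a) = e^{-\sqrt{a}}$, condition 3.1 already fails at $a = 0$, $b = c = 1$.) The gaussian of equation 3.3 is the case $p = 2$, $r = e^{-1/\sigma^2}$.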
Lemma 1. Let $s_i < s_j$ when i < j. Let h be doubly decreasing. Then, for each q,
$$r_i = \frac{h(|s_i - s_q|)}{h(|s_i - s_{q+1}|)} \qquad (3.5)$$
is a decreasing sequence.

Proof. Let i < j, $a = s_j - s_i$, and $c = s_{q+1} - s_q$. We show that $r_i \ge r_j$ in three cases ($i < j \le q$; $q + 1 \le i < j$; and i = q and j = q + 1), and thus show that $r_i$ is decreasing. Case 1: Let $b = s_q - s_j$. $r_i \ge r_j$ is $h(a + b)/h(a + b + c) \ge h(b)/h(b + c)$. Case 2: Let $b = s_i - s_{q+1}$, and $r_i \ge r_j$ is $h(b + c)/h(b) \ge h(a + b + c)/h(a + b)$. The inequalities in these two cases are the same as that in equation 3.1. Case 3: $r_i \ge r_j$ is $h(0)/h(c) \ge h(c)/h(0)$, which is true because h is decreasing.

Lemma 2. Let $a_i \ge 0$ and $b_i \ge 0$ for i = 1, ..., n, and $\sum_{i=1}^n a_i = \sum_{i=1}^n b_i$. Let $c_1 \le c_2 \le \cdots \le c_n$. If there is a k such that $a_i \ge b_i$ when $i \le k$ and $a_i \le b_i$ when i > k, then $\sum_{i=1}^n a_i c_i \le \sum_{i=1}^n b_i c_i$.
Proof. We must have $\sum_{i=1}^k (a_i - b_i) = \sum_{i=k+1}^n (b_i - a_i)$. Therefore,
$$\sum_{i=1}^n b_i c_i - \sum_{i=1}^n a_i c_i = \sum_{i=k+1}^n (b_i - a_i)\, c_i - \sum_{i=1}^k (a_i - b_i)\, c_i \ \ge\ \sum_{i=k+1}^n (b_i - a_i)\, c_k - \sum_{i=1}^k (a_i - b_i)\, c_k = 0. \qquad (3.6)$$
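Lemma 2 is a discrete rearrangement-type inequality; a quick numeric spot-check (an editor's illustration, with arbitrary values) may help fix ideas:

```python
# Lemma 2 spot-check: a dominates b on the low-c side (i <= k = 2) and is
# dominated on the high-c side, with equal totals, so sum(a*c) <= sum(b*c).
a = [3.0, 2.0, 1.0, 0.0]
b = [1.0, 1.0, 2.0, 2.0]   # sum(a) == sum(b) == 6
c = [0.0, 1.0, 2.0, 3.0]   # c_1 <= ... <= c_n
assert sum(a) == sum(b)
lhs = sum(ai * ci for ai, ci in zip(a, c))   # 3*0 + 2*1 + 1*2 + 0*3 = 4
rhs = sum(bi * ci for bi, ci in zip(b, c))   # 1*0 + 1*1 + 2*2 + 2*3 = 11
assert lhs <= rhs
```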
Theorem 2. Let the neighborhood function h(s, t) = h(|s − t|) be doubly decreasing. If, prior to step 2 of algorithm 1, w is ordered, then after step 2, z is ordered.

Proof. Let D be $x_1 < x_2 < \cdots < x_n$, with $w(x_i) \le w(x_j)$ when i < j. Let $z(s_q) = \sum_{i=1}^n a_i x_i$ and $z(s_{q+1}) = \sum_{i=1}^n b_i x_i$. When $b_i > 0$, $a_i \ge b_i$ is equivalent to
$$r_i = \frac{h(w(x_i), s_q)}{h(w(x_i), s_{q+1})} \ \ge\ \frac{\sum_{i=1}^n h(w(x_i), s_q)}{\sum_{i=1}^n h(w(x_i), s_{q+1})} = K. \qquad (3.7)$$
If we can show that $r_i \ge r_j$ when i < j, then there must be a k such that $r_i \ge K$ when $i \le k$ and $r_i \le K$ when i > k, which is the condition of lemma 2. Using lemma 2, we have $z(s_q) \le z(s_{q+1})$ for all q, and the theorem is proved. To show that $r_i$ is monotone decreasing, we first notice that, because w is ordered, the problem is equivalent to showing that $r^*_i = h(s_i, s_q)/h(s_i, s_{q+1})$ is decreasing, which is a consequence of lemma 1.

Theorem 3. Let the neighborhood function h(s, t) = h(|s − t|) be doubly decreasing. If, prior to step 1 of algorithm 1, z is ordered, then after step 1 the minima $m_q$ of $c_q(x) = \sum_i h(s_q, s_i)\,(x - z(s_i))^2$ are ordered, in the sense that $m_p \le m_q$ when p < q.

Proof. The minimum of $c_q(x)$ is $m_q = \sum_i h(s_q, s_i)\, z(s_i) / \sum_i h(s_q, s_i)$. Let $m_q = \sum_i a_i z(s_i)$ and $m_{q+1} = \sum_i b_i z(s_i)$, with $\sum_i a_i = \sum_i b_i = 1$ and $a_i, b_i \ge 0$. $a_i \ge b_i$ is equivalent to
$$r_i = \frac{h(s_q, s_i)}{h(s_{q+1}, s_i)} \ \ge\ \frac{\sum_i h(s_q, s_i)}{\sum_i h(s_{q+1}, s_i)} = K. \qquad (3.8)$$
Lemma 1 says that $r_i$ is decreasing. This means that $a_i \ge b_i$ when $i \le k$ and $a_i \le b_i$ for i > k, for some k, and lemma 2 says that $m_q \le m_{q+1}$.

Theorems 2 and 3 show that algorithm 1 preserves order in the self-organizing map under a doubly decreasing neighborhood function, except for a gap between the ordered minima of the $c_p(x)$'s and the ordered w. First, this gap would not be there if the original Kohonen batch map were the algorithm. In that case, Voronoi regions based on the distance in space X are used to determine the winner, and an ordered z naturally implies an ordered w. Second, under some mild additional assumptions, the gap can be closed. One of these assumptions is that $\sum_{t \in S} h(s, t)$ be independent of s. In this case, each pair of $c_p(x)$'s has exactly one intersection, and it is always the case that when x is larger than the intersection, the $c_p(x)$ with the larger $m_p$ has the lower value and may win the competition. This assumption is satisfied when the map is periodic, for example, as a ring of evenly spaced points.

4 Order Initialization and Fine Tuning

The preceding section shows that algorithm 1 preserves existing order. The main effect of the algorithm is the improvement of the distribution of the map images. Using different neighborhood functions, an ordered map can be made more uniformly distributed or more focused. A more critical problem is how the order is initialized. The strategies discussed in this section rely on two degenerate cases of order, in which the set {w(x); x ∈ D} has only one or two elements.
Kohonen’s Batch Map
1673
Figure 1: $c_p(x)$ parabolas with the $z(s_i)$'s as labeled boxes and D points as circles at their lowest $c_p(x)$ values. The upper row of labeled boxes indicates the positions of the initial, unordered $z(s_i)$'s, with i as the labels in the boxes. The lower row of boxes indicates the ordered $z(s_i)$'s after just one iteration of algorithm 1. This next z is also the final configuration when algorithm 1 terminates.
Some of the strategies are immediately applicable to the original Kohonen batch map. One strategy is to make all initial z(s)'s less than all $x \in D$. In this case, there is one winner for all D members, and z is considered ordered. Another strategy is to place the initial z(s)'s between two neighboring $x \in D$, so that there are only two winners, which makes w ordered.

The situation becomes more complex when algorithm 1 is used. The two strategies in the preceding paragraph still work to a certain extent. But a third strategy is more general and explains how an unordered map turns into an ordered one. This strategy is to run algorithm 1 with a broad neighborhood function. This basically means choosing a large σ in the gaussian function (see equation 3.3) or the triangle function (see equation 3.4). Indeed, a σ can be found such that the difference between all applicable h values is as small as desired. In this case, all the $c_p(x) = c(s_p, x)$, as parabolas, have their symmetry centers as close to each other as desired. Those with smaller $\sum_i h(s_p, s_i)$ will grow more slowly and have lower minima. When the neighborhood function is broad enough, in the interval spanned by D, only one or two of these parabolas will be the lowest ones, and only one or two winners will be generated. These winners often are the extreme members of S, because
Figure 2: Algorithm 1 on a one-dimensional ring over a randomly generated D in $R^2$, using the gaussian neighborhood function of equation 3.3. (a) X is the unit square. D is 20 randomly generated points, denoted by squares. S is a ring of 30 points. The distance between two S points is the minimum number of hops one must cross on the ring from one to the other. Initially, the z(s) were randomly generated and indicated with small circles connected into a ring, reflecting the ring topology of S. (b) Algorithm 1 was applied with σ = 8 in equation 3.3. It took six iterations to reach this fixed point. (c) Then algorithm 1 was applied to the resulting map from (b), with a narrower σ = 1. Two iterations were needed before this fixed point was reached. (d) Finally, σ = 0.1 was used, and two more iterations led to this configuration.
their corresponding $\sum_i h(s_p, s_i)$ are the smallest. When this happens, w is ordered. Figure 1 shows an example where z is unordered but w becomes
Kohonen’s Batch Map
1675
ordered, because only two of these parabolas have the lowest value in the D range. There is a significant difference between the order initialization in one iteration of algorithm 1 and the one proved in Erwin et al. (1992), which takes exponential time to accomplish. To make the map more focused and more uniformly distributed, a few successive runs of algorithm 1 with narrower neighborhood functions will be enough. This is demonstrated in Figure 2.

Figure 2 shows the order initialization and fine tuning with a periodic one-dimensional S of 30 members and a two-dimensional D with 20 members. Starting with a randomly generated z (circles connected with line segments indicating their neighboring relationship in Figure 2a), a broad gaussian neighborhood function is used to initialize the order. The result is a miniature copy of the S structure at the centroid of D (see Figure 2b). Then two runs of the algorithm with narrower neighborhood functions fine-tune the map so it is focused. The total number of iterations in all three runs of algorithm 1 is 10.

5 Concluding Remarks

The convergence and ordering of Kohonen's batch map with Heskes and Kappen's winner selection are discussed and proved under some additional conditions, including the doubly decreasing property of the neighborhood function. The ordering results are applicable to the original Kohonen batch map. The general form of algorithm 1 also invites speculation on other variations of the batch map. The constraint on the membership function u can be replaced, and fuzzy winners or multiple winners will emerge. When $v(x, y) = \|x - y\|$, a median-shift instead of mean-shift batch map is created, and a more robust and more uniformly distributed map may be obtained. (A separate article will deal with the case when the data space is a sphere.)

Ordering is only the one-dimensional case of topology preservation and does not capture everything that makes a self-organizing map attractive and useful. Some measures designed for multidimensional topology preservation have been proposed, and topology preservation of algorithm 1 under measures other than one-dimensional order has yet to be studied.

References

Erwin, E., Obermayer, K., & Schulten, K. (1992). Self-organizing maps: Ordering, convergence properties and energy functions. Biological Cybernetics, 67, 47–55.
Heskes, T. M., & Kappen, B. (1993). Error potentials for self-organization. In ICNN'93 (pp. 1219–1233). San Francisco.
Kohonen, T. (1995). Self-organizing maps. Berlin: Springer-Verlag.
Mulier, F., & Cherkassky, V. (1995). Self-organization as an iterative kernel smoothing process. Neural Computation, 7, 1165–1177.
Selim, S. Z., & Ismail, M. A. (1984). K-means-type algorithms: A generalized convergence theorem and characterization of local optimality. IEEE Trans. Pattern Analysis and Machine Intelligence, 6, 81–86.

Received August 26, 1996; accepted March 14, 1997.
LETTERS
Communicated by Jack Cowan
Solitary Waves of Integrate-and-Fire Neural Fields

David Horn
Irit Opher
School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978, Israel
Arrays of interacting identical neurons can develop coherent firing patterns, such as moving stripes that have been suggested as possible explanations of hallucinatory phenomena. Other known formations include rotating spirals and expanding concentric rings. We obtain all of them using a novel two-variable description of integrate-and-fire neurons that allows for a continuum formulation of neural fields. One of these variables distinguishes between the two different states of refractoriness and depolarization and acquires topological meaning when it is turned into a field. Hence, it leads to a topologic characterization of the ensuing solitary waves, or excitons. They are limited to pointlike excitations on a line and linear excitations, including all the examples noted above, on a two-dimensional surface. A moving patch of firing activity is not an allowed solitary wave on our neural surface. Only the presence of strong inhomogeneity that destroys the neural field continuity allows for the appearance of patchy incoherent firing patterns driven by excitatory interactions.

1 Introduction

Can one construct a self-consistent description of neuronal tissue on the mm scale? If so, can one use for this description the same variables that characterize a single neuron? The answers to these questions are not obvious. Assuming they are affirmative, we are led to a theory of neural fields, describing the characteristic features of neurons at a given location and specific time. This necessitates continuity of neural variables, which may be quite natural in view of the large overlap between neighboring neurons and the functional maps observed in different cortical areas showing that nearby stimuli affect neighboring neurons.

A model of neuronal tissue composed of two aggregates (excitatory and inhibitory) was proposed long ago by Wilson and Cowan (1973). They described three different dynamical modes that correspond to different forms of connectivity. One of their interesting observations is the formation of pairs of unattenuated traveling waves that move in opposite directions as a result of a localized input. The propagation velocity depends on the interactions and the amount of disinhibition in the network. Building on this approach,
Amari (1977) analyzed pattern formation in continuum distributions of neurons, or neural fields. He pointed out the possibility of local excitation solutions and derived the conditions for their formation in one dimension. Ermentrout and Cowan (1979) studied layers of excitatory and inhibitory neurons interacting with one another in two dimensions and obtained the formation of moving stripes, or rolls, hexagonal lattice patterns, and other forms. They pointed out that if their model is applied to V1, it can provide an explanation of drug-induced visual hallucinations (Klüver, 1967), relying on the retinocortical map interpretation of V1. A simpler derivation of this result is provided by Cowan (1985), who considered a single type of neural field on a two-dimensional manifold using a difference-of-gaussians (DOG) (or "Mexican hat") interaction with close-by strong excitation surrounded by weak inhibition. Similar questions were recently investigated by Fohlmeister, Gerstner, Ritz, and van Hemmen (1995), who have structured their neural layer according to the spike response model of Gerstner and van Hemmen (1992). Their simulations exhibit stripe formations, rotating spirals, expanding concentric rings, and collective bursts.

The simulations of Fohlmeister et al. (1995) are an example of how the advent of large-scale computations allows one to attack elaborate neuronal models in the simulation of cortical tissues. This point was made by Hill and Villa (1994), who have studied two-dimensional arrays of 10,000 neurons and obtained firing patterns in the form of clusters or patches. Moving patches were also observed by Usher, Stemmler, Koch, and Olami (1994), who investigated a plane of integrate-and-fire neurons with DOG interactions. This brings us to an interesting question: To what extent can one expect calculations on a grid of 100 × 100 neurons to reflect the behavior of a neural tissue that contains several orders of magnitude more neurons? Clearly, if the two questions raised at the beginning of this article are answered in the affirmative, there is a good chance of obtaining meaningful results. In this article, we will characterize allowed excitation formations that obey continuity requirements and show what happens when continuity is broken.

Some of the structures found in the different investigations move in a fashion that conserves their general form unless they hit and merge or destroy one another. This is reminiscent of the particle property of solitons, which are known to arise in nonlinear systems (see, e.g., Newell, 1985). Yet some differences exist. In particular, the coherent firing structures annihilate during collision rather than stay intact. Therefore, an appropriate characterization is that of solitary waves (Meron, 1992). We propose using the term excitons to describe moving solitary waves in an excitable medium. Our study is geared toward investigating these structures. We will introduce a topological description that will help us identify the type of coherent structures that are allowed under our assumptions. In particular, we will see that an object with the topological characteristics of a moving circle (patch of firing activity) is not an allowed solitary wave on a two-dimensional manifold.
2 Integrate-and-Fire Neurons

Integrate-and-fire (I & F) neurons have been chosen for simulating large systems of interacting neurons by many authors (e.g., Usher et al., 1994). For our purpose we need a formulation of this system in which the basic variables are continuous and differentiable. We use two such variables: v, which is a subthreshold potential, and m, which distinguishes between two different modes in the dynamics of the single neuron, the active depolarization mode and the inactive refractory period:
$$\dot v = -kv + \alpha + cmv + mI \qquad (2.1)$$
$$\dot m = -m + \Theta(m - v). \qquad (2.2)$$
$\Theta(x)$ is the Heaviside step function. The neuron is influenced by a constant external input I, which is absent in the absolute refractory period, when m = 0. Starting out with m = 1, the total time derivative of v is positive, and v follows the dynamics of a charging capacitor. Hence, this represents the depolarization period of v. During all this time, since v < m, m stays unchanged. The dynamics change when v reaches the threshold, which is arbitrarily set to 1. Then m decreases rapidly to zero, causing the time derivative of v to be negative, and v follows the dynamics of a discharging capacitor. Parameters are chosen such that the time constants of the charging and discharging periods are different.

To complete this description of an I & F neuron, we need a quantity that represents the firing of the neuron. We introduce for this purpose
$$f = m(1 - m)\,v, \qquad (2.3)$$
which vanishes at almost all times except when v arrives at the threshold and m changes from 1 to 0.[1] This can serve therefore as a description of the action potential. An example of the dynamics of v and m is shown in Figure 1. In a third frame we plot v + a f, with a = 8, representing the total soma potential. The value of a is of no consequence in our work; it is used here for illustration purposes only.

Our description is different from the FitzHugh-Nagumo model (FitzHugh, 1961), which is also a two-variable system. Their variables correspond to a two-dimensional projection of the Hodgkin-Huxley equations, and their nonlinearity is of third power rather than a step function. We believe that, for the purpose of a description that leads to an intuitive insight, it is important to use variables whose meaning is clear, as long as they can lead to a coarse reconstruction of the neurons' behavior.

[1] f gets a small contribution also when m changes from 0 to 1, but since it is several orders of magnitude smaller, it does not have any computational consequences. An alternative choice of f can be proportional to $-\frac{dm}{dt}\,\theta(-\frac{dm}{dt})$.
Figure 1: The dynamics of the single I & F neuron. The upper frame displays v, the subthreshold membrane potential, as a function of time. The second frame shows m, the variable that distinguishes between the depolarization state, m = 1, and refractoriness, m = 0. In the third frame we plot v + 8 f , where f is our spike profile, to give a schematic presentation of the total cell potential. Parameters for this figure, as well as all one-dimensional simulations, are: k = 0.015, α = −0.02, c = 0.0135, and I = 0.05.
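A minimal forward-Euler sketch of equations 2.1 through 2.3 (an editor's illustration, not the authors' code; the time step dt is an assumption, while k, α, c, and I are the values quoted in the Figure 1 caption):

```python
# Two-variable I & F neuron of eqs. 2.1-2.3, integrated with forward Euler.
k, alpha, c, I = 0.015, -0.02, 0.0135, 0.05   # Figure 1 parameters
dt, steps = 0.1, 2500                          # dt is an assumption
v, m = 0.0, 1.0
trace = []
for _ in range(steps):
    f = m * (1.0 - m) * v                      # spike profile, eq. 2.3
    dv = -k * v + alpha + c * m * v + m * I    # eq. 2.1
    dm = -m + (1.0 if m - v > 0 else 0.0)      # eq. 2.2, Theta(m - v)
    v += dt * dv
    m += dt * dm
    trace.append((v, m, v + 8.0 * f))          # the three panels of Figure 1
```

Plotting the three columns of trace against time reproduces the qualitative picture described above: slow charging of v, rapid collapse of m at threshold with a brief pulse in f, and discharge of v to slightly negative values before m recovers.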
3 I & F Neurons on a Line

At this point we introduce interactions between the neurons and see what kind of activity patterns emerge. As we are dealing with pulse coupling, an interaction is evoked by the spiking of other neurons, with or without delays. Thus, equation 2.1 is replaced by
$$\dot v_i = -k v_i + \alpha + c m_i v_i + m_i \Big( I + \sum_j W_{ij}\, f_j \Big), \qquad (3.1)$$
where i = 1, ..., N represents the location of the neuron. Clearly the behavior of such a system of interacting neurons depends strongly on the properties of the interaction matrix W. There are indications in V1 that excitation occurs between neurons that lie close by (Marlin, Douglas, & Cynader, 1991) and that inhibition is the rule farther out (Wörgötter, Niebur, & Koch, 1991), leading to a type of center-surround pattern of interactions. To be even closer to biological reality, we should introduce different descriptions for excitatory (pyramidal) neurons and inhibitory interneurons. For simplicity of presentation, we consider only one set of neurons undergoing both types of interactions. We may think of the neurons as pyramidal cells, with inhibition being an effective interaction.
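The one-dimensional experiments can be sketched in the same way (an editor's illustration; the ring size, coupling width, and coupling strength below are assumptions, not values quoted by the authors, and some tuning of span and strength may be needed to see well-formed excitons):

```python
# Ring of pulse-coupled I & F neurons, eq. 3.1; stacking f over time yields
# an x-t firing diagram in the spirit of Figures 2 and 3.
import numpy as np

k, alpha, c, I = 0.015, -0.02, 0.0135, 0.05   # one-dimensional parameters
N, dt, steps = 100, 0.1, 5000
rng = np.random.default_rng(0)

W = np.zeros((N, N))                  # excitatory coupling, assumed width 3
for i in range(N):
    for d in range(1, 4):
        W[i, (i - d) % N] = W[i, (i + d) % N] = 0.2

v = rng.random(N)                     # random initial v_i, all m_i = 1
m = np.ones(N)
firing = []
for _ in range(steps):
    f = m * (1.0 - m) * v
    v += dt * (-k * v + alpha + c * m * v + m * (I + W @ f))   # eq. 3.1
    m += dt * (-m + (m - v > 0).astype(float))                 # eq. 2.2 per neuron
    firing.append(f.copy())
pattern = np.array(firing)            # rows: time; columns: neuron position
```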
Figure 2: Creation, annihilation, and motion of excitons in the x − t plane. For fixed x, they (black lines, left frame) appear on the border between m = 1 (black) and m = 0 (white) areas, shown in the right frame.
In Figure 2 we show a spatiotemporal pattern that emerges from such a system. We assume here that the neurons are located on a line with open boundary conditions. The interaction is assumed to be simultaneous and to have a finite width on this line. In all our simulations, we start with random initial conditions for $v_i$ while all $m_i = 1$. I has the same constant value for all neurons. After some transitional period, the system settles into a spatiotemporal pattern that is a limit cycle.

As the interaction strength is increased, the behavior of $v_i$ and $m_i$ becomes a smooth function of the location i of the neuron. When this happens, we may replace $v_i$ and $m_i$ by continuous fields v(x, t) and m(x, t). Our problem may then be redefined by a set of equations for continuous variables. For example, equation 3.1 becomes
$$\frac{dv}{dt} = -kv + \alpha + cmv + m\left( I + \int w(x, y)\, f(y)\, dy \right), \qquad (3.2)$$
and equations 2.2 and 2.3 remain valid for the continuous fields. Once we are in the continuum regime, the number of neuronal units that we use in our simulations is immaterial; they represent a discretized approximation to a continuum system. We then have a global description of a neural tissue in terms of two continuous variables. f(x, t) represents a volley of action potentials originated at time t by somas of neurons located at x. This is usually regarded as the interesting object to look at. As we will see, it forms one of the borderlines of regions of m in (x, t) space-time. We propose looking at m(x, t) in order to understand the emergent behavior of this system.

Looking again at Figure 2, we see characteristic solitary wave behavior. From time to time, pairs of excitons are created. In each pair, one moves to the right and the other to the left.
Figure 3: Two excitons propagating along a ring. The left frame shows the firing in the x − t plane. The right frame shows m(x) (solid line) and f (x) (dash-dotted line) at a fixed time step ( f was rescaled by a factor of 3 for the purpose of illustration). The arrows designate the direction of propagation. Note that for fixed x, firing occurs when m = 1 turns into m = 0. For fixed t, the m = 0 region lies behind the moving exciton.
The right-moving exciton of the left pair collides with the left-moving exciton of the right pair, and they annihilate each other. That this has to be the case follows from the fact that firing occurs when an m = 1 neighborhood turns into an m = 0 one. In the continuum limit, m(x, t) forms continuous regions with values close to either 1 or 0, thus providing us with a natural topologic argument: the borderline in (x, t) can represent either a moving exciton or the creation or annihilation of a pair of excitons.

Figure 3 shows an example in which we close the line of neurons to form a ring; that is, we use periodic rather than open boundary conditions. In this example, we observe continuous motion of two excitons around the ring. The slope in the left frame is determined by the propagation velocity of the exciton. As in other models of neural excitation (e.g., Wilson & Cowan, 1973), the velocity depends on the interaction matrix $W_{ij}$. In our model, stronger excitation leads to higher velocities (milder slopes), and the span of interactions determines the number of excitons that coexist.

4 Excitons in Two-Dimensional Space

Let us turn now to two-dimensional space and investigate a square grid of I & F neurons. We will use either DOG interactions,
$$W_{ij} = C_E \exp(-d_{ij}^2/d_E) - C_I \exp(-d_{ij}^2/d_I),$$
where $d_{ij}$ represents the Euclidean distance between two neurons, or short-range excitatory connections. As will be demonstrated below, we obtain solitary waves that are lines or curves. This is to be expected, since they should occur at the boundaries of m = 0 and m = 1 regions, which, in two dimensions, are one-dimensional structures. We find a variety of such firing patterns, depending on the span and strength of the interactions.
Figure 4: Small arcs obtained in the case of strong inhibition. $C_E = 0.4$, $C_I = 0.12$, $d_E = 5$, and $d_I = 40$, restricted to a 20 × 20 area around each neuron on a 60 × 60 grid. The left frame shows the excitons in the form of small arcs that are fronts of moving m patches shown in the right frame. This figure, as well as all other two-dimensional simulations, uses the parameters k = 0.45, c = 0.35, α = −0.09, and I = 0.29.
Figure 5: A periodic pattern of activity obtained on a 90 × 90 spatial grid, using long-range interactions: $C_E = 0.2$, $C_I = 0.02$, $d_E = 15$, and $d_I = 100$.
Isolated arcs are a pattern typical of large-range interactions with (relatively) strong inhibition. An example is shown in Figure 4. Less inhibition leads to larger structures, as we can see in Figure 5. The emerging pattern is periodic in time and consists of several propagating waves that interact. It is evident that the activity begins at the corners of the grid. This is due to the open boundary conditions.
Figure 6: Rotating spirals obtained on a 60 × 60 grid using DOG interactions: $C_E = 0.4$, $C_I = 0.08$, $d_E = 5$, and $d_I = 40$, restricted to a 20 × 20 area around each neuron. The left frame (a) appears prior to the right frame (b) in the simulation.
They cause the corner neurons to be the least inhibited, thus rising first and exciting their neighbors. Under similar interactions using periodic boundary conditions, we get propagating parallel stripes, the cortical patterns that Ermentrout and Cowan (1979) suggested as V1 excitations corresponding to the spiral retinal forms that appear in hallucinations.

Spirals and expanding rings are well-known solitary wave formations (Meron, 1992) that are obtained here as well. The former are displayed in Figure 6 and the latter in Figure 7. Both formations are often encountered in two-dimensional arrays of I & F neurons (Jung & Mayer-Kress, 1995; Milton, Chu, & Cowan, 1993). The example of expanding rings is obtained by keeping only a few nearest neighbors in the interaction. The number of expanding rings is inversely related to the span of the interactions. In the example shown in Figure 7, two spontaneous foci of excitation form expanding rings, or target formations. They are displayed at three consecutive time steps. Note that firing exists only at the boundary between m = 0 and m = 1 areas. This property is responsible for the vanishing of two firing fronts that collide, because, after the collision, there remains only a single m = 0 area, formed by the merger of the two former m = 0 areas.

5 The Topologic Constraint

The assumption that continuous neural fields can be employed leads to very strong constraints. $m(\vec{x}, t)$ is a continuous function on the D-dimensional manifold $R \ni \vec{x}$. Let us denote regions in which $m \ge 1/2$ by $S_1$ and regions where m < 1/2 by $S_0$. Clearly $S_0 + S_1 = R$. We have seen in Figure 1 that, in practice, m is very close to 1 throughout $S_1$ and very close to 0 throughout $S_0$.
Figure 7: Expanding rings formed on a 60 × 60 grid. The top frames show the m fields at three consecutive time steps. The bottom frames show the corresponding coherent firing patterns. In this simulation we used excitatory interactions only, coupling each neuron with its eight neighbors with an amplitude of 0.3.
We have discussed moving solutions of firing patterns, which we called excitons. In all these solutions, the regions $S_1$ and $S_0$ change continuously with time. Since firing can occur only when a neuron switches from a state in $S_1$ to one in $S_0$, excitons are restricted to domains σ that lie on boundaries between these two regions. Hence, the dimension of σ has to be D − 1.

The situation is different for standing-wave solutions. These are solutions that vary with time, sometimes in a quite complicated fashion, but do not move around in space. They are high-order limit cycles. One encounters, then, a situation where a whole $S_1$ region can switch into $S_0$. This will lead to a spontaneous excitation over a region of dimension D. Starting with random initial conditions, we do not generate such solutions in general. However, for regular initial conditions, such as a checkerboard pattern of v = 1 and 0, such solutions emerge. The typical behavior in this case shows separation of space into two parts defined by the checkerboard pattern of initial conditions. While there is firing in one part, there is none in the other, and vice versa. The dynamics within each part can be rather complex and does not necessarily retain all the symmetry properties that were present
in the initial conditions. Since such solutions are not generated by random initial conditions, we conclude that they occupy a negligible fraction of the space of dynamical attractors.

The stability of a solution can be tested by seeing how it behaves in the presence of noise. Adding fluctuations to I, in both space and time, we can test the stability of the different solutions mentioned so far. The general pattern of behavior that we find is that for small perturbations, excitons change to a small extent, while standing-wave solutions disappear. This is to be expected in view of the fact that the standing-wave solutions required especially regular initial conditions.

Continuity of v and m is an outcome of the excitatory interactions of neurons with their neighborhoods. We have seen it in our simulations, for both the one- and two-dimensional manifolds. It is a reflection of a well-known property of clusters of I & F units: excitation without delays leads to coherence of firing (Mirollo & Strogatz, 1990). We may then argue that the topologic rule will hold for all I & F systems, even ones where the field m does not appear but v is reset to 0 after firing. The interactions lead to continuity of v. Then, even in the absence of m, there always exists a relative refractory period, when v is small (for example, v < ε), in which excitations of neighboring neurons are insufficient to drive v over the threshold in the next time step. In that case we can classify the manifold into $S_1$ and $S_0$ regions according to whether v is larger or smaller than ε. This leads to the same topologic consequences as the ones described above for our system.

6 Discussion

Refractoriness is a basic property of neurons. We have embodied it in our formulation in an explicit fashion that turned into a topologic constraint for a theory of I & F neural fields. The important consequence is that coherent firing patterns that are obtained as solitary waves in our theory have a dimension that is smaller by one unit than that of the manifold to which they are attached. Thus, we obtain pointlike excitons for neural fields on a line or ring, and linear firing fronts on a two-dimensional surface.

The system that we discussed was always under the influence of some (usually constant) input I. Therefore, it formed a coupled oscillatory system. Alternatively, one may study a dissipative system of neural fields. This is an excitable medium that, in the absence of any input, will not generate excitons. However, given some initial conditions or some local input (Chu, Milton, & Cowan, 1994), it can generate target formations and spiral waves. The latter are fed by the strong interactions between the neurons. If the interactions are not strong enough, these excitons will die out. In the case of weak interactions, it helps to use noisy inputs (Jung & Mayer-Kress, 1995). The latter lead to background random activity that helps to maintain the coherent formation of spirals. Sometimes there is an isotropy-breaking element in the network, such as random connections or noise, that is responsible for
the abundance of spiral solutions. However, spirals can be obtained also in the absence of such elements, as is the case in our work. The details of what types of dynamic attractors are dominant depend on the type of network that one studies. Nonetheless, the refractory nature of I & F neurons guarantees that the topologic rule holds for all coherent phenomena. Once one arranges identical I & F neurons on a manifold with appropriate interactions, the continuity property of neural fields follows. These continuous fields lead to coherent neural firing of the type characterized by the excitons that are described in this article. Coherence is also maintained when we add synaptic delays that are proportional to the distance between neurons.

One may now ask what happens if the continuity is explicitly broken, for example, by strong noise in the input or by randomness in the synaptic connections. What we would expect in this case is that the DOG interactions specify the resulting behavior of the system. This is indeed the case, as demonstrated in Figure 8. The resulting firing behavior is quite irregular, yet it has a patchy character with a typical length scale that is of the order of the range of excitatory interactions. We believe that this is the explanation for the moving patches of activity reported by other authors. They are incoherent phenomena, emerging in models with randomly distributed radial connections, as in Hill and Villa (1994) and Usher et al. (1994). A coherent moving patch of firing activity is forbidden on account of the continuity requirement and refractoriness.[2]

We learn therefore that our model embodies two competing factors. The DOG interactions tend to produce patchy firing patterns, but the ensuing coherence of identical neurons leads to the formation of one-dimensional excitons on a two-dimensional manifold. If, however, strong fluctuations exist, so that the neurons can no longer be described by homogeneous physiological and geometrical properties, the resulting patterns of firing activity will be determined by the form of the interactions.

Abbott and van Vreeswijk (1993) have studied the conditions under which an I & F model with all-to-all coupling allows for stable asynchronous solutions. They concluded that if the interactions are instantaneous, the asynchronous state is unstable in the absence of noise. In other words, under these conditions, the system is synchronous. This obviously agrees with our results. However, they found that when the synaptic interactions have various temporal structures, asynchronous states can be stable. We, on the other hand, continue to obtain coherent solutions when we introduce delays in the synaptic interactions, where the delays are proportional to distance. Clearly these two I & F models differ in both their geometrical and temporal structure. Our conclusions are limited to homogeneous I & F structures with
Figure 8: Incoherent firing patterns for (a) high variability of synaptic connections or (b) noisy input. In (a) we multiplied 75 percent of all synapses by a random gaussian component (mean = 1.0, S.D. = 3.0) that stays constant in time. In (b) we employed a noisy input that varies in space and time (mean = 0.29, S.D. = 0.25). In both frames, the firing patterns are no longer excitons. We can see the formation of small clusters of firing neurons. The typical length of these patches is of the order of the span of excitatory interactions. This is a manifestation of the dominance of interactions in determining the spatial behavior in the absence of the continuity that imposes the topologic constraint.
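The two perturbations described in the caption are straightforward to reproduce in simulation. The following NumPy sketch is illustrative only: the grid size, the all-to-all placeholder weight matrix, and the gaussian form of the input noise are our assumptions; only the statistics (75 percent of synapses scaled by a fixed factor with mean 1.0 and S.D. 3.0; input noise with mean 0.29 and S.D. 0.25, redrawn at every step) come from the caption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32  # illustrative grid size; the article does not specify one here

# (a) Quenched synaptic disorder: scale 75 percent of the weights by a
# gaussian factor (mean 1.0, S.D. 3.0) that then stays constant in time.
W = rng.normal(0.0, 1.0, size=(n * n, n * n))  # placeholder weight matrix
mask = rng.random(W.shape) < 0.75
W[mask] *= rng.normal(1.0, 3.0, size=int(mask.sum()))

# (b) Noisy input I(x, t), redrawn at every time step so that it varies in
# both space and time (mean 0.29, S.D. 0.25; gaussian form assumed here).
def noisy_input():
    return rng.normal(0.29, 0.25, size=(n, n))
```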
Are there situations where coherent firing activity exists in neuronal tissue? If the explanation of hallucinatory phenomena (Ermentrout & Cowan, 1979) is correct, then this is expected to be the case. It could be tested experimentally through optical imaging of V1 under appropriate pharmacological conditions. Other abnormal brain activities, such as epileptic seizures, could also fall into the category of coherent firing patterns. Does coherence also occur under normal functioning conditions? Arieli, Sterkin, Grinvlad, and Aertsen (1996) have reported interesting spatiotemporal evoked activity in areas 17 and 18 in the cat. Do the underlying neurons fire coherently? The thalamocortical spindle waves that may be generated by the reticular thalamic nucleus (Golomb, Wang, & Rinzel, 1994; Contreras & Steriade, 1996) are presumably a good example of coherent activity. Another example could be the synchronous bursts of activity that propagate as wave fronts in retinal ganglion cells of neonatal mammals (Meister, Wong, Denis, & Shatz, 1991; Wong, 1993). It has been suggested that these waves play an important role in the formation of ocular dominance layers in the lateral geniculate nucleus (Meister et al., 1991). It would be interesting to have a systematic study of neuronal tissue on the mm scale, in different areas and under different conditions, to learn if and when nature makes use of continuous neural fields.
References

Abbott, L. F., & van Vreeswijk, C. (1993). Asynchronous states in networks of pulse-coupled oscillators. Phys. Rev. E, 48, 1483–1490.
Amari, S. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cyb., 22, 77–87.
Arieli, A., Sterkin, A., Grinvlad, A., & Aertsen, A. (1996). Dynamics of ongoing activity: Explanation of the large variability in evoked responses. Science, 273, 1868–1871.
Chu, P. S., Milton, J. G., & Cowan, J. D. (1994). Connectivity and the dynamics of integrate-and-fire neural networks. Int. J. of Bifurcation and Chaos, 4, 237–243.
Contreras, D., & Steriade, M. (1996). Spindle oscillation in cats: The role of corticothalamic feedback in a thalamically generated rhythm. J. of Physiology, 490, 159–179.
Cowan, J. D. (1985). What do drug induced visual hallucinations tell us about the brain? In W. B. Levy, J. A. Anderson, & S. Lehmkuhle (Eds.), Synaptic modification, neuron selectivity, and nervous system organization (pp. 223–241). Hillsdale, NJ: Lawrence Erlbaum.
Ermentrout, G. B., & Cowan, J. D. (1979). A mathematical theory of visual hallucination patterns. Biol. Cyb., 34, 136–150.
FitzHugh, R. (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophy. J., 1, 445–466.
Fohlmeister, C., Gerstner, W., Ritz, R., & van Hemmen, J. L. (1995). Spontaneous excitation in the visual cortex: Stripes, spirals, rings, and collective bursts. Neural Comp., 7, 905–914.
Gerstner, W., & van Hemmen, J. L. (1992). Associative memory in a network of spiking neurons. Network, 3, 139–164.
Golomb, D., Wang, X. J., & Rinzel, J. (1994). Synchronization properties of spindle oscillations in a thalamic reticular nucleus model. J. of Neurophys., 72, 1109–1126.
Hill, S. L., & Villa, A. E. P. (1994). Global spatio-temporal activity influenced by local kinetics in a simulated "cortical" neural network. In H. J. Herrmann, D. E. Wolf, & E. Pöppel (Eds.), Supercomputing in brain research: From tomography to neural networks. Singapore: World Scientific.
Jung, P., & Mayer-Kress, G. (1995). Noise controlled spiral growth in excitable media. Chaos, 5, 458–462.
Klüver, H. (1967). Mescal and mechanisms of hallucinations. Chicago: University of Chicago Press.
Marlin, S. G., Douglas, R. M., & Cynader, M. S. (1991). Position-specific adaptation in simple cell receptive fields of the cat striate cortex. Journal of Neurophysiology, 66(5), 1769–1784.
Meron, E. (1992). Pattern formation in excitable media. Physics Reports, 218, 1–66.
Meister, M., Wong, R. O. L., Denis, A. B., & Shatz, C. J. (1991). Synchronous bursts of action potentials in ganglion cells of the developing mammalian retina. Science, 252, 939–943.
Milton, J. G., Chu, P. H., & Cowan, J. D. (1993). Spiral waves in integrate-and-fire neural networks. In S. J. Hanson, J. D. Cowan, & C. L. Giles (Eds.), Advances in neural information processing systems, 5 (pp. 1001–1007). San Mateo, CA: Morgan Kaufmann.
Mirrollo, R. E., & Strogatz, S. H. (1990). Synchronization of pulse-coupled biological oscillators. SIAM Jour. of App. Math., 50, 1645–1662.
Newell, A. C. (1985). Solitons in mathematics and physics. Philadelphia: Society for Industrial and Applied Mathematics.
Usher, M., Stemmler, M., Koch, C., & Olami, Z. (1994). Network amplification of local fluctuations causes high spike rate variability, fractal firing patterns and oscillatory local field potentials. Neural Comp., 6, 795–836.
Wilson, H. R., & Cowan, J. D. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13, 55–80.
Wong, R. O. L. (1993). The role of spatio-temporal firing patterns in neuronal development of sensory systems. Curr. Op. Neurobio., 3, 595–601.
Wörgötter, F., Niebur, E., & Koch, C. (1991). Isotropic connections generate functional asymmetrical behavior in visual cortical cells. J. Neurophysiol., 66(2), 444–459.

Received July 11, 1996; accepted February 14, 1997.
Communicated by Steven Nowlan
Time-Series Segmentation Using Predictive Modular Neural Networks

Athanasios Kehagias
Department of Electrical Engineering, Aristotle University of Thessaloniki, GR 54006, Thessaloniki, Greece, and Department of Mathematics, American College of Thessaloniki, GR 55510 Pylea, Thessaloniki, Greece
Vassilios Petridis Department of Electrical Engineering, Aristotle University of Thessaloniki, GR 54006, Thessaloniki, Greece
A predictive modular neural network method is applied to the problem of unsupervised time-series segmentation. The method consists of the concurrent application of two algorithms: one for source identification, the other for time-series classification. The source identification algorithm discovers the sources generating the time series, assigns data to each source, and trains one predictor for each source. The classification algorithm recursively computes a credit function for each source, based on the competition of the respective predictors according to their predictive accuracy; the credit function is used for classification of the time-series observation at each time step. The method is tested by numerical experiments.

1 Introduction

For a given time series, the segmentation process should result in a number of models, each of which best describes the observed behavior for a particular segment of the time series. This can be formulated more exactly as follows. A time series y_t, t = 1, 2, . . . is generated by a source S(z_t), where z_t is a time-varying source parameter, taking values in a finite parameter set Θ = {θ_1, θ_2, . . . , θ_K}. At time t, y_t depends on the value of z_t, as well as on y_{t−1}, y_{t−2}, . . . However, the nature of this dependence—the number K of parameter values as well as the actual values θ_1, θ_2, . . . , θ_K—is unknown. What is required is to find the value of z_t for t = 1, 2, . . . or, in other words, to classify y_t for t = 1, 2, . . . Typical examples of this problem include speech recognition (Rabiner, 1988), radar signal classification (Haykin & Cong, 1991), and electroencephalogram processing (Zetterberg et al., 1981); other applications are listed in Hertz, Krogh, and Palmer (1991).

Under the assumptions that (1) the parameter set Θ = {θ_1, θ_2, . . . , θ_K} is known in advance and (2) a predictor has been trained on source-specific
data before the classification process starts, we have presented a general framework for solving this type of problem (Petridis & Kehagias, 1996a, 1996b; Kehagias & Petridis, 1997), using predictive modular neural networks (PREMONNs). The basic feature of the PREMONN approach is the use of a decision module and a bank of prediction modules. Each prediction module is tuned to a particular source S(θ_k) and computes the respective prediction error; the decision module uses the prediction errors recursively to compute certain quantities (one for each source), which can be understood as Bayesian posterior probabilities (probabilistic interpretation) or simply as credit functions (phenomenological interpretation). Classification is performed by maximization of the credit function. Similar approaches have been proposed in the control and estimation literature (Hilborn & Lainiotis, 1969; Sims et al., 1969; Lainiotis, 1971) and in the context of predictive hidden Markov models (Kenny, Lennig, & Mermelstein, 1990). These approaches also assume the sources to be known in advance.

On the other hand, several recent articles (Jacobs, Jordan, Nowlan, & Hinton, 1991; Jordan & Jacobs, 1994; Jordan & Xu, 1995; Xu & Jordan, 1996) use a similar approach but provide a mechanism for unsupervised source specification. The term mixtures of experts is used in these articles; each expert is a neural network specialized in learning the required input-output relationship for a particular region of the pattern space. There is also a gating network that combines the outputs of the experts to produce a better approximation of the desired input-output relationship. The activation pattern of the local experts is the output of the gating network, and it can be considered a classifier output: when a particular expert is activated, this indicates that the current observation lies in a particular section of the pattern space.

The emphasis in the mixture-of-experts method is on classification of static patterns rather than of dynamic patterns such as time series. However, there are some applications of the mixture-of-experts method to dynamic problems of time-series classification. For example, in Prank et al. (1996), the mixture-of-experts method is applied to a problem of classifying the "pulsatile" pattern of growth hormone in healthy subjects and patients. The goal is to discriminate the pulsatile pattern of healthy subjects from that of patients. In general, the time series corresponding to a healthy subject is more predictable than that corresponding to a patient. Classification is based on the level of prediction error for a particular time series (low error indicating a healthy subject and high error indicating a patient). The selection pattern of local experts (which expert is activated at any given time step) can also be used for classification, since the frequency distribution of most probable experts differs significantly between patients and healthy subjects. However, the activation of experts by the gating network is performed at every time step, independent of previous activation patterns. This may result in the loss of important correlations across time steps, which characterize the dynamics of the time series.
This static approach to time-series classification may be problematic in cases where there is significant correlation between successive time steps of the time series. This has been recognized and discussed by Cacciatore and Nowlan (1994), who state that "the original formulation of this [local experts] architecture was based on an assumption of statistical independence of training pairs. . . . This assumption is inappropriate for modelling the causal dependencies in control tasks." Cacciatore and Nowlan proceed to develop the mixture-of-controllers architecture, which emphasizes "the importance of using recurrence in the gating network." The mixture of controllers is similar to the method we present in this article. However, the method presented by Cacciatore and Nowlan requires several passes through the training data; our method, on the other hand, can be operated online because it does not require multiple passes. In addition, in some cases, Cacciatore and Nowlan need to provide the active sources explicitly (for some portion of the training process) for the mixture-of-controllers network to accomplish the required control task; in our case, learning is completely unsupervised, and the active sources always remain hidden.

Pawelzik, Kohlmorgen, and Muller (1996) present an approach using annealed competition of experts. The method they present is similar to the work cited earlier; it is applied to time-series segmentation and does not assume the sources to be known in advance. This method is mainly suitable for offline implementation due to the heavy computational load incurred by the annealing process.

In this article we present an unsupervised time-series segmentation scheme, which combines two algorithms: (1) the source identification algorithm, which detects the number of active sources in the time series, assigns data to the sources, and uses the data to train one predictor for each active source, and (2) the classification algorithm, which uses the sources and predictors specified by the identification algorithm and implements a PREMONN, which computes credit functions. A probabilistic (Bayesian) interpretation of this algorithm is possible but not necessary. The combination of the two algorithms—source identification and time-series classification—constitutes the combined time-series segmentation scheme; the two algorithms can be performed concurrently and online. A crucial assumption is that of slow switching between sources.

Our approach is characterized by the use of a modular architecture, with predictive modules competing for data and credit assignment. A decision module is also used to assign the credit in a recursive manner that takes into account previous classification decisions. A convergence analysis of the classification algorithm is available elsewhere (Kehagias & Petridis, 1997).

2 The Time-Series Segmentation Problem

Consider a time series z_t, t = 1, 2, . . ., taking values in the unknown set Θ = {θ_1, . . . , θ_K}. Further consider a time series y_t, t = 1, 2, . . . that is generated
by some source S(z_t): for every value of z_t, a source S(z_t) is activated, which produces y_t using y_{t−1}, y_{t−2}, . . . We call z_t the source parameter and y_t the observation. For instance, if z_t = θ_1, we could have y_t = f(y_{t−1}, y_{t−2}, . . . ; θ_1), where f(·) is an appropriate function with parameter θ_1; θ_1 can be a scalar or vector quantity or simply a label variable. The task of time-series segmentation consists of estimating the values z_1, z_2, . . . based on the observations y_1, y_2, . . . The size K of the source set Θ and its members θ_1, θ_2, . . . , θ_K are initially unknown.

This original task can be separated into two subtasks: source identification and time-series classification. The task of source identification consists of determining the active sources or, equivalently, the source set Θ = {θ_1, . . . , θ_K}. Having achieved this, the time-series classification task consists of assigning each z_t (t = 1, 2, . . .) to a value θ_k (k = 1, 2, . . . , K) or, equivalently, determining at each time step t the source S(θ_k) that has generated y_t. The combination of the two subtasks yields the original time-series segmentation task.

Two algorithms are presented here, one to solve each subtask. The source identification algorithm distributes the data y_1, y_2, . . . , y_t among active sources and uses the data to train one predictor per source. For instance, a data segment y_1, y_2, . . . , y_M may be assigned to source S(θ_1) and used to approximate some postulated relationship y_t = f(y_{t−1}, y_{t−2}, . . . ; θ_1). (The relationship y_t = f(y_{t−1}, y_{t−2}, . . . ; z_t) reflects the dependence of y_t on past observations, as well as on the parameter z_t. In general, the true input-output relationship will be unknown, and f(·) will be an approximation, such as one furnished by a sigmoid neural network. In fact, due to the competitive nature of the algorithms presented, even a crude approximation suffices, as will be explained later.) The time-series classification algorithm uses the predictors trained in the source identification phase to compute predictions of incoming observations. The respective prediction errors are used to compute a credit function for each predictor or source; an observation y_t is classified to the source of maximum credit.

Each of the two algorithms has been developed using a different approach. The source identification algorithm has a heuristic motivation; its justification is experimental. On the other hand, the classification algorithm is motivated by probabilistic considerations. Although in this article its performance is evaluated only experimentally, a detailed theoretical analysis is available (Kehagias & Petridis, 1997).

3 The Time-Series Segmentation Scheme

We first present the two algorithms that constitute the time-series segmentation scheme, then describe the manner in which they are combined.

3.1 The Source Identification Algorithm. Other authors have tackled the same problem using similar methods (Pawelzik et al., 1996). There are
also connections to k-means (Duda & Hart, 1973) and Kohonen's learning vector quantizer (Kohonen, 1988); note, however, that these have been applied mainly to classification of static patterns. We now give a description of the algorithm. The following parameters will be used: L, the length of a data block; M, the length of a data segment, where M < L; K, the number of active predictors; and K_max, the maximum allowed number of active predictors.

Source Identification Algorithm.

Initialization. Set K = 1. Use the data block y_1, y_2, . . . , y_L to train the first predictor, of the form y_t^1 = f(y_{t−1}, y_{t−2}, . . . ; θ_1), where f(·) is a (sigmoid) neural network and θ_1 is taken to be the connection weights.

Main Routine. Loop for n = 1, 2, . . .

1. Split the data block y_t, t = n·L + 1, n·L + 2, . . . , (n + 1)·L (that is, L time steps) into L/M data segments (of M time steps each, M < L).

2. Loop for k = 1, . . . , K.
   (a) Use each segment of y_t's as input to the kth predictor to compute a prediction y_t^k = f(y_{t−1}, . . . ; θ_k) for each time t.
   (b) For every segment, compare the predictions y_t^k with the actual observations y_t (t = n·L + 1, n·L + 2, . . . , (n + 1)·L) to compute the segment-average square prediction error (henceforth SASPE).
   End of k loop.

3. Loop for every segment in the block (a total of L/M segments).
   (a) For the current segment, find the predictor that produces the minimum SASPE.
   (b) For this predictor, compare the minimum SASPE with a respective threshold (this threshold is computed separately for each predictor).
   (c) If the SASPE is above the threshold and K < K_max, set K = K + 1 and assign the current segment to a new predictor; otherwise, assign the segment to the predictor of minimum SASPE.
   End of segment loop.

4. Loop for k = 1, . . . , K.
   (a) Retrain the kth predictor for J iterations, using the L/M most recent data segments assigned to it.
   End of k loop.

End of n loop.
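The listing above translates almost line for line into code. Below is a minimal NumPy sketch of the main routine, not the authors' implementation: order-p linear autoregressive predictors fit by least squares stand in for the sigmoid networks, a full least-squares refit stands in for the J retraining iterations, and the threshold D_k = a · E_k anticipates item 4 of the parameter discussion below. All names and default values are illustrative.

```python
import numpy as np

class ARPredictor:
    """Order-p linear autoregressive predictor; stands in for the sigmoid nets."""
    def __init__(self, p=2):
        self.p = p
        self.w = np.zeros(p + 1)      # AR coefficients plus a bias term
        self.train_error = np.inf     # E_k, used for the threshold D_k = a * E_k

    def _design(self, y):
        X = np.array([y[t - self.p:t][::-1] for t in range(self.p, len(y))])
        return np.hstack([X, np.ones((len(X), 1))]), y[self.p:]

    def fit(self, y):
        X, targets = self._design(y)
        self.w, *_ = np.linalg.lstsq(X, targets, rcond=None)
        self.train_error = float(np.mean((X @ self.w - targets) ** 2))

    def saspe(self, segment):
        """Segment-average square prediction error on one data segment."""
        X, targets = self._design(segment)
        return float(np.mean((X @ self.w - targets) ** 2))

def identify_sources(y, L=100, M=10, K_max=6, a=4.0, p=2):
    """One online pass of the source identification main routine (sketch)."""
    predictors = [ARPredictor(p)]
    predictors[0].fit(y[:L])                       # initialization: K = 1
    assigned = [[y[:L]]]                           # segments assigned to each predictor
    for n in range(1, len(y) // L):
        block = y[n * L:(n + 1) * L]
        for i in range(L // M):
            seg = block[i * M:(i + 1) * M]
            errors = [pr.saspe(seg) for pr in predictors]
            k = int(np.argmin(errors))
            if errors[k] > a * predictors[k].train_error and len(predictors) < K_max:
                predictors.append(ARPredictor(p))  # SASPE above threshold: new source
                predictors[-1].fit(seg)
                assigned.append([seg])
            else:
                assigned[k].append(seg)
        for k, pr in enumerate(predictors):        # retrain on the most recent segments
            pr.fit(np.concatenate(assigned[k][-(L // M):]))
    return predictors, assigned
```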
It is assumed that the switching rate is slow enough so that the initial segment will mostly contain data from one source; however, some data from other sources may also be present. The modular and competitive nature of the algorithm is evident. A data segment is assigned to a predictor even if the respective prediction is poor; what matters is that it is better than the remaining ones. Hence, every predictor is assigned data segments on the assumption that they were generated by a particular source, and the predictor is trained (specialized) on this source. Training can be performed independently, in parallel, for each predictor.

Numerical experimentation indicates that the values of several parameters may affect performance:

1. L is the length of a data block. If L is too large, then training for each data block requires a long time. If L is too small, then not enough data are available for the retraining of the predictors. For the experiments reported in Section 4, we have found that values around 100 are sufficient to ensure good training of the predictors without incurring excessive computational load.

2. M is the length of a data segment; it is related to the switching rate of z_t. Let T_s denote the minimum number of time steps between two successive source switches. While T_s is unknown, we operate on the assumption of slow switching, which means that T_s will be relatively large. Since all M data points included in a segment will be assigned to the same source (and used to train the same predictor), it is obviously desirable that they have been truly generated by the same source; otherwise the respective predictor will not specialize on one source. In reality, however, this is not guaranteed to happen. In general, a small value of M will increase the likelihood that most segments contain data from a single source. On the other hand, it has been found that small M leads to an essentially random assignment of data points to sources, especially in the initial stages of segmentation, when the predictors have not specialized sufficiently. The converse situation holds for large M. In practice, one needs to guess a value for T_s and then take M somewhere between 1 and T_s. This choice is consistent with the hypothesis of a slow switching rate. The practical result is that most segments in a block contain data from exactly one source, and a few segments contain data from two sources. The exact value of T_s is not required to be known; a rough guess suffices.
3. J, the number of training iterations, should be taken relatively small, since the predictors must not be overspecialized in the early phases of the algorithm, when relatively few data points are available.

4. The error threshold is used to determine if and when a new predictor should be added. At every retraining epoch, the training prediction error E_k is computed for k = 1, 2, . . . , K. Then the threshold D_k is set equal to a · E_k, where a is a scale factor. A high a value makes the introduction of new predictors difficult, since large prediction errors are tolerated; conversely, a low a value facilitates the introduction of new predictors.

5. In practice, a value K_max is set as the maximum number of predictors used. As long as this is equal to or larger than the number of truly active sources, it does not seriously affect the performance of the algorithm.

A further quantity of interest is T, the length of the time series. Theoretically this is infinite, since the algorithm is online, with new data becoming continuously available. However, it is interesting to note the minimum number of data points required for the algorithm to perform source identification successfully. In the experiments presented in Section 4, it is usually between 1000 and 3000. This is the information that the algorithm requires to discover the number of active sources and to train the respective predictors. In addition, the time required to process the data is short, because only a few training iterations are performed per data segment; these computations are performed while waiting for the next data segment to be accumulated. The short training time and the small amount of data required make the source identification algorithm very suitable for online implementation.

3.2 The Classification Algorithm. The classification algorithm has already been presented in Petridis and Kehagias (1996b) and Kehagias and Petridis (1997) under the name of predictive modular neural network (PREMONN). A brief summary is given here. It is assumed that the source parameter time series z_t, t = 1, 2, . . ., takes values in the known set Θ = {θ_1, . . . , θ_K} (this has been specified by the source identification algorithm). Corresponding to each source S(θ_k), there is a predictor (e.g., a neural network trained by the source identification algorithm) characterized by the following equation (k = 1, 2, . . . , K):

\[
y_t^k = f(y_{t-1}, y_{t-2}, \ldots ; \theta_k). \qquad (3.1)
\]
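For concreteness, a bank of trained predictors satisfying equation 3.1 can be wrapped as a single callable. The helper below is hypothetical (it assumes the ARPredictor interface of the earlier sketch) and is reused in the classification sketch that follows the discussion of σ.

```python
import numpy as np

def make_predict_all(predictors):
    """Wrap predictors f(.; theta_k) (equation 3.1) into one callable that
    returns the K predictions y_t^k computed from y_{t-1}, y_{t-2}, ..."""
    def predict_all(y, t):
        # Each ARPredictor stores weights w over [y_{t-1}, ..., y_{t-p}, 1];
        # assumes t >= p so that enough history is available.
        return np.array([pr.w @ np.append(y[t - pr.p:t][::-1], 1.0)
                         for pr in predictors])
    return predict_all
```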
At every time step t, the classification algorithm is required to identify the member of Θ that best matches the observation y_t. Using a probabilistic approach, one can define p_t^k = Pr(z_t = θ_k | y_1, y_2, . . . , y_t). In Petridis and
Kehagias (1996a), the following update of p_t^k is obtained:

\[
p_t^k = \frac{p_{t-1}^k \, e^{-|y_t - y_t^k|^2 / \sigma^2}}{\sum_{l=1}^{K} p_{t-1}^l \, e^{-|y_t - y_t^l|^2 / \sigma^2}}. \qquad (3.2)
\]
Equation 3.2 is a recursive implementation of Bayes' rule. The recursion is started using p_0^k, the prior probability assigned to θ_k at time t = 0 (before any observations are available). In the absence of any prior information, all models are assumed equally credible:

\[
p_0^k = \frac{1}{K} \quad \text{for } k = 1, 2, \ldots, K. \qquad (3.3)
\]
Finally, ẑ_t (the estimate of z_t) is chosen as

\[
\hat{z}_t = \arg\max_{\theta_k \in \Theta} p_t^k. \qquad (3.4)
\]
Hence the PREMONN classification algorithm is summarized by equations 3.1 through 3.4. This is a recursive implementation of Bayes' rule; the value p_t^k reflects our belief (at time t) that the observations are produced by a model with parameter value θ_k.

The probabilistic interpretation of the algorithm, presented above, is not necessary. A different, phenomenological, interpretation goes as follows. p_t^k is a credit function that evaluates the performance of the kth source in predicting the observations y_1, y_2, . . . , y_t. This evaluation depends on two factors: e^{−|y_t − y_t^k|^2/σ^2} reflects how accurately the current observation was predicted, and p_{t−1}^k reflects past performance. Hence equation 3.2 updates the credit function p_t^k recursively. The update takes into account the previous value p_{t−1}^k; this makes p_t^k implicitly dependent on the entire time-series history. The final effect is that correlations between successive observations are captured. Note also the competitive nature of classification, which is evident in equation 3.2: even if a predictor performs poorly, it may still receive high credit as long as the remaining predictors perform even worse.

Kehagias and Petridis (1997) proved that equation 3.2 converges to appropriate values that ensure classification to the most accurate predictor. In the case of slow switching (as assumed in this article), convergence to the active source is guaranteed for the time interval between two source switchings, provided that the convergence rate is fast compared to the switching rate. Again, this requirement is in accordance with the hypothesis of slow switching. The performance of the classification algorithm is influenced by two parameters, σ and h.
σ is a smoothing parameter. Under the probabilistic interpretation, σ is the standard deviation of the prediction error. A phenomenological interpretation is also possible. For a fixed error |y_t − y_t^k|, note that a large σ results in a large value of e^{−|y_t − y_t^k|^2/σ^2}. This is of particular interest in the case where a predictor performs better than the rest over a large time interval but occasionally generates a bad prediction (perhaps due to random fluctuation). In this case, a very small σ will result in a very small value of e^{−|y_t − y_t^k|^2/σ^2} in equation 3.2. This, in turn, will decrease p_t^k excessively. On the other hand, a very large σ results in sluggish performance (a long series of large prediction errors is required before p_t^k is sufficiently decreased). For the experiments reported in this article, σ is determined at every time step t by the following adaptive estimate:

\[
\sigma = \sqrt{\frac{\sum_{k=1}^{K} |y_t - y_t^k|^2}{K}}.
\]
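Equations 3.2 through 3.4, together with the adaptive σ estimate above and the credit floor h discussed in the next paragraph, amount to only a few lines of code. The sketch below assumes trained predictors wrapped by the hypothetical predict_all helper from the equation 3.1 sketch; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def premonn_classify(y, predict_all, K, h=0.01, t0=1):
    """Online PREMONN classification (equations 3.2-3.4) with adaptive sigma.

    y           -- observations y_1, ..., y_T (1D array)
    predict_all -- predict_all(y, t) returns the K predictions y_t^k
    h           -- credit floor preventing underflow lock-out
    t0          -- first step at which the predictors have enough history
    """
    p = np.full(K, 1.0 / K)                  # equation 3.3: uniform prior credits
    z_hat = []
    for t in range(t0, len(y)):
        preds = np.asarray(predict_all(y, t))
        err2 = (y[t] - preds) ** 2
        sigma2 = np.mean(err2) + 1e-12       # adaptive estimate of sigma^2
        p = p * np.exp(-err2 / sigma2)
        p = p / p.sum()                      # equation 3.2
        p = np.maximum(p, h)                 # reset credits below h (next paragraph)
        z_hat.append(int(np.argmax(p)))      # equation 3.4
    return z_hat
```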
A threshold parameter h is also used in the classification algorithm. Suppose that z_t remains fixed for a long time at, say, θ_1; then at time t_s, a switch to θ_2 occurs. For t < t_s, e^{−|y_t − y_t^1|^2/σ^2} is large and e^{−|y_t − y_t^2|^2/σ^2} is small (much less than one). Considering equation 3.2, one sees that p_t^2 decreases exponentially; after a few steps it may become so small that computer underflow takes place, and p_t^2 is set to 0. It follows from equation 3.2 that after that point, p_t^2 will remain 0 for all subsequent time steps; in particular, this will also be true after t_s, despite the fact that now the second predictor performs well (e^{−|y_t − y_t^2|^2/σ^2} will be large, but multiplied by p_{t−1}^2 it will still yield p_t^2 = 0). Hence the currently best predictor receives zero credit and is excluded from the classification process. What is required is to ensure that p_t^2 never becomes too small. Hence the following strategy is used: whenever p_t^k falls below h, it is reset to h; if h is chosen small (say around 0.01), then this resetting does not influence the classification process but gives predictors with low credit the chance to recover quickly, once the respective source is activated.

3.3 The Time-Series Segmentation Scheme. The time-series segmentation scheme consists of the concurrent application of the two algorithms. The classification algorithm is run on the current data block, using the predictors discovered and trained by the source identification algorithm in the previous epoch (operating on the previous data block); an epoch is defined as the interval of L time steps that corresponds to a data block. Concurrently, the identification algorithm operates on the current block and provides an update of the predictors, but this update will be used only in the next epoch
of the segmentation scheme. Hence, the classification algorithm uses information provided by the source identification algorithm, but not vice versa. The identification algorithm in effect provides a crude classification by assigning segments to predictors; the role of the classification algorithm is to provide segmentation at the level of individual time steps. Comparatively speaking, in the initial stages of the segmentation process, the identification algorithm is more important than the classification algorithm, and the predictors change significantly at every epoch; at a later stage, the predictors reach a steady state, and the source identification algorithm becomes less important. In fact, a reduced version of the segmentation scheme could use only the identification algorithm initially (until the predictors are fairly well defined) and only the classification algorithm later (when the predictors are not expected to change significantly). However, for this reduced version to work, it is required that new sources are not activated at a later stage of the time-series evolution.

4 Experiments

Two sets of experiments are presented in this section to evaluate the segmentation scheme.

4.1 Four Chaotic Time Series. Consider four sources, each generating a chaotic time series:

1. For z_t = 1, a logistic time series of the form y_t = f_1(y_{t−1}), where f_1(x) = 4x(1 − x).
2. For z_t = 2, a tent map time series of the form y_t = f_2(y_{t−1}), where f_2(x) = 2x if x ∈ [0, 0.5) and f_2(x) = 2(1 − x) if x ∈ [0.5, 1].
3. For z_t = 3, a double logistic time series of the form y_t = f_3(y_{t−1}) = f_1(f_1(y_{t−1})).
4. For z_t = 4, a double tent map time series of the form y_t = f_4(y_{t−1}) = f_2(f_2(y_{t−1})).

The four sources are activated consecutively, each for 100 time steps, giving an overall period of 400 time steps. Ten such periods are used, resulting in a 4000-step time series. The task is to discover the four sources and the switching schedule by which they are activated. Six hundred time steps of the composite time series (encompassing the source transition 1 → 2 → 3 → 4 → 1 → 2) are presented in Figure 1. Hence the first hundred time steps correspond to z_t = 1; the second hundred time steps correspond to z_t = 2; and so on.

A number of experiments are performed using several different combinations of block and segment lengths; also, the time series is observed at various levels of noise—that is, at every step, y_t is mixed with additive white
Figure 1: Six hundred steps of a composite time series produced by four chaotic sources (logistic, tent map, double logistic, and double tent map) being activated alternately, for 100 time steps each: steps 1 to 100 correspond to source 1, steps 101 to 200 correspond to source 2, steps 201 to 300 correspond to source 3, steps 301 to 400 correspond to source 4, steps 401 to 500 correspond to source 1, steps 501 to 600 correspond to source 2.
noise uniformly distributed in the interval [−A/2, A/2]. Hence an experiment is determined by a combination of L, M, and A values. The predictors used are 1-5-1 sigmoid neural networks; the maximum number of predictors is K_max = 6; the number of training iterations per time step is J = 5; the credit threshold is set at h = 0.01. (For training the sigmoid predictors, a Levenberg-Marquardt algorithm was used; this was implemented by Magnus Norgaard, Technical University of Denmark, and distributed as part of the Neural Network System Identification Toolbox for MATLAB.)

In every experiment performed, all four sources are eventually identified. This takes place at some time T_c, which is different for every experiment. After time T_c, a classification figure of merit is computed. It is denoted by c = T_2/T_1, where T_1 is the total number of time steps after T_c, and T_2 is the number of correctly classified time steps after T_c. The results of the experiments (T_c and c) are listed in Tables 1 and 2.

The evolution of a representative credit function, in this case p_t^3 (the remaining five credit functions evolve in a similar manner), is presented in Figure 2A. This figure was obtained from a noise-free experiment (A = 0.0), with L = 105, M = 15. Note that classification beyond time T_c is very stable. The classification decisions (namely, the source and predictor selected at every time step) for the same experiment are presented in Figure 2B.
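The composite time series used in this experiment is easy to regenerate. The sketch below implements the stated schedule and noise model; the random seed is arbitrary, and we assume the noise is added to the observation while the underlying map iterates on the clean state (one reading of "mixed with").

```python
import numpy as np

def f1(x):  # logistic map
    return 4.0 * x * (1.0 - x)

def f2(x):  # tent map
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

maps = [f1, f2, lambda x: f1(f1(x)), lambda x: f2(f2(x))]

def composite_series(T=4000, period=100, A=0.1, seed=0):
    """Four chaotic sources activated consecutively for `period` steps each,
    observed with additive uniform noise in [-A/2, A/2]."""
    rng = np.random.default_rng(seed)
    y = np.empty(T)
    z = np.empty(T, dtype=int)
    x = rng.random()                 # state of the iterated map
    for t in range(T):
        z[t] = (t // period) % 4     # 0-indexed schedule for sources 1 -> 2 -> 3 -> 4
        x = maps[z[t]](x)
        y[t] = x + rng.uniform(-A / 2, A / 2)
    return y, z                      # observations and true source labels

y, z = composite_series()
```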
Table 1: Four Chaotic Time Series, with Switching Period 100.
        L = 70, M = 10      L = 100, M = 10     L = 120, M = 10
A       T_c     c           T_c     c           T_c     c
0.00    500     0.982       1200    0.976       3000    0.963
0.05    1600    0.969       1100    0.968       3200    0.899
0.10    1800    0.939       1100    0.962       1300    0.898
0.15    800     0.947       1600    0.934       1400    0.767
0.20    2200    0.529       3900    0.899       2200    0.359
Notes: Time-series length, 4000 points. Segment length M = 10, fixed; block length L variable; noise range [−A/2, A/2], A variable. T_c is the time step by which all sources have been identified; c is classification accuracy.
Table 2: Four Chaotic Time Series, with Switching Period 100.
        L = 75, M = 15      L = 105, M = 15     L = 120, M = 15
A       T_c     c           T_c     c           T_c     c
0.00    900     0.976       1200    0.983       1600    0.980
0.05    1600    0.971       1700    0.967       1600    0.963
0.10    1600    0.954       2800    0.973       1800    0.819
0.15    1700    0.943       2600    0.927       2200    0.613
0.20    1600    0.130       2200    0.416       2800    0.708
Notes: Time-series length, 4000 points. Segment length M = 15, fixed; block length L variable; noise range [−A/2, A/2], A variable. T_c is the time step by which all sources have been identified; c is classification accuracy.
It can be observed that the four sources correspond to predictors 1, 2, 3, and 6. Predictors 4 and 5 have received relatively few data, have been undertrained, and do not correspond to any of the truly active sources; generally, these predictors play no significant role in the segmentation process.

4.2 Mackey-Glass Time Series. Now consider a time series obtained from three sources of the Mackey-Glass type. The original data evolve in continuous time and satisfy the differential equation

\[
\frac{dy}{dt} = -0.1\, y(t) + \frac{0.2\, y(t - t_d)}{1 + y(t - t_d)^{10}}. \qquad (4.1)
\]
For each source, a different value of the delay parameter t_d was used: t_d = 17, 23, and 30. The time series is sampled in discrete time, at a sampling rate τ = 6, with the three sources being activated alternately, for 100 time steps each. The final result is a time series with a switching period of 300
Figure 2: A time-series segmentation experiment with four chaotic time series, L = 105, M = 15, A = 0.00 (noise-free time series). (A.) Evolution of the credit function p_t^3. (B.) Classification decisions. Note that all four sources have been identified at time T_c = 1200. The corresponding predictors are 1, 2, 3, and 6. After T_c = 1200, all classifications are made to these four predictors or sources; predictors 4 and 5 play practically no role in the classification process.
and a total length of 4000 time steps. Six hundred time steps of this final time series (encompassing the source transition 1 → 2 → 3 → 1 → 2 → 3) are presented in Figure 3. Hence, the first hundred time steps correspond to z_t = 1; the second hundred time steps correspond to z_t = 2; and so on.
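For reference, equation 4.1 can be integrated with a crude Euler scheme and subsampled at rate τ = 6. The step size, warm-up length, and constant initial history below are our assumptions, not values from the article; and each 100-step segment is generated independently here, whereas the article presumably switches t_d within a single ongoing trajectory.

```python
import numpy as np

def mackey_glass(T=100, td=17, tau=6, dt=1.0, warmup=500, y0=1.2):
    """Generate T samples of equation 4.1, taken every `tau` time units.

    Euler integration with step `dt`; the constant initial history y0 and
    the warm-up period are illustrative choices."""
    delay = int(td / dt)
    n_total = warmup + T * int(tau / dt)
    y = np.empty(n_total + delay)
    y[:delay] = y0                        # constant initial history
    for i in range(delay, n_total + delay - 1):
        y_del = y[i - delay]
        dy = -0.1 * y[i] + 0.2 * y_del / (1.0 + y_del ** 10)
        y[i + 1] = y[i] + dt * dy
    return y[delay + warmup::int(tau / dt)][:T]

# Three sources with delays 17, 23, and 30, alternating every 100 steps
# (two switching periods shown here):
series = np.concatenate([mackey_glass(100, td) for td in (17, 23, 30)] * 2)
```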
Figure 3: Six hundred steps of a composite time series produced by three Mackey-Glass sources (with t_d equal to 17, 23, and 30, respectively) being activated alternately, for 100 time steps each: steps 1 to 100 correspond to source 1, steps 101 to 200 correspond to source 2, steps 201 to 300 correspond to source 3, steps 301 to 400 correspond to source 1, steps 401 to 500 correspond to source 2, steps 501 to 600 correspond to source 3.
Again, a particular experiment is determined by a combination of L, M, and A values. The predictors used are 5-5-1 sigmoid neural networks; the maximum number of predictors is K_max = 6; the number of training iterations per time step is J = 5; the credit threshold is set at h = 0.01. Again, all three sources were correctly identified in every experiment. The results of the experiments, T_c and c (described in Section 4.1), are listed in Tables 3 and 4.

The evolution of three representative credit functions, p_t^1, p_t^2, p_t^3, is presented in Figures 4A–C, respectively. This figure was obtained from an experiment with A = 0.15, L = 105, M = 15. Note that classification beyond time T_c is quite stable. The classification decisions (namely, the source and predictor selected at every time step) for the same experiment are presented in Figure 4D. It can be observed that the three sources correspond to predictors 1, 2, and 3. Predictors 4, 5, and 6 have received relatively few data, have been undertrained, and do not correspond to any of the truly active sources; generally, these predictors play no significant role in the segmentation process.

4.3 Discussion. It can be observed from Tables 1 through 4 that the segmentation scheme takes between 1000 and 3000 steps to discover the active sources. After that point, classification is quite accurate for low to middle
Table 3: Three Mackey-Glass Time Series, with Switching Period 100.
        L = 70, M = 10      L = 100, M = 10     L = 120, M = 10
A       T_c     c           T_c     c           T_c     c
0.00    1700    0.978       1400    0.976       3000    0.925
0.05    1100    0.977       1200    0.975       2200    0.977
0.10    1300    0.853       2700    0.964       800     0.943
0.15    3500    0.935       1300    0.796       1300    0.945
0.20    1200    0.664       1000    0.637       1200    0.839
Notes: Time-series length, 4000 points. Segment length M = 10, fixed; block length L variable; noise range [−A/2, A/2], A variable. T_c is the time step by which all sources have been identified; c is classification accuracy.
Table 4: Three Mackey-Glass Time Series, with Switching Period 100.
        L = 75, M = 15      L = 105, M = 15     L = 120, M = 15
A       T_c     c           T_c     c           T_c     c
0.00    1400    0.966       1000    0.968       1700    0.974
0.05    1200    0.974       1200    0.940       2000    0.954
0.10    2000    0.947       1700    0.966       2900    0.965
0.15    1600    0.946       1000    0.918       2000    0.902
0.20    1200    0.618       1500    0.676       2100    0.782
Notes: Time-series length, 4000 points. Segment length M = 15, fixed; block length L variable; noise range [−A/2, A/2], A variable. T_c is the time step by which all sources have been identified; c is classification accuracy.
noise levels and gradually drops off as the noise level increases. (To evaluate the effect of noise, keep in mind that the chaotic time series take values in the interval [0, 1] and the Mackey-Glass time series in the interval [0.7, 1.3].) The scheme is fast: source identification and classification of the 4000 time steps (run concurrently) require a total of approximately 4 minutes. The algorithms are implemented in MATLAB and run on a 486 120-MHz IBM-compatible machine; a significant speedup would result from a C implementation (consider that MATLAB is an interpreted language).

The same segmentation tasks have been treated by the method of annealed competition of experts (ACE) in Pawelzik et al. (1996). Regarding accuracy of segmentation, both methods appear to achieve roughly the same accuracy; however, the ACE method was applied to either noise-free or low-noise data, and we do not know how well the ACE method performs in the presence of relatively high noise. Execution time is not cited in Pawelzik
et al. (1996) for the two tasks considered here. On a third task (prediction), the ACE method requires 2.5 hours on a SPARC 10/20 GX. In general, we expect the ACE method to be rather time-consuming, since it depends on annealing, which requires many passes through the training data.

Regarding the required number of time-series observations, the ACE method uses 1200 data points for the first task and 400 data points for the second; several annealing passes are required. Our method requires a single pass through at most 3000 data points. We do not consider this an important difference. Because all time series involved are ergodic, we
Figure 4: A time-series segmentation experiment with three Mackey-Glass time series: L = 105, M = 15, A = 0.15. (A.) Evolution of the credit function p_t^1. (B.) Evolution of the credit function p_t^2. (C.) Evolution of the credit function p_t^3. (D.) Classification decisions for the credit functions of Figures 4A–C. Note that all three sources have been identified at time T_c = 1000. The corresponding predictors are 1, 2, and 3. After T_c = 1000, most classifications are made to these three predictors or sources; however, some misclassifications to predictor 4 occur due to the presence of noise. Predictors 5 and 6 play practically no role in the classification process.
conjecture that processing 3000 data points once should yield more or less the same information as cycling 30 times over 100 data points. (Using the same data for training and testing makes the task considered in Pawelzik et al. (1996) somewhat easier than the one considered in this article, where segmentation is evaluated on previously unseen data. But we believe that the difference is not great, due to the ergodic nature of the time series.)

5 Conclusions

The unsupervised time-series segmentation scheme presented combines a source identification and a time-series classification algorithm. The source identification algorithm determines the number of sources generating the observations, distributes data between the sources, and trains one predictor for each source. The predictors are then used by the classification algorithm as follows. For every incoming data segment, each predictor computes a prediction; comparing these to the actual observation, segment prediction errors are computed for each predictor. The prediction errors are used to update a credit function recursively and competitively for each predictor or source. Sources that correspond to more successful predictors receive higher credit. Each segment is classified to the source that currently has the highest credit.

The combined scheme consists of the identification and classification algorithms running concurrently, in a competitive and recursive manner. The scheme is accurate and fast and requires few training data, as has been established by numerical experimentation; hence it is suitable for online implementation. It is also modular: training and prediction are performed by independent modules, each of which can be removed from the system without affecting the properties of the remaining modules. This results in short training and development times. Finally, a convergence analysis of the classification algorithm has been published elsewhere (Kehagias & Petridis, 1997); this guarantees correct classification.

Although we have not made any assumptions about the switching behavior of the source parameter, a Markovian assumption can easily be incorporated in the update equation 3.2. A convergence analysis for this case is also available. This modification affects the classification algorithm only, leaving the source identification algorithm unaffected. Due to lack of space, these developments will be presented in the future.

References

Cacciatore, T. W., & Nowlan, S. J. (1994). Mixtures of controllers for jump linear and nonlinear plants. In J. D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in neural information processing systems, 6 (pp. 719–726). San Mateo, CA: Morgan Kaufmann.
Duda, R. O., & Hart, P. E. (1973). Pattern classification and scene analysis. New York: Wiley.
Haykin, S., & Cong, D. (1991). Classification of radar clutter using neural networks. IEEE Trans. on Neural Networks, 2, 589–600.
Hertz, J., Krogh, A., & Palmer, R. G. (1991). Introduction to the theory of neural computation. Redwood City, CA: Addison-Wesley.
Hilborn, C. G., & Lainiotis, D. G. (1969). Unsupervised learning minimum risk pattern classification for dependent hypotheses and dependent measurements. IEEE Trans. on Systems Science and Cybernetics, 5, 109–115.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton, G. E. (1991). Adaptive mixtures of local experts. Neural Computation, 3, 79–87.
Jordan, M. I., & Jacobs, R. A. (1994). Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6, 181–214.
Jordan, M. I., & Xu, L. (1995). Convergence results for the EM algorithm to mixtures of experts architectures. Neural Networks, 8, 1409–1431.
Kehagias, A., & Petridis, V. (1997). Predictive modular neural networks for time series classification. Neural Networks, 10, 31–49.
Kenny, P., Lennig, M., & Mermelstein, P. (1990). A linear predictive HMM for vector-valued observations with applications to speech recognition. IEEE Trans. on Acoustics, Speech and Signal Proc., 38, 220–225.
Kohonen, T. (1988). Self-organization and associative memory. New York: Springer-Verlag.
Lainiotis, D. G. (1971). Optimal adaptive estimation: Structure and parameter adaptation. IEEE Trans. on Automatic Control, 16, 160–170.
Pawelzik, K., Kohlmorgen, J., & Muller, K. R. (1996). Annealed competition of experts for a segmentation and classification of switching dynamics. Neural Computation, 8, 340–356.
Petridis, V., & Kehagias, A. (1996a). A recurrent network implementation of time series classification. Neural Computation, 8, 357–372.
Petridis, V., & Kehagias, A. (1996b). Modular neural networks for MAP classification of time series and the partition algorithm. IEEE Trans. on Neural Networks, 7, 73–86.
Prank, K., Kloppstech, M., Nowlan, S. J., Sejnowski, T. J., & Brabant, G. (1996). Self-organized segmentation of time series: Separating growth hormone secretion in acromegaly from normal controls. Biophysical Journal, 70, 2540–2547.
Rabiner, L. R. (1988). A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77, 257–286.
Sims, F. L., Lainiotis, D. G., & Magill, D. T. (1969). Recursive algorithm for the calculation of the adaptive Kalman filter coefficients. IEEE Trans. on Automatic Control, 14, 215–218.
Xu, L., & Jordan, M. I. (1996). On convergence properties of the EM algorithm for gaussian mixtures. Neural Computation, 8, 129–151.
Zetterberg, L. H., Patric, M., & Scott, D. (1981). Computer analysis of EEG signals with parametric models. Proc. IEEE, 69, 451–461.
Received August 20, 1996; accepted January 29, 1997.
Communicated by Peter Dayan
Adaptive Mixtures of Probabilistic Transducers

Yoram Singer
AT&T Labs, Florham Park, NJ 07932, U.S.A.
We describe and analyze a mixture model for supervised learning of probabilistic transducers. We devise an online learning algorithm that efficiently infers the structure and estimates the parameters of each probabilistic transducer in the mixture. Theoretical analysis and comparative simulations indicate that the learning algorithm tracks the best transducer from an arbitrarily large (possibly infinite) pool of models. We also present an application of the model for inducing a noun phrase recognizer.

1 Introduction

Supervised learning of probabilistic mappings between temporal sequences is an important goal of natural data analysis and classification, with a broad range of applications, including handwriting and speech recognition, natural language processing, and biological sequence analysis. Research efforts in supervised learning of probabilistic mappings have focused almost exclusively on estimating the parameters of a predefined model. For example, Giles et al. (1992) used a second-order recurrent neural network to induce a finite-state automaton that classifies input sequences, and Bengio and Frasconi (1994) introduced an input-output hidden Markov model (HMM) architecture that was used for similar tasks.

In this article, we introduce and analyze an alternative approach based on a mixture model of a new subclass of probabilistic transducers, which we call suffix tree transducers. Mixture models, often referred to as mixtures of experts, are a powerful approach both theoretically and experimentally. See DeSantis, Markowski, and Wegman (1988); Jacobs, Jordan, Nowlan, and Hinton (1991); Haussler and Barron (1993); Littlestone and Warmuth (1994); Cesa-Bianchi et al. (1993); and Helmbold and Schapire (1995) for analyses and applications of mixture models from different perspectives, such as connectionism, Bayesian inference, and computational learning theory. A key advantage of the mixture of experts architecture is the ability to attack problems by dividing them into simpler subproblems, whose solutions can be found efficiently and then combined to yield a solution to the original problem. Thus, the common approach used in learning complex mixture models is based on the principle of divide and conquer, which usually results in a greedy search for a good model (Breiman, Friedman, Olshen, & Stone,
1984; Quinlan, 1993) or in a hill-climbing algorithm that (locally) maximizes a utility function (Jacobs et al., 1991; Jordan & Jacobs, 1994). In this article we consider a restricted class of models, which enables us to devise an alternative learning approach. The alternative suggested is to calculate the evidence associated with each possible model in the mixture efficiently, rather than searching for a single model. By combining techniques used for compression and unsupervised learning (Willems, Shtarkov, & Tjalkens, 1995; Ron, Singer, & Tishby, 1996), we obtain an online algorithm that efficiently estimates the weights of all the possible models from an arbitrarily large (possibly infinite) pool of suffix tree transducers.

The article starts by defining the class of models used in this article for learning a mapping from input sequences to output sequences: suffix tree transducers. Informally, suffix tree transducers are prediction trees where the features used to determine the predicting node are suffixes of the input sequence. We then describe a mixture model of suffix tree transducers. The specific mixture model considered exploits the tree structure of the transducers by defining a hierarchy of trees where each tree in the mixture is a refinement of a smaller tree in the hierarchy. Although there might be a huge number of tree-based transducers in the pool of models, we show that by using a recursive prior, we can efficiently calculate the prediction of the entire mixture in time linear in the depth of the largest tree in the mixture. This prior can be viewed as a creation process where at a given node in the suffix tree, we choose either to step successively down the tree and create more nodes or stop and use the current node for predicting the outcome (the next symbol of the output sequence). Based on the recursive prior, we describe how to maintain, in a computationally efficient manner, posterior probabilities of the same recursive structure as the prior probability distribution. Furthermore, we show that this construction of posterior probabilities is not only efficient but also robust in the sense that the predictions of the mixture of suffix trees are almost as good as the predictions of the best model in the ensemble of models considered. Specifically, we prove that the extra loss incurred for the mixture depends only on the choice of the prior and does not grow with the number of examples.

We also look at the problem of estimating the output probability functions at the nodes. We present two online parameter estimation schemes: a simple smoothing technique for small alphabets and a more sophisticated mixture-based parameter estimation technique, which is suited for large alphabets. Both schemes are adaptive, enabling us to output predictions while estimating the parameters of all the transducers in the mixture. We show for both schemes that the extra loss incurred by the online estimation (and prediction) algorithm grows sublinearly in the number of examples. Combining the mixture-weighting technique for all possible suffix trees with the mixture-based parameter estimation scheme yields a flexible and powerful learning algorithm. Put another way, no hard decisions are made by
the learning algorithm, not even for the set of relevant parameters of each model. We conclude with simulations and experiments using natural data. The experimental results support the formal analysis and indicate that the learning algorithm is indeed able to track the best model in an arbitrarily large pool of models, yielding an accurate approximation of the source.

2 Mixtures of Suffix Tree Transducers

Let Σ_in and Σ_out be two finite alphabets. A suffix tree transducer T over (Σ_in, Σ_out) is a |Σ_in|-ary tree where every internal node of T has one child for each symbol in Σ_in. The nodes of the tree are associated with pairs (s, γ_s), where s is the string associated with the path (sequence of symbols in Σ_in) that leads from the root to that node, and γ_s : Σ_out → [0, 1] is an output probability function. A suffix tree transducer maps arbitrarily long input sequences over Σ_in to output sequences over Σ_out as follows. The probability that a suffix tree transducer T will output a symbol y ∈ Σ_out at time step n, denoted by P(y | x_1, . . . , x_n, T), is γ_{s_n}(y), where s_n is the string labeling the leaf reached by taking the path corresponding to x_n x_{n−1} x_{n−2} . . . starting at the root of T. We assume without loss of generality that the input sequence x_1, . . . , x_n was padded with a sufficiently long sequence . . . x_{−2} x_{−1} x_0 so that the leaf corresponding to s_n is well defined for any time step n ≥ 1. The probability that T will output a string y_1, . . . , y_n in Σ_out^n given an input string x_1, . . . , x_n in Σ_in^n, denoted by P(y_1, . . . , y_n | x_1, . . . , x_n, T), is therefore ∏_{k=1}^{n} γ_{s_k}(y_k). Note that only the leaves of T are used for prediction. A suffix tree transducer is thus a probabilistic mapping that induces a measure over the possible output strings given an input string. (A minimal code sketch of this prediction rule follows the notation below.)

A submodel T′ of a suffix tree transducer T is obtained by removing some of the internal nodes and leaves of T and using the leaves of the pruned tree T′ for prediction. An example of a suffix tree transducer and two of its possible submodels is given in Figure 1. In this article we consider transducers based on complete suffix trees. That is, each node in a given suffix tree is either a leaf or all of its children are also in the tree. For a suffix tree transducer T and a node s ∈ T, we use the following notation:

• The set of all possible complete subtrees of T (including T itself) is denoted by Sub(T).
• The set of leaves of T is denoted by Leaves(T).
• The set of subtrees T′ ∈ Sub(T) such that s ∈ Leaves(T′) is denoted by L_s(T).
• The set of subtrees T′ ∈ Sub(T) such that s ∈ T′ and s ∉ Leaves(T′) is denoted by I_s(T).
• The set of subtrees T′ ∈ Sub(T) such that s ∈ T′ is denoted by A_s(T).
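As a concrete illustration of the definitions above, here is a minimal Python sketch, not taken from the article, of a suffix tree transducer and the leaf-lookup prediction rule; the class layout and the example probabilities are illustrative.

```python
class Node:
    """A suffix tree transducer node: children are indexed by input symbols,
    and gamma is the output probability function at the node (only the
    gammas at leaves are used for prediction)."""
    def __init__(self, gamma=None, children=None):
        self.gamma = gamma or {}        # dict: output symbol -> probability
        self.children = children or {}  # dict: input symbol -> Node

def predict(root, x, n, y):
    """P(y | x_1..x_n, T): walk down along x_n, x_{n-1}, ... to a leaf s_n,
    then read off gamma_{s_n}(y). Assumes x is padded far enough back."""
    node, i = root, n - 1               # x is 0-indexed: x[i] holds x_{i+1}
    while node.children:
        node = node.children[x[i]]
        i -= 1
    return node.gamma[y]

def sequence_probability(root, x, y):
    """P(y_1..y_n | x_1..x_n, T): the product over k of gamma_{s_k}(y_k)."""
    p = 1.0
    for k in range(1, len(y) + 1):
        p *= predict(root, x, k, y[k - 1])
    return p

# An illustrative depth-1 transducer over Sigma_in = {0, 1}, Sigma_out = {a, b, c}:
T = Node(children={'0': Node({'a': 0.5, 'b': 0.25, 'c': 0.25}),
                   '1': Node({'a': 0.7, 'b': 0.2, 'c': 0.1})})
```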
[Figure 1 appears here: a binary suffix tree transducer with nodes labeled ε, 0, 1, 00, 10, 01, 11, 010, and 110, each annotated with an output probability function over {a, b, c}; for example, the root ε carries P(a) = 0.5, P(b) = 0.25, P(c) = 0.25.]
Figure 1: A suffix tree transducer over (Σ_in, Σ_out) = ({0, 1}, {a, b, c}) and two of its possible submodels (subtrees). The strings labeling the nodes are the suffixes of the input string used to predict the output string. At each node, there is an output probability function defined for each of the possible output symbols. For instance, using the full suffix tree transducer, the probability of observing the symbol b given that the input sequence is …010 is 0.1. The probability of the current output, when each transducer is associated with a weight (prior), is the weighted sum of the predictions of each transducer. For example, assume that the weights of the trees are 0.7 (full tree), 0.2 (large subtree), and 0.1 (small subtree); then the probability that the output y_n = a given that (x_{n−2}, x_{n−1}, x_n) = (0, 1, 0) is 0.7 · P(a | 010, T_1) + 0.2 · P(a | 10, T_2) + 0.1 · P(a | 0, T_3) = 0.7 · 0.8 + 0.2 · 0.7 + 0.1 · 0.5 = 0.75.
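To make the definitions concrete, here is a minimal Python sketch, ours rather than the paper's: it represents a suffix tree transducer as nested nodes and computes P(y | x_1, …, x_n, T) by walking the reversed input from the root to a leaf. The tree shape loosely mirrors Figure 1, but all identifiers and probability values are invented for illustration.

```python
# Illustrative sketch of a suffix tree transducer (not the paper's code).
# Each node stores an output distribution gamma_s; children are indexed
# by input symbols. Prediction follows the path x_n, x_{n-1}, ... from
# the root until a leaf is reached.

class Node:
    def __init__(self, gamma, children=None):
        self.gamma = gamma               # dict: output symbol -> probability
        self.children = children or {}   # dict: input symbol -> child Node

    def is_leaf(self):
        return not self.children

def predict(root, history, y):
    """P(y | history, T): walk x_n, x_{n-1}, ... down to a leaf."""
    node = root
    for x in reversed(history):
        if node.is_leaf():
            break
        node = node.children[x]
    return node.gamma[y]

# Toy transducer over Sigma_in = {0, 1}, Sigma_out = {'a', 'b', 'c'};
# the distributions below are invented for the example.
leaf_00 = Node({'a': 0.8, 'b': 0.1, 'c': 0.1})
leaf_10 = Node({'a': 0.7, 'b': 0.2, 'c': 0.1})
node_0 = Node({'a': 0.5, 'b': 0.2, 'c': 0.3}, {0: leaf_00, 1: leaf_10})
leaf_1 = Node({'a': 0.6, 'b': 0.3, 'c': 0.1})
root = Node({'a': 0.5, 'b': 0.25, 'c': 0.25}, {0: node_0, 1: leaf_1})

print(predict(root, [0, 1, 0], 'a'))  # reaches the node labeled "10": 0.7
```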
Whenever it is clear from the context, we will omit the dependency on T and denote the above sets as L_s, I_s, and A_s. Note that the above definitions imply that A_s = L_s ∪ I_s. For a given suffix tree transducer T, we are interested in the mixture of all possible submodels (subtrees) of T. We associate with each subtree (including T itself) a weight that can be interpreted as its prior probability. We later show how the learning algorithm for a mixture of suffix tree transducers adapts these weights in accordance with the performance (evidence, in Bayesian terms) of each subtree on past observations. Direct calculation of the mixture probability is infeasible since there might be an enormous
number of such subtrees. However, as shown subsequently, the technique introduced in Willems et al. (1995) can be generalized and applied to our setting.

Let T′ be a subtree of T. Denote by n_1 the number of internal nodes of T′ and by n_2 the number of leaves of T′ that are not leaves of T. For example, n_1 = 2 and n_2 = 1 for the small subtree of the suffix tree transducer depicted in Figure 1. We define the prior weight of a tree T′, denoted by P_0(T′), to be (1 − α)^{n_1} α^{n_2}, where α ∈ (0, 1). It can be easily verified by induction on the number of nodes of T that this definition of the weights is a proper measure, that is, ∑_{T′∈Sub(T)} P_0(T′) = 1. This distribution over suffix trees can be extended to trees of unbounded depth, assuming that T is an infinite |Σ_in|-ary suffix tree transducer. The prior weights can be interpreted as if they were created by the following recursive probabilistic process. We start with a suffix tree that includes only the root node. With probability α we stop the process, and with probability 1 − α we add all the possible |Σ_in| children of the node and continue the process recursively for each child. For a prior probability distribution over suffix trees of unbounded depth, the process ends when no new children are created. For bounded-depth trees, the recursive process stops if either no new children were created at a node or the node is a leaf of the maximal suffix tree T. A similar notion of recursive equations has been used before for the hierarchical mixtures of experts architecture (Jordan & Jacobs, 1994). In the context of suffix tree transducers, such a recursive formulation has several merits. First, as we show in the next section, it enables us to calculate the prediction of a mixture that includes a huge number of models, which cannot be computed directly by summing the contribution of each individual model in the mixture. Second, the self-similar structure of the prior enables the pool of models to grow on the fly as more data arrive, such that the current mixture is a refinement of its predecessor. Last, the recursive definition of the prior probability distribution implies that larger (and more complex) suffix trees are assigned a smaller prior probability. Thus, the influence of large suffix trees, which might overfit the data, is counterbalanced by their small initial prior probability, and the danger that the entire mixture of suffix trees would overfit the data is small (if not negligible).

Each node in the pool of suffix trees has two roles: it can be either a leaf used for prediction, or an internal node that simply resides on a path to one of its possible children. Thus, the weight α can be interpreted as the prior probability that the node is a leaf of any transducer T′ ∈ Sub(T), that is,

α = ∑_{T′∈L_s} P_0(T′) / ∑_{T′∈A_s} P_0(T′).   (2.1)

Therefore, for a bounded-depth transducer T, if s ∈ Leaves(T), then
A_s = L_s, so α = 1, and the recursive process stops with probability 1. In fact, we can associate a different prior probability α_s with each node s ∈ T. For the sake of brevity, we omit the dependency on the node s and assume that all the priors for internal nodes are equal, that is, ∀s ∉ Leaves(T): α_s = α.

Using the recursive prior over suffix tree transducers, we can calculate the prediction of the mixture of all T′ ∈ Sub(T) on the first output symbol y = y_1 as follows:

α γ_ε(y) + (1 − α)(α γ_{x_1}(y) + (1 − α)(α γ_{x_0 x_1}(y) + (1 − α)(α γ_{x_{−1} x_0 x_1}(y) + (1 − α) …))).   (2.2)

The calculation ends when a leaf of T is reached or when the beginning of the input sequence is encountered. Therefore, the prediction time for a single symbol is bounded by the maximal depth of T, or by the length of the input sequence if T is infinite. Let us assume from now on that T is finite. (A similar derivation holds when T is infinite.) Let γ̃_s(y) be the prediction propagating from a node s labeled by x_l, …, x_1 (we give a precise definition of γ̃_s in the next section), that is,

γ̃_s(y) = γ̃_{x_l,…,x_1}(y) = α γ_{x_l,…,x_1}(y) + (1 − α)( α γ_{x_{l−1},…,x_1}(y) + (1 − α)( α γ_{x_{l−2},…,x_1}(y) + (1 − α)( … + γ_{x_j,…,x_1}(y)) … )),   (2.3)

where x_j, …, x_1 is the label of a leaf in T. Let σ = x_{1−|s|}; then the above sum can be evaluated recursively as follows:

γ̃_s(y) = { γ_s(y) if s ∈ Leaves(T); α γ_s(y) + (1 − α) γ̃_{σs}(y) otherwise }.   (2.4)

The prediction of the whole mixture of suffix tree transducers on y is the prediction propagated from the root node ε, namely, γ̃_ε(y). For example, for the input sequence …0110, output symbol y = b, and α = 1/2, the predictions propagated from the nodes of the full suffix tree transducer of Figure 1 are γ̃_110(b) = 0.4, γ̃_10(b) = 0.5 · γ_10(b) + 0.5 · 0.4 = 0.3, γ̃_0(b) = 0.5 · γ_0(b) + 0.5 · 0.3 = 0.25, and γ̃_ε(b) = 0.5 · γ_ε(b) + 0.5 · 0.25 = 0.25.
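Equation 2.4 translates directly into a short recursion. The sketch below (ours, reusing the Node class from the previous snippet) computes γ̃ for the node at the head of the suffix path; calling it on the root yields the mixture prediction γ̃_ε(y).

```python
def mixture_predict(node, rev_suffix, y, alpha=0.5):
    """Equation 2.4: gamma_tilde_s(y) for the node s at the head of the
    suffix path; rev_suffix holds x_n, x_{n-1}, ... still to be consumed."""
    if node.is_leaf() or not rev_suffix:
        return node.gamma[y]              # leaf reached, or input exhausted
    deeper = mixture_predict(node.children[rev_suffix[0]],
                             rev_suffix[1:], y, alpha)
    return alpha * node.gamma[y] + (1 - alpha) * deeper

# The prediction of the whole pool is the value propagated to the root:
# mixture_predict(root, [x_n, x_{n-1}, ...], y).
```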
3 An Online Learning Algorithm

We now describe an efficient learning algorithm for the mixture of suffix tree transducers. The learning algorithm uses the recursive prior and the evidence to efficiently update the posterior probability of each possible transducer in the mixture. In this section, we assume that the output probability functions at the nodes are known. Hence, at each time step, we need to evaluate the following:

P(y_n | x_1, …, x_n) = ∑_{T′∈Sub(T)} P(y_n | x_1, …, x_n, T′) P_{n−1}(T′),   (3.1)

where P_n(T′) is the posterior weight of the suffix tree transducer T′ after n input-output pairs, (x_1, y_1), …, (x_n, y_n). Omitting the dependency on the input sequence for the sake of brevity, P_n(T′) equals

P_n(T′) = P_0(T′) ∏_{i=1}^{n} P(y_i | T′) / ∑_{T″∈Sub(T)} P_0(T″) ∏_{i=1}^{n} P(y_i | T″)
 = P_0(T′) P(y_1, …, y_n | T′) / ∑_{T″∈Sub(T)} P_0(T″) P(y_1, …, y_n | T″).   (3.2)

Direct evaluation of the above is again infeasible because there is an enormous number of submodels of T. However, using the technique of recursive evaluation as in equation 2.4, we can efficiently calculate the prediction of the mixture. Let s be a node in T that is reached at time step n (s = x_{n−|s|+1}, …, x_n). We denote by 1 ≤ i_1 < i_2 < ⋯ < i_l = n the time steps at which the node s was reached when observing a given sequence of n input-output pairs. Similar to the definition of the recursive prior α, we define q_n(s) to be the posterior probability (on the subsequences reaching s) that the node s is a leaf. Analogously to the interpretation of the prior as a ratio of probabilities of suffix trees, q_n(s) is the ratio between the weighted predictions of all subtrees for which the node s is a leaf and the weighted predictions of all subtrees that include s. That is,

q_n(s) = ∑_{T′∈L_s} P_0(T′) P(y_{i_1}, …, y_{i_l} | T′) / ∑_{T′∈A_s} P_0(T′) P(y_{i_1}, …, y_{i_l} | T′).   (3.3)

We can now compute the predictions of the nodes along the path defined by x_n x_{n−1} x_{n−2} … simply by replacing the prior weight α with the posterior weight q_{n−1}(s),

γ̃_s(y_n) = { γ_s(y_n) if s ∈ Leaves(T); q_{n−1}(s) γ_s(y_n) + (1 − q_{n−1}(s)) γ̃_{σs}(y_n) otherwise },   (3.4)

where σ = x_{n−|s|}. We show in theorem 1 that γ̃_s is the following weighted prediction of all the suffix tree transducers that include the node s:

γ̃_s(y_n) = ∑_{T′∈A_s} P_0(T′) P(y_{i_1} … y_{i_l} | T′) / ∑_{T′∈A_s} P_0(T′) P(y_{i_1} … y_{i_{l−1}} | T′).   (3.5)
In order to update q_n(s), we introduce one more variable, which we denote by r_n(s). Setting r_0(s) = log(α/(1 − α)) for all s, r_n(s) is updated as follows:

r_n(s) = r_{n−1}(s) + log(γ_s(y_n)) − log(γ̃_{σs}(y_n)).   (3.6)

Based on the definition of q_n(s), r_n(s) is the log-likelihood ratio between the weighted predictions of the subtrees for which s is a leaf and the weighted predictions of the subtrees for which s is an internal node. The new posterior weights q_n(s) are calculated from r_n(s) using a sigmoid function:

q_n(s) = 1 / (1 + e^{−r_n(s)}).   (3.7)
For a node s′ that is not reached at time step n, we simply define r_n(s′) = r_{n−1}(s′) and q_n(s′) = q_{n−1}(s′). In summary, for each new observation pair, we traverse the tree by following the path that corresponds to the input sequence x_n x_{n−1} x_{n−2} …. The predictions at the nodes along the path are calculated using equation 3.4. Given these predictions, the posterior probabilities of being a leaf are updated for each node s along the path using equations 3.6 and 3.7. Finally, the probability of y_n induced by the whole mixture is the probability propagated from the root node, as stated by the following theorem. (The proof of the theorem is given in the appendix.)

Theorem 1.  γ̃_ε(y_n) = ∑_{T′∈Sub(T)} P(y_n | T′) P_{n−1}(T′).
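Equations 3.4, 3.6, and 3.7 define one complete online update step. The sketch below is our illustrative rendering, not the author's code; it reuses the Node class from the section 2 snippet and assumes every non-leaf node carries an extra field r initialized to r_0(s) = log(α/(1 − α)).

```python
import math

def sigmoid(r):
    return 1.0 / (1.0 + math.exp(-r))      # equation 3.7

def online_step(node, rev_suffix, y):
    """Returns gamma_tilde_s(y_n) (eq. 3.4) for the subtree rooted at s
    and updates r_n(s) (eq. 3.6) for every node on the suffix path."""
    if node.is_leaf() or not rev_suffix:
        return node.gamma[y]                # a leaf always predicts gamma_s
    deeper = online_step(node.children[rev_suffix[0]], rev_suffix[1:], y)
    q = sigmoid(node.r)                     # q_{n-1}(s), before the update
    pred = q * node.gamma[y] + (1 - q) * deeper
    node.r += math.log(node.gamma[y]) - math.log(deeper)   # eq. 3.6
    return pred

# Nodes off the path are never touched, so r_n(s') = r_{n-1}(s') for them,
# exactly as prescribed above; the value returned at the root is the
# mixture probability of y_n (theorem 1).
```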
We now discuss the performance of a mixture of suffix tree transducers. Let Loss_n(T) be the negative log-likelihood of a suffix tree transducer T achieved on n input-output pairs:

Loss_n(T) = −∑_{i=1}^{n} log_2(P(y_i | T)).   (3.8)
Similarly, the loss of the mixture is defined to be

Loss_n^mix = −∑_{i=1}^{n} log_2( ∑_{T′∈Sub(T)} P(y_i | T′) P_{i−1}(T′) ) = −∑_{i=1}^{n} log_2(γ̃_ε(y_i)).   (3.9)
The advantage of using a mixture of suffix tree transducers over a single suffix tree is due to the robustness of the solution, in the sense that the predictions of the mixture are almost as good as the predictions of the best suffix tree in the mixture, as shown in the following theorem.
Theorem 2. Let T be a (possibly infinite) suffix tree transducer, and let (x_1, y_1), …, (x_n, y_n) be any possible sequence of input-output pairs. The loss of the mixture is at most

min_{T′∈Sub(T)} { Loss_n(T′) − log_2(P_0(T′)) }.

The running time of the algorithm is Dn, where D is the maximal depth of T, or (1/2)(n + 1)² when T is of unbounded depth.

The proof of theorem 2 is given in the appendix. Note that the extra loss incurred for the mixture is fixed and does not grow with the number of examples. Moreover, theorem 2 implies that if the best model changes with time, then the learning algorithm implicitly tracks such a change at an additional cost that depends only on the complexity, that is, the prior probability, of the (new) best model.
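To get a feel for the size of the extra loss term in theorem 2, note that under the recursive prior of section 2 the fixed cost −log_2(P_0(T′)) is just a weighted node count. A small illustrative computation (the function name and arguments are ours):

```python
import math

def extra_loss_bits(n1, n2, alpha=0.5):
    """-log2(P0(T')) for a subtree with n1 internal nodes and n2 leaves
    that are not leaves of T, where P0(T') = (1-alpha)**n1 * alpha**n2."""
    return -(n1 * math.log2(1.0 - alpha) + n2 * math.log2(alpha))

# Small subtree of Figure 1 (n1 = 2, n2 = 1): a fixed cost of 3 bits over
# the loss of that subtree, independent of the sequence length n.
print(extra_loss_bits(2, 1))   # 3.0
```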
and let K = |6out |. A commonly used estimator approximates the output probability function at time step n + 1 by adding a constant δ to each count cn (y), γ (y) ≈ γˆ n (y) = P
cn (y) + δ cn (y) + δ = . 0 n+δK y0 ∈6out cn (y ) + δ K
(4.1)
The estimator that achieves the best likelihood on a given sequence of length n, called the maximum likelihood estimator (MLE), is attained for δ = 0: γML (y) =
cn (y) . n
(4.2)
The MLE is evaluated in batch, after the entire sequence is observed, while the online estimator defined by equation 4.1 can be used on the fly as new symbols arrive. The special case of δ = 1/2 is called Laplace's modified rule
of succession or the add-1/2 estimator. Krichevsky and Trofimov (1981) proved that the likelihood attained by the add-1/2 estimator is almost as good as the likelihood achieved by the maximum likelihood estimator. Applying the bound of Krichevsky and Trofimov in our setting results in the following bound on the predictions of the add-1/2 estimator:

−∑_{i=1}^{n} log_2(γ̂_{i−1}(y_i)) = −∑_{i=1}^{n} log_2( (c_{i−1}(y_i) + 1/2) / (i − 1 + K/2) )
 ≤ −∑_{i=1}^{n} log_2( c_n(y_i)/n ) + (K − 1)( (1/2) log_2(n) + 1 ),   (4.3)

where log_2(c_n(y_i)/n) = log_2(γ_ML(y_i)).
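For reference, the add-δ estimator of equation 4.1 with δ = 1/2 takes only a few lines; the class below is an illustrative sketch (the names are ours).

```python
from collections import Counter

class AddHalf:
    """Online add-1/2 estimator (equation 4.1 with delta = 1/2)."""
    def __init__(self, alphabet):
        self.K = len(alphabet)
        self.counts = Counter()
        self.n = 0

    def predict(self, y):                  # gamma_hat_n(y)
        return (self.counts[y] + 0.5) / (self.n + 0.5 * self.K)

    def observe(self, y):
        self.counts[y] += 1
        self.n += 1

est = AddHalf("abc")
print(est.predict("a"))   # (0 + 0.5) / (0 + 1.5) = 1/3
est.observe("a")
print(est.predict("a"))   # (1 + 0.5) / (1 + 1.5) = 0.6
```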
For a mixture of suffix tree transducers, we use the add-1/2 estimator at each node separately. To do so, we simply need to store counts of the number of appearances of each symbol at each node of the maximal suffix tree transducer T. Let T′ ∈ Sub(T) be any suffix tree transducer from the pool of possible submodels whose parameters are set so that it achieves the maximum likelihood on (x_1, y_1), …, (x_n, y_n), and let T̂′ be the same suffix tree transducer but with parameters adaptively estimated using the add-1/2 estimator. Denote by n_s the number of times a node s ∈ Leaves(T′) was reached on the input-output sequence, and let l(T′) = |Leaves(T′)| be the total number of leaves of T′. From theorem 2 we get

Loss_n^mix ≤ Loss_n(T̂′) − log_2(P_0(T̂′)).   (4.4)

From the bound of equation 4.3 on the add-1/2 estimator we get

Loss_n(T̂′) ≤ Loss_n(T′) + ∑_{s∈Leaves(T′)} ( (1/2)(K − 1) log_2(n_s) + K − 1 )
 = Loss_n(T′) + l(T′)(K − 1) + (l(T′)/2)(K − 1) ∑_s (1/l(T′)) log_2(n_s)
 ≤ Loss_n(T′) + l(T′)(K − 1) + (l(T′)/2)(K − 1) log_2( ∑_s n_s / l(T′) )
 = Loss_n(T′) + l(T′)(K − 1)( (1/2) log_2(n/l(T′)) + 1 ),   (4.5)

where we used the concavity of the logarithm function to obtain the last inequality. Finally, combining equations 4.4 and 4.5, we get

Loss_n^mix ≤ Loss_n(T′) − log_2(P_0(T′)) + l(T′)(K − 1)( (1/2) log_2(n/l(T′)) + 1 ).   (4.6)
The above bound holds for any T′ ∈ Sub(T); thus we obtain the following corollary:

Corollary 3. Let T be a (possibly infinite) suffix tree transducer, and let (x_1, y_1), …, (x_n, y_n) be any possible sequence of input-output pairs. The loss of the mixture is at most

min_{T′∈Sub(T)} { Loss_n(T′) − log_2(P_0(T′)) + l(T′)(K − 1)( (1/2) log_2(n/l(T′)) + 1 ) },

where the parameters of each T′ ∈ Sub(T) are set so as to maximize the likelihood of T′ on the entire input-output sequence. The running time of the algorithm is K·D·n, where D is the maximal depth of T, or (K/2)(n + 1)² when T is of unbounded depth.
Note that the above corollary holds for any n. Therefore, the mixture model tracks the best transducer, even though that transducer and its parameters change in time. The additional cost we need to pay for not knowing the transducer's structure and parameters ahead of time grows sublinearly with the length of the input-output sequence, like O(log(n)). Normalizing this additional cost term by the length of the sequence, we get that the predictions of the adaptively estimated mixture lag behind the predictions of the best transducer by an additive factor that behaves like C log(n)/n (which tends to 0), where the constant C depends on the total number of parameters of the best transducer.

When K is large, the smoothing of the output probabilities using the add-1/2 estimator is too crude. Furthermore, in many real problems, only a small subset of the full output alphabet is observed in a given context (a node in the tree). For example, when mapping phonemes to phones (Riley, 1991), for a given sequence of input phonemes, the possible phones that can be pronounced are limited to a few possibilities (typically, two to four). Therefore, we would like to devise an estimation scheme whose performance depends on the actual local (context-dependent) alphabet, not on the full alphabet. Such an estimation scheme can be devised by employing a mixture of models, one model for each possible subset Σ′_out of Σ_out. Although there are 2^K possible subsets of Σ_out, we now show that if the estimation technique depends only on the size of each subset, then the whole mixture can be maintained in time linear in K.

As before, we omit the dependency on the node and treat the subsequence that reaches a node as an individual sequence. Denote by γ̂_n(y | Σ′_out) the estimate of γ(y) after n observations assuming that the actual alphabet is
Σ′_out. Using the add-1/2 estimator, we get

γ̂_n(y | Σ′_out) = γ̂_n(y | |Σ′_out| = i) = (c_n(y) + 1/2) / (n + i/2).   (4.7)
For the sake of brevity, we simply denote γ̂_n(y | |Σ′_out| = i) as γ̂_n(y | i). Let Σ_out^n be the set of different output symbols that were observed at a node, that is,

Σ_out^n = { σ | σ ∈ Σ_out, σ = y_i, 1 ≤ i ≤ n }.

We define Σ_out^0 to be the empty set and let K_n = |Σ_out^n|. Since each possible alphabet Σ′_out of size i = |Σ′_out| should contain all symbols in Σ_out^n, there are exactly C(K − K_n, i − K_n) possible alphabets of size i. Let w_i^n be the posterior probability (after n observations) of any alphabet of size i. The prediction of the mixture of all possible subsets Σ′_out such that Σ_out^n ⊆ Σ′_out ⊆ Σ_out is

γ̂_n(y) = ∑_{j=K_n}^{K} C(K − K_n, j − K_n) w_j^n γ̂_n(y | j).   (4.8)
Evaluation of this sum requires O(K) operations (and not O(2^K)). Furthermore, we can compute equation 4.8 in an online fashion as follows. Let W_i^n be the (prior-weighted) likelihood of all alphabets of size i:

W_i^n = ∑_{|Σ′_out|=i} w_i^0 ∏_{k=1}^{n} γ̂_{k−1}(y_k | Σ′_out) = C(K − K_n, i − K_n) w_i^0 ∏_{k=1}^{n} γ̂_{k−1}(y_k | i).   (4.9)
Without loss of generality, let us assume that all alphabets are equally likely a priori, that is, P_0(Σ′_out) = 1/2^K. Therefore, W_i^0 = C(K, i)/2^K. (One can easily define and use nonuniform priors. For instance, the assumption that all alphabet sizes are equally likely a priori and all alphabets of the same size have the same prior probability results in a prior that favors small and large alphabets. Such a prior is used in the example in Figure 2.) The weights W_i^n can be updated incrementally depending on the following cases. If the number of different symbols observed so far exceeds a given size (|Σ_out^n| > i), then all alphabets of this size are too small and should be eliminated from the mixture by setting their posterior probability to zero. Otherwise, if the next symbol was observed before (y_{n+1} ∈ Σ_out^n), the output probability is approximated by the add-1/2 estimator. Last, if the next symbol is entirely new
(y_{n+1} ∉ Σ_out^n), we need to rescale W_i^n for i ≥ |Σ_out^n| for the following reason. Since y_{n+1} ∉ Σ_out^n, we have Σ_out^{n+1} = Σ_out^n ∪ {y_{n+1}}, hence K_{n+1} = K_n + 1 and

C(K − K_{n+1}, i − K_{n+1}) = ((i − K_n)/(K − K_n)) C(K − K_n, i − K_n).

Hence, the number of possible alphabets of size i has decreased by a factor of (i − K_n)/(K − K_n), and each weight W_i^{n+1} (i ≥ K_{n+1}) has to be multiplied by this factor in addition to the prediction of the add-1/2 estimator on y_{n+1}. The above three cases can be summarized as an update rule of W_i^{n+1} from W_i^n as follows:

W_i^{n+1} = W_i^n × { 0 if K_{n+1} > i;
  (c_n(y_{n+1}) + 1/2)/(n + i/2) if K_{n+1} ≤ i and y_{n+1} ∈ Σ_out^n;
  ((i − K_n)/(K − K_n)) · (1/2)/(n + i/2) if K_{n+1} ≤ i and y_{n+1} ∉ Σ_out^n }.   (4.10)
Applying Bayes rule for the mixture of all possible subsets of the output alphabet, we get

γ̂_n(y_{n+1}) = ∑_{i=1}^{K} W_i^{n+1} / ∑_{i=1}^{K} W_i^n = ∑_{i=K_n}^{K} W_i^{n+1} / ∑_{i=K_n}^{K} W_i^n.   (4.11)

Let W̃_i^n be the posterior probability of all possible alphabets of size i, that is, W̃_i^n = W_i^n / ∑_j W_j^n. Let P_i^n(y_{n+1}) = W_i^{n+1}/W_i^n be the sum of the predictions of all possible alphabets of size i on y_{n+1}. Thus, the prediction of the mixture γ̂_n(y_{n+1}) can be rewritten as

γ̂_n(y_{n+1}) = ∑_{i=1}^{K} W_i^{n+1} / ∑_{i=1}^{K} W_i^n = ∑_{i=1}^{K} W̃_i^n P_i^n(y_{n+1}).
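For concreteness, the following sketch implements the subset-mixture estimator of equations 4.7 through 4.11. It is our illustrative rendering, not the paper's code: all identifiers are ours, index 0 of W is unused, and numerical underflow (which a practical implementation would counter, e.g., by renormalizing the weights) is ignored.

```python
from math import comb

class SubsetMixture:
    """Sketch of the alphabet-subset mixture (equations 4.7-4.11)."""
    def __init__(self, K):
        self.K = K
        # Uniform prior over subsets: W_i^0 = C(K, i) / 2**K.
        self.W = [comb(K, i) / 2 ** K for i in range(K + 1)]
        self.counts = {}   # c_n(y) for observed symbols; len() gives K_n
        self.n = 0

    def _addhalf(self, y, i):              # gamma_hat_n(y | i), eq. 4.7
        return (self.counts.get(y, 0) + 0.5) / (self.n + i / 2)

    def _next_weight(self, y, i):
        """W_i^{n+1} assuming y is the next symbol (eq. 4.10)."""
        Kn = len(self.counts)
        Kn1 = Kn if y in self.counts else Kn + 1
        if Kn1 > i:
            return 0.0                                   # alphabet too small
        if y in self.counts:
            return self.W[i] * self._addhalf(y, i)       # symbol seen before
        rescale = (i - Kn) / (self.K - Kn)               # entirely new symbol
        return self.W[i] * rescale * self._addhalf(y, i)

    def predict(self, y):                  # eq. 4.11
        num = sum(self._next_weight(y, i) for i in range(1, self.K + 1))
        den = sum(self.W[1:])
        return num / den

    def observe(self, y):
        self.W = [0.0] + [self._next_weight(y, i)
                          for i in range(1, self.K + 1)]
        self.counts[y] = self.counts.get(y, 0) + 1
        self.n += 1
```

Before any observation the class predicts 1/K for every symbol, as the uniform prior dictates; as symbols accumulate, the weights of alphabets smaller than the observed set drop to zero, mirroring the experiment illustrated in Figure 2.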
In Figure 2 we give an illustration of the values obtained for W̃_i^n in the following experiment. We simulated a multinomial source whose output symbols belong to an alphabet of size |Σ_out| = 10 and set the probabilities of observing any of the last five symbols to zero. Therefore, the actual alphabet is of size 5. The posterior probabilities W̃_i^n (1 ≤ i ≤ 10) were calculated after each iteration. These probabilities are plotted left to right and top to bottom. The first observations rule out alphabets of size smaller than 5, setting their posterior probability to zero. After a few observations, the posterior probability is concentrated around the actual size, yielding an accurate and adaptive estimate of the actual alphabet size of the source.
Figure 2: An illustration of adaptive estimation of the alphabet size for a multinomial source with a large number of possible outcomes when only a subset of the full alphabet is actually observed. In this illustration, examples were randomly drawn from an alphabet of size 10, but only five symbols have a nonzero probability. The posterior probability concentrates on the right size after fewer than 20 iterations. The initial prior used in this example assigns an equal probability to the different alphabet sizes.
Let γ_ML denote again the MLE of γ evaluated after n input-output observation pairs:

γ_ML(y) = c_n(y)/n = { c_n(y)/n if y ∈ Σ_out^n; 0 if y ∉ Σ_out^n }.   (4.12)
We can now apply the same technique used to bound the loss of a mixture of transducers to bound the loss of a mixture of all possible subsets of the output alphabet. First, we bound the likelihood of the mixture of add-1/2 estimators based on all possible subsets of Σ_out. We do so by comparing the mixture's likelihood to the likelihood achieved by a single add-1/2 estimator that uses the "right" size (i.e., the estimator that uses the final alphabet, Σ_out^n, for prediction):

−∑_{i=1}^{n} log_2(γ̂_{i−1}(y_i)) ≤ −∑_{i=1}^{n} log_2(γ̂_{i−1}(y_i | Σ_out^n)) − log_2(P_0(Σ_out^n))
 = −∑_{i=1}^{n} log_2(γ̂_{i−1}(y_i | Σ_out^n)) + log_2(2^K).   (4.13)
Using the bound on the likelihood of the add-1/2 estimator from equation 4.3, we get

−∑_{i=1}^{n} log_2(γ̂_{i−1}(y_i | Σ_out^n)) ≤ −∑_{i=1}^{n} log_2(γ_ML(y_i)) + (K_n − 1)( (1/2) log_2(n) + 1 ).   (4.14)
Combining the bounds from equations 4.13 and 4.14, we obtain a bound on the predictions of the mixture with online-estimated parameters relative to the predictions of the maximum likelihood model:

−∑_{i=1}^{n} log_2(γ̂_{i−1}(y_i)) ≤ −∑_{i=1}^{n} log_2(γ_ML(y_i)) + (K_n − 1)( (1/2) log_2(n) + 1 ) + K.   (4.15)
It is easy to verify that if n ≥ 2^{2K/(K−K_n)}, which is the case when the observed alphabet is small (K_n ≪ K), then a mixture of add-1/2 predictors, one for each possible subset of the alphabet, achieves a better performance than a single add-1/2 predictor based on the full alphabet.

Applying the online mixture estimation technique twice, first for the mixture of all possible submodels of a suffix tree transducer T and then for the parameters of each model in the mixture, yields an accurate and efficient online learning algorithm for both the structure and the parameters of the source. As before, let T′ ∈ Sub(T) be a suffix tree transducer whose parameters are set so as to maximize the likelihood of T′ on the entire input-output sequence; l(T′) is again the number of leaves of T′, and n_s the number of times a node s ∈ Leaves(T′) was reached on a sequence of n input-output pairs. Denote by m_s the number of nonzero parameters of the output probability function of T′ at node s. Note that for a given node s and for all n′ ≤ n, the value K_{n′} for that node satisfies K_{n′} ≤ m_s. Denote by m(T′) = ∑_{s∈Leaves(T′)} m_s the total number of nonzero parameters of the maximum likelihood transducer T′. Combining the bound on the mixture of transducers from theorem 2 with the bound on the mixture of parameters from equation 4.15, while following the same steps used to derive equation 4.6, we get

Loss_n^mix ≤ Loss_n(T′) − log_2(P_0(T′)) + ∑_{s∈Leaves(T′)} [ (m_s − 1)( (1/2) log_2(n_s) + 1 ) + K ]
 ≤ Loss_n(T′) − log_2(P_0(T′)) + l(T′)(K − 1) + m(T′) + (1/2) ∑_{s∈Leaves(T′)} (m_s − 1) log_2(n_s).   (4.16)

Now,

∑_s (m_s − 1) log_2(n_s) = ∑_s (m_s − 1) log_2( n_s/(m_s − 1) ) + ∑_s (m_s − 1) log_2(m_s − 1)
 ≤ ∑_s (m_s − 1) log_2( n_s/(m_s − 1) ) + m(T′) log_2(K).

Using the log-sum inequality (Cover & Thomas, 1991), we get

∑_s (m_s − 1) log_2( n_s/(m_s − 1) ) ≤ ( ∑_s m_s ) log_2( ∑_s n_s / ∑_s (m_s − 1) ) = m(T′) log_2( n/(m(T′) − l(T′)) ).

Therefore,

Loss_n^mix ≤ Loss_n(T′) − log_2(P_0(T′)) + l(T′)(K − 1) + m(T′) log_2(2K) + (1/2) m(T′) log_2( n/(m(T′) − l(T′)) ).   (4.17)
As before, the above bound holds for any T′ ∈ Sub(T). Therefore, the mixture model again tracks the best transducer in the pool, but the additional cost for specifying the parameters is smaller. Put another way, the predictions of a mixture of suffix tree transducers, each with parameters estimated using a mixture of all possible alphabets, lag behind the predictions of the maximum likelihood model by an additive factor that again scales like C log(n)/n; but the constant C now depends on the total number of nonzero parameters of the maximum likelihood model. Using efficient data structures to maintain the pointers to all the children of a node, both the time and the space complexity of the algorithm are O(Dn|Σ_out|) (O(n²|Σ_out|) for unbounded-depth transducers). Furthermore, if the model is kept fixed after the training phase, we can evaluate the final estimate of the output functions once for each node, and the time complexity during a test (prediction-only) phase reduces to O(D) per observation.
Figure 3: (Left) Performance comparison of the predictions of a single model and two mixture models. The y-axis is the normalized negative log-likelihood, where the base of the log is |Σ_out|. (Right) The relative weights of three transducers in the mixture.
5 Evaluation and Applications

In this section we present evaluation results of the model and its learning algorithm on artificial data. We also discuss and present results obtained from learning the syntactic structure of noun phrases. In all the experiments described in this section, α was set to 1/2. The low time and space complexity of the learning algorithm enables us to evaluate the algorithm on millions of input-output pairs in a few minutes. For example, the average update time for a mixture of suffix tree transducers of maximal depth 10 when |Σ_out| = 4 is about 0.2 millisecond on an SGI Indy 200 MHz workstation.

Results of the online predictions of the various estimation schemes are shown on the left-hand side of Figure 3. In this example, Σ_out = Σ_in = {1, 2, 3, 4}. The description of the source is as follows. If x_n ≥ 3, then y_n is uniformly distributed over Σ_out; otherwise (x_n ≤ 2), y_n = x_{n−5} with probability 0.9 and y_n = 5 − x_{n−5} with probability 0.1. The input sequence x_1 x_2 … was created entirely at random. This source can be implemented by a sparse suffix tree transducer of maximal depth 5 (a small code sketch of the source follows this paragraph). Note that the actual size of the alphabet is only 2 at half of the leaves of that tree. We used a mixture of suffix tree transducers of maximal depth 20 to learn the source. The normalized negative log-likelihood values are given for (1) a mixture model of suffix tree transducers with parameters estimated using a mixture of all subsets of the output alphabet, (2) a mixture model of the possible suffix tree transducers with parameters estimated using the add-1/2 scheme, and (3) a single model of depth 8 whose parameters are estimated using the add-1/2 scheme (note that the source creating the examples can be embedded in this model).
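The artificial source just described is easy to reproduce. The following is a minimal sketch of the data generator under our reading of the description above; the function name, seeding, and padding are our own choices.

```python
import random

def generate_pairs(n, seed=0):
    """Sample n input-output pairs from the artificial source: y_n is
    uniform over {1,2,3,4} when x_n >= 3; otherwise y_n = x_{n-5} with
    probability 0.9 and y_n = 5 - x_{n-5} with probability 0.1."""
    rng = random.Random(seed)
    xs = [rng.randint(1, 4) for _ in range(n + 5)]   # pad so x_{n-5} exists
    pairs = []
    for t in range(5, n + 5):
        if xs[t] >= 3:
            y = rng.randint(1, 4)
        else:
            y = xs[t - 5] if rng.random() < 0.9 else 5 - xs[t - 5]
        pairs.append((xs[t], y))
    return pairs
```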
After a relatively small number of examples, the predictions of the mixture model converge to those of the source, much faster than the predictions of the single model. Furthermore, applying the mixture estimation for both the structure and the parameters significantly accelerates the convergence. In fewer than 10,000 examples, the cross-entropy between the mixture model and the source is less than 0.01 bit; even after 50,000 examples, the cross-entropy between the source and a single model is about 0.1 bit. On the right-hand side of Figure 3, we show how the mixture weights evolve as a function of the number of examples. At the beginning, the weights are distributed among all models, where the weight of a model is inversely proportional to its size. When there are only a few examples, the estimated parameters are inaccurate, and the predictions of all models are rather poor. Hence, the posterior weights remain close to the initial prior weights. As more data arrive, the predictions of the transducers with a large enough structure (i.e., the source itself and larger trees in which the source can be embedded) become more and more accurate. Thus, the sum of their weights grows at the expense of the small models. To illustrate this, we plot in Figure 3 the relative weight as a function of the number of examples for three transducers in the mixture. The first, T_S, is based on a suffix tree that is strictly contained in the suffix tree transducer that generated the examples. Thus, this transducer is too small to approximate the source accurately. The second, T_C, is the transducer generating the examples, and the third, T_L, is a transducer based on a large suffix tree in which the true source can be embedded. The predictions of the small model are consistently wrong, and its relative weight decreases to almost zero after fewer than 10,000 input-output pairs. Eventually the parameters of T_C and T_L become accurate, and their predictions are essentially the same. Thus, their relative weights converge to the relative prior weights: P_0(T_C)/(P_0(T_C) + P_0(T_L)) and P_0(T_L)/(P_0(T_C) + P_0(T_L)).

As mentioned in the introduction, there is a broad range of applications of probabilistic transducers. Here we briefly demonstrate and discuss how to induce an English noun phrase recognizer based on a suffix tree transducer. Recognizing noun phrases is an important task in automatic natural text processing, for applications such as information retrieval, translation tools, and data extraction from texts. A common practice is to recognize noun phrases by first analyzing the text with a part-of-speech tagger, which assigns the appropriate part of speech (verb, noun, adjective, etc.) to each word in context. Noun phrases are then identified by manually defined regular expression patterns that are matched against the part-of-speech sequences. An alternative approach is to learn a mapping from the part-of-speech sequence to an output sequence that identifies the words that belong to a noun phrase (Church, 1988). We followed the second approach by building a suffix tree transducer based on a labeled data set from the Penn Tree Bank corpus. We defined Σ_in to be the set of possible part-of-speech tags and set Σ_out = {0, 1}, where an output symbol given its corresponding input
Table 1: Extraction of Noun Phrases Using a Suffix Tree Transducer.

Sentence            Tom   Smith  ,     group  chief  executive  of    U.K.  metals
Part-of-speech tag  PNP   PNP    ,     NN     NN     NN         IN    PNP   NNS
Class               1     1      0     1      1      1          0     1     1
Prediction          0.99  0.99   0.01  0.98   0.98   0.98       0.02  0.99  0.99

Sentence            and   industrial  materials  ,     will  become  chairman  .
Part-of-speech tag  CC    JJ          NNS        ,     MD    VB      NN        .
Class               1     1           1          0     0     0       1         0
Prediction          0.67  0.96        0.99       0.03  0.03  0.01    0.87      0.01

Notes: In this typical example, two long noun phrases were identified correctly with high confidence. PNP = possessive nominal pronoun; NN = singular or mass noun; IN = preposition; NNS = plural noun; CC = coordinating conjunction; JJ = adjective; MD = modal verb; VB = verb, base form.
sequence (the part-of-speech tags of the words constituting the sentence) is 1 if and only if the word is part of a noun phrase. We used 250,000 marked tags and tested the performance on 37,000 tags. The performance of the suffix-tree-transducer-based tagger is tested by freezing the model structure, the mixture weights, and the estimated parameters. The suffix tree transducer was of maximal depth 15; hence very long phrases could potentially be identified. By comparing the predictions of the transducer to a threshold (1/2), we labeled each tag in the test data as either a member of a noun phrase or a filler. An example of the prediction of the transducer and the resulting classification is given in Table 1. We found that 2.4 percent of the words were labeled incorrectly. For comparison, when a single (zero-depth) suffix tree transducer is used to classify words, the error rate is about 15 percent. It is impractical to compare the performance of our transducer-based tagger to previous works due to the many differences in the training and test data sets used to construct the different systems. Nevertheless, the low error rate achieved by the mixture of suffix tree transducers and the efficiency of its learning algorithm suggest that this approach might be a powerful tool for language processing. One of our future research goals is to investigate methods for incorporating linguistic knowledge through the set of priors.

Appendix

Proof of Theorem 1. Let s be any node in the full suffix tree T. To remind the reader, we denote by 1 ≤ i_1 < i_2 < ⋯ < i_l = n the time steps at which the node s was reached when observing a given sequence of n input-output
pairs. Define

L_s(n) = ∑_{T′∈L_s} P_0(T′) P(y_{i_1} … y_{i_l} | T′),
I_s(n) = ∑_{T′∈I_s} P_0(T′) P(y_{i_1} … y_{i_l} | T′),
A_s(n) = ∑_{T′∈A_s} P_0(T′) P(y_{i_1} … y_{i_l} | T′).
Note that since A_s = L_s ∪ I_s, we have A_s(n) = L_s(n) + I_s(n). Also, if s was reached at time step n (i_l = n), then

L_s(n) = ∑_{T′∈L_s} P_0(T′) P(y_{i_1} … y_{i_l} | T′) = ∑_{T′∈L_s} P_0(T′) P(y_{i_1} … y_{i_{l−1}} | T′) γ_s(y_n),   (A.1)

and therefore

γ_s(y_n) = L_s(n)/L_s(n − 1).   (A.2)

For boundary conditions we define ∀s ∈ Leaves(T): r_n(s) = ∞, ∀s: A_s(0) = A_s(−1) = 1, and γ_s(y_0) = 1. Let the depth of a node be the node's maximal distance from any of the leaves, and denote it by d. We now prove the following two properties of the algorithm by induction on n and d.

Property A: For all nodes s ∈ T: r_n(s) = log(L_s(n)/I_s(n)).

Property B: For any node s ∈ T reached at time step n: γ̃_s(y_n) = A_s(n)/A_s(n − 1).

First note that if a node was not reached at time n, then L_s(n) = L_s(n − 1) and I_s(n) = I_s(n − 1), and property A holds based on the induction hypotheses for n − 1. Second, if a node s was reached at time step n, then the following also holds based on the induction hypotheses:

q_n(s) = 1/(1 + e^{−r_n(s)}) = 1/(1 + I_s(n)/L_s(n)) = L_s(n)/(L_s(n) + I_s(n)) = L_s(n)/A_s(n).   (A.3)
We now show that the base of the induction hypotheses is true. Property A holds by definition for d = 0 (s ∈ Leaves(T)) and for all n. Since ∀s ∈ Leaves(T), A_s(n) = L_s(n), from equation A.2 we get that γ̃_s(y_n) = γ_s(y_n) = L_s(n)/L_s(n − 1) = A_s(n)/A_s(n − 1), and property B holds as well. For n = 0 and for all d, property A holds from the definition of the prior
probabilities:

r_0(s) = log( α/(1 − α) )
 = log( [∑_{T′∈L_s} P_0(T′) / ∑_{T′∈A_s} P_0(T′)] / [1 − ∑_{T′∈L_s} P_0(T′) / ∑_{T′∈A_s} P_0(T′)] )
 = log( (L_s(0)/A_s(0)) / (1 − L_s(0)/A_s(0)) )
 = log( L_s(0)/(A_s(0) − L_s(0)) ) = log( L_s(0)/I_s(0) ).   (A.4)
(A.5)
The sets As and Is consist of complete subtrees. Therefore, if T0 ∈ Aσ s , then T0 ∈ Is , and Aσ s (n) = Is (n), which implies that, ¶ µ Ls (n − 1) Ls (n) Is (n − 1) rn (s) = log = log(Ls (n)/Is (n)). (A.6) Is (n − 1) Ls (n − 1) Is (n) Similarly, using the induction hypotheses for qn−1 (s) and the node σ s, γ˜s (yn ) = qn−1 (s)γs (yn ) + (1 − qn−1 (s))γ˜σ s (yn ) =
Is (n − 1) Aσ s (n) Ls (n − 1) Ls (n) + As (n − 1) Ls (n − 1) As (n − 1) Aσ s (n − 1)
=
Is (n − 1) Is (n) Ls (n − 1) Ls (n) + As (n − 1) Ls (n − 1) As (n − 1) Is (n − 1)
=
As (n) Ls (n) + Is (n) = . As (n − 1) As (n − 1)
(A.7)
Hence, the induction hypotheses hold for all n and d and in particular, γ˜² (yn ) = A² (n)/A² (n − 1) =
1732
Yoram Singer
X T0 ∈A
X
P0 (T0 )P(yi1 , . . . , yil | T0 ) /
T0 ∈A
²
P0 (T0 )P(yi1 , . . . , yil−1 | T0 ). (A.8) ²
Since the root node is reached at all time steps, then ij = j and yi1 , . . . , yil = y1 , . . . , yn . Thus γ˜² (yn ) is equal to, X X P0 (T0 )P(y1 , . . . , yn | T0 ) / P0 (T0 )P(y1 , . . . , yn−1 | T0 ). (A.9) γ˜² (yn ) = T0 ∈A²
T0 ∈A²
Therefore, the theorem holds, since by definition A_ε = Sub(T).

Proof of Theorem 2. The proof of the first part of the theorem is based on a technique introduced by DeSantis et al. (1988). Based on the definition of A_ε(n) and P_{n−1}(T′) from theorem 1, we can rewrite Loss_n^mix as

Loss_n^mix = −∑_{i=1}^{n} log_2( ∑_{T′∈Sub(T)} P(y_i | T′) P_{i−1}(T′) )
 = −∑_{i=1}^{n} log_2( A_ε(i)/A_ε(i − 1) ) = −log_2( ∏_{i=1}^{n} A_ε(i)/A_ε(i − 1) ).   (A.10)

Canceling the numerator of one term with the denominator of the next, we get

Loss_n^mix = −log_2( A_ε(n)/A_ε(0) ) = −log_2( A_ε(n) / ∑_{T′∈Sub(T)} P_0(T′) )
 = −log_2( ∑_{T′∈Sub(T)} P_0(T′) P(y_1, …, y_n | T′) )
 ≤ −log_2( max_{T′∈Sub(T)} P_0(T′) P(y_1, …, y_n | T′) )
 = min_{T′∈Sub(T)} { Loss_n(T′) − log_2(P_0(T′)) }.   (A.11)

This completes the proof of the first part of the theorem. At time step i, we traverse the suffix tree T until a leaf is reached (at most D nodes are visited) or the beginning of the input sequence is encountered (at most i nodes are visited). Hence, the running time for a sequence of length n is Dn for a bounded-depth suffix tree, or ∑_{i=1}^{n} i ≤ (1/2)(n + 1)² for an unbounded-depth suffix tree.

Acknowledgments

Thanks to Y. Bengio, L. Lee, F. Pereira, and M. Shaw for helpful discussions. I also thank the anonymous reviewers for their helpful comments.
References

Bengio, Y., & Frasconi, P. (1994). An input/output HMM architecture. In Advances in neural information processing systems 7 (pp. 427–434). Cambridge, MA: MIT Press.
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees. Belmont, CA: Wadsworth International Group.
Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D. P., Schapire, R. E., & Warmuth, M. K. (1993). How to use expert advice. In Proceedings of the 25th Annual ACM Symposium on the Theory of Computing (pp. 382–391).
Church, K. W. (1988). A stochastic parts program and noun phrase parser for unrestricted text. In Second Conference on Applied Natural Language Processing (pp. 136–143).
Cover, T. M., & Thomas, J. A. (1991). Elements of information theory. New York: Wiley.
DeSantis, A., Markowski, C., & Wegman, M. N. (1988). Learning probabilistic prediction functions. In Proceedings of the First Annual Workshop on Computational Learning Theory (pp. 312–328).
Giles, C. L., Miller, C. B., Chen, D., Sun, G. Z., Chen, H. H., & Lee, Y. C. (1992). Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4, 393–405.
Haussler, D., & Barron, A. (1993). How well do Bayes methods work for on-line prediction of {+1, −1} values? In The 3rd NEC Symposium on Computation and Cognition.
Helmbold, D. P., & Schapire, R. E. (1995). Predicting nearly as well as the best pruning of a decision tree. In Proceedings of the Eighth Annual Conference on Computational Learning Theory (pp. 61–68).
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton, G. E. (1991). Adaptive mixtures of local experts. Neural Computation, 3, 79–87.
Jordan, M. I., & Jacobs, R. A. (1994). Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6, 181–214.
Krichevsky, R. E., & Trofimov, V. K. (1981). The performance of universal coding. IEEE Transactions on Information Theory, 27, 199–207.
Littlestone, N., & Warmuth, M. K. (1994). The weighted majority algorithm. Information and Computation, 108, 212–261.
Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann.
Riley, M. D. (1991). A statistical model for generating pronunciation networks. In Proc. of IEEE Conf. on Acoustics, Speech, and Signal Processing (pp. 737–740).
Ron, D., Singer, Y., & Tishby, N. (1996). The power of amnesia: Learning probabilistic automata with variable memory length. Machine Learning, 25, 117–149.
Willems, F. M. J., Shtarkov, Y. M., & Tjalkens, T. J. (1995). The context tree weighting method: Basic properties. IEEE Transactions on Information Theory, 41(3), 653–664.

Received September 12, 1996; accepted March 14, 1997.
Communicated by Ronald Williams
Long Short-Term Memory

Sepp Hochreiter
Fakultät für Informatik, Technische Universität München, 80290 München, Germany

Jürgen Schmidhuber
IDSIA, Corso Elvezia 36, 6900 Lugano, Switzerland
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter’s (1991) analysis of this problem, then address it by introducing a novel, efficient, gradientbased method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. 1 Introduction In principle, recurrent networks can use their feedback connections to store representations of recent input events in the form of activations (short-term memory, as opposed to long-term memory embodied by slowly changing weights). This is potentially significant for many applications, including speech processing, non-Markovian control, and music composition (Mozer, 1992). The most widely used algorithms for learning what to put in shortterm memory, however, take too much time or do not work well at all, especially when minimal time lags between inputs and corresponding teacher signals are long. Although theoretically fascinating, existing methods do not provide clear practical advantages over, say, backpropagation in feedforward nets with limited time windows. This article reviews an analysis of the problem and suggests a remedy. Neural Computation 9, 1735–1780 (1997)
© 1997 Massachusetts Institute of Technology
The problem. With conventional backpropagation through time (BPTT; Williams & Zipser, 1992; Werbos, 1988) or real-time recurrent learning (RTRL; Robinson & Fallside, 1987), error signals flowing backward in time tend to (1) blow up or (2) vanish; the temporal evolution of the backpropagated error exponentially depends on the size of the weights (Hochreiter, 1991). Case 1 may lead to oscillating weights; in case 2, learning to bridge long time lags takes a prohibitive amount of time or does not work at all (see section 3). This article presents long short-term memory (LSTM), a novel recurrent network architecture in conjunction with an appropriate gradient-based learning algorithm. LSTM is designed to overcome these error backflow problems. It can learn to bridge time intervals in excess of 1000 steps even in case of noisy, incompressible input sequences, without loss of short-time-lag capabilities. This is achieved by an efficient, gradient-based algorithm for an architecture enforcing constant (thus, neither exploding nor vanishing) error flow through internal states of special units (provided the gradient computation is truncated at certain architecture-specific points; this does not affect long-term error flow, though). Section 2 briefly reviews previous work. Section 3 begins with an outline of the detailed analysis of vanishing errors due to Hochreiter (1991). It then introduces a naive approach to constant error backpropagation for didactic purposes and highlights its problems concerning information storage and retrieval. These problems lead to the LSTM architecture described in section 4. Section 5 presents numerous experiments and comparisons with competing methods. LSTM outperforms them and also learns to solve complex, artificial tasks no other recurrent net algorithm has solved. Section 6 discusses LSTM's limitations and advantages. The appendix contains a detailed description of the algorithm (A.1) and explicit error flow formulas (A.2).

2 Previous Work

This section focuses on recurrent nets with time-varying inputs (as opposed to nets with stationary inputs and fixed-point-based gradient calculations; e.g., Almeida, 1987; Pineda, 1987).

2.1 Gradient-Descent Variants. The approaches of Elman (1988), Fahlman (1991), Williams (1989), Schmidhuber (1992a), Pearlmutter (1989), and many of the related algorithms in Pearlmutter's comprehensive overview (1995) suffer from the same problems as BPTT and RTRL (see sections 1 and 3).

2.2 Time Delays. Other methods that seem practical for short time lags only are time-delay neural networks (Lang, Waibel, & Hinton, 1990) and Plate's method (Plate, 1993), which updates unit activations based on a
weighted sum of old activations (see also de Vries & Principe, 1991). Lin et al. (1996) propose variants of time-delay networks called NARX networks.

2.3 Time Constants. To deal with long time lags, Mozer (1992) uses time constants influencing changes of unit activations (de Vries and Principe's (1991) approach may in fact be viewed as a mixture of time-delay neural networks and time constants). For long time lags, however, the time constants need external fine tuning (Mozer, 1992). Sun, Chen, and Lee's alternative approach (1993) updates the activation of a recurrent unit by adding the old activation and the (scaled) current net input. The net input, however, tends to perturb the stored information, which makes long-term storage impractical.

2.4 Ring's Approach. Ring (1993) also proposed a method for bridging long time lags. Whenever a unit in his network receives conflicting error signals, he adds a higher-order unit influencing appropriate connections. Although his approach can sometimes be extremely fast, to bridge a time lag involving 100 steps may require the addition of 100 units. Also, Ring's net does not generalize to unseen lag durations.

2.5 Bengio et al.'s Approach. Bengio, Simard, and Frasconi (1994) investigate methods such as simulated annealing, multigrid random search, time-weighted pseudo-Newton optimization, and discrete error propagation. Their "latch" and "two-sequence" problems are very similar to problem 3a in this article with minimal time lag 100 (see Experiment 3). Bengio and Frasconi (1994) also propose an expectation-maximization approach for propagating targets. With n so-called state networks, at a given time, their system can be in one of only n different states. (See also the beginning of section 5.) But to solve continuous problems such as the adding problem (section 5.4), their system would require an unacceptable number of states (i.e., state networks).

2.6 Kalman Filters. Puskorius and Feldkamp (1994) use Kalman filter techniques to improve recurrent net performance. Since they use "a derivative discount factor imposed to decay exponentially the effects of past dynamic derivatives," there is no reason to believe that their Kalman filter-trained recurrent networks will be useful for very long minimal time lags.

2.7 Second Order Nets. We will see that LSTM uses multiplicative units (MUs) to protect error flow from unwanted perturbations. It is not the first recurrent net method using MUs, though. For instance, Watrous and Kuhn (1992) use MUs in second-order nets. There are some differences from LSTM: (1) Watrous and Kuhn's architecture does not enforce constant error flow and is not designed to solve long-time-lag problems; (2) it has fully connected second-order sigma-pi units, while the LSTM architecture's MUs
are used only to gate access to constant error flow; and (3) Watrous and Kuhn's algorithm costs O(W²) operations per time step, ours only O(W), where W is the number of weights. See also Miller and Giles (1993) for additional work on MUs.

2.8 Simple Weight Guessing. To avoid long-time-lag problems of gradient-based approaches, we may simply randomly initialize all network weights until the resulting net happens to classify all training sequences correctly. In fact, recently we discovered (Schmidhuber & Hochreiter, 1996; Hochreiter & Schmidhuber, 1996, 1997) that simple weight guessing solves many of the problems in Bengio et al. (1994), Bengio and Frasconi (1994), Miller and Giles (1993), and Lin et al. (1996) faster than the algorithms these authors proposed. This does not mean that weight guessing is a good algorithm. It just means that the problems are very simple. More realistic tasks require either many free parameters (e.g., input weights) or high weight precision (e.g., for continuous-valued parameters), such that guessing becomes completely infeasible.

2.9 Adaptive Sequence Chunkers. Schmidhuber's hierarchical chunker systems (1992b, 1993) do have a capability to bridge arbitrary time lags, but only if there is local predictability across the subsequences causing the time lags (see also Mozer, 1992). For instance, in his postdoctoral thesis, Schmidhuber (1993) uses hierarchical recurrent nets to solve rapidly certain grammar learning tasks involving minimal time lags in excess of 1000 steps. The performance of chunker systems, however, deteriorates as the noise level increases and the input sequences become less compressible. LSTM does not suffer from this problem.

3 Constant Error Backpropagation

3.1 Exponentially Decaying Error

3.1.1 Conventional BPTT (e.g., Williams & Zipser, 1992). Output unit k's target at time t is denoted by d_k(t). Using mean squared error, k's error signal is

ϑ_k(t) = f′_k(net_k(t))(d_k(t) − y_k(t)),

where y_i(t) = f_i(net_i(t)) is the activation of a noninput unit i with differentiable activation function f_i,

net_i(t) = ∑_j w_ij y^j(t − 1)
is unit i's current net input, and w_ij is the weight on the connection from unit j to i. Some nonoutput unit j's backpropagated error signal is

ϑ_j(t) = f′_j(net_j(t)) ∑_i w_ij ϑ_i(t + 1).

The corresponding contribution to w_jl's total weight update is α ϑ_j(t) y^l(t − 1), where α is the learning rate and l stands for an arbitrary unit connected to unit j.

3.1.2 Outline of Hochreiter's Analysis (1991, pp. 19–21). Suppose we have a fully connected net whose noninput unit indices range from 1 to n. Let us focus on local error flow from unit u to unit v (later we will see that the analysis immediately extends to global error flow). The error occurring at an arbitrary unit u at time step t is propagated back into time for q time steps, to an arbitrary unit v. This will scale the error by the following factor:

∂ϑ_v(t − q)/∂ϑ_u(t) = { f′_v(net_v(t − 1)) w_uv if q = 1;
  f′_v(net_v(t − q)) ∑_{l=1}^{n} (∂ϑ_l(t − q + 1)/∂ϑ_u(t)) w_lv if q > 1. }   (3.1)

With l_q = v and l_0 = u, we obtain

∂ϑ_v(t − q)/∂ϑ_u(t) = ∑_{l_1=1}^{n} … ∑_{l_{q−1}=1}^{n} ∏_{m=1}^{q} f′_{l_m}(net_{l_m}(t − m)) w_{l_m l_{m−1}}   (3.2)

(proof by induction). The sum of the n^{q−1} terms ∏_{m=1}^{q} f′_{l_m}(net_{l_m}(t − m)) w_{l_m l_{m−1}} determines the total error backflow (note that since the summation terms may have different signs, increasing the number of units n does not necessarily increase error flow).

3.1.3 Intuitive Explanation of Equation 3.2. If

|f′_{l_m}(net_{l_m}(t − m)) w_{l_m l_{m−1}}| > 1.0

for all m (as can happen, e.g., with linear f_{l_m}), then the largest product increases exponentially with q. That is, the error blows up, and conflicting error signals arriving at unit v can lead to oscillating weights and unstable learning (for error blowups or bifurcations, see also Pineda, 1988; Baldi & Pineda, 1991; Doya, 1992). On the other hand, if

|f′_{l_m}(net_{l_m}(t − m)) w_{l_m l_{m−1}}| < 1.0

for all m, then the largest product decreases exponentially with q. That is, the error vanishes, and nothing can be learned in acceptable time.
If f_{l_m} is the logistic sigmoid function, then the maximal value of f′_{l_m} is 0.25. If y^{l_{m−1}} is constant and not equal to zero, then |f′_{l_m}(net_{l_m}) w_{l_m l_{m−1}}| takes on maximal values where

w_{l_m l_{m−1}} = (1/y^{l_{m−1}}) coth( (1/2) net_{l_m} ),

goes to zero for |w_{l_m l_{m−1}}| → ∞, and is less than 1.0 for |w_{l_m l_{m−1}}| < 4.0 (e.g., if the absolute maximal weight value w_max is smaller than 4.0). Hence with conventional logistic sigmoid activation functions, the error flow tends to vanish as long as the weights have absolute values below 4.0, especially in the beginning of the training phase. In general, the use of larger initial weights will not help, though; as seen above, for |w_{l_m l_{m−1}}| → ∞ the relevant derivative goes to zero "faster" than the absolute weight can grow (also, some weights will have to change their signs by crossing zero). Likewise, increasing the learning rate does not help either; it will not change the ratio of long-range error flow to short-range error flow. BPTT is too sensitive to recent distractions. (A very similar, more recent analysis was presented by Bengio et al., 1994.)
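The vanishing-error effect is easy to reproduce numerically. The sketch below (illustrative; the weight value is arbitrary) multiplies the per-step factors f′(net) · w of equation 3.2 along a single backward path of logistic sigmoid units, using the most favorable derivative f′(0) = 0.25:

```python
import math

def logistic_prime(net):
    s = 1.0 / (1.0 + math.exp(-net))
    return s * (1.0 - s)          # at most 0.25, attained at net = 0

# One backward path of q steps (cf. equation 3.2): each step scales the
# error by f'(net) * w. Even at the optimum net = 0 and with |w| = 3.0,
# each factor is 0.25 * 3.0 = 0.75 < 1, so the product vanishes.
w, factor = 3.0, 1.0
for q in range(1, 51):
    factor *= logistic_prime(0.0) * w
    if q in (1, 10, 25, 50):
        print(q, factor)          # q = 50 gives 0.75**50, about 5.7e-7
```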
q−1 Y
(WF0 (t − m)) Wv fv0 (netv (t − q)),
m=2
where the weight matrix W is defined by $[W]_{ij} := w_{ij}$, v's outgoing weight vector $W_v$ is defined by $[W_v]_i := [W]_{iv} = w_{iv}$, u's incoming weight vector $W_u^T$ is defined by $[W_u^T]_i := [W]_{ui} = w_{ui}$, and for $m = 1, \ldots, q$, $F'(t-m)$ is the diagonal matrix of first-order derivatives defined as $[F'(t-m)]_{ij} := 0$ if $i \neq j$, and $[F'(t-m)]_{ij} := f'_i(\mathrm{net}_i(t-m))$ otherwise. Here T is the transposition operator, $[A]_{ij}$ is the element in the ith column and jth row of matrix A, and $[x]_i$ is the ith component of vector x. Using a matrix norm $\|\cdot\|_A$ compatible with vector norm $\|\cdot\|_x$, we define
$$f'_{max} := \max_{m=1,\ldots,q} \{\|F'(t-m)\|_A\}.$$
For $\max_{i=1,\ldots,n} \{|x_i|\} \leq \|x\|_x$ we get $|x^T y| \leq n\, \|x\|_x\, \|y\|_x$. Since
$$|f'_v(\mathrm{net}_v(t-q))| \leq \|F'(t-q)\|_A \leq f'_{max},$$
we obtain the following inequality:
$$\left|\frac{\partial \vartheta_v(t-q)}{\partial \vartheta_u(t)}\right| \leq n\, (f'_{max})^q\, \|W_v\|_x\, \|W_u^T\|_x\, \|W\|_A^{q-2} \leq n\, \big(f'_{max}\, \|W\|_A\big)^q.$$
This inequality results from
$$\|W_v\|_x = \|W e_v\|_x \leq \|W\|_A\, \|e_v\|_x \leq \|W\|_A
\quad\text{and}\quad
\|W_u^T\|_x = \|W^T e_u\|_x \leq \|W\|_A\, \|e_u\|_x \leq \|W\|_A,$$
where $e_k$ is the unit vector whose components are 0 except for the kth component, which is 1. Note that this is a weak, extreme case upper bound; it will be reached only if all $\|F'(t-m)\|_A$ take on maximal values, and if the contributions of all paths across which error flows back from unit u to unit v have the same sign. Large $\|W\|_A$, however, typically result in small values of $\|F'(t-m)\|_A$, as confirmed by experiments (see, e.g., Hochreiter, 1991). For example, with norms
$$\|W\|_A := \max_r \sum_s |w_{rs}|
\quad\text{and}\quad
\|x\|_x := \max_r |x_r|,$$
we have $f'_{max} = 0.25$ for the logistic sigmoid. We observe that if $|w_{ij}| \leq w_{max} < \frac{4.0}{n}$ for all i, j, then $\|W\|_A \leq n\, w_{max} < 4.0$ results in exponential decay: setting $\tau := n\, w_{max}/4.0 < 1.0$, we obtain $|\partial \vartheta_v(t-q)/\partial \vartheta_u(t)| \leq n\, \tau^q$.
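To make the bound concrete, here is a small numpy check (our illustration; the dimensions and weight range are arbitrary) that evaluates $n (f'_{max} \|W\|_A)^q$ for a random weight matrix under the row-sum norm above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, w_max = 4, 30, 0.5             # note w_max < 4/n, so tau < 1 below
W = rng.uniform(-w_max, w_max, (n, n))

norm_A = np.abs(W).sum(axis=1).max() # ||W||_A = max_r sum_s |w_rs| (row-sum norm)
f_max  = 0.25                        # maximal derivative of the logistic sigmoid
bound  = n * (f_max * norm_A) ** q   # weak upper bound on the scaling factor
tau    = n * w_max / 4.0
print(f"||W||_A = {norm_A:.3f}, bound = {bound:.3e}, n tau^q = {n * tau**q:.3e}")
```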
The internal state is $s_{c_j}(0) = 0$ and, for $t > 0$, $s_{c_j}(t) = s_{c_j}(t-1) + y^{in_j}(t)\, g(\mathrm{net}_{c_j}(t))$. The differentiable function g squashes $\mathrm{net}_{c_j}$; the differentiable function h scales memory cell outputs computed from the internal state $s_{c_j}$.

4.2 Why Gate Units? To avoid input weight conflicts, $in_j$ controls the error flow to memory cell $c_j$'s input connections $w_{c_j i}$. To circumvent $c_j$'s output weight conflicts, $out_j$ controls the error flow from unit j's output
Figure 2: Example of a net with eight input units, four output units, and two memory cell blocks of size 2. in1 marks the input gate, out1 marks the output gate, and cell1/block1 marks the first memory cell of block 1. cell1/block1's architecture is identical to the one in Figure 1, with gate units in1 and out1 (rotating Figure 1 by 90 degrees anticlockwise will match it with the corresponding parts of Figure 2). The example assumes dense connectivity: each gate unit and each memory cell sees all nonoutput units. For simplicity, however, outgoing weights of only one type of unit are shown for each layer. With the efficient, truncated update rule, error flows only through connections to output units and through fixed self-connections within cell blocks (not shown here; see Figure 1). Error flow is truncated once it "wants" to leave memory cells or gate units. Therefore, no connection shown above serves to propagate error back to the unit from which the connection originates (except for connections to output units), although the connections themselves are modifiable. That is why the truncated LSTM algorithm is so efficient, despite its ability to bridge very long time lags. See the text and the appendix for details. Figure 2 shows the architecture used for experiment 6a; only the biases of the noninput units are omitted.
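The forward dynamics of a single memory cell can be summarized in a few lines. The following sketch is our minimal illustration, not a reference implementation: it assumes logistic gates, g squashing into [−2, 2] and h into [−1, 1] (the ranges used in the experiments of section 5), and toy weight vectors; recurrent inputs and the learning machinery are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def memory_cell_step(x, s_prev, w_in, w_out, w_c):
    """One forward time step of a single memory cell with its two gates.

    x: current input vector seen by the cell and its gates (toy assumption).
    s_prev: previous internal state of the CEC.
    """
    y_in  = sigmoid(w_in @ x)               # input gate activation
    y_out = sigmoid(w_out @ x)              # output gate activation
    g = 4.0 * sigmoid(w_c @ x) - 2.0        # g squashes net_c into [-2, 2]
    s = s_prev + y_in * g                   # CEC: self-connection with fixed weight 1.0
    y_c = y_out * (2.0 * sigmoid(s) - 1.0)  # h scales the state into [-1, 1]
    return y_c, s

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 8)
w_in, w_out, w_c = rng.uniform(-0.2, 0.2, (3, 8))
print(memory_cell_step(x, 0.0, w_in, w_out, w_c))
```

The key design point is the additive state update: because the CEC's self-connection has fixed weight 1.0, error flowing back through s is never rescaled.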
connections. In other words, the net can use $in_j$ to decide when to keep or override information in memory cell $c_j$, and $out_j$ to decide when to access memory cell $c_j$ and when to prevent other units from being perturbed by $c_j$ (see Figure 1). Error signals trapped within a memory cell's CEC cannot change, but different error signals flowing into the cell (at different times) via its output gate may get superimposed. The output gate will have to learn which
errors to trap in its CEC by appropriately scaling them. The input gate will have to learn when to release errors, again by appropriately scaling them. Essentially, the multiplicative gate units open and close access to constant error flow through the CEC.

Distributed output representations typically do require output gates. Neither gate type is always necessary, though; one may be sufficient. For instance, in experiments 2a and 2b in section 5, it will be possible to use input gates only. In fact, output gates are not required in case of local output encoding; preventing memory cells from perturbing already learned outputs can be done by simply setting the corresponding weights to zero. Even in this case, however, output gates can be beneficial: they prevent the net's attempts at storing long-time-lag memories (which are usually hard to learn) from perturbing activations representing easily learnable short-time-lag memories. (This will prove quite useful in experiment 1, for instance.)

4.3 Network Topology. We use networks with one input layer, one hidden layer, and one output layer. The (fully) self-connected hidden layer contains memory cells and corresponding gate units (for convenience, we refer to both memory cells and gate units as being located in the hidden layer). The hidden layer may also contain conventional hidden units providing inputs to gate units and memory cells. All units (except for gate units) in all layers have directed connections (serve as inputs) to all units in the layer above (or to all higher layers; see experiments 2a and 2b).

4.4 Memory Cell Blocks. S memory cells sharing the same input gate and the same output gate form a structure called a memory cell block of size S. Memory cell blocks facilitate information storage; as with conventional neural nets, it is not so easy to code a distributed input within a single cell. Since each memory cell block has as many gate units as a single memory cell (namely, two), the block architecture can be even slightly more efficient. A memory cell block of size 1 is just a simple memory cell. In the experiments in section 5, we will use memory cell blocks of various sizes.

4.5 Learning. We use a variant of RTRL (e.g., Robinson & Fallside, 1987) that takes into account the altered, multiplicative dynamics caused by input and output gates. To ensure nondecaying error backpropagation through internal states of memory cells, as with truncated BPTT (e.g., Williams & Peng, 1990), errors arriving at memory cell net inputs (for cell $c_j$, this includes $\mathrm{net}_{c_j}$, $\mathrm{net}_{in_j}$, $\mathrm{net}_{out_j}$) do not get propagated back further in time (although they do serve to change the incoming weights). Only within memory cells are errors propagated back through previous internal states $s_{c_j}$.² To visualize

² For intracellular backpropagation in a quite different context, see also Doya and Yoshizawa (1989).
this, once an error signal arrives at a memory cell output, it gets scaled by the output gate activation and h'. Then it is within the memory cell's CEC, where it can flow back indefinitely without ever being scaled. When it leaves the memory cell through the input gate and g, it is scaled once more by the input gate activation and g'. It then serves to change the incoming weights before it is truncated (see the appendix for formulas).

4.6 Computational Complexity. As with Mozer's focused recurrent backpropagation algorithm (Mozer, 1989), only the derivatives $\partial s_{c_j}/\partial w_{il}$ need to be stored and updated. Hence the LSTM algorithm is very efficient, with an excellent update complexity of O(W), where W is the number of weights (see details in the appendix). Hence, LSTM and BPTT for fully recurrent nets have the same update complexity per time step (while RTRL's is much worse). Unlike full BPTT, however, LSTM is local in space and time:³ there is no need to store activation values observed during sequence processing in a stack with potentially unlimited size.

4.7 Abuse Problem and Solutions. In the beginning of the learning phase, error reduction may be possible without storing information over time. The network will thus tend to abuse memory cells, for example, as bias cells (it might make their activations constant and use the outgoing connections as adaptive thresholds for other units). The potential difficulty is that it may take a long time to release abused memory cells and make them available for further learning. A similar "abuse problem" appears if two memory cells store the same (redundant) information. There are at least two solutions to the abuse problem: (1) sequential network construction (e.g., Fahlman, 1991): a memory cell and the corresponding gate units are added to the network whenever the error stops decreasing (see experiment 2 in section 5), and (2) output gate bias: each output gate gets a negative initial bias, to push initial memory cell activations toward zero. Memory cells with more negative bias automatically get "allocated" later (see experiments 1, 3, 4, 5, and 6 in section 5).

4.8 Internal State Drift and Remedies. If memory cell $c_j$'s inputs are mostly positive or mostly negative, then its internal state $s_j$ will tend to drift away over time. This is potentially dangerous, for $h'(s_j)$ will then adopt very small values, and the gradient will vanish. One way to circumvent this problem is to choose an appropriate function h. But h(x) = x, for instance, has the disadvantage of unrestricted memory cell output range. Our simple

³ Following Schmidhuber (1989), we say that a recurrent net algorithm is local in space if the update complexity per time step and weight does not depend on network size. We say that a method is local in time if its storage requirements do not depend on input sequence length. For instance, RTRL is local in time but not in space. BPTT is local in space but not in time.
but effective way of solving drift problems at the beginning of learning is initially to bias the input gate $in_j$ toward zero. Although there is a trade-off between the magnitudes of $h'(s_j)$ on the one hand and of $y^{in_j}$ and $f'_{in_j}$ on the other, the potential negative effect of input gate bias is negligible compared to that of the drift effect. With logistic sigmoid activation functions, there appears to be no need for fine-tuning the initial bias, as confirmed by experiments 4 and 5 in section 5.4.

5 Experiments

Which tasks are appropriate to demonstrate the quality of a novel long-time-lag algorithm? First, minimal time lags between relevant input signals and corresponding teacher signals must be long for all training sequences. In fact, many previous recurrent net algorithms sometimes manage to generalize from very short training sequences to very long test sequences (see, e.g., Pollack, 1991). But a real long-time-lag problem does not have any short-time-lag exemplars in the training set. For instance, Elman's training procedure, BPTT, offline RTRL, online RTRL, and others fail miserably on real long-time-lag problems (see, e.g., Hochreiter, 1991; Mozer, 1992). A second important requirement is that the tasks should be complex enough that they cannot be solved quickly by simple-minded strategies such as random weight guessing. Recently we discovered (Schmidhuber & Hochreiter, 1996; Hochreiter & Schmidhuber, 1996, 1997) that many long-time-lag tasks used in previous work can be solved more quickly by simple random weight guessing than by the proposed algorithms. For instance, guessing solved a variant of Bengio and Frasconi's parity problem (1994) much faster⁴ than the seven methods tested by Bengio et al. (1994) and Bengio and Frasconi (1994). The same is true for some of Miller and Giles's problems (1993). Of course, this does not mean that guessing is a good algorithm; it just means that some previously used problems are not extremely appropriate for demonstrating the quality of previously proposed algorithms.

All our experiments (except experiment 1) involve long minimal time lags; there are no short-time-lag training exemplars facilitating learning. Solutions to most of our tasks are sparse in weight space. They require either many parameters and inputs or high weight precision, such that random weight guessing becomes infeasible. We always use online learning (as opposed to batch learning) and logistic sigmoids as activation functions. For experiments 1 and 2, initial weights are chosen in the range [−0.2, 0.2], for the other experiments in [−0.1, 0.1]. Training sequences are generated randomly according to the various task

⁴ Different input representations and different types of noise may lead to worse guessing performance (Yoshua Bengio, personal communication, 1996).
descriptions. In slight deviation from the notation in appendix A.1, each discrete time step of each input sequence involves three processing steps: (1) use the current input to set the input units, (2) compute activations of hidden units (including input gates, output gates, and memory cells), and (3) compute output unit activations. Except for experiments 1, 2a, and 2b, sequence elements are randomly generated online, and error signals are generated only at sequence ends. Net activations are reset after each processed input sequence.

For comparisons with recurrent nets taught by gradient descent, we give results only for RTRL, except for comparison 2a, which also includes BPTT. Note, however, that untruncated BPTT (see, e.g., Williams & Peng, 1990) computes exactly the same gradient as offline RTRL. With long-time-lag problems, offline RTRL (or BPTT) and the online version of RTRL (no activation resets, online weight changes) lead to almost identical, negative results (as confirmed by additional simulations in Hochreiter, 1991; see also Mozer, 1992). This is because offline RTRL, online RTRL, and full BPTT all suffer badly from exponential error decay.

Our LSTM architectures are selected quite arbitrarily. If nothing is known about the complexity of a given problem, a more systematic approach would be to start with a very small net consisting of one memory cell; if this does not work, try two cells, and so on. Alternatively, use sequential network construction (e.g., Fahlman, 1991).

Following is an outline of the experiments:

• Experiment 1 focuses on a standard benchmark test for recurrent nets: the embedded Reber grammar. Since it allows for training sequences with short time lags, it is not a long-time-lag problem. We include it because it provides a nice example where LSTM's output gates are truly beneficial, and it is a popular benchmark for recurrent nets that has been used by many authors. We want to include at least one experiment where conventional BPTT and RTRL do not fail completely (LSTM, however, clearly outperforms them). The embedded Reber grammar's minimal time lags represent a border case in the sense that it is still possible to learn to bridge them with conventional algorithms. Only slightly longer minimal time lags would make this almost impossible. The more interesting tasks in our article, however, are those that RTRL, BPTT, and others cannot solve at all.

• Experiment 2 focuses on noise-free and noisy sequences involving numerous input symbols distracting from the few important ones. The most difficult task (task 2c) involves hundreds of distractor symbols at random positions and minimal time lags of 1000 steps. LSTM solves it; BPTT and RTRL already fail in case of 10-step minimal time lags (see also Hochreiter, 1991; Mozer, 1992). For this reason RTRL and BPTT are omitted in the remaining, more complex experiments, all of which involve much longer time lags.
• Experiment 3 addresses long-time-lag problems with noise and signal on the same input line. Experiments 3a and 3b focus on Bengio et al.'s 1994 two-sequence problem. Because this problem can be solved quickly by random weight guessing, we also include a far more difficult two-sequence problem (experiment 3c), which requires learning real-valued, conditional expectations of noisy targets, given the inputs.

• Experiments 4 and 5 involve distributed, continuous-valued input representations and require learning to store precise, real values for very long time periods. Relevant input signals can occur at quite different positions in input sequences. Again, minimal time lags involve hundreds of steps. Similar tasks have never been solved by other recurrent net algorithms.

• Experiment 6 involves tasks of a different, complex type that also has not been solved by other recurrent net algorithms. Again, relevant input signals can occur at quite different positions in input sequences. The experiment shows that LSTM can extract information conveyed by the temporal order of widely separated inputs.

Section 5.7 provides a detailed summary of experimental conditions in two tables for reference.

5.1 Experiment 1: Embedded Reber Grammar.

5.1.1 Task. Our first task is to learn the embedded Reber grammar (Smith & Zipser, 1989; Cleeremans, Servan-Schreiber, & McClelland, 1989; Fahlman, 1991). Since it allows for training sequences with short time lags (of as few as nine steps), it is not a long-time-lag problem. We include it for two reasons: (1) it is a popular recurrent net benchmark used by many authors, and we wanted to have at least one experiment where RTRL and BPTT do not fail completely, and (2) it shows nicely how output gates can be beneficial.

Starting at the left-most node of the directed graph in Figure 3, symbol strings are generated sequentially (beginning with the empty string) by following edges, appending the associated symbols to the current string, until the right-most node is reached (the Reber grammar substrings are analogously generated from Figure 4). Edges are chosen randomly if there is a choice (probability: 0.5). The net's task is to read strings, one symbol at a time, and to predict the next symbol (error signals occur at every time step). To predict the symbol before last, the net has to remember the second symbol.

5.1.2 Comparison. We compare LSTM to Elman nets trained by Elman's training procedure (ELM) (results taken from Cleeremans et al., 1989), Fahlman's recurrent cascade-correlation (RCC) (results taken from Fahlman,
Figure 3: Transition diagram for the Reber grammar.
Figure 4: Transition diagram for the embedded Reber grammar. Each box represents a copy of the Reber grammar (see Figure 3).
1991), and RTRL (results taken from Smith & Zipser, 1989, where only the few successful trials are listed). Smith and Zipser actually make the task easier by increasing the probability of short-time-lag exemplars. We did not do this for LSTM.

5.1.3 Training/Testing. We use a local input-output representation (seven input units, seven output units). Following Fahlman, we use 256 training strings and 256 separate test strings. The training set is generated randomly; training exemplars are picked randomly from the training set. Test sequences are generated randomly, too, but sequences already used in the training set are not used for testing. After string presentation, all activations are reinitialized with zeros. A trial is considered successful if all string symbols of all sequences in both test set and training set are predicted correctly, that is, if the output unit(s) corresponding to the possible next symbol(s) is (are) always the most active one(s).

5.1.4 Architectures. Architectures for RTRL, ELM, and RCC are reported in the references listed above. For LSTM, we use three (four) memory cell blocks. Each block has two (one) memory cells. The output layer's only incoming connections originate at memory cells. Each memory cell and each gate unit receives incoming connections from all memory cells and gate units (the hidden layer is fully connected; less connectivity may work as well). The input layer has forward connections to all units in the hidden layer. The gate units are biased. These architecture parameters make it easy to store at least three input signals (architectures 3-2 and 4-1 are employed to obtain comparable numbers of weights for both architectures: 264 for 4-1 and 276 for 3-2). Other parameters may be appropriate as well, however. All sigmoid functions are logistic with output range [0, 1], except for h, whose range is [−1, 1], and g, whose range is [−2, 2]. All weights are initialized in [−0.2, 0.2], except for the output gate biases, which are initialized to −1, −2, and −3, respectively (see the abuse problem, solution 2, in section 4). We tried learning rates of 0.1, 0.2, and 0.5.

5.1.5 Results. We use three different, randomly generated pairs of training and test sets. With each such pair we run 10 trials with different initial weights. See Table 1 for results (means of 30 trials). Unlike the other methods, LSTM always learns to solve the task. Even when we ignore the unsuccessful trials of the other approaches, LSTM learns much faster.

5.1.6 Importance of Output Gates. The experiment provides a nice example where the output gate is truly beneficial. Learning to store the first T or P should not perturb activations representing the more easily learnable transitions of the original Reber grammar. This is the job of the output gates. Without output gates, we did not achieve fast learning.
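For concreteness, here is a small generator for embedded Reber strings. It is our sketch, assuming the standard Reber grammar transition table usually associated with Figures 3 and 4; only the string generation procedure is shown, not any part of LSTM.

```python
import random

# Assumed standard Reber grammar: state -> list of (symbol, next_state); state 5 is final.
REBER = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}

def reber_string(rng):
    state, out = 0, ["B"]
    while state != 5:                       # choose edges with probability 0.5 each
        sym, state = rng.choice(REBER[state])
        out.append(sym)
    return out + ["E"]

def embedded_reber_string(rng):
    c = rng.choice(["T", "P"])              # the symbol the net must remember
    return ["B", c] + reber_string(rng) + [c, "E"]

print("".join(embedded_reber_string(random.Random(0))))
```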
Table 1: Experiment 1: Embedded Reber Grammar.

Method   Hidden Units        Number of Weights   Learning Rate   % of Success    After
RTRL     3                   ≈ 170               0.05            Some fraction   173,000
RTRL     12                  ≈ 494               0.1             Some fraction   25,000
ELM      15                  ≈ 435                               0               >200,000
RCC      7–9                 ≈ 119–198                           50              182,000
LSTM     4 blocks, size 1    264                 0.1             100             39,740
LSTM     3 blocks, size 2    276                 0.1             100             21,730
LSTM     3 blocks, size 2    276                 0.2             97              14,060
LSTM     4 blocks, size 1    264                 0.5             97              9,500
LSTM     3 blocks, size 2    276                 0.5             100             8,440
Notes: Percentage of successful trials and number of sequence presentations until success for RTRL (results taken from Smith & Zipser, 1989), Elman net trained by Elman's procedure (results taken from Cleeremans et al., 1989), recurrent cascade-correlation (results taken from Fahlman, 1991), and our new approach (LSTM). Weight numbers in the first four rows are estimates; the corresponding papers do not provide all the technical details. Only LSTM almost always learns to solve the task (only 2 failures out of 150 trials). Even when we ignore the unsuccessful trials of the other approaches, LSTM learns much faster (the number of required training examples in the bottom row varies between 3800 and 24,100).
5.2 Experiment 2: Noise-Free and Noisy Sequences.

5.2.1 Task 2a: Noise-Free Sequences with Long Time Lags. There are p + 1 possible input symbols denoted $a_1, \ldots, a_{p-1}, a_p = x, a_{p+1} = y$. $a_i$ is locally represented by the $(p+1)$-dimensional vector whose ith component is 1 (all other components are 0). A net with p + 1 input units and p + 1 output units sequentially observes input symbol sequences, one at a time, permanently trying to predict the next symbol; error signals occur at every time step. To emphasize the long-time-lag problem, we use a training set consisting of only two very similar sequences: $(y, a_1, a_2, \ldots, a_{p-1}, y)$ and $(x, a_1, a_2, \ldots, a_{p-1}, x)$. Each is selected with probability 0.5. To predict the final element, the net has to learn to store a representation of the first element for p time steps.

We compare real-time recurrent learning for fully recurrent nets (RTRL), backpropagation through time (BPTT), the sometimes very successful two-net neural sequence chunker (CH; Schmidhuber, 1992b), and our new method (LSTM). In all cases, weights are initialized in [−0.2, 0.2]. Due to limited computation time, training is stopped after 5 million sequence presentations. A successful run is one that fulfills the following criterion: after training, during 10,000 successive, randomly chosen input sequences, the maximal absolute error of all output units is always below 0.25.
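The task 2a training data are easy to reproduce. The sketch below is ours; the symbol-to-index mapping is an arbitrary convention.

```python
import numpy as np

def task2a_sequence(p, rng):
    """One task 2a training sequence as a (p+1) x (p+1) one-hot array.

    Symbol indices (our convention): a_1..a_{p-1} -> 0..p-2, x -> p-1, y -> p.
    """
    first = p if rng.random() < 0.5 else p - 1    # y or x, probability 0.5 each
    ids = [first] + list(range(p - 1)) + [first]  # (y, a_1..a_{p-1}, y) or (x, ..., x)
    seq = np.zeros((len(ids), p + 1))
    seq[np.arange(len(ids)), ids] = 1.0
    return seq

print(task2a_sequence(p=4, rng=np.random.default_rng(0)))
```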
Table 2: Task 2a: Percentage of Successful Trials and Number of Training Sequences until Success.

Method   Delay p   Learning Rate   Number of Weights   % Successful Trials   Success After
RTRL     4         1.0             36                  78                    1,043,000
RTRL     4         4.0             36                  56                    892,000
RTRL     4         10.0            36                  22                    254,000
RTRL     10        1.0–10.0        144                 0                     >5,000,000
RTRL     100       1.0–10.0        10,404              0                     >5,000,000
BPTT     100       1.0–10.0        10,404              0                     >5,000,000
CH       100       1.0             10,506              33                    32,400
LSTM     100       1.0             10,504              100                   5,040
Notes: Table entries refer to means of 18 trials. With 100 time-step delays, only CH and LSTM achieve successful trials. Even when we ignore the unsuccessful trials of the other approaches, LSTM learns much faster.
Architectures. RTRL: one self-recurrent hidden unit, p + 1 nonrecurrent output units. Each layer has connections from all layers below. All units use the logistic sigmoid activation function with range [0, 1]. BPTT: same architecture as the one trained by RTRL. CH: both net architectures are like RTRL's, but one net has an additional output for predicting the hidden unit of the other one (see Schmidhuber, 1992b, for details). LSTM: as with RTRL, but the hidden unit is replaced by a memory cell and an input gate (no output gate is required). g is the logistic sigmoid, and h is the identity function: h(x) = x for all x. Memory cell and input gate are added once the error has stopped decreasing (see the abuse problem, solution 1, in section 4).

Results. Using RTRL and a short four-time-step delay (p = 4), 7/9 of all trials were successful. No trial was successful with p = 10. With long time lags, only the neural sequence chunker and LSTM achieved successful trials; BPTT and RTRL failed. With p = 100, the two-net sequence chunker solved the task in only one-third of all trials. LSTM, however, always learned to solve the task. Comparing successful trials only, LSTM learned much faster. See Table 2 for details. It should be mentioned, however, that a hierarchical chunker can also always quickly solve this task (Schmidhuber, 1992c, 1993).

5.2.2 Task 2b: No Local Regularities. With task 2a, the chunker sometimes learns to predict the final element correctly, but only because of
predictable local regularities in the input stream that allow for compressing the sequence. In a more difficult task, involving many more different possible sequences, we remove compressibility by replacing the deterministic subsequence $(a_1, a_2, \ldots, a_{p-1})$ by a random subsequence (of length p − 1) over the alphabet $a_1, a_2, \ldots, a_{p-1}$. We obtain two classes (two sets of sequences) $\{(y, a_{i_1}, a_{i_2}, \ldots, a_{i_{p-1}}, y) \mid 1 \leq i_1, i_2, \ldots, i_{p-1} \leq p-1\}$ and $\{(x, a_{i_1}, a_{i_2}, \ldots, a_{i_{p-1}}, x) \mid 1 \leq i_1, i_2, \ldots, i_{p-1} \leq p-1\}$. Again, every next sequence element has to be predicted. The only totally predictable targets, however, are x and y, which occur at sequence ends. Training exemplars are chosen randomly from the two classes. Architectures and parameters are the same as in experiment 2a. A successful run is one that fulfills the following criterion: after training, during 10,000 successive, randomly chosen input sequences, the maximal absolute error of all output units is below 0.25 at sequence end.

Results. As expected, the chunker failed to solve this task (so did BPTT and RTRL, of course). LSTM, however, was always successful. On average (mean of 18 trials), success for p = 100 was achieved after 5680 sequence presentations. This demonstrates that LSTM does not require sequence regularities to work well.

5.2.3 Task 2c: Very Long Time Lags, No Local Regularities. This is the most difficult task in this subsection. To our knowledge, no other recurrent net algorithm can solve it. Now there are p + 4 possible input symbols denoted $a_1, \ldots, a_{p-1}, a_p, a_{p+1} = e, a_{p+2} = b, a_{p+3} = x, a_{p+4} = y$. $a_1, \ldots, a_p$ are also called distractor symbols. Again, $a_i$ is locally represented by the $(p+4)$-dimensional vector whose ith component is 1 (all other components are 0). A net with p + 4 input units and 2 output units sequentially observes input symbol sequences, one at a time. Training sequences are randomly chosen from the union of two very similar subsets of sequences: $\{(b, y, a_{i_1}, a_{i_2}, \ldots, a_{i_{q+k}}, e, y) \mid 1 \leq i_1, i_2, \ldots, i_{q+k} \leq p\}$ and $\{(b, x, a_{i_1}, a_{i_2}, \ldots, a_{i_{q+k}}, e, x) \mid 1 \leq i_1, i_2, \ldots, i_{q+k} \leq p\}$. To produce a training sequence, we randomly generate a sequence prefix of length q + 2, then repeatedly generate additional random elements (≠ b, e, x, y) with probability 9/10 or, alternatively, an e with probability 1/10. In the latter case, we conclude the sequence with x or y, depending on the second element. For a given k, this leads to a uniform distribution on the possible sequences with length q + k + 4. The minimal sequence length is q + 4; the expected length is
$$4 + \sum_{k=0}^{\infty} \frac{1}{10}\left(\frac{9}{10}\right)^{k} (q+k) = q + 14.$$
The expected number of occurrences of element $a_i$, $1 \leq i \leq p$, in a sequence is $(q + 10)/p \approx q/p$. The goal is to predict the last symbol, which always occurs after the "trigger symbol" e. Error signals are generated only at sequence
Table 3: Task 2c: LSTM with Very Long Minimal Time Lags q + 1 and a Lot of Noise.

q (Time Lag −1)   p (Number of Random Inputs)   q/p   Number of Weights   Success After
50                50                            1     364                 30,000
100               100                           1     664                 31,000
200               200                           1     1264                33,000
500               500                           1     3064                38,000
1000              1000                          1     6064                49,000
1000              500                           2     3064                49,000
1000              200                           5     1264                75,000
1000              100                           10    664                 135,000
1000              50                            20    364                 203,000
Notes: p is the number of available distractor symbols (p + 4 is the number of input units). q/p is the expected number of occurrences of a given distractor symbol in a sequence. The right-most column lists the number of training sequences required by LSTM (BPTT, RTRL, and the other competitors have no chance of solving this task). If we let the number of distractor symbols (and weights) increase in proportion to the time lag, learning time increases very slowly. The lower block illustrates the expected slowdown due to increased frequency of distractor symbols.
ends. To predict the final element, the net has to learn to store a representation of the second element for at least q + 1 time steps (until it sees the trigger symbol e). Success is defined as prediction error (for the final sequence element) of both output units always below 0.2, for 10,000 successive, randomly chosen input sequences.

Architecture/Learning. The net has p + 4 input units and 2 output units. Weights are initialized in [−0.2, 0.2]. To avoid too much learning time variance due to different weight initializations, the hidden layer gets two memory cells (two cell blocks of size 1, although one would be sufficient). There are no other hidden units. The output layer receives connections only from memory cells. Memory cells and gate units receive connections from input units, memory cells, and gate units (the hidden layer is fully connected). No bias weights are used. h and g are logistic sigmoids with output ranges [−1, 1] and [−2, 2], respectively. The learning rate is 0.01. Note that the minimal time lag is q + 1; the net never sees short training sequences facilitating the classification of long test sequences.

Results. Twenty trials were made for all tested pairs (p, q). Table 3 lists the mean number of training sequences required by LSTM to achieve success (BPTT and RTRL have no chance of solving nontrivial tasks with minimal time lags of 1000 steps).
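A compact sketch of the task 2c generation procedure described above (our illustration; symbol names are plain strings rather than the one-hot vectors the net actually sees):

```python
import random

def task2c_sequence(p, q, rng):
    """One task 2c sequence (section 5.2.3) as a list of symbol names."""
    distractors = [f"a{i}" for i in range(1, p + 1)]
    second = rng.choice(["x", "y"])                      # class-defining symbol
    seq = ["b", second] + rng.choices(distractors, k=q)  # prefix of length q + 2
    while rng.random() < 0.9:                            # suffix: more distractors,
        seq.append(rng.choice(distractors))              # stop with probability 1/10
    return seq + ["e", second]                           # trigger e, then the target

print(task2c_sequence(p=5, q=10, rng=random.Random(0)))
```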
Scaling. Table 3 shows that if we let the number of input symbols (and weights) increase in proportion to the time lag, learning time increases very slowly. This is another remarkable property of LSTM not shared by any other method we are aware of. Indeed, RTRL and BPTT are far from scaling reasonably; they appear to scale exponentially and become quite useless when the time lags exceed as few as 10 steps.

Distractor Influence. In Table 3, the column headed by q/p gives the expected frequency of distractor symbols. Increasing this frequency decreases learning speed, an effect due to weight oscillations caused by frequently observed input symbols.

5.3 Experiment 3: Noise and Signal on Same Channel. This experiment serves to illustrate that LSTM does not encounter fundamental problems if noise and signal are mixed on the same input line. We initially focus on Bengio et al.'s simple 1994 two-sequence problem; in experiment 3c we pose a more challenging two-sequence problem.

5.3.1 Task 3a (Two-Sequence Problem). The task is to observe and then classify input sequences. There are two classes, each occurring with probability 0.5. There is only one input line. Only the first N real-valued sequence elements convey relevant information about the class. Sequence elements at positions t > N are generated by a gaussian with mean zero and variance 0.2. Case N = 1: the first sequence element is 1.0 for class 1 and −1.0 for class 2. Case N = 3: the first three elements are 1.0 for class 1 and −1.0 for class 2. The target at the sequence end is 1.0 for class 1 and 0.0 for class 2. Correct classification is defined as absolute output error at sequence end below 0.2. Given a constant T, the sequence length is randomly selected between T and T + T/10 (one difference from Bengio et al.'s problem is that they also permit shorter sequences of length T/2).

Guessing. Bengio et al. (1994) and Bengio and Frasconi (1994) tested seven different methods on the two-sequence problem. We discovered, however, that random weight guessing easily outperforms them all because the problem is so simple.⁵ See Schmidhuber and Hochreiter (1996) and Hochreiter and Schmidhuber (1996, 1997) for additional results in this vein.

LSTM Architecture. We use a three-layer net with one input unit, one output unit, and three cell blocks of size 1. The output layer receives connections only from memory cells. Memory cells and gate units receive inputs from input units, memory cells, and gate units, and have bias weights.

⁵ However, different input representations and different types of noise may lead to worse guessing performance (Yoshua Bengio, personal communication, 1996).
Table 4: Task 3a: Bengio et al.'s Two-Sequence Problem.

T      N   Stop: ST1   Stop: ST2   Number of Weights   ST2: Fraction Misclassified
100    3   27,380      39,850      102                 0.000195
100    1   58,370      64,330      102                 0.000117
1000   3   446,850     452,460     102                 0.000078
Notes: T is minimal sequence length. N is the number of information-conveying elements at sequence begin. The column headed by ST1 (ST2) gives the number of sequence presentations required to achieve stopping criterion ST1 (ST2). The right-most column lists the fraction of misclassified posttraining sequences (with absolute error > 0.2) from a test set consisting of 2560 sequences (tested after ST2 was achieved). All values are means of 10 trials. We discovered, however, that this problem is so simple that random weight guessing solves it faster than LSTM and any other method for which there are published results.
Gate units and output unit are logistic sigmoid in [0, 1], h in [−1, 1], and g in [−2, 2].

Training/Testing. All weights (except the bias weights to gate units) are randomly initialized in the range [−0.1, 0.1]. The first input gate bias is initialized with −1.0, the second with −3.0, and the third with −5.0. The first output gate bias is initialized with −2.0, the second with −4.0, and the third with −6.0. (The precise initialization values hardly matter, though, as confirmed by additional experiments.) The learning rate is 1.0. All activations are reset to zero at the beginning of a new sequence. We stop training (and judge the task as being solved) according to the following criteria: ST1: none of 256 sequences from a randomly chosen test set is misclassified; ST2: ST1 is satisfied, and mean absolute test set error is below 0.01. In case of ST2, an additional test set consisting of 2560 randomly chosen sequences is used to determine the fraction of misclassified sequences.

Results. See Table 4. The results are means of 10 trials with different weight initializations in the range [−0.1, 0.1]. LSTM is able to solve this problem, though by far not as fast as random weight guessing (see "Guessing" above). Clearly, this trivial problem does not provide a very good testbed for comparing the performance of various nontrivial algorithms. Still, it demonstrates that LSTM does not encounter fundamental problems when faced with signal and noise on the same channel.

5.3.2 Task 3b. The architecture, parameters, and other elements are as in task 3a, but now with gaussian noise (mean 0 and variance 0.2) added to the
Table 5: Task 3b: Modified Two-Sequence Problem.

T      N   Stop: ST1   Stop: ST2   Number of Weights   ST2: Fraction Misclassified
100    3   41,740      43,250      102                 0.00828
100    1   74,950      78,430      102                 0.01500
1000   1   481,060     485,080     102                 0.01207
Note: Same as in Table 4, but now the information-conveying elements are also perturbed by noise.
information-conveying elements (t ≤ N); see Table 5 for results.

5.3.3 Task 3c. The architecture and parameters are as in task 3a, but the targets are now noisy real values (gaussian noise with mean 0 and variance 0.1 added to them), so the net has to learn the conditional expectations of the targets given the inputs. The network output is considered acceptable if the mean absolute difference between the noise-free target and the output is below 0.015. Since this requires high weight precision, task 3c (unlike tasks 3a and 3b) cannot be solved quickly by random guessing.

Training/Testing. The learning rate is 0.1. We stop training according to the following criterion: none of 256 sequences from a randomly chosen test set is misclassified, and the mean absolute difference between the noise-free target and the output is below 0.015. An additional test set consisting of 2560 randomly chosen sequences is used to determine the fraction of misclassified sequences.

Results. See Table 6. The results represent means of 10 trials with different weight initializations. Despite the noisy targets, LSTM still solves the problem by learning the expected target values.
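A one-line reminder (added here for clarity, not part of the original text) of why minimizing squared error leads the net to output the expected target:
$$E\big[(y - o(x))^2 \mid x\big] = \operatorname{Var}[\,y \mid x\,] + \big(E[y \mid x] - o(x)\big)^2
\;\Longrightarrow\; o^*(x) = E[y \mid x].$$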
Table 6: Task 3c: Modified, More Challenging Two-Sequence Problem.

T     N   Stop      Number of Weights   Fraction Misclassified   Average Difference to Mean
100   3   269,650   102                 0.00558                  0.014
100   1   565,640   102                 0.00441                  0.012
Notes: Same as in Table 4, but with noisy real-valued targets. The system has to learn the conditional expectations of the targets given the inputs. The right-most column provides the average difference between network output and expected target. Unlike tasks 3a and 3b, this one cannot be solved quickly by random weight guessing.
5.4 Experiment 4: Adding Problem. The difficult task in this section is of a type that has never been solved by other recurrent net algorithms. It shows that LSTM can solve long-time-lag problems involving distributed, continuous-valued representations.

5.4.1 Task. Each element of each input sequence is a pair of components. The first component is a real value randomly chosen from the interval [−1, 1]; the second is 1.0, 0.0, or −1.0 and is used as a marker. At the end of each sequence, the task is to output the sum of the first components of those pairs that are marked by second components equal to 1.0. Sequences have random lengths between the minimal sequence length T and T + T/10. In a given sequence, exactly two pairs are marked, as follows: we first randomly select and mark one of the first 10 pairs (whose first component we call X1). Then we randomly select and mark one of the first T/2 − 1 still unmarked pairs (whose first component we call X2). The second components of all remaining pairs are zero except for the first and final pair, whose second components are −1. (In the rare case where the first pair of the sequence gets marked, we set X1 to zero.) An error signal is generated only at the sequence end: the target is 0.5 + (X1 + X2)/4.0 (the sum X1 + X2 scaled to the interval [0, 1]). A sequence is processed correctly if the absolute error at the sequence end is below 0.04.

5.4.2 Architecture. We use a three-layer net with two input units, one output unit, and two cell blocks of size 2. The output layer receives connections only from memory cells. Memory cells and gate units receive inputs from memory cells and gate units (the hidden layer is fully connected; less connectivity may work as well). The input layer has forward connections to all units in the hidden layer. All noninput units have bias weights. These architecture parameters make it easy to store at least two input signals (a cell block size of 1 works well, too). All activation functions are logistic with output range [0, 1], except for h, whose range is [−1, 1], and g, whose range is [−2, 2].
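The adding problem data can be generated in a few lines. The following is our sketch of the procedure just described (the edge-case handling for a doubly marked first pair is simplified):

```python
import numpy as np

def adding_problem(T, rng):
    """One adding-problem example: (sequence of (value, marker) pairs, target)."""
    length = int(rng.integers(T, T + T // 10 + 1))
    values = rng.uniform(-1.0, 1.0, length)
    markers = np.zeros(length)
    markers[0] = markers[-1] = -1.0           # first and final pair marked -1

    i1 = int(rng.integers(0, 10))             # mark one of the first 10 pairs
    i2 = i1
    while i2 == i1:                           # one of the first T/2 - 1 pairs
        i2 = int(rng.integers(0, T // 2 - 1))
    markers[i1] = markers[i2] = 1.0

    x1 = 0.0 if i1 == 0 else values[i1]       # rare case: first pair marked
    target = 0.5 + (x1 + values[i2]) / 4.0    # sum rescaled to [0, 1]
    return np.stack([values, markers], axis=1), target

seq, target = adding_problem(T=100, rng=np.random.default_rng(0))
print(seq.shape, round(target, 3))
```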
Table 7: Experiment 4: Results for the Adding Problem.

T      Minimal Lag   Number of Weights   Number of Wrong Predictions   Success After
100    50            93                  1 out of 2560                 74,000
500    250           93                  0 out of 2560                 209,000
1000   500           93                  1 out of 2560                 853,000
Notes: T is the minimal sequence length, T/2 the minimal time lag. “Number of Wrong Predictions” is the number of incorrectly processed sequences (error > 0.04) from a test set containing 2560 sequences. The right-most column gives the number of training sequences required to achieve the stopping criterion. All values are means of 10 trials. For T = 1000 the number of required training examples varies between 370,000 and 2,020,000, exceeding 700,000 in only three cases.
5.4.3 State Drift Versus Initial Bias. Note that the task requires storing the precise values of real numbers for long durations; the system must learn to protect memory cell contents against even minor internal state drift (see section 4). To study the significance of the drift problem, we make the task even more difficult by biasing all noninput units, thus artificially inducing internal state drift. All weights (including the bias weights) are randomly initialized in the range [−0.1, 0.1]. Following section 4's remedy for state drift, the first input gate bias is initialized with −3.0 and the second with −6.0 (though the precise values hardly matter, as confirmed by additional experiments).

5.4.4 Training/Testing. The learning rate is 0.5. Training is stopped once the average training error is below 0.01 and the 2000 most recent sequences were processed correctly.

5.4.5 Results. With a test set consisting of 2560 randomly chosen sequences, the average test set error was always below 0.01, and there were never more than three incorrectly processed sequences. Table 7 shows details. The experiment demonstrates that LSTM is able to work well with distributed representations, that LSTM is able to learn to perform calculations involving continuous values, and that there is no significant, harmful internal state drift, since the system manages to store continuous values without deterioration for minimal delays of T/2 time steps.

5.5 Experiment 5: Multiplication Problem. One may argue that LSTM is a bit biased toward tasks such as the adding problem from the previous subsection. Solutions to the adding problem may exploit the CEC's built-in integration capabilities. Although this CEC property may be viewed as a
Table 8: Experiment 5: Results for the Multiplication Problem.

T     Minimal Lag   Number of Weights   nseq   Number of Wrong Predictions   MSE      Success After
100   50            93                  140    139 out of 2560               0.0223   482,000
100   50            93                  13     14 out of 2560                0.0139   1,273,000
Notes: T is the minimal sequence length and T/2 the minimal time lag. We test on a test set containing 2560 sequences as soon as less than nseq of the 2000 most recent training sequences lead to error > 0.04. “Number of Wrong Predictions” is the number of test sequences with error > 0.04. MSE is the mean squared error on the test set. The right-most column lists numbers of training sequences required to achieve the stopping criterion. All values are means of 10 trials.
feature rather than a disadvantage (integration seems to be a natural subtask of many tasks occurring in the real world), the question arises whether LSTM can also solve tasks with inherently nonintegrative solutions. To test this, we change the problem by requiring the final target to equal the product (instead of the sum) of earlier marked inputs.

5.5.1 Task. This is like the task in section 5.4, except that the first component of each pair is a real value randomly chosen from the interval [0, 1]. In the rare case where the first pair of the input sequence gets marked, we set X1 to 1.0. The target at sequence end is the product X1 × X2.

5.5.2 Architecture. This is as in section 5.4. All weights (including the bias weights) are randomly initialized in the range [−0.1, 0.1].

5.5.3 Training/Testing. The learning rate is 0.1. We test performance twice: as soon as less than nseq of the 2000 most recent training sequences lead to absolute errors exceeding 0.04, where nseq = 140 and nseq = 13. Why these values? nseq = 140 is sufficient to learn storage of the relevant inputs, but not enough to fine-tune the precise final outputs. nseq = 13, however, leads to quite satisfactory results.

5.5.4 Results. For nseq = 140 (nseq = 13), with a test set consisting of 2560 randomly chosen sequences, the average test set error was always below 0.026 (0.013), and there were never more than 170 (15) incorrectly processed sequences. Table 8 shows details. (A net with additional standard hidden units or with a hidden layer above the memory cells may learn the fine-tuning part more quickly.) The experiment demonstrates that LSTM can solve tasks involving both continuous-valued representations and nonintegrative information processing.
5.6 Experiment 6: Temporal Order. In this subsection, LSTM solves other difficult (but artificial) tasks that have never been solved by previous recurrent net algorithms. The experiment shows that LSTM is able to extract information conveyed by the temporal order of widely separated inputs.

5.6.1 Task 6a: Two Relevant, Widely Separated Symbols. The goal is to classify sequences. Elements and targets are represented locally (input vectors with only one nonzero bit). The sequence starts with an E, ends with a B (the "trigger symbol"), and otherwise consists of randomly chosen symbols from the set {a, b, c, d} except for two elements at positions t1 and t2 that are either X or Y. The sequence length is randomly chosen between 100 and 110, t1 is randomly chosen between 10 and 20, and t2 is randomly chosen between 50 and 60. There are four sequence classes, Q, R, S, U, which depend on the temporal order of X and Y. The rules are: X, X → Q; X, Y → R; Y, X → S; Y, Y → U.

5.6.2 Task 6b: Three Relevant, Widely Separated Symbols. Again, the goal is to classify sequences. Elements and targets are represented locally. The sequence starts with an E, ends with a B (the trigger symbol), and otherwise consists of randomly chosen symbols from the set {a, b, c, d} except for three elements at positions t1, t2, and t3 that are either X or Y. The sequence length is randomly chosen between 100 and 110, t1 is randomly chosen between 10 and 20, t2 is randomly chosen between 33 and 43, and t3 is randomly chosen between 66 and 76. There are eight sequence classes, Q, R, S, U, V, A, B, C, which depend on the temporal order of the Xs and Ys. The rules are: X, X, X → Q; X, X, Y → R; X, Y, X → S; X, Y, Y → U; Y, X, X → V; Y, X, Y → A; Y, Y, X → B; Y, Y, Y → C.

There are as many output units as there are classes. Each class is locally represented by a binary target vector with one nonzero component. With both tasks, error signals occur only at the end of a sequence. The sequence is classified correctly if the final absolute error of all output units is below 0.3.

Architecture. We use a three-layer net with eight input units, two (three) cell blocks of size 2, and four (eight) output units for task 6a (6b). Again, all noninput units have bias weights, and the output layer receives connections from memory cells only. Memory cells and gate units receive inputs from input units, memory cells, and gate units (the hidden layer is fully connected; less connectivity may work as well). The architecture parameters for task 6a (6b) make it easy to store at least two (three) input signals. All activation functions are logistic with output range [0, 1], except for h, whose range is [−1, 1], and g, whose range is [−2, 2].
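A minimal generator for task 6a sequences, following the description above (our sketch; task 6b is analogous with three marked positions):

```python
import random

CLASS = {("X", "X"): "Q", ("X", "Y"): "R", ("Y", "X"): "S", ("Y", "Y"): "U"}

def task6a_sequence(rng):
    """One task 6a sequence (section 5.6.1) and its class label."""
    length = rng.randint(100, 110)                 # bounds inclusive
    t1, t2 = rng.randint(10, 20), rng.randint(50, 60)
    seq = ["E"] + [rng.choice("abcd") for _ in range(length - 2)] + ["B"]
    pair = (rng.choice("XY"), rng.choice("XY"))    # the two relevant symbols
    seq[t1], seq[t2] = pair
    return seq, CLASS[pair]

seq, label = task6a_sequence(random.Random(0))
print("".join(seq), "->", label)
```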
Table 9: Experiment 6: Results for the Temporal Order Problem.

Task      Number of Weights   Number of Wrong Predictions   Success After
Task 6a   156                 1 out of 2560                 31,390
Task 6b   308                 2 out of 2560                 571,100
Notes: “Number of Wrong Predictions” is the number of incorrectly classified sequences (error > 0.3 for at least one output unit) from a test set containing 2560 sequences. The right-most column gives the number of training sequences required to achieve the stopping criterion. The results for task 6a are means of 20 trials; those for task 6b of 10 trials.
Training/Testing. The learning rate is 0.5 (0.1) for experiment 6a (6b). Training is stopped once the average training error falls below 0.1 and the 2000 most recent sequences were classified correctly. All weights are initialized in the range [−0.1, 0.1]. The first input gate bias is initialized with −2.0, the second with −4.0, and (for experiment 6b) the third with −6.0 (again, we confirmed by additional experiments that the precise values hardly matter).

Results. With a test set consisting of 2560 randomly chosen sequences, the average test set error was always below 0.1, and there were never more than three incorrectly classified sequences. Table 9 shows details. The experiment shows that LSTM is able to extract information conveyed by the temporal order of widely separated inputs. In task 6a, for instance, the delays between the first and second relevant input and between the second relevant input and sequence end are at least 30 time steps.

Typical Solutions. In experiment 6a, how does LSTM distinguish between temporal orders (X, Y) and (Y, X)? One of many possible solutions is to store the first X or Y in cell block 1 and the second X/Y in cell block 2. Before the first X/Y occurs, block 1 can see that it is still empty by means of its recurrent connections. After the first X/Y, block 1 can close its input gate. Once block 1 is filled and closed, this fact becomes visible to block 2 (recall that all gate units and all memory cells receive connections from all nonoutput units). Typical solutions, however, require only one memory cell block. The block stores the first X or Y; once the second X/Y occurs, it changes its state depending on the first stored symbol. Solution type 1 exploits the connection between memory cell output and input gate unit. The following events cause different input gate activations: X occurring in conjunction with a filled block and X occurring in conjunction with an empty block. Solution type 2 is based on a strong, positive connection between memory cell output and memory cell input. The previous occurrence of X (Y) is represented by a
Table 10: Summary of Experimental Conditions for LSTM, Part I.

(1) Task   (2) p   (3) lag   (4) b   (5) s   (6) in   (7) out   (8) w     (9) c   (10) ogb          (11) igb      (12) bias   (13) h   (14) g   (15) α
1-1        9       9         4       1       7        7         264       F       −1, −2, −3, −4    r             ga          h1       g2       0.1
1-2        9       9         3       2       7        7         276       F       −1, −2, −3        r             ga          h1       g2       0.1
1-3        9       9         3       2       7        7         276       F       −1, −2, −3        r             ga          h1       g2       0.2
1-4        9       9         4       1       7        7         264       F       −1, −2, −3, −4    r             ga          h1       g2       0.5
1-5        9       9         3       2       7        7         276       F       −1, −2, −3        r             ga          h1       g2       0.5
2a         100     100       1       1       101      101       10,504    B       no og             None          None        id       g1       1.0
2b         100     100       1       1       101      101       10,504    B       no og             None          None        id       g1       1.0
2c-1       50      50        2       1       54       2         364       F       None              None          None        h1       g2       0.01
2c-2       100     100       2       1       104      2         664       F       None              None          None        h1       g2       0.01
2c-3       200     200       2       1       204      2         1264      F       None              None          None        h1       g2       0.01
2c-4       500     500       2       1       504      2         3064      F       None              None          None        h1       g2       0.01
2c-5       1000    1000      2       1       1004     2         6064      F       None              None          None        h1       g2       0.01
2c-6       1000    1000      2       1       504      2         3064      F       None              None          None        h1       g2       0.01
2c-7       1000    1000      2       1       204      2         1264      F       None              None          None        h1       g2       0.01
2c-8       1000    1000      2       1       104      2         664       F       None              None          None        h1       g2       0.01
2c-9       1000    1000      2       1       54       2         364       F       None              None          None        h1       g2       0.01
3a         100     100       3       1       1        1         102       F       −2, −4, −6        −1, −3, −5    b1          h1       g2       1.0
3b         100     100       3       1       1        1         102       F       −2, −4, −6        −1, −3, −5    b1          h1       g2       1.0
3c         100     100       3       1       1        1         102       F       −2, −4, −6        −1, −3, −5    b1          h1       g2       0.1
4-1        100     50        2       2       2        1         93        F       r                 −3, −6        All         h1       g2       0.5
4-2        500     250       2       2       2        1         93        F       r                 −3, −6        All         h1       g2       0.5
4-3        1000    500       2       2       2        1         93        F       r                 −3, −6        All         h1       g2       0.5
5          100     50        2       2       2        1         93        F       r                 r             All         h1       g2       0.1
6a         100     40        2       2       8        4         156       F       r                 −2, −4        All         h1       g2       0.5
6b         100     24        3       2       8        8         308       F       r                 −2, −4, −6    All         h1       g2       0.1
Notes: Col. 1: task number. Col. 2: minimal sequence length p. Col. 3: minimal number of steps between most recent relevant input information and teacher signal. Col. 4: number of cell blocks b. Col. 5: block size s. Col. 6: Number of input units in. Col. 7: Number of output units out. Col. 8: number of weights w. Col. 9: c describes connectivity: F means “output layer receives connections from memory cells; memory cells and gate units receive connections from input units, memory cells and gate units”; B means “each layer receives connections from all layers below.” Col. 10: Initial output gate bias ogb, where r stands for “randomly chosen from the interval [−0.1, 0.1]” and no og means “no output gate used.” Col. 11: initial input gate bias igb (see Col. 10). Col. 12: which units have bias weights? b1 stands for “all hidden units”, ga for “only gate units,” and all for “all noninput units.” Col. 13: the function h, where id is identity function, h1 is logistic sigmoid in [−2, 2]. Col. 14: the logistic function g, where g1 is sigmoid in [0, 1], g2 in [−1, 1]. Col. 15: learning rate α.
positive (negative) internal state. Once the input gate opens for the second time, so does the output gate, and the memory cell output is fed back to its own input. This causes (X, Y) to be represented by a positive internal state, because X contributes to the new internal state twice (via current internal state and cell output feedback). Similarly, (Y, X) gets represented by a negative internal state.

5.7 Summary of Experimental Conditions. Tables 10 and 11 provide an overview of the most important LSTM parameters and architectural details for experiments 1 through 6. The conditions of the simple experiments 2a
Table 11: Summary of Experimental Conditions for LSTM, Part II.

(1) Task   (2) Select   (3) Interval    (4) Test Set Size   (5) Stopping Criterion              (6) Success
1          t1           [−0.2, 0.2]     256                 Training and test correctly pred.   See text
2a         t1           [−0.2, 0.2]     no test set         After 5 million exemplars           ABS(0.25)
2b         t2           [−0.2, 0.2]     10,000              After 5 million exemplars           ABS(0.25)
2c         t2           [−0.2, 0.2]     10,000              After 5 million exemplars           ABS(0.2)
3a         t3           [−0.1, 0.1]     2560                ST1 and ST2 (see text)              ABS(0.2)
3b         t3           [−0.1, 0.1]     2560                ST1 and ST2 (see text)              ABS(0.2)
3c         t3           [−0.1, 0.1]     2560                ST1 and ST2 (see text)              See text
4          t3           [−0.1, 0.1]     2560                ST3(0.01)                           ABS(0.04)
5          t3           [−0.1, 0.1]     2560                see text                            ABS(0.04)
6a         t3           [−0.1, 0.1]     2560                ST3(0.1)                            ABS(0.3)
6b         t3           [−0.1, 0.1]     2560                ST3(0.1)                            ABS(0.3)
Notes: Col. 1: task number. Col. 2: training exemplar selection, where t1 stands for "randomly chosen from the training set," t2 for "randomly chosen from two classes," and t3 for "randomly generated online." Col. 3: weight initialization interval. Col. 4: test set size. Col. 5: stopping criterion for training, where ST3(β) stands for "average training error below β and the 2000 most recent sequences were processed correctly." Col. 6: success (correct classification) criterion, where ABS(β) stands for "absolute error of all output units at sequence end is below β."
and 2b differ slightly from those of the other, more systematic experiments, for historical reasons.

6 Discussion

6.1 Limitations of LSTM.

• The particularly efficient truncated backpropagation version of the LSTM algorithm will not easily solve problems similar to strongly delayed XOR problems, where the goal is to compute the XOR of two widely separated inputs that previously occurred somewhere in a noisy sequence. The reason is that storing only one of the inputs will not help to reduce the expected error; the task is nondecomposable in the sense that it is impossible to reduce the error incrementally by first solving an easier subgoal. In theory, this limitation can be circumvented by using the full gradient (perhaps with additional conventional hidden units receiving input from the memory cells). But we do not recommend computing the full gradient for the following reasons: (1) it increases computational complexity; (2) constant error flow through CECs can be shown only for truncated LSTM; and (3) we actually did conduct a few experiments with nontruncated LSTM. There was no significant difference from truncated LSTM, exactly because outside the CECs, error flow tends
to vanish quickly. For the same reason, full BPTT does not outperform truncated BPTT.

• Each memory cell block needs two additional units (input and output gate). In comparison to standard recurrent nets, however, this does not increase the number of weights by more than a factor of 9: each conventional hidden unit is replaced by at most three units in the LSTM architecture, increasing the number of weights by a factor of 3² in the fully connected case. Note, however, that our experiments use quite comparable weight numbers for the architectures of LSTM and competing approaches.

• Due to its constant error flow through CECs within memory cells, LSTM generally runs into problems similar to those of feedforward nets that see the entire input string at once. For instance, there are tasks that can be quickly solved by random weight guessing but not by the truncated LSTM algorithm with small weight initializations, such as the 500-step parity problem (see the introduction to section 5). Here, LSTM's problems are similar to those of a feedforward net with 500 inputs trying to solve 500-bit parity. Indeed, LSTM typically behaves much like a feedforward net trained by backpropagation that sees the entire input. But that is also precisely why it so clearly outperforms previous approaches on many nontrivial tasks with significant search spaces.

• LSTM does not have any problems with the notion of recency that go beyond those of other approaches. All gradient-based approaches, however, suffer from a practical inability to count discrete time steps precisely. If it makes a difference whether a certain signal occurred 99 or 100 steps ago, then an additional counting mechanism seems necessary. Easier tasks, however, such as one that requires distinguishing only between, say, 3 and 11 steps, do not pose any problems to LSTM. For instance, by generating an appropriate negative connection between memory cell output and input, LSTM can give more weight to recent inputs and learn decays where necessary.

6.2 Advantages of LSTM.

• The constant error backpropagation within memory cells results in LSTM's ability to bridge very long time lags in the case of problems similar to those discussed above.

• For long-time-lag problems such as those discussed in this article, LSTM can handle noise, distributed representations, and continuous values. In contrast to finite state automata or hidden Markov models, LSTM does not require an a priori choice of a finite number of states. In principle, it can deal with unlimited state numbers.
• For problems discussed in this article, LSTM generalizes well, even if the positions of widely separated, relevant inputs in the input sequence do not matter. Unlike previous approaches, ours quickly learns to distinguish between two or more widely separated occurrences of a particular element in an input sequence, without depending on appropriate short-time-lag training exemplars.

• There appears to be no need for parameter fine-tuning. LSTM works well over a broad range of parameters such as learning rate, input gate bias, and output gate bias. For instance, to some readers the learning rates used in our experiments may seem large. However, a large learning rate pushes the output gates toward zero, thus automatically countermanding its own negative effects.

• The LSTM algorithm's update complexity per weight and time step is essentially that of BPTT, namely, O(1). This is excellent in comparison to other approaches such as RTRL. Unlike full BPTT, however, LSTM is local in both space and time.

7 Conclusion

Each memory cell's internal architecture guarantees constant error flow within its CEC, provided that truncated backpropagation cuts off error flow trying to leak out of memory cells. This represents the basis for bridging very long time lags. Two gate units learn to open and close access to error flow within each memory cell's CEC. The multiplicative input gate affords protection of the CEC from perturbation by irrelevant inputs. Similarly, the multiplicative output gate protects other units from perturbation by currently irrelevant memory contents.

To find out about LSTM's practical limitations, we intend to apply it to real-world data. Application areas will include time-series prediction, music composition, and speech processing. It will also be interesting to augment sequence chunkers (Schmidhuber, 1992b, 1993) by LSTM to combine the advantages of both.

Appendix

A.1 Algorithm Details. In what follows, the index k ranges over output units, i ranges over hidden units, c_j stands for the jth memory cell block, c_j^v denotes the vth unit of memory cell block c_j, u, l, m stand for arbitrary units, and t ranges over all time steps of a given input sequence.

The gate unit logistic sigmoid (with range [0, 1]) used in the experiments is

    f(x) = \frac{1}{1 + \exp(-x)} .   (A.1)
The function h (with range [−1, 1]) used in the experiments is

    h(x) = \frac{2}{1 + \exp(-x)} - 1 .   (A.2)

The function g (with range [−2, 2]) used in the experiments is

    g(x) = \frac{4}{1 + \exp(-x)} - 2 .   (A.3)
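For concreteness, the three squashing functions can be written out directly. The following is a minimal Python sketch (only the formulas of equations A.1 through A.3 come from the paper; the NumPy dependency and the naming are our own):

    import numpy as np

    def f(x):
        # Gate activation of equation A.1: logistic sigmoid with range [0, 1].
        return 1.0 / (1.0 + np.exp(-x))

    def h(x):
        # Cell output squashing of equation A.2: range [-1, 1].
        return 2.0 / (1.0 + np.exp(-x)) - 1.0

    def g(x):
        # Cell input squashing of equation A.3: range [-2, 2].
        return 4.0 / (1.0 + np.exp(-x)) - 2.0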
A.1.1 Forward Pass. The net input and the activation of hidden unit i are

    net_i(t) = \sum_u w_{iu} y^u(t-1) ,   (A.4)
    y^i(t) = f_i(net_i(t)) .

The net input and the activation of in_j are

    net_{in_j}(t) = \sum_u w_{in_j u} y^u(t-1) ,   (A.5)
    y^{in_j}(t) = f_{in_j}(net_{in_j}(t)) .

The net input and the activation of out_j are

    net_{out_j}(t) = \sum_u w_{out_j u} y^u(t-1) ,   (A.6)
    y^{out_j}(t) = f_{out_j}(net_{out_j}(t)) .

The net input net_{c_j^v}, the internal state s_{c_j^v}, and the output activation y^{c_j^v} of the vth memory cell of memory cell block c_j are:

    net_{c_j^v}(t) = \sum_u w_{c_j^v u} y^u(t-1) ,   (A.7)
    s_{c_j^v}(t) = s_{c_j^v}(t-1) + y^{in_j}(t) \, g(net_{c_j^v}(t)) ,
    y^{c_j^v}(t) = y^{out_j}(t) \, h(s_{c_j^v}(t)) .

The net input and the activation of output unit k are

    net_k(t) = \sum_{u:\, u \text{ not a gate}} w_{ku} y^u(t-1) ,
    y^k(t) = f_k(net_k(t)) .

The backward pass to be described later is based on the following truncated backpropagation formulas.
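Before stating those formulas, it may help to see the forward pass of equations A.4 through A.7 for a single one-cell memory block in code. This is a sketch only: it reuses f, g, and h from the fragment above, all variable names are our own, and it is not the authors' implementation:

    def cell_block_forward(y_prev, s_prev, w_in, w_out, w_c):
        # y_prev : vector of source activations y^u(t-1)
        # s_prev : internal state s_c(t-1) carried across time
        net_in = w_in @ y_prev              # input gate net input (A.5)
        net_out = w_out @ y_prev            # output gate net input (A.6)
        net_c = w_c @ y_prev                # cell net input (A.7)
        s = s_prev + f(net_in) * g(net_c)   # the CEC: purely additive state update
        y_c = f(net_out) * h(s)             # output gated by the output gate
        return y_c, s

The additive update of s is the constant error carousel: nothing multiplies the old state, which is what the backward-pass analysis below exploits.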
A.1.2 Approximate Derivatives for Truncated Backpropagation. The truncated version (see section 4) only approximates the partial derivatives, which is reflected by the ≈_tr signs in the notation below. It truncates error flow once it leaves memory cells or gate units. Truncation ensures that there are no loops across which an error that left some memory cell through its input or input gate can reenter the cell through its output or output gate. This in turn ensures constant error flow through the memory cell's CEC. In the truncated backpropagation version, the following derivatives are replaced by zero:

    \frac{\partial net_{in_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \quad \forall u ,
    \frac{\partial net_{out_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \quad \forall u ,

and

    \frac{\partial net_{c_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \quad \forall u .

Therefore we get

    \frac{\partial y^{in_j}(t)}{\partial y^u(t-1)} = f'_{in_j}(net_{in_j}(t)) \frac{\partial net_{in_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \quad \forall u ,
    \frac{\partial y^{out_j}(t)}{\partial y^u(t-1)} = f'_{out_j}(net_{out_j}(t)) \frac{\partial net_{out_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \quad \forall u ,

and

    \frac{\partial y^{c_j}(t)}{\partial y^u(t-1)} = \frac{\partial y^{c_j}(t)}{\partial net_{out_j}(t)} \frac{\partial net_{out_j}(t)}{\partial y^u(t-1)} + \frac{\partial y^{c_j}(t)}{\partial net_{in_j}(t)} \frac{\partial net_{in_j}(t)}{\partial y^u(t-1)} + \frac{\partial y^{c_j}(t)}{\partial net_{c_j}(t)} \frac{\partial net_{c_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \quad \forall u .
This implies for all w_{lm} not on connections to c_j^v, in_j, out_j (that is, l ∉ {c_j^v, in_j, out_j}):

    \frac{\partial y^{c_j^v}(t)}{\partial w_{lm}} = \sum_u \frac{\partial y^{c_j^v}(t)}{\partial y^u(t-1)} \frac{\partial y^u(t-1)}{\partial w_{lm}} \approx_{tr} 0 .

The truncated derivatives of output unit k are:

    \frac{\partial y^k(t)}{\partial w_{lm}} = f'_k(net_k(t)) \left( \sum_{u:\, u \text{ not a gate}} w_{ku} \frac{\partial y^u(t-1)}{\partial w_{lm}} + \delta_{kl} \, y^m(t-1) \right)

    \approx_{tr} f'_k(net_k(t)) \left( \sum_j \sum_{v=1}^{S_j} \delta_{c_j^v l} \, w_{k c_j^v} \frac{\partial y^{c_j^v}(t-1)}{\partial w_{lm}} + \sum_j (\delta_{in_j l} + \delta_{out_j l}) \sum_{v=1}^{S_j} w_{k c_j^v} \frac{\partial y^{c_j^v}(t-1)}{\partial w_{lm}} + \sum_{i:\, i \text{ hidden unit}} w_{ki} \frac{\partial y^i(t-1)}{\partial w_{lm}} + \delta_{kl} \, y^m(t-1) \right)

    = f'_k(net_k(t)) \begin{cases} y^m(t-1) & l = k \\ w_{k c_j^v} \, \partial y^{c_j^v}(t-1) / \partial w_{lm} & l = c_j^v \\ \sum_{v=1}^{S_j} w_{k c_j^v} \, \partial y^{c_j^v}(t-1) / \partial w_{lm} & l = in_j \text{ or } l = out_j \\ \sum_{i:\, i \text{ hidden unit}} w_{ki} \, \partial y^i(t-1) / \partial w_{lm} & \text{otherwise} \end{cases}   (A.8)
where δ is the Kronecker delta (δ_{ab} = 1 if a = b and 0 otherwise), and S_j is the size of memory cell block c_j. The truncated derivatives of a hidden unit i that is not part of a memory cell are:

    \frac{\partial y^i(t)}{\partial w_{lm}} = f'_i(net_i(t)) \frac{\partial net_i(t)}{\partial w_{lm}} \approx_{tr} \delta_{li} \, f'_i(net_i(t)) \, y^m(t-1) .   (A.9)
(Here it would be possible to use the full gradient without affecting constant error flow through internal states of memory cells.) Cell block c_j's truncated derivatives are:

    \frac{\partial y^{in_j}(t)}{\partial w_{lm}} = f'_{in_j}(net_{in_j}(t)) \frac{\partial net_{in_j}(t)}{\partial w_{lm}} \approx_{tr} \delta_{in_j l} \, f'_{in_j}(net_{in_j}(t)) \, y^m(t-1) .   (A.10)

    \frac{\partial y^{out_j}(t)}{\partial w_{lm}} = f'_{out_j}(net_{out_j}(t)) \frac{\partial net_{out_j}(t)}{\partial w_{lm}} \approx_{tr} \delta_{out_j l} \, f'_{out_j}(net_{out_j}(t)) \, y^m(t-1) .   (A.11)

    \frac{\partial s_{c_j^v}(t)}{\partial w_{lm}} = \frac{\partial s_{c_j^v}(t-1)}{\partial w_{lm}} + \frac{\partial y^{in_j}(t)}{\partial w_{lm}} g(net_{c_j^v}(t)) + y^{in_j}(t) \, g'(net_{c_j^v}(t)) \frac{\partial net_{c_j^v}(t)}{\partial w_{lm}}
    \approx_{tr} (\delta_{in_j l} + \delta_{c_j^v l}) \frac{\partial s_{c_j^v}(t-1)}{\partial w_{lm}} + \delta_{in_j l} \frac{\partial y^{in_j}(t)}{\partial w_{lm}} g(net_{c_j^v}(t)) + \delta_{c_j^v l} \, y^{in_j}(t) \, g'(net_{c_j^v}(t)) \frac{\partial net_{c_j^v}(t)}{\partial w_{lm}}
    = (\delta_{in_j l} + \delta_{c_j^v l}) \frac{\partial s_{c_j^v}(t-1)}{\partial w_{lm}} + \delta_{in_j l} \, f'_{in_j}(net_{in_j}(t)) \, g(net_{c_j^v}(t)) \, y^m(t-1) + \delta_{c_j^v l} \, y^{in_j}(t) \, g'(net_{c_j^v}(t)) \, y^m(t-1) .   (A.12)

    \frac{\partial y^{c_j^v}(t)}{\partial w_{lm}} = \frac{\partial y^{out_j}(t)}{\partial w_{lm}} h(s_{c_j^v}(t)) + h'(s_{c_j^v}(t)) \frac{\partial s_{c_j^v}(t)}{\partial w_{lm}} y^{out_j}(t)
    \approx_{tr} \delta_{out_j l} \frac{\partial y^{out_j}(t)}{\partial w_{lm}} h(s_{c_j^v}(t)) + (\delta_{in_j l} + \delta_{c_j^v l}) \, h'(s_{c_j^v}(t)) \frac{\partial s_{c_j^v}(t)}{\partial w_{lm}} y^{out_j}(t) .   (A.13)
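The backward pass below repeatedly needs the derivatives of the squashing functions of equations A.1 through A.3. A small helper sketch (again our own code, not the paper's):

    def f_prime(x):
        # Derivative of the [0, 1] sigmoid f of equation A.1.
        y = f(x)
        return y * (1.0 - y)

    def g_prime(x):
        # Derivative of g(x) = 4/(1 + exp(-x)) - 2 (equation A.3).
        y = 1.0 / (1.0 + np.exp(-x))
        return 4.0 * y * (1.0 - y)

    def h_prime(x):
        # Derivative of h(x) = 2/(1 + exp(-x)) - 1 (equation A.2).
        y = 1.0 / (1.0 + np.exp(-x))
        return 2.0 * y * (1.0 - y)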
To update the system efficiently at time t, the only (truncated) derivatives that need to be stored at time t − 1 are \partial s_{c_j^v}(t-1) / \partial w_{lm}, where l = c_j^v or l = in_j.

A.1.3 Backward Pass. We will describe the backward pass only for the particularly efficient truncated gradient version of the LSTM algorithm. For simplicity we will use equal signs even where approximations are made according to the truncated backpropagation equations above. The squared error at time t is given by

    E(t) = \sum_{k:\, k \text{ output unit}} \left( t^k(t) - y^k(t) \right)^2 ,   (A.14)
where t^k(t) is output unit k's target at time t. Time t's contribution to w_{lm}'s gradient-based update with learning rate α is

    \Delta w_{lm}(t) = -\alpha \frac{\partial E(t)}{\partial w_{lm}} .   (A.15)

We define some unit l's error at time step t by

    e_l(t) := -\frac{\partial E(t)}{\partial net_l(t)} .   (A.16)
Using (almost) standard backpropagation, we first compute updates for weights to output units (l = k), weights to hidden units (l = i), and weights to output gates (l = out_j). We obtain (compare formulas A.8, A.9, and A.11):

    l = k \text{ (output)}: \quad e_k(t) = f'_k(net_k(t)) \left( t^k(t) - y^k(t) \right) ,   (A.17)

    l = i \text{ (hidden)}: \quad e_i(t) = f'_i(net_i(t)) \sum_{k:\, k \text{ output unit}} w_{ki} \, e_k(t) ,   (A.18)

    l = out_j \text{ (output gates)}: \quad e_{out_j}(t) = f'_{out_j}(net_{out_j}(t)) \sum_{v=1}^{S_j} h(s_{c_j^v}(t)) \sum_{k:\, k \text{ output unit}} w_{k c_j^v} \, e_k(t) .   (A.19)

For all possible l, time t's contribution to w_{lm}'s update is

    \Delta w_{lm}(t) = \alpha \, e_l(t) \, y^m(t-1) .   (A.20)
The remaining updates for weights to input gates (l = in_j) and to cell units (l = c_j^v) are less conventional. We define some internal state s_{c_j^v}'s error:

    e_{s_{c_j^v}}(t) := -\frac{\partial E(t)}{\partial s_{c_j^v}(t)} = f_{out_j}(net_{out_j}(t)) \, h'(s_{c_j^v}(t)) \sum_{k:\, k \text{ output unit}} w_{k c_j^v} \, e_k(t) .   (A.21)

We obtain for l = in_j or l = c_j^v, v = 1, \ldots, S_j:

    -\frac{\partial E(t)}{\partial w_{lm}} = \sum_{v=1}^{S_j} e_{s_{c_j^v}}(t) \frac{\partial s_{c_j^v}(t)}{\partial w_{lm}} .   (A.22)
The derivatives of the internal states with respect to weights and the corresponding weight updates are as follows (compare expression A.12):

    l = in_j \text{ (input gates)}: \quad \frac{\partial s_{c_j^v}(t)}{\partial w_{in_j m}} = \frac{\partial s_{c_j^v}(t-1)}{\partial w_{in_j m}} + g(net_{c_j^v}(t)) \, f'_{in_j}(net_{in_j}(t)) \, y^m(t-1) ;   (A.23)

therefore, time t's contribution to w_{in_j m}'s update is (compare expression A.8):

    \Delta w_{in_j m}(t) = \alpha \sum_{v=1}^{S_j} e_{s_{c_j^v}}(t) \frac{\partial s_{c_j^v}(t)}{\partial w_{in_j m}} .   (A.24)

Similarly we get (compare expression A.12):

    l = c_j^v \text{ (memory cells)}: \quad \frac{\partial s_{c_j^v}(t)}{\partial w_{c_j^v m}} = \frac{\partial s_{c_j^v}(t-1)}{\partial w_{c_j^v m}} + g'(net_{c_j^v}(t)) \, f_{in_j}(net_{in_j}(t)) \, y^m(t-1) ;   (A.25)

therefore, time t's contribution to w_{c_j^v m}'s update is (compare expression A.8):

    \Delta w_{c_j^v m}(t) = \alpha \, e_{s_{c_j^v}}(t) \frac{\partial s_{c_j^v}(t)}{\partial w_{c_j^v m}} .   (A.26)
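Concretely, this less conventional part of the backward pass (equations A.21 through A.26) can be sketched for a single one-cell block as follows. Here e_k is the vector of output-unit errors from equation A.17, w_kc the weights from the cell to the output units, and dS_in, dS_c the stored derivatives carried over from the previous time step; all names are ours, and this is a sketch, not the authors' code:

    def cell_block_backward(e_k, w_kc, net_out, s_c, net_in, net_c,
                            y_prev, dS_in, dS_c, alpha):
        # Internal state error (equation A.21).
        e_s = f(net_out) * h_prime(s_c) * (w_kc @ e_k)
        # Recursions A.23 and A.25: update the stored derivatives.
        dS_in = dS_in + g(net_c) * f_prime(net_in) * y_prev
        dS_c = dS_c + g_prime(net_c) * f(net_in) * y_prev
        # Contributions to the weight updates (equations A.24 and A.26).
        dw_in = alpha * e_s * dS_in
        dw_c = alpha * e_s * dS_c
        return dw_in, dw_c, dS_in, dS_c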
All we need to implement for the backward pass are equations A.17 through A.21 and A.23 through A.26. Each weight's total update is the sum of the contributions of all time steps.

A.1.4 Computational Complexity. LSTM's update complexity per time step is

    O(KH + KCS + HI + CSI) = O(W) ,   (A.27)
where K is the number of output units, C is the number of memory cell blocks, S > 0 is the size of the memory cell blocks, H is the number of hidden units, I is the (maximal) number of units forward connected to memory cells, gate units, and hidden units, and W = KH + KCS + CSI + 2CI + HI = O(KH + KCS + CSI + HI) is the number of weights. Expression A.27 is obtained by considering all computations of the backward pass: equation A.17 needs K steps; A.18 needs KH steps; A.19 needs KSC steps; A.20 needs K(H + C) steps for output units, HI steps for hidden units, and CI steps for output gates; A.21 needs KCS steps; A.23 needs CSI steps; A.24 needs CSI steps; A.25 needs CSI steps; A.26 needs CSI steps. The total is K + 2KH + KC + 2KSC + HI + CI + 4CSI steps, or O(KH + KSC + HI + CSI) steps. We conclude that the LSTM algorithm's update complexity per time step is just like BPTT's for a fully recurrent net. At a given time step, only the 2CSI most recent \partial s_{c_j^v} / \partial w_{lm} values from equations A.23 and A.25 need to be stored. Hence LSTM's storage complexity also is O(W); it does not depend on the input sequence length.

A.2 Error Flow. We compute how much an error signal is scaled while flowing back through a memory cell for q time steps. As a by-product, this analysis reconfirms that the error flow within a memory cell's CEC is indeed constant, provided that truncated backpropagation cuts off error flow trying to leave memory cells (see also section 3.2). The analysis also highlights a
potential for undesirable long-term drifts of s_{c_j}, as well as the beneficial, countermanding influence of negatively biased input gates. Using the truncated backpropagation learning rule, we obtain

    \frac{\partial s_{c_j}(t-k)}{\partial s_{c_j}(t-k-1)} = 1 + \frac{\partial y^{in_j}(t-k)}{\partial s_{c_j}(t-k-1)} g(net_{c_j}(t-k)) + y^{in_j}(t-k) \, g'(net_{c_j}(t-k)) \frac{\partial net_{c_j}(t-k)}{\partial s_{c_j}(t-k-1)}
    = 1 + \left[ \sum_u \frac{\partial y^{in_j}(t-k)}{\partial y^u(t-k-1)} \frac{\partial y^u(t-k-1)}{\partial s_{c_j}(t-k-1)} \right] g(net_{c_j}(t-k)) + y^{in_j}(t-k) \, g'(net_{c_j}(t-k)) \left[ \sum_u \frac{\partial net_{c_j}(t-k)}{\partial y^u(t-k-1)} \frac{\partial y^u(t-k-1)}{\partial s_{c_j}(t-k-1)} \right]
    \approx_{tr} 1 .   (A.28)

The ≈_tr sign indicates equality due to the fact that truncated backpropagation replaces the following derivatives by zero:

    \frac{\partial y^{in_j}(t-k)}{\partial y^u(t-k-1)} \quad \forall u \qquad \text{and} \qquad \frac{\partial net_{c_j}(t-k)}{\partial y^u(t-k-1)} \quad \forall u .
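The practical import of equation A.28 can be seen with two lines of arithmetic. In a conventional unit, the analogous per-step factor is roughly f'(net) · w and gets raised to the qth power over a lag of q steps, whereas the CEC factor is 1 at every step. A toy numerical illustration (the weight value is invented for the example):

    q = 100                       # length of the time lag
    w = 1.0                       # recurrent weight of a conventional unit
    max_slope = 0.25              # largest possible slope of a logistic sigmoid
    print((w * max_slope) ** q)   # ~6.2e-61: the error signal has vanished
    print(1.0 ** q)               # CEC factor from equation A.28: still 1.0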
In what follows, an error ϑ_j(t) starts flowing back at c_j's output. We redefine

    \vartheta_j(t) := \sum_i w_{i c_j} \vartheta_i(t+1) .   (A.29)

Following the definitions and conventions of section 3.1, we compute error flow for the truncated backpropagation learning rule. The error occurring at the output gate is

    \vartheta_{out_j}(t) \approx_{tr} \frac{\partial y^{out_j}(t)}{\partial net_{out_j}(t)} \frac{\partial y^{c_j}(t)}{\partial y^{out_j}(t)} \vartheta_j(t) .   (A.30)

The error occurring at the internal state is

    \vartheta_{s_{c_j}}(t) = \frac{\partial s_{c_j}(t+1)}{\partial s_{c_j}(t)} \vartheta_{s_{c_j}}(t+1) + \frac{\partial y^{c_j}(t)}{\partial s_{c_j}(t)} \vartheta_j(t) .   (A.31)

Since we use truncated backpropagation we have

    \vartheta_j(t) = \sum_{i:\, i \text{ no gate and no memory cell}} w_{i c_j} \vartheta_i(t+1) ;
therefore we get

    \frac{\partial \vartheta_j(t)}{\partial \vartheta_{s_{c_j}}(t+1)} = \sum_i w_{i c_j} \frac{\partial \vartheta_i(t+1)}{\partial \vartheta_{s_{c_j}}(t+1)} \approx_{tr} 0 .   (A.32)

Equations A.31 and A.32 imply constant error flow through internal states of memory cells:

    \frac{\partial \vartheta_{s_{c_j}}(t)}{\partial \vartheta_{s_{c_j}}(t+1)} = \frac{\partial s_{c_j}(t+1)}{\partial s_{c_j}(t)} \approx_{tr} 1 .   (A.33)

The error occurring at the memory cell input is

    \vartheta_{c_j}(t) = \frac{\partial g(net_{c_j}(t))}{\partial net_{c_j}(t)} \frac{\partial s_{c_j}(t)}{\partial g(net_{c_j}(t))} \vartheta_{s_{c_j}}(t) .   (A.34)

The error occurring at the input gate is

    \vartheta_{in_j}(t) \approx_{tr} \frac{\partial y^{in_j}(t)}{\partial net_{in_j}(t)} \frac{\partial s_{c_j}(t)}{\partial y^{in_j}(t)} \vartheta_{s_{c_j}}(t) .   (A.35)
A.2.1 No External Error Flow. Errors are propagated back from units l to unit v along outgoing connections with weights w_{lv}. This "external error" (note that for conventional units there is nothing but external error) at time t is

    \vartheta^e_v(t) = \frac{\partial y^v(t)}{\partial net_v(t)} \sum_l \frac{\partial net_l(t+1)}{\partial y^v(t)} \vartheta_l(t+1) .   (A.36)

We obtain

    \frac{\partial \vartheta^e_v(t-1)}{\partial \vartheta_j(t)} = \frac{\partial y^v(t-1)}{\partial net_v(t-1)} \left( \frac{\partial \vartheta_{out_j}(t)}{\partial \vartheta_j(t)} \frac{\partial net_{out_j}(t)}{\partial y^v(t-1)} + \frac{\partial \vartheta_{c_j}(t)}{\partial \vartheta_j(t)} \frac{\partial net_{c_j}(t)}{\partial y^v(t-1)} + \frac{\partial \vartheta_{in_j}(t)}{\partial \vartheta_j(t)} \frac{\partial net_{in_j}(t)}{\partial y^v(t-1)} \right) \approx_{tr} 0 .   (A.37)
We observe that the error ϑ_j arriving at the memory cell output is not backpropagated to units v by external connections to in_j, out_j, c_j.

A.2.2 Error Flow Within Memory Cells. We now focus on the error backflow within a memory cell's CEC. This is actually the only type of error flow that can bridge several time steps. Suppose error ϑ_j(t) arrives at c_j's output
at time t and is propagated back for q steps until it reaches in_j or the memory cell input g(net_{c_j}). It is scaled by a factor of \partial \vartheta_v(t-q) / \partial \vartheta_j(t), where v = in_j, c_j. We first compute

    \frac{\partial \vartheta_{s_{c_j}}(t-q)}{\partial \vartheta_j(t)} \approx_{tr} \begin{cases} \partial y^{c_j}(t) / \partial s_{c_j}(t) & q = 0 \\ \dfrac{\partial s_{c_j}(t-q+1)}{\partial s_{c_j}(t-q)} \dfrac{\partial \vartheta_{s_{c_j}}(t-q+1)}{\partial \vartheta_j(t)} & q > 0 \end{cases} .   (A.38)

Expanding equation A.38, we obtain

    \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_j(t)} \approx_{tr} \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_{s_{c_j}}(t-q)} \frac{\partial \vartheta_{s_{c_j}}(t-q)}{\partial \vartheta_j(t)}
    \approx_{tr} \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_{s_{c_j}}(t-q)} \left( \prod_{m=q}^{1} \frac{\partial s_{c_j}(t-m+1)}{\partial s_{c_j}(t-m)} \right) \frac{\partial y^{c_j}(t)}{\partial s_{c_j}(t)}
    \approx_{tr} y^{out_j}(t) \, h'(s_{c_j}(t)) \begin{cases} g'(net_{c_j}(t-q)) \, y^{in_j}(t-q) & v = c_j \\ g(net_{c_j}(t-q)) \, f'_{in_j}(net_{in_j}(t-q)) & v = in_j \end{cases} .   (A.39)

Consider the factors in the previous equation's last expression. Obviously, error flow is scaled only at times t (when it enters the cell) and t − q (when it leaves the cell), but not in between (constant error flow through the CEC). We observe:

1. The output gate's effect is that y^{out_j}(t) scales down those errors that can be reduced early during training without using the memory cell. It also scales down those errors resulting from using (activating/deactivating) the memory cell at later training stages. Without the output gate, the memory cell might, for instance, suddenly start causing avoidable errors in situations that already seemed under control (because it was easy to reduce the corresponding errors without memory cells). See "Output Weight Conflict" in section 3 and "Abuse Problem and Solution" (section 4.7).

2. If there are large positive or negative s_{c_j}(t) values (because s_{c_j} has drifted since time step t − q), then h'(s_{c_j}(t)) may be small (assuming that h is a logistic sigmoid). See section 4. Drifts of the memory cell's internal state s_{c_j} can be countermanded by negatively biasing the input gate in_j (see section 4 and the next point). Recall from section 4 that the precise bias value does not matter much.

3. y^{in_j}(t-q) and f'_{in_j}(net_{in_j}(t-q)) are small if the input gate is negatively biased (assume f_{in_j} is a logistic sigmoid). However, the potential sig-
nificance of this is negligible compared to the potential significance of drifts of the internal state s_{c_j}.

Some of the factors above may scale down LSTM's overall error flow, but not in a manner that depends on the length of the time lag. The flow will still be much more effective than an exponentially (of order q) decaying flow without memory cells.

Acknowledgments

Thanks to Mike Mozer, Wilfried Brauer, Nic Schraudolph, and several anonymous referees for valuable comments and suggestions that helped to improve a previous version of this article (Hochreiter and Schmidhuber, 1995). This work was supported by DFG grant SCHM 942/3-1 from Deutsche Forschungsgemeinschaft.

References

Almeida, L. B. (1987). A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In IEEE 1st International Conference on Neural Networks, San Diego (Vol. 2, pp. 609–618).

Baldi, P., & Pineda, F. (1991). Contrastive learning and neural oscillator. Neural Computation, 3, 526–545.

Bengio, Y., & Frasconi, P. (1994). Credit assignment through time: Alternatives to backpropagation. In J. D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in neural information processing systems 6 (pp. 75–82). San Mateo, CA: Morgan Kaufmann.

Bengio, Y., Simard, P., & Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157–166.

Cleeremans, A., Servan-Schreiber, D., & McClelland, J. L. (1989). Finite-state automata and simple recurrent networks. Neural Computation, 1, 372–381.

de Vries, B., & Principe, J. C. (1991). A theory for neural networks with time delays. In R. P. Lippmann, J. E. Moody, & D. S. Touretzky (Eds.), Advances in neural information processing systems 3 (pp. 162–168). San Mateo, CA: Morgan Kaufmann.

Doya, K. (1992). Bifurcations in the learning of recurrent neural networks. In Proceedings of 1992 IEEE International Symposium on Circuits and Systems (pp. 2777–2780).

Doya, K., & Yoshizawa, S. (1989). Adaptive neural oscillator using continuous-time backpropagation learning. Neural Networks, 2, 375–385.

Elman, J. L. (1988). Finding structure in time (Tech. Rep. No. CRL 8801). San Diego: Center for Research in Language, University of California, San Diego.

Fahlman, S. E. (1991). The recurrent cascade-correlation learning algorithm. In R. P. Lippmann, J. E. Moody, & D. S. Touretzky (Eds.), Advances in neural information processing systems 3 (pp. 190–196). San Mateo, CA: Morgan Kaufmann.
Hochreiter, J. (1991). Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München. See http://www7.informatik.tu-muenchen.de/~hochreit.

Hochreiter, S., & Schmidhuber, J. (1995). Long short-term memory (Tech. Rep. No. FKI-207-95). Fakultät für Informatik, Technische Universität München.

Hochreiter, S., & Schmidhuber, J. (1996). Bridging long time lags by weight guessing and "long short-term memory." In F. L. Silva, J. C. Principe, & L. B. Almeida (Eds.), Spatiotemporal models in biological and artificial systems (pp. 65–72). Amsterdam: IOS Press.

Hochreiter, S., & Schmidhuber, J. (1997). LSTM can solve hard long time lag problems. In Advances in neural information processing systems 9. Cambridge, MA: MIT Press.

Lang, K., Waibel, A., & Hinton, G. E. (1990). A time-delay neural network architecture for isolated word recognition. Neural Networks, 3, 23–43.

Lin, T., Horne, B. G., Tino, P., & Giles, C. L. (1996). Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7, 1329–1338.

Miller, C. B., & Giles, C. L. (1993). Experimental comparison of the effect of order in recurrent neural networks. International Journal of Pattern Recognition and Artificial Intelligence, 7(4), 849–872.

Mozer, M. C. (1989). A focused back-propagation algorithm for temporal sequence recognition. Complex Systems, 3, 349–381.

Mozer, M. C. (1992). Induction of multiscale temporal structure. In J. E. Moody, S. J. Hanson, & R. P. Lippman (Eds.), Advances in neural information processing systems 4 (pp. 275–282). San Mateo, CA: Morgan Kaufmann.

Pearlmutter, B. A. (1989). Learning state space trajectories in recurrent neural networks. Neural Computation, 1(2), 263–269.

Pearlmutter, B. A. (1995). Gradient calculations for dynamic recurrent neural networks: A survey. IEEE Transactions on Neural Networks, 6(5), 1212–1228.

Pineda, F. J. (1987). Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 19(59), 2229–2232.

Pineda, F. J. (1988). Dynamics and architecture for neural computation. Journal of Complexity, 4, 216–245.

Plate, T. A. (1993). Holographic recurrent networks. In S. J. Hanson, J. D. Cowan, & C. L. Giles (Eds.), Advances in neural information processing systems 5 (pp. 34–41). San Mateo, CA: Morgan Kaufmann.

Pollack, J. B. (1991). Language induction by phase transition in dynamical recognizers. In R. P. Lippmann, J. E. Moody, & D. S. Touretzky (Eds.), Advances in neural information processing systems 3 (pp. 619–626). San Mateo, CA: Morgan Kaufmann.

Puskorius, G. V., & Feldkamp, L. A. (1994). Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks. IEEE Transactions on Neural Networks, 5(2), 279–297.

Ring, M. B. (1993). Learning sequential tasks by incrementally adding higher orders. In S. J. Hanson, J. D. Cowan, & C. L. Giles (Eds.), Advances in neural information processing systems 5 (pp. 115–122). San Mateo, CA: Morgan Kaufmann.
Robinson, A. J., & Fallside, F. (1987). The utility driven dynamic error propagation network (Tech. Rep. No. CUED/F-INFENG/TR.1). Cambridge: Cambridge University Engineering Department.

Schmidhuber, J. (1989). A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4), 403–412.

Schmidhuber, J. (1992a). A fixed size storage O(n³) time complexity learning algorithm for fully recurrent continually running networks. Neural Computation, 4(2), 243–248.

Schmidhuber, J. (1992b). Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2), 234–242.

Schmidhuber, J. (1992c). Learning unambiguous reduced sequence descriptions. In J. E. Moody, S. J. Hanson, & R. P. Lippman (Eds.), Advances in neural information processing systems 4 (pp. 291–298). San Mateo, CA: Morgan Kaufmann.

Schmidhuber, J. (1993). Netzwerkarchitekturen, Zielfunktionen und Kettenregel. Habilitationsschrift, Institut für Informatik, Technische Universität München.

Schmidhuber, J., & Hochreiter, S. (1996). Guessing can outperform many long time lag algorithms (Tech. Rep. No. IDSIA-19-96). Lugano, Switzerland: Istituto Dalle Molle di Studi sull'Intelligenza Artificiale.

Silva, G. X., Amaral, J. D., Langlois, T., & Almeida, L. B. (1996). Faster training of recurrent networks. In F. L. Silva, J. C. Principe, & L. B. Almeida (Eds.), Spatiotemporal models in biological and artificial systems (pp. 168–175). Amsterdam: IOS Press.

Smith, A. W., & Zipser, D. (1989). Learning sequential structures with the real-time recurrent learning algorithm. International Journal of Neural Systems, 1(2), 125–131.

Sun, G., Chen, H., & Lee, Y. (1993). Time warping invariant neural networks. In S. J. Hanson, J. D. Cowan, & C. L. Giles (Eds.), Advances in neural information processing systems 5 (pp. 180–187). San Mateo, CA: Morgan Kaufmann.

Watrous, R. L., & Kuhn, G. M. (1992). Induction of finite-state languages using second-order recurrent networks. Neural Computation, 4, 406–414.

Werbos, P. J. (1988). Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1, 339–356.

Williams, R. J. (1989). Complexity of exact gradient computation algorithms for recurrent neural networks (Tech. Rep. No. NU-CCS-89-27). Boston: Northeastern University, College of Computer Science.

Williams, R. J., & Peng, J. (1990). An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 4, 491–501.

Williams, R. J., & Zipser, D. (1992). Gradient-based learning algorithms for recurrent networks and their computational complexity. In Y. Chauvin & D. E. Rumelhart (Eds.), Back-propagation: Theory, architectures and applications. Hillsdale, NJ: Erlbaum.
Received August 28, 1995; accepted February 24, 1997.
Communicated by Robert Jacobs
Factor Analysis Using Delta-Rule Wake-Sleep Learning

Radford M. Neal
Department of Statistics and Department of Computer Science, University of Toronto, Toronto M5S 1A1, Canada
Peter Dayan
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 U.S.A.
We describe a linear network that models correlations between real-valued visible variables using one or more real-valued hidden variables—a factor analysis model. This model can be seen as a linear version of the Helmholtz machine, and its parameters can be learned using the wake-sleep method, in which learning of the primary generative model is assisted by a recognition model, whose role is to fill in the values of hidden variables based on the values of visible variables. The generative and recognition models are jointly learned in wake and sleep phases, using just the delta rule. This learning procedure is comparable in simplicity to Hebbian learning, which produces a somewhat different representation of correlations in terms of principal components. We argue that the simplicity of wake-sleep learning makes factor analysis a plausible alternative to Hebbian learning as a model of activity-dependent cortical plasticity.
1 Introduction

Statistical structure in a collection of inputs can be found using purely local Hebbian learning rules (Hebb, 1949), which capture second-order aspects of the data in terms of principal components (Linsker, 1988; Oja, 1989). This form of statistical analysis has therefore been used to provide a computational account of activity-dependent plasticity in the vertebrate brain (e.g., von der Malsburg, 1973; Linsker, 1986; Miller, Keller, & Stryker, 1989). There are reasons, however, that principal component analysis may not be an adequate model for the generation of cortical receptive fields (e.g., Olshausen & Field, 1996). Furthermore, a Hebbian mechanism for performing principal component analysis would accord no role to the top-down (feedback) connections that always accompany bottom-up connections (Felleman & Van Essen, 1991). Hebbian learning must also be augmented by extra mechanisms in order to extract more than just the first principal component and in order to prevent the synaptic weights from growing without bound.

In this article, we present the statistical technique of factor analysis as an alternative to principal component analysis and show how factor analysis models can be learned using an algorithm whose demands on synaptic plasticity are as local as those of the Hebb rule. Our approach follows the suggestion of Hinton and Zemel (1994) (see also Grenander, 1976–1981; Mumford, 1994; Dayan, Hinton, Neal, & Zemel, 1995; Olshausen & Field, 1996) that the top-down connections in the cortex might be constructing a hierarchical, probabilistic "generative" model, in which dependencies between the activities of neurons reporting sensory data are seen as being due to the presence in the world of hidden or latent "factors." The role of the bottom-up connections is to implement an inverse "recognition" model, which takes low-level sensory input and instantiates in the activities of neurons in higher layers a set of likely values for the hidden factors, which are capable of explaining this input. These higher-level activities are meant to correspond to significant features of the external world, and thus to form an appropriate basis for behavior. We refer to such a combination of generative and recognition models as a Helmholtz machine.

The simplest form of Helmholtz machine, with just two layers, linear units, and gaussian noise, is equivalent to the factor analysis model, a statistical method that is used widely in psychology and the social sciences as a way of exploring whether observed patterns in data might be explainable in terms of a small number of unobserved factors. Everitt (1984) gives a good introduction to factor analysis and to other latent variable models. Even though factor analysis involves only linear operations and gaussian noise, learning a factor analysis model is not computationally straightforward; practical algorithms for maximum likelihood factor analysis (Jöreskog, 1967, 1969, 1977) took many years to develop. Existing algorithms change the values of parameters based on complex and nonlocal calculations.

A general approach to learning in Helmholtz machines that is attractive for its simplicity is the wake-sleep algorithm of Hinton, Dayan, Frey, and Neal (1995). We show empirically in this article that maximum likelihood factor analysis models can be learned by the wake-sleep method, which uses just the purely local delta rule in both of its two separate learning phases. The wake-sleep algorithm has previously been applied to the more difficult task of learning nonlinear models with binary latent variables, with mixed results. We have found that good results are obtained much more consistently when the wake-sleep algorithm is used to learn factor analysis models, perhaps because settings of the recognition model parameters that invert the generative model always exist.

In the factor analysis models we look at in this article, the factors are a priori independent, and this simplicity prevents them from reproducing interesting aspects of cortical structure. However, these results contribute to
the understanding of wake-sleep learning. They also point to possibilities for modeling cortical plasticity, since the wake-sleep approach avoids many of the criticisms that can be leveled at Hebbian learning.

2 Factor Analysis

Factor analysis with a single hidden factor is based on a generative model for the distribution of a vector of real-valued visible inputs, x, given by

    x = \mu + g y + \epsilon .   (2.1)
Here, y is the single real-valued factor, and is assumed to have a gaussian distribution with mean zero and variance one. The vector of "factor loadings," g, which we refer to as the generative weights, expresses the way the visible variables are related to the hidden factor. The vector of overall means, μ, will, for simplicity of presentation, be taken to be zero in this article, unless otherwise stated. Finally, the noise, ε, is assumed to be gaussian, with a diagonal covariance matrix, Ψ, which we will write as

    \Psi = \begin{pmatrix} \tau_1^2 & 0 & \cdots & 0 \\ 0 & \tau_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \tau_n^2 \end{pmatrix} .   (2.2)
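Read operationally, equations 2.1 and 2.2 say that generating a case is two random draws and a multiply-add. A minimal NumPy sketch (the parameter values are arbitrary and all names are ours, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                              # number of visible variables
    g = rng.normal(size=n)             # factor loadings (generative weights)
    tau2 = rng.exponential(size=n)     # uniquenesses (generative variances)

    def generate_case():
        y = rng.normal()                          # hidden factor ~ N(0, 1)
        eps = np.sqrt(tau2) * rng.normal(size=n)  # noise with covariance Psi
        return g * y + eps                        # equation 2.1 with mu = 0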
The τ_j^2 are sometimes called the "uniquenesses," as they represent the portion of the variance in each visible variable that is unique to it rather than being explained by the common factor. We will refer to them as the generative variance parameters (not to be confused with the variance of the hidden factor in the generative model, which is fixed at one). The model parameters, μ, g, and Ψ, define a joint gaussian distribution for both the hidden factor, y, and the visible inputs, x.

In a Helmholtz machine, this generative model is accompanied by a recognition model, which represents the conditional distribution for the hidden factor, y, given a particular input vector, x. This recognition model also has a simple form, which for μ = 0, is

    y = r^T x + \nu ,   (2.3)
where r is the vector of recognition weights and ν has a gaussian distribution with mean zero and variance σ². It is straightforward to show that the correct recognition model has r = [g g^T + Ψ]^{-1} g and σ² = 1 − g^T [g g^T + Ψ]^{-1} g, but we presume that directly obtaining the recognition model's parameters in this way is not plausible in a neurobiological model.

The factor analysis model can be extended to include several hidden factors, which are usually assumed to have independent gaussian distributions
with mean zero and variance one (though we discuss other possibilities in section 5.2). These factors jointly produce a distribution for the visible variables, as follows:

    x = \mu + G y + \epsilon .   (2.4)
Here, y is the vector of hidden factor values, G is a matrix of generative weights (factor loadings), and ε is again a noise vector with diagonal covariance. A linear recognition model can again be found to represent the conditional distribution of the hidden factors for given values of the visible variables. Note that there is redundancy in the definition of the multiple-factor model, caused by the symmetry of the distribution of y in the generative model. A model with generative weights G′ = GU^T, where U is any unitary matrix (for which U^T U = I), will produce the same distribution for x, with the hidden factors y′ = Uy again being independent, with mean zero and variance one. The presence of multiple solutions is sometimes seen as a problem with factor analysis in statistical applications, since it makes interpretation more difficult, but this is hardly an issue in neurobiological modeling, since easy interpretability of neural activities is not something that we are at liberty to require.

Although there are some special circumstances in which factor analysis is equivalent to principal component analysis, the techniques are in general quite different (Jolliffe, 1986). Loosely speaking, principal component analysis pays attention to both variance and covariance, whereas factor analysis looks only at covariance. In particular, if one of the components of x is corrupted by a large amount of noise, the principal eigenvector of the covariance matrix of the inputs will have a substantial component in the direction of that input. Hebbian learning will therefore result in the output, y, being dominated by this noise. In contrast, factor analysis uses ε_j to model any noise that affects only component j. A large amount of noise simply results in the corresponding τ_j^2 being large, with no effect on the output y. Principal component analysis is also unaffected by a rotation of the coordinate system of the input vectors, whereas factor analysis privileges the particular coordinate system in which the data are presented, because it is assumed in equation 2.2 that it is in this coordinate system that the components of the noise are independent.

3 Learning Factor Analysis Models with the Wake-Sleep Algorithm

In this section, we describe wake-sleep algorithms for learning factor analysis models, starting with the simplest such model, having a single hidden factor. We also provide an intuitive justification for believing that these algorithms might work, based on their resemblance to the Expectation-Maximization (EM) algorithm. We have not found a complete theoretical
proof that wake-sleep learning for factor analysis works, but we present empirical evidence that it usually does in section 4.

3.1 Maximum Likelihood and the EM Algorithm. In maximum likelihood learning, the parameters of the model are chosen to maximize the probability density assigned by the model to the data that were observed (the "likelihood"). For a factor analysis model with a single factor, the likelihood, L, based on the observed data in n independent cases, x^{(1)}, \ldots, x^{(n)}, is obtained by integrating over the possible values that the hidden factors, y^{(c)}, might take on for each case:

    L(g, \Psi) = \prod_{c=1}^{n} p(x^{(c)} \mid g, \Psi) = \prod_{c=1}^{n} \int p(x^{(c)} \mid y^{(c)}, g, \Psi) \, p(y^{(c)}) \, dy^{(c)} ,   (3.1)

where g and Ψ are the model parameters, as described previously, and p(·) is used to write probability densities and conditional probability densities. The prior distribution for the hidden factor, p(y^{(c)}), is gaussian with mean zero and variance one. The conditional distribution for x^{(c)} given y^{(c)} is gaussian with mean μ + g y^{(c)} and covariance Ψ, as implied by equation 2.1.

Wake-sleep learning can be viewed as an approximate implementation of the EM algorithm, which Dempster, Laird, and Rubin (1977) present as a general approach to maximum likelihood estimation when some variables (in our case, the values of the hidden factors) are unobserved. Applications of EM to factor analysis are discussed by Dempster et al. (1977) and by Rubin and Thayer (1982), who find that EM produces results almost identical to those of Jöreskog's (1969) method, including getting stuck in essentially the same local maxima given the same starting values for the parameters.¹

EM is an iterative method, in which each iteration consists of two steps. In the E-step, one finds the conditional distribution for the unobserved variables given the observed data, based on the current estimates for the model parameters. When the cases are independent, this conditional distribution factors into distributions for the unobserved variables in each case. For the single-factor model, the distribution for the hidden factor, y^{(c)}, in a case with observed data x^{(c)}, is

    p(y^{(c)} \mid x^{(c)}, g, \Psi) = \frac{p(y^{(c)}, x^{(c)} \mid g, \Psi)}{\int p(x^{(c)} \mid y, g, \Psi) \, p(y) \, dy} .   (3.2)
1. It may seem surprising that maximum likelihood factor analysis can be troubled by local maxima, in view of the simple linear nature of the model, but this is in fact quite possible. For example, a local maximum can arise if a single-factor model is applied to data consisting of two pairs of correlated variables. The single factor can capture only one of these correlations. If the initial weights are appropriate for modeling the weaker of the two correlations, learning may never find the global maximum in which the single factor is used instead to model the stronger correlation.
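Although the article's point is that this E-step should be learned rather than computed, the exact conditional distribution of equation 3.2 is available in closed form from the expressions given in section 2, and it makes a useful check. A sketch reusing g and tau2 from the earlier fragment:

    Psi = np.diag(tau2)
    A = np.linalg.inv(np.outer(g, g) + Psi)
    r = A @ g                          # exact recognition weights (section 2)
    sigma2 = 1.0 - g @ A @ g           # exact recognition variance
    # For an observed x, p(y | x) in equation 3.2 is gaussian with
    # mean r @ x and variance sigma2.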
In the M-step, one finds new parameter values, g′ and Ψ′, that maximize (or at least increase) the expected log likelihood of the complete data, with the unobserved variables filled in according to the conditional distribution found in the E-step:

    \sum_{c=1}^{n} \int p(y^{(c)} \mid x^{(c)}, g, \Psi) \, \log \left[ p(x^{(c)} \mid y^{(c)}, g', \Psi') \, p(y^{(c)}) \right] dy^{(c)} .   (3.3)
For factor analysis, the conditional distribution for y^{(c)} in equation 3.2 is gaussian, and the quantity whose expectation with respect to this distribution is required above is quadratic in y^{(c)}. This expectation is therefore easily found on a computer. However, the matrix calculations involved require nonlocal operations.

3.2 The Wake-Sleep Approach. To obtain a local learning procedure, we first eliminate the explicit maximization in the M-step of the quantity defined in equation 3.3 in terms of an expectation. We replace this by a gradient-following learning procedure in which the expectation is implicitly found by combining many updates based on values for the y^{(c)} that are stochastically generated from the appropriate conditional distributions. We furthermore avoid the direct computation of these conditional distributions in the E-step by learning to produce them using a recognition model, trained in tandem with the generative model.

This approach results in the wake-sleep learning procedure, with the wake phase playing the role of the M-step in EM and the sleep phase playing the role of the E-step. The names for these phases are metaphorical; we are not proposing neurobiological correlates for the wake and sleep phases. Learning consists of interleaved iterations of these two phases, which in the context of factor analysis operate as follows:

Wake phase: From observed values for the visible variables, x, randomly fill in values for the hidden factors, y, using the conditional distribution defined by the current recognition model. Update the parameters of the generative model to make this filled-in case more likely.

Sleep phase: Without reference to any observation, randomly choose values for the hidden factors, y, from their fixed generative distribution, and then randomly choose "fantasy" values for the visible variables, x, from their conditional distribution given y, as defined by the current parameters of the generative model. Update the parameters of the recognition model to make this fantasy case more likely.

The wake phase of learning will correctly implement the M-step of EM (in a stochastic sense), provided that the recognition model produces the correct conditional distribution for the hidden factors. The aim of sleep phase learning is to improve the recognition model's ability to produce this
conditional distribution, which would be computed in the E-step of EM. If values for the recognition model parameters exist that reproduce this conditional distribution exactly and if learning of the recognition model proceeds at a much faster rate than changes to the generative model, so that this correct distribution can continually be tracked, then the sleep phase will effectively implement the E-step of EM, and the wake-sleep algorithm as a whole will be guaranteed to find a (local) maximum of the likelihood (in the limit of small learning rates, so that stochastic variation is averaged out).

These conditions for wake-sleep learning to mimic EM are too stringent for actual applications, however. At a minimum, we would like wake-sleep learning to work for a wide range of relative learning rates in the two phases, not just in the limit as the ratio of the generative learning rate to the recognition learning rate goes to zero. We would also like wake-sleep learning to do something sensible even when it is not possible for the recognition model to invert the generative model perfectly (as is typical in applications other than factor analysis), though one would not usually expect the method to produce exact maximum likelihood estimates in such a case.

We could guarantee that wake-sleep learning behaves well if we could find a cost function that is decreased (on average) by both the wake phase and the sleep phase updates. The cost would then be a Lyapunov function for learning, providing a guarantee of stability and allowing something to be said about the stable states. Unfortunately, no such cost function has been discovered, for either wake-sleep learning in general or wake-sleep learning applied to factor analysis. The wake and sleep phases each separately reduces a sensible cost function (for each phase, a Kullback-Leibler divergence), but these two cost functions are not compatible with a single global cost function. An algorithm that correctly performs stochastic gradient descent in the recognition parameters of a Helmholtz machine using an appropriate global cost function does exist (Dayan & Hinton, 1996), but it involves reinforcement learning methods for which convergence is usually extremely slow.

We have obtained some partial theoretical results concerning wake-sleep learning for factor analysis, which show that the maximum likelihood solutions are second-order Lyapunov stable and that updates of the weights (but not variances) for a single-factor model decrease an appropriate cost function. However, the primary reason for thinking that the wake-sleep algorithm generally works well for factor analysis is not these weak theoretical results but rather the empirical results in section 4. Before presenting these, we discuss in more detail how the general wake-sleep scheme is applied to factor analysis, first for a single-factor model and then for models with multiple factors.

3.3 Wake-Sleep Learning for a Single-Factor Model. A Helmholtz machine that implements factor analysis with a single hidden factor is shown
Figure 1: The one-factor linear Helmholtz machine. Connections of the generative model are shown using solid lines, those of the recognition model using dashed lines. The weights for these connections are given by the gj and the rj .
in Figure 1. The connections for the linear network that implements the generative model of equation 2.1 are shown in the figure using solid lines. Generation of data using this network starts with the selection of a random value for the hidden factor, y, from a gaussian distribution with mean zero and variance one. The value for the jth visible variable, x_j, is then found by multiplying y by a connection weight, g_j, and adding random noise with variance τ_j^2. (A bias value, μ_j, could be added as well, to produce nonzero means for the visible variables.)

The recognition model's connections are shown in Figure 1 using dashed lines. These connections implement equation 2.3. When presented with values for the visible input variables, x_j, the recognition network produces a value for the hidden factor, y, by forming a weighted sum of the inputs, with the weight for the jth input being r_j, and then adding gaussian noise with mean zero and variance σ². (If the inputs have nonzero means, the recognition model would include a bias for the hidden factor as well.)

The parameters of the generative model are learned in the wake phase, as follows. Values for the visible variables, x^{(c)}, are obtained from the external world; they are set to a training case drawn from the distribution that we wish to model. The current version of the recognition network is then used to stochastically fill in a corresponding value for the hidden factor, y^{(c)}, using equation 2.3. The generative weights are then updated using the delta rule,
as follows:

    g'_j = g_j + \eta \, (x_j^{(c)} - g_j y^{(c)}) \, y^{(c)} ,   (3.4)

where η is a small positive learning rate parameter. The generative variances, τ_j^2, which make up Ψ, are also updated, using an exponentially weighted moving average:

    (\tau_j^2)' = \alpha \, \tau_j^2 + (1 - \alpha) \, (x_j^{(c)} - g_j y^{(c)})^2 ,   (3.5)
where α is a learning rate parameter that is slightly less than one. If the recognition model correctly inverted the generative model, these wake-phase updates would correctly implement the M-step of the EM algorithm (in the limit as η → 0 and α → 1). The update for g_j in equation 3.4 improves the expected log likelihood because the increment to g_j is proportional to the derivative of the log likelihood for the training case with y filled in, which is as follows:

    \frac{\partial}{\partial g_j} \log p(x^{(c)}, y^{(c)} \mid g, \Psi) = \frac{\partial}{\partial g_j} \left( -\frac{1}{2 \tau_j^2} (x_j^{(c)} - g_j y^{(c)})^2 \right)   (3.6)
    = \frac{1}{\tau_j^2} (x_j^{(c)} - g_j y^{(c)}) \, y^{(c)} .   (3.7)
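In code, the wake phase is a stochastic fill-in followed by two delta-rule lines. A sketch under our own naming (x is one training case; r and sigma2 belong to the current recognition model; rng is the generator from the earlier fragments):

    eta, a = 0.0002, 0.999                 # learning rates eta and alpha

    def wake_phase(x, g, tau2, r, sigma2):
        y = r @ x + np.sqrt(sigma2) * rng.normal()   # fill in the hidden factor
        resid = x - g * y
        g = g + eta * resid * y                      # equation 3.4
        tau2 = a * tau2 + (1.0 - a) * resid ** 2     # equation 3.5
        return g, tau2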
The averaging operation by which the generative variances, τ_j^2, are learned will (for α close to one) also lead toward the maximum likelihood values based on the filled-in values for y.

The parameters of the recognition model are learned in the sleep phase, based not on real data but on "fantasy" cases, produced using the current version of the generative model. Values for the hidden factor, y^{(f)}, and the visible variables, x^{(f)}, in a fantasy case are stochastically generated according to equation 2.1, as described at the beginning of this section. The connection weights for the recognition model are then updated using the delta rule, as follows:

    r'_j = r_j + \eta \, (y^{(f)} - r^T x^{(f)}) \, x_j^{(f)} ,   (3.8)
where η is again a small positive learning rate parameter, which might or might not be the same as that used in the wake phase. The recognition variance, σ², is updated as follows:

    (\sigma^2)' = \alpha \, \sigma^2 + (1 - \alpha) \, (y^{(f)} - r^T x^{(f)})^2 ,   (3.9)

where α is again a learning rate parameter slightly less than one.
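The sleep phase mirrors the wake phase exactly, with the two models exchanging roles (same naming assumptions as the wake-phase sketch):

    def sleep_phase(g, tau2, r, sigma2):
        y = rng.normal()                                     # fantasy factor
        x = g * y + np.sqrt(tau2) * rng.normal(size=g.size)  # fantasy case, eq. 2.1
        err = y - r @ x
        r = r + eta * err * x                                # equation 3.8
        sigma2 = a * sigma2 + (1.0 - a) * err ** 2           # equation 3.9
        return r, sigma2

Interleaving wake_phase over training cases with sleep_phase on fantasies is the entire learning procedure; no other computation is required.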
These updates are analogous to those made in the wake phase and have the effect of improving the recognition model's ability to produce the correct distribution for the hidden factor. However, as noted in section 3.2, the criterion by which the sleep phase updates "improve" the recognition model does not correspond to a global Lyapunov function for wake-sleep learning as a whole, and we therefore lack a theoretical guarantee that the wake-sleep procedure will be stable and will converge to a maximum of the likelihood, though the experiments in section 4 indicate that it usually does.

3.4 Wake-Sleep Learning for Multiple-Factor Models. A Helmholtz machine with more than one hidden factor can be trained using the wake-sleep algorithm in much the same way as described above for a single-factor model. The generative model is simply extended by including more than one hidden factor. In the most straightforward case, the values of these factors are chosen independently at random when a fantasy case is generated. However, a new issue arises with the recognition model. When there are k hidden factors and p visible variables, the recognition model has the following form (assuming zero means):

    y = R x + \nu ,   (3.10)
where the k × p matrix R contains the weights on the recognition connections, and ν is a k-dimensional gaussian random vector with mean 0 and covariance matrix Σ, which in general (i.e., for some arbitrary generative model) will not be diagonal. Generation of a random ν with such covariance can easily be done on a computer using the Cholesky decomposition of Σ, but in a method intended for consideration as a neurobiological model, we would prefer a local implementation that produces the same effect.

One way of producing arbitrary covariances between the hidden factors is to include a set of connections in the recognition model that link each hidden factor to hidden factors that come earlier (in some arbitrary ordering). A Helmholtz machine with this architecture is shown in Figure 2. During the wake phase, values for the hidden factors are filled in sequentially by random generation from gaussian distributions. The mean of the distribution used in picking a value for factor y_i is the weighted sum of inputs along connections from the visible variables and from the hidden factors whose values were chosen earlier. The variance of the distribution for factor y_i is an additional recognition model parameter, σ_i². The k(k−1)/2 connections between the hidden factors and the k variances associated with these factors together make up the k(k+1)/2 independent degrees of freedom in Σ. Accordingly, a recognition model of this form exists that can perfectly invert any generative model.²

2. Another way of seeing this is to note that the joint distribution for the visible variables and hidden factors is multivariate gaussian. From general properties of multivariate gaussians, the conditional distribution for a hidden factor given values for the visible variables and for the earlier hidden factors must also be gaussian, with some constant variance and with a mean given by some linear function of the other values.
Figure 2: A Helmholtz machine implementing a model with three hidden factors, using a recognition model with a set of correlation-inducing connections. The figure omits some of the generative connections from hidden factors to visible variables and some of the recognition connections from visible variables to hidden factors (indicated by the ellipses). Note, however, that the only connections between hidden factors are those shown, which are part of the recognition model. There are no generative connections between hidden factors.
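The sequential filling-in that these correlation-inducing connections implement is easy to state in code. In the sketch below, R holds the input-to-factor recognition weights, C is a strictly lower-triangular matrix of factor-to-factor weights, and sig2 holds the per-factor variances; all of these names are ours, and rng is the generator from the earlier fragments:

    def fill_in_factors(x, R, C, sig2):
        # Sample the k hidden factors one at a time, each conditioned on
        # the visible variables and on the factors already chosen.
        k = R.shape[0]
        y = np.zeros(k)
        for i in range(k):
            mean = R[i] @ x + C[i, :i] @ y[:i]   # connections to earlier factors
            y[i] = mean + np.sqrt(sig2[i]) * rng.normal()
        return y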
This method is reminiscent of some of the proposals to allow Hebbian learning to extract more than one principal component (e.g., Sanger, 1989; Plumbley, 1993); various of these also order the "hidden" units. In these proposals, the connections are used to remove correlations between the units so that they can represent different facets of the input. However, for the Helmholtz machine, the factors come to represent different facets of the input because they are jointly rather than separately engaged in capturing the statistical regularities in the input. The connections between the hidden units capture the correlations in the distribution for the hidden factors conditional on given values for the visible variables that may be induced by this joint responsibility for modeling the visible variables.
Another approach is possible if the covariance matrix for the hidden factors, y, in the generative model is rotationally symmetric, as is the case when it is the identity matrix, as we have assumed so far. We may then freely rotate the space of factors (y′ = Uy, where U is a unitary matrix), making corresponding changes to the generative weights (G′ = GU^T), without changing the distribution of the visible variables. When the generative model is rotated, the corresponding recognition model will also rotate. There always exist rotations in which the recognition covariance matrix, Σ, is diagonal and can therefore be represented by just k variances, σ_i², one for each factor. This amounts to forcing the factors to be independent, or factorial, which has itself long been suggested as a goal for early cortical processing (Barlow, 1989) and has generally been assumed in nonlinear versions of the Helmholtz machine. We can therefore hope to learn multiple-factor models using wake-sleep learning in exactly the same way as we learn single-factor models, with equations 2.4 and 3.10 specifying how the values of all the factors y combine to predict the input x, and vice versa. As seen in the next section, such a Helmholtz machine with only the capacity to represent k recognition variances, with no correlation-inducing connections, is usually able to find a rotation in which such a recognition model is sufficient to invert the generative model.

Note that there is no counterpart in Hebbian learning of this second approach, in which there are no connections between hidden factors. If Hebbian units are not connected in some manner, they will all extract the same single principal component of the inputs.

4 Empirical Results

We have run a number of experiments on synthetic data in order to test whether the wake-sleep algorithm applied to factor analysis finds parameter values that at least locally maximize the likelihood, in the limit of small values for the learning rates. These experiments also provide data on how small the learning rates must be in practice and reveal situations in which learning (particularly of the uniquenesses) can be relatively slow. We also report in section 4.4 the results of applying the wake-sleep algorithm to a real data set used by Everitt (1984). Away from conditions in which the maximum likelihood model is not well specified, the wake-sleep algorithm performs quite competently.

4.1 Experimental Procedure. All the systematic experiments described were done with randomly generated synthetic data. Models for various numbers (p) of visible variables, using various numbers (k) of hidden factors, were tested. For each such model, 10 sets of model parameters were generated, each of which was used to generate 2 sets of training data, the
first with 10 cases and the second with 500 cases. Models with the same p and k were then learned from these training sets using the wake-sleep algorithm.

The models used to generate the data were randomly constructed as follows. First, initial values for generative variances (the "uniquenesses") were drawn independently from exponential distributions with mean one, and initial values for the generative weights (the "factor loadings") were drawn independently from gaussian distributions with mean zero and variance one. These parameters were then rescaled so as to produce a variance of one for each visible variable; that is, for a single-factor model, the new generative variances were set to τ′_j² = f_j² τ_j² and the new generative weights to g′_j = f_j g_j, with the f_j chosen such that the new variances for the visible variables, τ′_j² + g′_j², were equal to one.

In all experiments, the wake-sleep learning procedure was started with the generative and recognition variances set to one and the generative and recognition weights and biases set to zero. Symmetry is broken by the stochastic nature of the learning procedure. Learning was done online, with cases from the training set being presented one at a time, in a fixed sequence. The learning rates for both generative and recognition models were usually set to η = 0.0002 and α = 0.999, but other values of η and α were investigated as well, as described below.

In the models used to generate the data, the variables all had mean zero and variance one, but this knowledge was not built into the learning procedure. Bias parameters were therefore present, allowing the network to learn the means of the visible variables in the training data (which were close to zero, but not exactly zero, due to the use of a finite training set).

We compared the estimates produced by the wake-sleep algorithm with the maximum likelihood estimates based on the same training sets produced using the "factanal" function in S-Plus (Version 3.3, Release 1), which uses a modification of Jöreskog's (1977) method. We also implemented the EM factor analysis algorithm to examine in more detail cases in which wake-sleep disagreed with S-Plus.

As with most stochastic learning algorithms, using fixed learning rates implies that convergence can at best be to a distribution of parameter values that is highly concentrated near some stable point. One would in general have to reduce the learning rates over time to ensure stronger forms of convergence. The software used in these experiments may be obtained over the Internet (follow the links from the first author's home page: http://www.cs.utoronto.ca/~radford/).

4.2 Experiments with Single-Factor Models. We first tried using the wake-sleep algorithm to learn single-factor models for three (p = 3) and six (p = 6) visible variables. Figure 3 shows the progress of learning for a
[Figure 3 appears here: two panels plotting Generative Variances (left) and Generative Weights (right) against presentations; see caption below.]
Figure 3: Wake-sleep learning of a single-factor model for six variables. The graphs show the progress of the generative variances and weights over the course of 2 million presentations of input vectors, drawn sequentially from a training set of size 500, with learning parameters of η = 0.0002 and α = 0.999 being used for both the wake and the sleep phases. The training set was randomly generated from a single-factor model whose parameters were picked at random, as described in section 4.1. The maximum likelihood parameter estimates found by S-Plus are shown as horizontal lines.
typical run with p = 6, applied to a training set of size 500. The figure shows progress over 2 million presentations of input vectors (4000 presentations of each of the 500 training cases). Both the generative variances and the generative weights are seen to converge to the maximum likelihood estimates found by S-Plus, with a small amount of random variation, as is expected with a stochastic learning procedure. Of the 20 runs with p = 6 (based on data generated from 10 random models, with training sets of size 10 and 500), all but one showed similar convergence to the S-Plus estimates within 3 million presentations (and usually much earlier). The remaining run (on a training set of size 10) converged to a different local maximum, which S-Plus found when its maximization routine was started from the values found by wake-sleep. Convergence to maximum likelihood estimates was sometimes slower when there were only three variables (p = 3). This is the minimum number of visible variables for which the single-factor model is identifiable (that is, for which the true values of the parameters can be found given enough data, apart from an ambiguity in the overall signs of the weights). Three of the 10 runs with training sets of size 10 and one of the 10 runs with training sets of size 500 failed to converge clearly within 3 million presentations,
but convergence was seen when these runs were extended to 10 million presentations (in one case to a different local maximum than initially found by S-Plus). The slowest convergence was for a training set of size 10 for which the maximum likelihood estimate for one of the generative weights was close to zero (0.038), making the parameters nearly unidentifiable. All the runs were done with learning rates of η = 0.0002 and α = 0.999, for both the generative and recognition models. Tests on the training sets with p = 3 were also done with learning for the recognition model slowed to η = 0.00002 and α = 0.9999, a situation that one might speculate would cause problems, due to the recognition model's not being able to keep up with the generative model. No problems were seen (though the learning was, of course, slower).

4.3 Experiments with Multiple-Factor Models. We have also tried using the wake-sleep algorithm to learn models with two and three hidden factors (k = 2 and k = 3) from synthetic data generated as described in section 4.1. Systematic experiments were done for k = 2, p = 5, using models with and without correlation-inducing recognition weights, and for k = 2, p = 8 and k = 3, p = 9, in both cases using models with no correlation-inducing recognition weights. For most of the experiments, learning was done with η = 0.0002 and α = 0.999.

The behavior of wake-sleep learning with k = 2 and p = 5 was very similar for the model with correlation-inducing recognition connections and the model without such connections. The runs on most of the 20 data sets converged within the 6 million presentations that were initially performed; a few required more iterations before convergence was apparent. For two data sets (both of size 10), wake-sleep learning converged to different local maxima than S-Plus. For one training set of size 500, the maximum likelihood estimate for one of the generative variances is very close to one, making the model almost unidentifiable (p = 5 being the minimum number of visible variables for identifiability with k = 2); this produced an understandable difficulty with convergence, though the wake-sleep estimates still agreed fairly closely with the maximum likelihood values found by S-Plus.

In contrast with these good results, a worrying discrepancy arose with one of the training sets of size 10, for which the two smallest generative variances found using wake-sleep learning differed somewhat from the maximum likelihood estimates found by S-Plus (one of which was very close to zero). When S-Plus was started at the values found using wake-sleep, it did not find a similar local maximum; it simply found the same estimates as it had found with its default initial values. However, when the full EM algorithm was started at the estimates found by wake-sleep, it barely changed the parameters for many iterations. One explanation could be that there is a local maximum in this vicinity, but it is for some reason not found by S-Plus. Another possible explanation could be that the likelihood is extremely flat in this region. In neither case would the discrepancy be cause
for much worry regarding the general ability of the wake-sleep method to learn these multiple-factor models. However, it is also possible that this is an instance of the problem that arose more clearly in connection with the Everitt crime data, as reported in section 4.4.

Runs using models without correlation-inducing recognition connections were also performed for data sets with k = 2 and p = 8. All runs converged, most within 6 million presentations, in one case to a different local maximum than S-Plus. We also tried runs with the same model and data sets using higher learning rates (η = 0.002 and α = 0.99). The higher learning rates produced both higher variability and some bias in the parameter estimates, but for most data sets the results were still generally correct.

Finally, runs using models without correlation-inducing recognition connections were performed for data sets with k = 3 and p = 9. Most of these converged without difficulty. However, for two data sets (both of size 500), a small but apparently real difference was seen between the estimates for the two smallest generative variances found using wake-sleep and the maximum likelihood estimates found using S-Plus. As in the similar situation with k = 2, p = 5, S-Plus did not converge to a local maximum in this vicinity when started at the wake-sleep estimates. As before, however, the EM algorithm moved very slowly when started at the wake-sleep estimates, which is one possible explanation for wake-sleep having apparently converged to this point. However, it is also possible that something more fundamental prevented convergence to a local maximum of the likelihood, as discussed in connection with similar results in the next section.

4.4 Experiments with the Everitt Crime Data. We also tried learning a two-factor model for a data set used as an example by Everitt (1984), in which the visible variables are the rates for seven types of crime, with the cases being 16 American cities. The same learning procedure (with η = 0.0002 and α = 0.999) was used as in the experiments above, except that the visible variables were normalized to have mean zero and variance one, and bias parameters were accordingly omitted from both the generative and recognition models.

Fifteen runs with different random number seeds were done, for both models with a correlation-inducing recognition connection between the two factors and without such a connection. All of these runs produced results fairly close to the maximum likelihood estimates found by S-Plus (which match the results of Everitt). In some runs, however, there were small, but clearly real, discrepancies, most notably in the smallest two generative variances, for which the wake-sleep estimates were sometimes nearly zero, whereas the maximum likelihood estimate is zero for only one of them. This behavior is similar to that seen in the three runs where discrepancies were found in the systematic experiments of section 4.3. These discrepancies arose much more frequently when a correlation-inducing recognition connection was not present (14 out of 15 runs) than
when such a connection was included (6 out of 15 runs). Figure 4a shows one of the runs with discrepancies for a model with a correlation-inducing connection present. Figure 4b shows a run differing from that of 4a only in its random seed, but which did find the maximum likelihood estimates. The sole run in which the model without a correlation-inducing recognition connection converged to the maximum likelihood estimates is shown in Figure 4c. Close comparison of Figures 4b and 4c shows that even in this run, convergence was much more labored without the correlation-inducing connection. (In particular, the generative variance for variable 6 approaches zero rather more slowly.)

According to S-Plus, the solution found in the discrepant runs is not an alternative local maximum. Furthermore, extending the runs to many more iterations or reducing the learning rates by a large factor does not eliminate the problem. This makes it seem unlikely that the problem is merely slow convergence due to the likelihood being nearly flat in this vicinity, although, on the other hand, when EM is started at the discrepant wake-sleep estimates, its movement toward the maximum likelihood estimates is rather slow (after 200 iterations, the estimate for the generative variance for variable 4 had moved only about a third of the way from the wake-sleep estimate to the maximum likelihood estimate). It seems most likely therefore that the runs with discrepancies result from a local "basin of attraction" for wake-sleep learning that does not lead to a local maximum of the likelihood.

The discrepancies can be eliminated for the model with a correlation-inducing connection in either of two ways. One way is to reduce the learning rate for the generative parameters (in the wake phase), while leaving the recognition learning rate unchanged. As discussed in section 3.2, there is a theoretical reason to think that this method will lead to the maximum likelihood estimates, since the recognition model will then have time to learn how to invert the generative model perfectly. When the generative learning rates are reduced by setting η = 0.00005 and α = 0.99975, the maximum likelihood estimates are indeed found in eight of eight test runs. The second solution is to impose a constraint that prevents the generative variances from falling below 0.01. This also worked in eight of eight runs. However, these two methods produce little or no improvement when the correlation-inducing connection is omitted from the recognition model (no successes in eight runs with smaller generative learning rates; three successes in eight runs with a constraint on the generative variances). Thus, although models without correlation-inducing connections often work well, it appears that learning is sometimes easier and more reliable when they are present.

Finally, we tried learning a model for this data without first normalizing the visible variables to have mean zero and variance one, but with biases included in the generative and recognition models to handle the nonzero means. This is not a very sensible thing to do; the means of the variables in this data set are far from zero, so learning would at best take a long time, while the biases slowly adjusted. In fact, however, wake-sleep learning fails
[Figure 4 appears here: three panels, (a), (b), and (c), each plotting Generative Variances against presentations (0 to 15 million); see caption below.]
Figure 4: Wake-sleep learning of two-factor models for the Everitt crime data. The horizontal axis shows the number of presentations in millions. The vertical axis shows the generative variances, with the maximum likelihood values indicated by horizontal lines. (a) One of the six runs in which a model with a correlation-inducing recognition connection did not converge to the maximum likelihood estimates. (b) One of the nine such runs that did find the maximum likelihood estimates. (c) The only run in which the maximum likelihood estimates were found using a model without a correlation-inducing connection.
rather spectacularly. The generative variances immediately become quite large. As soon as the recognition weights depart significantly from zero, the recognition variances also become quite large, at which point positive feedback ensues, and all the weights and variances diverge. The instability that can occur once the generative and recognition weights and variances are too large appears to be a fundamental aspect of wake-sleep dynamics. Interestingly, however, it seems that the dynamics may operate to avoid this unstable region of parameter space, since the instability does not occur with the Everitt data if the learning rate is set very low. With a larger learning rate, however, the stochastic aspect of the learning can produce a jump into the unstable region.

5 Discussion

We have shown empirically that wake-sleep learning, which involves nothing more than two simple applications of the local delta rule, can be used to implement the statistical technique of maximum likelihood factor analysis. However, just as it is usually computationally more efficient to implement principal component analysis using a standard matrix technique such as singular-value decomposition rather than by using Hebbian learning, factor analysis is probably better implemented on a computer using either EM (Rubin & Thayer, 1982) or the second-order Newton methods of Jöreskog (1967, 1969, 1977) than by the wake-sleep algorithm. In our view, wake-sleep factor analysis is interesting as a simple and successful example of wake-sleep learning and as a possible model of activity-dependent plasticity in the cortex.

5.1 Implications for Wake-Sleep Learning. The experiments in section 4 show that, in most situations, wake-sleep learning applied to factor analysis leads to estimates that (locally) maximize the likelihood, when the learning rate is set to be small enough. In a few situations, the parameter estimates found using wake-sleep deviated slightly from the maximum likelihood values. More work is needed to determine exactly when and why this occurs, but even in these situations, the estimates found by wake-sleep are reasonably good and the likelihoods are close to their maxima. The sensitivity of learning to settings of parameters such as learning rates appears no greater than is typical for stochastic gradient descent methods. In particular, setting the learning rate for the generative model to be equal to or greater than that for the recognition model did not lead to instability, even though this is the situation in which one might worry that the algorithm would no longer mimic EM (as discussed in section 3.2). However, we have been unable to prove in general that the wake-sleep algorithm for factor analysis is guaranteed to converge to the maximum likelihood solution. Indeed, the empirical results show that any general theoretical proof of correctness would have to contain caveats regarding unstable regions of the parameter
space. Such a proof may be impossible if the discrepancies seen in the experiments are indeed due to a false basin of attraction (rather than being an effect of finite learning rates). Despite the lack so far of strong theoretical results, the relatively simple factor analysis model may yet provide a good starting point for a better theoretical understanding of wake-sleep learning for more complex models.

One empirical finding was that it is possible to learn multiple-factor models even when correlation-inducing recognition connections are absent, although the lack of such connections did cause difficulties in some cases. A better understanding of what is going on in this respect would provide insight into wake-sleep learning for more complex models, in which the recognition model will seldom be able to invert the generative model perfectly. Such a complex generative model may allow the data to be modeled in many nearly equivalent ways, for some of which the generative model is harder to invert than for others. Good performance may sometimes be possible only if wake-sleep learning favors the more easily inverted generative models. We have seen that this does usually happen for factor analysis.

5.2 Implications for Activity-Dependent Plasticity. Much progress has been made using Hebbian learning to model the development of structures in early cortical areas, such as topographic maps (Willshaw & von der Malsburg, 1976, 1979; von der Malsburg & Willshaw, 1977), ocular dominance stripes (Miller et al., 1989), and orientation domains (Linsker, 1988; Miller, 1994). However, Hebbian learning suffers from three problems that the equally local wake-sleep algorithm avoids.

First, because of the positive feedback inherent in Hebbian learning, some form of synaptic normalization is required to make it work (Miller & MacKay, 1994). There is no evidence for synaptic normalization in the cortex. This is not an issue for Helmholtz machines because building a statistical model of the inputs involves negative feedback instead, as in the delta rule.

Second, in order for Hebbian learning to produce several outputs that represent more than just the first principal component of a collection of inputs, there must be connections of some sort between the output units, which force them to be decorrelated. Typically, some form of anti-Hebbian learning is required for these connections (see Sanger, 1989; Földiák, 1989; Plumbley, 1993). The common alternative of using fixed lateral connections between the output units is not informationally efficient. In the Helmholtz machine, the goal of modeling the distribution of the inputs forces output units to differentiate rather than perform the same task. This goal also supplies a clear interpretation in terms of finding the hidden causes of the input.

Third, Hebbian learning does not accord any role to the prominent cortical feature that top-down connections always accompany bottom-up connections. In the Helmholtz machine, these top-down connections play a
crucial role in wake-sleep learning. Furthermore, the top-down connections come to embody a hierarchical generative model of the inputs, some form of which appears necessary in any case to combine top-down and bottom-up processing during inference.

Of course, the Helmholtz machine suffers from various problems itself. Although learning rules equivalent to the delta rule are conventional in classical conditioning (Rescorla & Wagner, 1972; Sutton & Barto, 1981) and have also been suggested as underlying cortical plasticity (Montague & Sejnowski, 1994), it is not clear how cortical microcircuitry might construct the required predictions (such as g_j y^(c) of equation 3.4) or prediction errors (such as x_j^(c) − g_j y^(c) of the same equation). The wake-sleep algorithm also requires two phases of activation, with different connections being primarily responsible for driving the cells in each phase. Although there is some suggestive evidence of this (Hasselmo & Bower, 1993), it has not yet been demonstrated. There are also alternative developmental methods that construct top-down statistical models of their inputs using only one, slightly more complicated, phase of activation (Olshausen & Field, 1996; Rao & Ballard, 1995).

In Hebbian models of activity-dependent development, a key role is played by lateral local intracortical connections (longer-range excitatory connections are also present but develop later). Lateral connections are not present in the linear Helmholtz machines we have so far described, but they could play a role in inducing correlations between the hidden factors in the generative model, in the recognition model, or in both, perhaps replacing the correlation-inducing connections shown in Figure 2.

Purely linear models, such as those presented here, will not suffice to explain fully the intricacies of information processing in the cortical hierarchy. However, understanding how a factor analysis model can be learned using simple and local operations is a first step to understanding how more complex statistical models can be learned using more complex architectures involving nonlinear elements and multiple layers.

Acknowledgments

We thank Geoffrey Hinton for many useful discussions. This research was supported by the Natural Sciences and Engineering Research Council of Canada and by grant R29 MH55541-01 from the National Institute of Mental Health of the United States. All opinions expressed are those of the authors.

References

Barlow, H. B. (1989). Unsupervised learning. Neural Computation, 1, 295–311.
Dayan, P., & Hinton, G. E. (1996). Varieties of Helmholtz machine. Neural Networks, 9, 1385–1403.
Dayan, P., Hinton, G. E., Neal, R. M., & Zemel, R. S. (1995). The Helmholtz machine. Neural Computation, 7, 889–904.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39, 1–38.
Everitt, B. S. (1984). An introduction to latent variable models. London: Chapman and Hall.
Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47.
Földiák, P. (1989). Adaptive network for optimal linear feature extraction. In Proceedings of the International Joint Conference on Neural Networks, Washington, DC (Vol. I, pp. 401–405).
Grenander, U. (1976–1981). Lectures in pattern theory I, II and III: Pattern analysis, pattern synthesis and regular structures. Berlin: Springer-Verlag.
Hasselmo, M. E., & Bower, J. M. (1993). Acetylcholine and memory. Trends in Neurosciences, 16, 218–222.
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. New York: Wiley.
Hinton, G. E., Dayan, P., Frey, B., & Neal, R. M. (1995). The wake-sleep algorithm for self-organizing neural networks. Science, 268, 1158–1160.
Hinton, G. E., & Zemel, R. S. (1994). Autoencoders, minimum description length and Helmholtz free energy. In J. D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in neural information processing systems, 6 (pp. 3–10). San Mateo, CA: Morgan Kaufmann.
Jolliffe, I. T. (1986). Principal component analysis. New York: Springer-Verlag.
Jöreskog, K. G. (1967). Some contributions to maximum likelihood factor analysis. Psychometrika, 32, 443–482.
Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34, 183–202.
Jöreskog, K. G. (1977). Factor analysis by least-squares and maximum-likelihood methods. In K. Enslein, A. Ralston, & H. S. Wilf (Eds.), Statistical methods for digital computers. New York: Wiley.
Linsker, R. (1986). From basic network principles to neural architecture. Proceedings of the National Academy of Sciences, 83, 7508–7512, 8390–8394, 8779–8783.
Linsker, R. (1988). Self-organization in a perceptual network. Computer, 21, 105–128.
Miller, K. D. (1994). A model for the development of simple cell receptive fields and the ordered arrangement of orientation columns through activity-dependent competition between ON- and OFF-center inputs. Journal of Neuroscience, 14, 409–441.
Miller, K. D., Keller, J. B., & Stryker, M. P. (1989). Ocular dominance column development: Analysis and simulation. Science, 245, 605–615.
Miller, K. D., & MacKay, D. J. C. (1994). The role of constraints in Hebbian learning. Neural Computation, 6, 100–126.
Montague, P. R., & Sejnowski, T. J. (1994). The predictive brain: Temporal coincidence and temporal order in synaptic learning mechanisms. Learning and Memory, 1, 1–33.
Mumford, D. (1994). Neuronal architectures for pattern-theoretic problems. In C. Koch & J. Davis (Eds.), Large-scale theories of the cortex (pp. 125–152). Cambridge, MA: MIT Press.
Oja, E. (1989). Neural networks, principal components, and subspaces. International Journal of Neural Systems, 1, 61–68.
Olshausen, B. A., & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607–609.
Plumbley, M. D. (1993). Efficient information transfer and anti-Hebbian neural networks. Neural Networks, 6, 823–833.
Rao, P. N. R., & Ballard, D. H. (1995). Dynamic model of visual memory predicts neural response properties in the visual cortex (Technical Rep. No. 95.4). Rochester, NY: Department of Computer Science, University of Rochester.
Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: The effectiveness of reinforcement and non-reinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 64–69). New York: Appleton-Century-Crofts.
Rubin, D. B., & Thayer, D. T. (1982). EM algorithms for ML factor analysis. Psychometrika, 47, 69–76.
Sanger, T. D. (1989). Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2, 459–473.
Sutton, R. S., & Barto, A. G. (1981). Toward a modern theory of adaptive networks: Expectation and prediction. Psychological Review, 88, 135–170.
von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik, 14, 85–100.
von der Malsburg, C., & Willshaw, D. J. (1977). How to label nerve cells so that they can interconnect in an ordered fashion. Proceedings of the National Academy of Sciences, 74, 5176–5178.
Willshaw, D. J., & von der Malsburg, C. (1976). How patterned neural connections can be set up by self-organisation. Proceedings of the Royal Society of London B, 194, 431–445.
Willshaw, D. J., & von der Malsburg, C. (1979). A marker induction mechanism for the establishment of ordered neural mappings: Its application to the retinotectal problem. Philosophical Transactions of the Royal Society B, 287, 203–243.

Received August 7, 1996; accepted April 3, 1997.
Communicated by Joachim Buhmann
Data Clustering Using a Model Granular Magnet
Marcelo Blatt, Shai Wiseman, and Eytan Domany
Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100, Israel
We present a new approach to clustering, based on the physical properties of an inhomogeneous ferromagnet. No assumption is made regarding the underlying distribution of the data. We assign a Potts spin to each data point and introduce an interaction between neighboring points, whose strength is a decreasing function of the distance between the neighbors. This magnetic system exhibits three phases. At very low temperatures, it is completely ordered; all spins are aligned. At very high temperatures, the system does not exhibit any ordering, and in an intermediate regime, clusters of relatively strongly coupled spins become ordered, whereas different clusters remain uncorrelated. This intermediate phase is identified by a jump in the order parameters. The spin-spin correlation function is used to partition the spins and the corresponding data points into clusters. We demonstrate on three synthetic and three real data sets how the method works. Detailed comparison to the performance of other techniques clearly indicates the relative success of our method.

1 Introduction

In recent years there has been significant interest in adapting numerical (Kirkpatrick, Gelatt, & Vecchi, 1983) and analytic (Fu & Anderson, 1986; Mézard & Parisi, 1986) techniques from statistical physics to provide algorithms and estimates for good approximate solutions to hard optimization problems (Yuille & Kosowsky, 1994). In this article we formulate the problem of data clustering as that of measuring equilibrium properties of an inhomogeneous Potts model. We are able to give good clustering solutions by solving the physics of this model.

Cluster analysis is an important technique in exploratory data analysis, where a priori knowledge of the distribution of the observed data is not available (Duda & Hart, 1973; Jain & Dubes, 1988). Partitional clustering methods, which divide the data according to natural classes present in it, have been used in a large variety of scientific disciplines and engineering applications, among them pattern recognition (Duda & Hart, 1973), learning theory (Moody & Darken, 1989), astrophysics (Dekel & West, 1985), medical
imaging (Suzuki, Shibata, & Suto, 1995) and data processing (Phillips et al., 1995), machine translation of text (Cranias, Papageorgiou, & Piperdis, 1994), image compression (Karayiannis, 1994), satellite data analysis (Baraldi & Parmiggiani, 1995), automatic target recognition (Iokibe, 1994), and speech recognition (Kosaka & Sagayama, 1994) and analysis (Foote & Silverman, 1994). The goal is to find a partition of a given data set into several compact groups. Each group indicates the presence of a distinct category in the measurements.

The problem of partitional clustering can be formally stated as follows. Determine the partition of N given patterns \(\{v_i\}_{i=1}^N\) into groups, called clusters, such that the patterns of a cluster are more similar to each other than to patterns in different clusters. It is assumed that either \(d_{ij}\), the measure of dissimilarity between patterns \(v_i\) and \(v_j\), is provided or that each pattern \(v_i\) is represented by a point \(\vec{x}_i\) in a D-dimensional metric space, in which case \(d_{ij} = |\vec{x}_i - \vec{x}_j|\).

The two main approaches to partitional clustering are called parametric and nonparametric. In parametric approaches some knowledge of the clusters' structure is assumed, and in most cases patterns can be represented by points in a D-dimensional metric space. For instance, each cluster can be parameterized by a center around which the points that belong to it are spread with a locally gaussian distribution. In many cases the assumptions are incorporated in a global criterion whose minimization yields the "optimal" partition of the data. The goal is to assign the data points so that the criterion is minimized. Classical approaches are variance minimization, maximal likelihood, and fitting gaussian mixtures. A nice example of variance minimization is the method proposed by Rose, Gurewitz, and Fox (1990) based on principles of statistical physics, which ensures an optimal solution under certain conditions. This work gave rise to other mean field methods for clustering data (Buhmann & Kühnel, 1993; Wong, 1993; Miller & Rose, 1996). Classical examples of fitting gaussian mixtures are the Isodata algorithm (Ball & Hall, 1967) or its sequential relative, the K-means algorithm (MacQueen, 1967) in statistics, and soft competition in neural networks (Nowlan & Hinton, 1991).

In many cases of interest, however, there is no a priori knowledge about the data structure. Then it is more natural to adopt nonparametric approaches, which make fewer assumptions about the model and therefore are suitable to handle a wider variety of clustering problems. Usually these methods employ a local criterion, against which some attribute of the local structure of the data is tested, to construct the clusters. Typical examples are hierarchical techniques such as the agglomerative and divisive methods (see Jain & Dubes, 1988). These algorithms suffer, however, from at least one of the following limitations: high sensitivity to initialization, poor performance when the data contain overlapping clusters, or an inability to handle variabilities in cluster shapes, cluster densities, and cluster sizes. The most serious problem is the lack of cluster validity criteria; in particular, none of
these methods provides an index that could be used to determine the most significant partitions among those obtained in the entire hierarchy. All of these algorithms tend to create clusters even when no natural clusters exist in the data.

We recently introduced a new approach to clustering, based on the physical properties of a magnetic system (Blatt, Wiseman, & Domany, 1996a, 1996b, 1996c). This method has a number of rather unique advantages: it provides information about the different self-organizing regimes of the data; the number of "macroscopic" clusters is an output of the algorithm; and hierarchical organization of the data is reflected in the manner the clusters merge or split when a control parameter (the physical temperature) is varied. Moreover, the results are completely insensitive to the initial conditions, and the algorithm is robust against the presence of noise. The algorithm is computationally efficient; equilibration time of the spin system scales with N, the number of data points, and is independent of the embedding dimension D. In this article we extend our work by demonstrating the efficiency and performance of the algorithm on various real-life problems. Detailed comparisons with other nonparametric techniques are also presented.

The outline of the article is as follows. The magnetic model and thermodynamic definitions are introduced in section 2. A very efficient Monte Carlo method used for calculating the thermodynamic quantities is presented in section 3. The clustering algorithm is described in section 4. In section 5 we analyze synthetic and real data to demonstrate the main features of the method and compare its performance with other techniques.

2 The Potts Model

Ferromagnetic Potts models have been studied extensively for many years (see Wu, 1982, for a review). The basic spin variable s can take one of q integer values: s = 1, 2, . . . , q. In a magnetic model the Potts spins are located at points vi that reside on (or off) the sites of some lattice. Pairs of spins associated with points i and j are coupled by an interaction of strength Jij > 0. Denote by S a configuration of the system, \(S = \{s_i\}_{i=1}^N\). The energy of such a configuration is given by the Hamiltonian
\[
H(S) = \sum_{\langle i,j \rangle} J_{ij} \left( 1 - \delta_{s_i, s_j} \right), \qquad s_i = 1, \ldots, q,
\tag{2.1}
\]
where the notation \(\langle i,j \rangle\) stands for neighboring sites vi and vj. The contribution of a pair \(\langle i,j \rangle\) to H is 0 when si = sj, that is, when the two spins are aligned, and is Jij > 0 otherwise. If one chooses interactions that are a decreasing function of the distance dij ≡ d(vi, vj), then the closer two points are to each other, the more they "like" to be in the same state. The Hamiltonian (see equation 2.1) is very similar to other energy functions used in neural
systems, where each spin variable represents a q-state neuron with an excitatory coupling to its neighbors. In fact, magnetic models have inspired many neural models (see, for example, Hertz, Krogh, & Palmer, 1991).

In order to calculate the thermodynamic average of a physical quantity A at a fixed temperature T, one has to calculate the sum
\[
\langle A \rangle = \sum_{S} A(S)\, P(S),
\tag{2.2}
\]
where the Boltzmann factor,
\[
P(S) = \frac{1}{Z} \exp\left( -\frac{H(S)}{T} \right),
\tag{2.3}
\]
plays the role of the probability density, which gives the statistical weight of each spin configuration \(S = \{s_i\}_{i=1}^N\) in thermal equilibrium, and Z is a normalization constant, \(Z = \sum_S \exp(-H(S)/T)\).

Some of the most important physical quantities A for this magnetic system are the order parameter or magnetization and the set of \(\delta_{s_i,s_j}\) functions, because their thermal averages reflect the ordering properties of the model. The order parameter of the system is ⟨m⟩, where the magnetization, m(S), associated with a spin configuration S is defined (Chen, Ferrenberg, & Landau, 1992) as
\[
m(S) = \frac{q\, N_{\max}(S) - N}{(q - 1)\, N}
\tag{2.4}
\]
with \(N_{\max}(S) = \max \{ N_1(S), N_2(S), \ldots, N_q(S) \}\), where \(N_\mu(S)\) is the number of spins with the value \(\mu\): \(N_\mu(S) = \sum_i \delta_{s_i,\mu}\).

The thermal average of \(\delta_{s_i,s_j}\) is called the spin-spin correlation function,
\[
G_{ij} = \left\langle \delta_{s_i,s_j} \right\rangle,
\tag{2.5}
\]
which is the probability of the two spins si and sj being aligned. When the spins are on a lattice and all nearest-neighbor couplings are equal, Jij = J, the Potts system is homogeneous. Such a model exhibits two phases. At high temperatures the system is paramagnetic or disordered, ⟨m⟩ = 0, indicating that Nmax(S) ≈ N/q for all statistically significant configurations. In this phase the correlation function Gij decays to 1/q when the distance between points vi and vj is large; this is the probability of finding two completely independent Potts spins in the same state. At very high temperatures even neighboring sites have Gij ≈ 1/q.
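To make these definitions concrete, the Hamiltonian of equation 2.1 and the magnetization of equation 2.4 can be evaluated directly for any spin configuration. The following Python fragment is a minimal sketch of our own, not part of the original article; the function names and the `pairs`/`J` arrays of neighbor indices and couplings are illustrative assumptions.

```python
import numpy as np

def potts_energy(s, pairs, J):
    # H(S) = sum over neighbor pairs <i,j> of J_ij * (1 - delta(s_i, s_j)); equation 2.1.
    i, j = pairs[:, 0], pairs[:, 1]
    return np.sum(J * (s[i] != s[j]))

def potts_magnetization(s, q):
    # m(S) = (q * N_max(S) - N) / ((q - 1) * N); equation 2.4.
    counts = np.bincount(s, minlength=q + 1)[1:]  # N_mu for mu = 1, ..., q
    n = len(s)
    return (q * counts.max() - n) / ((q - 1) * n)

# Example: 100 random spins with q = 20, the value used later in the article.
rng = np.random.default_rng(0)
s = rng.integers(1, 21, size=100)
print(potts_magnetization(s, q=20))  # close to 0 for a random (disordered) configuration
```

For a fully aligned configuration the same function returns 1, matching the ferromagnetic limit discussed next.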
As the temperature is lowered, the system undergoes a sharp transition to an ordered, ferromagnetic phase; the magnetization jumps to ⟨m⟩ ≠ 0. This means that in the physically relevant configurations (at low temperatures), one Potts state "dominates" and Nmax(S) exceeds N/q by a macroscopic number of sites. At very low temperatures ⟨m⟩ ≈ 1 and Gij ≈ 1 for all pairs {vi, vj}.

The variance of the magnetization is related to a relevant thermal quantity, the susceptibility,
\[
\chi = \frac{N}{T} \left( \langle m^2 \rangle - \langle m \rangle^2 \right),
\tag{2.6}
\]
which also reflects the thermodynamic phases of the system. At low temperatures, fluctuations of the magnetizations are negligible, so the susceptibility χ is small in the ferromagnetic phase. The connection between Potts spins and clusters of aligned spins was established by Fortuin and Kasteleyn (1972). In the article appendix we present such a relation and the probability distribution of such clusters.

We turn now to strongly inhomogeneous Potts models. This is the situation when the spins form magnetic "grains," with very strong couplings between neighbors that belong to the same grain and very weak interactions between all other pairs. At low temperatures, such a system is also ferromagnetic, but as the temperature is raised, the system may exhibit an intermediate, superparamagnetic phase. In this phase strongly coupled grains are aligned (that is, are in their respective ferromagnetic phases), while there is no relative ordering of different grains.

At the transition temperature from the ferromagnetic to superparamagnetic phase a pronounced peak of χ is observed (Blatt et al., 1996a). In the superparamagnetic phase fluctuations of the state taken by grains acting as a whole (that is, as giant superspins) produce large fluctuations in the magnetization. As the temperature is raised further, the superparamagnetic-to-paramagnetic transition is reached; each grain disorders, and χ abruptly diminishes by a factor that is roughly the size of the largest cluster. Thus, the temperatures where a peak of the susceptibility occurs and the temperatures at which χ decreases abruptly indicate the range of temperatures in which the system is in its superparamagnetic phase.

In principle one can have a sequence of several transitions in the superparamagnetic phase. As the temperature is raised, the system may break first into two clusters, each of which breaks into more (macroscopic) subclusters, and so on. Such a hierarchical structure of the magnetic clusters reflects a hierarchical organization of the data into categories and subcategories.

To gain some analytic insight into the behavior of inhomogeneous Potts ferromagnets, we calculated the properties of such a "granular" system with a macroscopic number of bonds for each spin. For such "infinite-range" models, mean field is exact, and we have shown (Wiseman, Blatt, & Domany,
1996; Blatt et al., 1996b) that in the paramagnetic phase, the spin state at each site is independent of any other spin, that is, Gij = 1/q. At the paramagnetic-superparamagnetic transition the correlation between spins belonging to the same group jumps abruptly to
\[
\frac{q-1}{q} \left( \frac{q-2}{q-1} \right)^2 + \frac{1}{q} \;\simeq\; 1 - \frac{2}{q} + O\!\left( \frac{1}{q^2} \right),
\]
while the correlation between spins belonging to different groups is unchanged. The ferromagnetic phase is characterized by strong correlations between all spins of the system:
\[
G_{ij} > \frac{q-1}{q} \left( \frac{q-2}{q-1} \right)^2 + \frac{1}{q}.
\]
There is an important lesson to remember from this: in mean field we see that in the superparamagnetic phase, two spins that belong to the same grain are strongly correlated, whereas for pairs that do not belong to the same grain, Gij is small. As it turns out, this double-peaked distribution of the correlations is not an artifact of mean field and will be used in our solution of the problem of data clustering.

As we will show below, we use the data points of our clustering problem as sites of an inhomogeneous Potts ferromagnet. Presence of clusters in the data gives rise to magnetic grains of the kind described above in the corresponding Potts model. Working in the superparamagnetic phase of the model, we use the values of the pair correlation function of the Potts spins to decide whether a pair of spins does or does not belong to the same grain, and we identify these grains as the clusters of our data. This is the essence of our method.

3 Monte Carlo Simulation of Potts Models: The Swendsen-Wang Method

The aim of equilibrium statistical mechanics is to evaluate sums such as equation 2.2 for models with N ≫ 1 spins (in fact, one is usually interested in the thermodynamic limit, for example, when the number of spins N → ∞). This can be done analytically only for very limited cases. One resorts therefore to various approximations (such as mean field) or to computer simulations that aim at evaluating thermal averages numerically. Direct evaluation of sums like equation 2.2 is impractical, since the number of configurations S increases exponentially with the system size N.

Monte Carlo simulation methods (see Binder & Heermann, 1988, for an introduction) overcome this problem by generating a characteristic subset of configurations, which are used as a statistical sample. They are based
on the notion of importance sampling, in which a set of spin configurations {S1, S2, . . . , SM} is generated according to the Boltzmann probability distribution (see equation 2.3). Then, expression 2.2 is reduced to a simple arithmetic average,
\[
\langle A \rangle \approx \frac{1}{M} \sum_{i=1}^{M} A(S_i),
\tag{3.1}
\]
where the number of configurations in the sample, M, is much smaller than \(q^N\), the total number of configurations.

The set of M states necessary for the implementation of equation 3.1 is constructed by means of a Markov process in the configuration space of the system. There are many ways to generate such a Markov chain; in this work it turned out to be essential to use the Swendsen-Wang (Wang & Swendsen, 1990; Swendsen, Wang, & Ferrenberg, 1992) Monte Carlo algorithm (SW). The main reason for this choice is that it is perfectly suitable for working in the superparamagnetic phase: it overturns an aligned cluster in one Monte Carlo step, whereas algorithms that use standard local moves will take forever to do this.

The first configuration can be chosen at random (or by setting all si = 1). Say we already generated n configurations of the system, \(\{S_i\}_{i=1}^n\), and we start to generate configuration n + 1. This is the way it is done. First, visit all pairs of spins \(\langle i,j \rangle\) that interact, that is, have Jij > 0; the two spins are frozen together with probability
\[
p^{f}_{i,j} = 1 - \exp\left( -\frac{J_{ij}}{T}\, \delta_{s_i,s_j} \right).
\tag{3.2}
\]
That is, if in our current configuration Sn the two spins are in the same state, si = sj, then sites i and j are frozen with probability \(p^f = 1 - \exp(-J_{ij}/T)\). Having gone over all the interacting pairs, the next step of the algorithm is to identify the SW clusters of spins. An SW cluster contains all spins that have a path of frozen bonds connecting them. Note that according to equation 3.2, only spins of the same value can be frozen in the same SW cluster. After this step, our N sites are assigned to some number of distinct SW clusters. If we think of the N sites as vertices of a graph whose edges are the interactions between neighbors Jij > 0, each SW cluster is a subgraph of vertices connected by frozen bonds. The final step of the procedure is to generate the new spin configuration Sn+1. This is done by drawing, independently for each SW cluster, a random value s = 1, . . . , q, which is assigned to all its spins. This defines one Monte Carlo step Sn → Sn+1. By iterating this procedure M times while calculating the physical quantity A(Si) at each Monte Carlo step, the thermodynamic average (see equation 3.1) is obtained.
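The SW step just described can be sketched compactly. The following Python fragment is again our own illustration, not the authors' implementation; it assumes the same `pairs`/`J` arrays as in the earlier sketch and uses a simple union-find structure to identify the SW clusters.

```python
import numpy as np

def swendsen_wang_step(s, pairs, J, T, q, rng):
    # One SW Monte Carlo step S_n -> S_{n+1}:
    # freeze aligned neighbor pairs with probability 1 - exp(-J_ij / T)
    # (equation 3.2), take the connected components of the frozen bonds
    # as the SW clusters, and draw one new random Potts value per cluster.
    n = len(s)
    parent = np.arange(n)

    def find(a):  # union-find root with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for (i, j), Jij in zip(pairs, J):
        if s[i] == s[j] and rng.random() < 1.0 - np.exp(-Jij / T):
            parent[find(i)] = find(j)  # the bond <i,j> is frozen

    roots = np.array([find(a) for a in range(n)])
    new_value = {r: rng.integers(1, q + 1) for r in np.unique(roots)}
    return np.array([new_value[r] for r in roots])
```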
The physical quantities that we are interested in are the magnetization (see equation 2.4) and its squared value (for the calculation of the susceptibility χ), and the spin-spin correlation function (see equation 2.5).

Actually, in most simulations a number of the early configurations are discarded, to allow the system to "forget" its initial state. This is not necessary if the number of configurations M is not too small (increasing M improves the statistical accuracy of the Monte Carlo measurement). Measuring autocorrelation times (Gould & Tobochnik, 1989) provides a way of both deciding on the number of discarded configurations and checking that the number of configurations M generated is sufficiently large. A less rigorous way is simply plotting the energy as a function of the number of SW steps and verifying that the energy reached a stable regime.

At temperatures where large regions of correlated spins occur, local methods (such as Metropolis), which flip one spin at a time, become very slow. The SW procedure overcomes this difficulty by flipping large clusters of aligned spins simultaneously. Hence the SW method exhibits much smaller autocorrelation times than local methods. The efficiency of the SW method, which is widely used in numerous applications, has been tested in various Potts (Billoire et al., 1991) and Ising (Hennecke & Heyken, 1993) models.

4 Clustering of Data: Detailed Description of the Algorithm

So far we have defined the Potts model, the various thermodynamic functions that one measures for it, and the (numerical) method used to measure these quantities. We can now turn to the problem for which these concepts will be utilized: clustering of data. For the sake of concreteness, assume that our data consist of N patterns or measurements vi, specified by N corresponding vectors \(\vec{x}_i\), embedded in a D-dimensional metric space.

Our method consists of three stages. The starting point is the specification of the Hamiltonian (see equation 2.1), which governs the system. Next, by measuring the susceptibility χ and magnetization as a function of temperature, the different phases of the model are identified. Finally, the correlation of neighboring pairs of spins, Gij, is measured. This correlation function is then used to partition the spins and the corresponding data points into clusters. The outline of the three stages and the subtasks contained in each can be summarized as follows:
1. Construct the physical analog Potts spin problem:
(a) Associate a Potts spin variable si = 1, 2, . . . , q to each point vi.
(b) Identify the neighbors of each point vi according to a selected criterion.
(c) Calculate the interaction Jij between neighboring points vi and vj.
2. Locate the superparamagnetic phase:
(a) Estimate the (thermal) average magnetization, ⟨m⟩, for different temperatures.
(b) Use the susceptibility χ to identify the superparamagnetic phase.
3. In the superparamagnetic regime:
(a) Measure the spin-spin correlation, Gij, for all neighboring points vi, vj.
(b) Construct the data clusters.
In the following subsections we provide detailed descriptions of the manner in which each of the three stages is to be implemented.

4.1 The Physical Analog Potts Spin Problem. The goal is to specify the Hamiltonian of the form in equation 2.1 that serves as the physical analog of the data points to be clustered. One has to assign a Potts spin to each data point and introduce short-range interactions between spins that reside on neighboring points. Therefore we have to choose the value of q, the number of possible states a Potts spin can take, define what is meant by neighbor points, and provide the functional dependence of the interaction strength Jij on the distance between neighboring spins. We discuss now the possible choices for these attributes of the Hamiltonian and their influence on the algorithm's performance. The most important observation is that none of them needs fine tuning; the algorithm performs well provided a reasonable choice is made, and the range of reasonable choices is very wide.

4.1.1 The Potts Spin Variables. The number of Potts states, q, determines mainly the sharpness of the transitions and the temperatures at which they occur. The higher the q, the sharper the transition. (For a two-dimensional regular lattice, one must have q > 4 to ensure that the transition is of first order, in which case the order parameter exhibits a discontinuity; Baxter, 1973; Wu, 1982.) On the other hand, in order to maintain a given statistical accuracy, it is necessary to perform longer simulations as the value of q increases. From our simulations we conclude that the influence of q on the resulting classification is weak. We used q = 20 in all the examples presented in this work. Note that the value of q does not imply any assumption about the number of clusters present in the data.

4.1.2 Identifying Neighbors. The need for identification of the neighbors of a point \(\vec{x}_i\) could be eliminated by letting all pairs i, j of Potts spins interact with each other via a short-range interaction Jij = f(dij), which decays sufficiently fast (say, exponentially or faster) with the distance between the
two data points. The phases and clustering properties of the model will not be affected strongly by the choice of f. Such a model has O(N²) interactions, which makes its simulation rather expensive for large N. For the sake of computational convenience, we decided to keep only the interactions of a spin with a limited number of neighbors, and to set all other Jij to zero. Since the data do not form a regular lattice, one has to supply some reasonable definition for "neighbors." As it turns out, our results are quite insensitive to the particular definition used.

Ahuja (1982) argues for intuitively appealing characteristics of Delaunay triangulation over other graph structures in data clustering. We use this definition when the patterns are embedded in a low-dimensional (D ≤ 3) space. For higher dimensions, we use the mutual neighborhood value; we say that vi and vj have a mutual neighborhood value K if and only if vi is one of the K nearest neighbors of vj and vj is one of the K nearest neighbors of vi. We chose K such that the interactions connect all data points to one connected graph. Clearly K grows with the dimensionality. In cases of very high dimensionality (D > 100), we found it convenient to fix K = 10 and to superimpose on the edges obtained with this criterion the edges corresponding to the minimal spanning tree associated with the data. We use this variant only in the examples presented in sections 5.2 and 5.3.

4.1.3 Local Interaction. In order to have a model with the physical properties of a strongly inhomogeneous granular magnet, we want strong interactions between spins that correspond to data from a high-density region and weak interactions between neighbors that are in low-density regions. To this end, and in common with other local methods, we assume that there is a local length scale ∼ a, which is defined by the high-density regions and is smaller than the typical distance between points in the low-density regions. This a is the characteristic scale over which our short-range interactions decay. We tested various choices but report here only results that were obtained using
\[
J_{ij} =
\begin{cases}
\dfrac{1}{\hat{K}} \exp\left( -\dfrac{d_{ij}^2}{2a^2} \right) & \text{if } v_i \text{ and } v_j \text{ are neighbors} \\[2mm]
0 & \text{otherwise.}
\end{cases}
\tag{4.1}
\]
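One concrete reading of this construction, combining the mutual neighborhood criterion of section 4.1.2 with equation 4.1, is sketched below in Python (our own illustrative code, not the authors'; the brute-force distance matrix is only sensible for small N, and the quantities a and K̂ follow the definitions given in the next paragraph).

```python
import numpy as np

def mutual_knn_pairs(x, K):
    # Pairs (i, j) such that v_i is among the K nearest neighbors of v_j
    # and vice versa (mutual neighborhood value K, section 4.1.2).
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = [set(row) for row in np.argsort(d, axis=1)[:, :K]]
    return np.array([(i, j) for i in range(len(x)) for j in knn[i]
                     if i < j and i in knn[j]])

def interactions(x, pairs):
    # J_ij of equation 4.1: a is the average distance between neighboring
    # pairs, and K_hat is the average number of neighbors per point
    # (twice the number of pairs divided by N).
    d = np.linalg.norm(x[pairs[:, 0]] - x[pairs[:, 1]], axis=1)
    a = d.mean()
    K_hat = 2 * len(pairs) / len(x)
    return np.exp(-d**2 / (2 * a**2)) / K_hat
```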
We chose the local length scale, a, to be the average of all distances dij between neighboring pairs vi and vj. \(\hat{K}\) is the average number of neighbors; it is twice the number of nonvanishing interactions divided by the number of points N. This careful normalization of the interaction strength enables us to estimate the temperature corresponding to the highest superparamagnetic transition (see section 4.2).

Everything done so far can be easily implemented in the case when instead of providing the \(\vec{x}_i\) for all the data we have an N × N matrix of dissim-
ilarities dij. This was tested in experiments for clustering of images where only a measure of the dissimilarity between them was available (Gdalyahu & Weinshall, 1997). Application of other clustering methods would have necessitated embedding these data in a metric space; the need for this was eliminated by using superparamagnetic clustering. The results obtained by applying the method on the matrix of dissimilarities of these images (in which, interestingly, the triangle inequality was violated in about 5 percent of the cases) were excellent; all points were classified with no error.

4.2 Locating the Superparamagnetic Regions. The various temperature intervals in which the system self-organizes into different cluster partitions are identified by measuring the susceptibility χ as a function of temperature. We start by summarizing the Monte Carlo procedure and conclude by providing an estimate of the highest transition temperature to the superparamagnetic regime.

Starting from this estimate, one can take increasingly refined temperature scans and calculate the function χ(T) by Monte Carlo simulation. We used the SW method described in section 3, with the following procedure:
1. Choose the number of iterations M to be performed.
2. Generate the initial configuration by assigning a random value to each spin.
3. Assign a frozen bond between nearest-neighbor points vi and vj with
probability pi,j (see equation 3.2). 4. Find the connected subgraphs, the SW clusters. 5. Assign new random values to the spins (spins that belong to the same SW cluster are assigned the same value). This is the new configuration of the system. 6. Calculate the value assumed by the physical quantities of interest in the new spin configuration. 7. Go to step 3 unless the maximal number of iterations, M, was reached. 8. Calculate the averages (see equation 3.1). The superparamagnetic phase can contain many different subphases with different ordering properties. A typical example can be generated by data with a hierarchical structure, giving rise to different acceptable partitions of the data. We measure the susceptibility χ at different temperatures in order to locate these different regimes. The aim is to identify the temperatures at which the system changes its structure. 3
Interestingly, the triangle inequality was violated in about 5 percent of the cases.
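The following Python sketch implements steps 1 through 8 under stated assumptions: we take the freezing probability of equation 3.2 (not reproduced in this section) to be the standard Swendsen-Wang form p^f_ij = 1 − exp(−J_ij/T) between aligned neighboring spins, and the only quantity accumulated in step 6 is the two-point connectedness used later in section 4.3.1. All names are ours; a full implementation would also record the magnetization statistics from which χ is computed.

```python
import numpy as np

def sw_partition(spins, J, edges, T, rng):
    """Steps 3-4: freeze bonds between aligned neighbors with probability
    1 - exp(-J_ij / T), then return the SW-cluster label of every point."""
    parent = list(range(len(spins)))            # union-find over frozen bonds
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path compression
            i = parent[i]
        return i
    for (i, j) in edges:
        if spins[i] == spins[j] and rng.random() < 1.0 - np.exp(-J[(i, j)] / T):
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(spins))]

def simulate(J, edges, n_points, T, q=20, M=1000, seed=0):
    """Steps 1-8: M Swendsen-Wang sweeps at temperature T, returning the
    estimated two-point connectedness C_ij for every neighbor edge."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(q, size=n_points)      # step 2: random initial state
    C = {e: 0.0 for e in edges}
    for _ in range(M):                          # steps 3-7
        labels = sw_partition(spins, J, edges, T, rng)
        relabel = {lab: rng.integers(q) for lab in set(labels)}
        spins = np.array([relabel[lab] for lab in labels])   # step 5
        for (i, j) in edges:                    # step 6: indicator c_ij
            C[(i, j)] += labels[i] == labels[j]
    return {e: c / M for e, c in C.items()}     # step 8: thermal averages
```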
The superparamagnetic phase is characterized by a nonvanishing susceptibility. Moreover, there are two basic features of χ in which we are interested. The first is a peak in the susceptibility, which signals a ferromagnetic-to-superparamagnetic transition, at which a large cluster breaks into a few smaller (but still macroscopic) clusters. The second feature is an abrupt decrease of the susceptibility, corresponding to a superparamagnetic-to-paramagnetic transition, in which one or more large clusters have melted (i.e., they broke up into many small clusters).

The location of the superparamagnetic-to-paramagnetic transition, which occurs at the highest temperature, can be roughly estimated by the following considerations. First, we approximate the clusters by an ordered lattice of coordination number K̂ and a constant interaction

$$J_{ij} = \frac{1}{\hat{K}} \exp\left(-\frac{d_{ij}^2}{2a^2}\right) \approx \langle\langle J \rangle\rangle \approx \frac{1}{\hat{K}} \left\langle\!\left\langle \exp\left(-\frac{d_{ij}^2}{2a^2}\right) \right\rangle\!\right\rangle,$$

where ⟨⟨· · ·⟩⟩ denotes the average over all neighbors. Second, from the Potts model on a square lattice (Wu, 1982), we get that this transition should occur at roughly

$$T \approx \frac{1}{4 \log\left(1 + \sqrt{q}\right)} \left\langle\!\left\langle \exp\left(-\frac{d_{ij}^2}{2a^2}\right) \right\rangle\!\right\rangle. \tag{4.2}$$
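As a quick illustration, equation 4.2 amounts to a one-liner; here `d` is the (hypothetical) array of distances between all neighboring pairs and `a` the local length scale defined above, as returned by the `interactions` sketch.

```python
import numpy as np

def transition_estimate(d, a, q):
    """Rough location of the superparamagnetic-to-paramagnetic transition
    (equation 4.2): averaged Boltzmann factor over all neighboring pairs."""
    return np.mean(np.exp(-d**2 / (2.0 * a**2))) / (4.0 * np.log(1.0 + np.sqrt(q)))
```

For the toy data of section 5.1 below, the text reports a value of 0.075 from this estimate, in good agreement with the measured susceptibility curve.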
An estimate based on the mean field model yields a very similar value.

4.3 Identifying the Data Clusters. Once the superparamagnetic phase and its different subphases have been identified, we select one temperature in each region of interest. The rationale is that each subphase characterizes a particular type of partition of the data, with new clusters merging or breaking. On the other hand, as the temperature is varied within a phase, one expects only shrinking or expansion of the existing clusters, changing only the classification of the points on the boundaries of the clusters.

4.3.1 The Spin-Spin Correlation. We use the spin-spin correlation function G_ij between neighboring sites v_i and v_j to build the data clusters. In principle we have to calculate the thermal average (see equation 3.1) of δ_{s_i,s_j} in order to obtain G_ij. However, the SW method provides an improved estimator (Niedermayer, 1990) of the spin-spin correlation function. One calculates the two-point connectedness C_ij, the probability that sites v_i and v_j belong to the same SW cluster, which is estimated by the average (see equation 3.1) of the following indicator function:

$$c_{ij} = \begin{cases} 1 & \text{if } v_i \text{ and } v_j \text{ belong to the same SW cluster,} \\ 0 & \text{otherwise.} \end{cases}$$
C_ij = ⟨c_ij⟩ is thus the probability of finding sites v_i and v_j in the same SW cluster. Then the relation (Fortuin & Kasteleyn, 1972)

$$G_{ij} = \frac{(q - 1)\, C_{ij} + 1}{q} \tag{4.3}$$

is used to obtain the correlation function G_ij.

4.3.2 The Data Clusters.
Clusters are identified in three steps (a sketch in code follows the list):

1. Build the clusters' core using a thresholding procedure: if G_ij > 0.5, a link is set between the neighboring data points v_i and v_j. The resulting connected graph depends weakly on the value (0.5) used in this thresholding, as long as it is bigger than 1/q and less than 1 − 2/q. The reason is that the distribution of the correlations between two neighboring spins peaks strongly at these two values and is very small between them (see Figure 3b).

2. Capture points lying on the periphery of the clusters by linking each point v_i to its neighbor v_j of maximal correlation G_ij. It may happen, of course, that points v_i and v_j were already linked in the previous step.

3. Data clusters are identified as the linked components of the graphs obtained in steps 1 and 2.
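A minimal Python rendering of these three steps, under our own naming conventions: `G` is a dictionary of correlations on the neighbor edges, obtained from the connectedness estimates of the simulation sketch above via equation 4.3, and the linked components are tracked with a small union-find.

```python
def build_clusters(G, edges, n_points, theta=0.5):
    """Steps 1-3: threshold the correlations, link every point to its
    most-correlated neighbor, and return the connected components."""
    parent = list(range(n_points))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for e in edges:                              # step 1: core links, G_ij > theta
        if G[e] > theta:
            i, j = e
            parent[find(i)] = find(j)
    for i in range(n_points):                    # step 2: strongest-neighbor links
        incident = [e for e in edges if i in e]
        if incident:
            a, b = max(incident, key=lambda e: G[e])
            parent[find(a)] = find(b)
    return [find(i) for i in range(n_points)]    # step 3: linked components

# The correlations follow from the measured connectedness via equation 4.3:
# G = {e: ((q - 1) * C[e] + 1) / q for e in edges}
```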
Although it would be completely equivalent to use in steps 1 and 2 the two-point connectedness C_ij instead of the spin-spin correlation G_ij, we chose the latter to stress the relation of our method to the physical analogy we are using.

5 Applications

The approach presented in this article has been successfully tested on a variety of data sets. The six examples we discuss were chosen with the intention of demonstrating the main features and utility of our algorithm, to which we refer as the superparamagnetic clustering (SPC) method. We use both artificial and real data. Comparisons with the performance of other classical (nonparametric) methods are also presented. We refer to different clustering methods by the nomenclature used by Jain and Dubes (1988) and Fukunaga (1990). The nonparametric algorithms we have chosen belong to four families: (1) hierarchical methods: single link and complete link; (2) graph theory-based methods: Zahn's minimal spanning tree and Fukunaga's directed graph method; (3) nearest-neighbor clustering type, based on different proximity measures: the mutual neighborhood clustering algorithm and k-shared neighbors; and (4) density estimation: Fukunaga's valley-seeking method. These algorithms are of the same kind as the superparamagnetic method
in the sense that only weak assumptions are required about the underlying data structure. The results of all these methods depend on various parameters in an uncontrolled way; we always used the best result that was obtained. A unifying view of some of these methods in the framework of the work discussed here is presented in the article appendix.

5.1 A Pedagogical 2-Dimensional Example. The main purpose of this simple example is to illustrate the features of the method discussed, in particular the behavior of the susceptibility and its use for the identification of the two kinds of phase transitions. The influence of the number of Potts states, q, and the partition of the data as a function of the temperature are also discussed.

The toy problem of Figure 1 consists of 4800 points in D = 2 dimensions whose angular distribution is uniform and whose radial distribution is normal with variance 0.25,

$$\theta \sim U[0, 2\pi], \qquad r \sim N[R, 0.25];$$

we generated half the points with R = 3, one-third with R = 2, and one-sixth with R = 1. Since there is a small overlap between the clusters, we consider the Bayes solution as the optimal result; that is, points whose distance to the origin is bigger than 2.5 are considered a cluster, points whose radial coordinate lies between 1.5 and 2.5 are assigned to a second cluster, and the remaining points define the third cluster. These optimal clusters consist of 2393, 1602, and 805 points, respectively.

By applying our procedure, choosing the neighbors according to the mutual neighborhood criterion with K = 10, we obtain the susceptibility as a function of the temperature presented in Figure 2a. The temperature estimated from equation 4.2 for the superparamagnetic-to-paramagnetic transition is 0.075, which is in good agreement with the one inferred from Figure 2a. Figure 1 presents the clusters obtained at T = 0.05. The sizes of the three largest clusters are 2358, 1573, and 779, including 98 percent of the data; the classification of all these points coincides with that of the optimal Bayes classifier. The remaining 90 points are distributed among 43 clusters of size smaller than 4. As can be noted in Figure 1, the small clusters (fewer than 4 points) are located at the boundaries between the main clusters.

One of the most salient features of the SPC method is that the spin-spin correlation function, G_ij, reflects the existence of two categories of neighboring points: neighboring points that belong to the same cluster and those that do not. This can be observed from Figure 3b, the two-peaked frequency distribution of the correlation function G_ij between neighboring points of
Figure 1: Data distribution: The angular coordinate is uniformly distributed, that is, U[0, 2π], while the radial one is normally distributed, N[R, 0.25], around three different radii R. The outer cluster (R = 3.0) consists of 2400 points, the central one (R = 2.0) of 1600, and the inner one (R = 1.0) of 800. The classified data set: Points classified at T = 0.05 as belonging to the three largest clusters are marked by circles (outer cluster, 2358 points), squares (central cluster, 1573 points), and triangles (inner cluster, 779 points). The x's denote the 90 remaining points, which are distributed in 43 clusters, the biggest of size 4.
Figure 1. In contrast, the frequency distribution (see Figure 3a) of the normalized distances d_ij/a between neighboring points of Figure 1 contains no hint of the existence of a natural cutoff distance that separates neighboring points into two categories.

It is instructive to observe the behavior of the size of the clusters as a function of the temperature, presented in Figure 2b. At low temperatures, as expected, all data points form only one cluster. At the ferromagnetic-to-superparamagnetic transition temperature, indicated by a peak in the susceptibility, this cluster splits into three. These essentially remain stable in
Figure 2: (a) The susceptibility density χT/N of the data set of Figure 1 as a function of the temperature. (b) Size of the four biggest clusters obtained at each temperature.
their composition until the superparamagnetic-to-paramagnetic transition temperature is reached, where the clusters melt; this is signaled by a sudden decrease of the susceptibility χ.

Turning now to the effect of the parameters on the procedure, we found (Wiseman et al., 1996) that the number of Potts states q affects the sharpness of the transition, but the obtained classification is almost the same. For instance, choosing q = 5 we found that the three largest clusters contained 2349, 1569, and 774 data points, while taking q = 200 yielded 2354, 1578, and 782.

Of all the algorithms listed at the beginning of this section, only the single-link and minimal spanning tree methods were able to give (at the optimal values of their clustering parameter) a partition that reflects the underlying distribution of the data. The best results are summarized in Table 1, together
Figure 3: Frequency distribution of (a) distances between neighboring points of Figure 1 (scaled by the average distance a), and (b) spin-spin correlation of neighboring points.
Table 1: Clusters Obtained with the Methods That Succeeded in Recovering the Structure of the Data.

Method                        Outer Cluster   Central Cluster   Inner Cluster   Unclassified Points
Bayes                         2393            1602              805             —
Superparamagnetic (q = 200)   2354            1578              782             86
Superparamagnetic (q = 20)    2358            1573              779             90
Superparamagnetic (q = 5)     2349            1569              774             108
Single link                   2255            1513              758             274
Minimal spanning tree         2262            1487              756             295
Notes: Points belonging to clusters of size smaller than 50 points are considered unclassified points. The Bayes method is used as the benchmark because it is the one that minimizes the expected number of mistakes, provided that the distribution that generated the set of points is known.
with those of the SPC method. Clearly, the standard parametric methods (such as K-means or Ward’s method) would not be able to give a reasonable answer because they assume that different clusters are parameterized by different centers and a spread around them. In Figure 4 we present, for the methods that depend on only a single parameter, the sizes of the four biggest clusters that were obtained as a function of the clustering parameter. The best solution obtained with the single-link method (for a narrow range of the parameter) corresponds also
Figure 4: Size of the three biggest clusters as a function of the clustering parameter obtained with (a) single-link, (b) directed graph, (c) complete-link, (d) mutual neighborhood, (e) valley-seeking, and (f) shared-neighborhood algorithm. The arrow in (a) indicates the region corresponding to the optimal partition for the single-link method. The other algorithms were unable to recover the data structure.
to three big clusters of 2255, 1513, and 758 points, respectively, while the remaining clusters are of size smaller than 14. For larger threshold distance, the second and third clusters are linked. This classification is slightly worse than the one obtained by the superparamagnetic method. When comparing SPC with single link, one should note that if the “correct” answer is not known, one has to rely on measurements such as the stability of the largest clusters (existence of a plateau) to indicate the quality of the partition. As can be observed from Figure 4a there is no clear indication that signals which plateau corresponds to the optimal partition among the whole hierarchy yielded by single link. The best result obtained with the minimal spanning tree method is very similar to the one obtained with
the single link, but this solution corresponds to a very small fraction of its parameter space. In comparison, SPC allows clear identification of the relevant superparamagnetic phase; the entire temperature range of this regime yields excellent clustering results.

5.2 Only One Cluster. Most existing algorithms impose a partition on the data even when there are no natural classes present. The aim of this example is to show how the SPC algorithm signals this situation. Two different 100-dimensional data sets of 1000 samples are used. The first data set is taken from a gaussian distribution centered at the origin, with covariance matrix equal to the identity. The second data set consists of points generated randomly from a uniform distribution in a hypercube of side 2.

The susceptibility curves obtained by using the SPC method on these data sets are shown in Figures 5a and 5b. The narrow peak and the absence of a plateau indicate that there is only a single phase transition (ferromagnetic to paramagnetic), with no superparamagnetic phase. This single phase transition is also evident from Figures 5c and 5d, where only one cluster of almost 1000 points appears below the transition. This single "macroscopic" cluster "melts" at the transition into many "microscopic" clusters of 1 to 3 points each. Clearly, all existing methods are able to give the correct answer, since it is always possible to set the parameters such that this trivial solution is obtained. Again, however, there is no clear indicator for the correct value of the control parameters of the different methods.

5.3 Performance: Scaling with Data Dimension and Influence of Irrelevant Features. The aim of this example is to show the robustness of the SPC method and to give an idea of the influence of the dimension of the data on its performance. To this end, we generated N D-dimensional points whose density distribution is a mixture of two isotropic gaussians, that is,

$$P(\vec{x}) = \left(\sqrt{2\pi\sigma^2}\right)^{-D} \left[\exp\left(-\frac{\|\vec{x} - \vec{y}_1\|^2}{2\sigma^2}\right) + \exp\left(-\frac{\|\vec{x} - \vec{y}_2\|^2}{2\sigma^2}\right)\right], \tag{5.1}$$

where y⃗₁ and y⃗₂ are the centers of the gaussians and σ determines their width. Since the two characteristic lengths involved are ‖y⃗₁ − y⃗₂‖ and σ, the relevant parameter of this example is the normalized distance

$$L = \frac{\|\vec{y}_1 - \vec{y}_2\|}{\sigma}.$$
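A small sketch for sampling from this mixture, with hypothetical names of our own choosing; the centers are placed along the first coordinate axis, which is immaterial for an isotropic mixture.

```python
import numpy as np

def two_gaussian_sample(n, dim, L, sigma=1.0, seed=0):
    """n points from the mixture of equation 5.1, with centers placed a
    normalized distance L = ||y1 - y2|| / sigma apart."""
    rng = np.random.default_rng(seed)
    y1 = np.zeros(dim)
    y2 = np.zeros(dim)
    y2[0] = L * sigma                    # only one coordinate separates the centers
    which = rng.random(n) < 0.5          # pick a center for each point
    centers = np.where(which[:, None], y1, y2)
    return centers + sigma * rng.standard_normal((n, dim))
```

For instance, `two_gaussian_sample(4000, 200, L=13.0)` reproduces the setting of the experiment described below.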
The manner in which these data points were generated satisfies precisely the hypothesis about data distribution that is assumed by the K-means algorithm. Therefore, it is clear that this algorithm (with K = 2) will achieve the Bayes optimal result; the same will hold for other parametric methods,
Figure 5: Susceptibility density χT/N as a function of the temperature T for data points (a) uniformly distributed in a hypercube of side 2 and (b) multinormally distributed with a covariance matrix equal to the identity in a 100-dimensional space. The sizes of the two biggest clusters obtained at each temperature are presented in (c) and (d), respectively.
such as maximum likelihood (once a two-gaussian distribution for the data is assumed). Although such algorithms have an obvious advantage over SPC for these kinds of data, it is interesting to get a feeling for the loss in the quality of the results caused by using our method, which relies on fewer assumptions. To this end we considered the case of 4000 points generated in a 200-dimensional space from the distribution in equation 5.1, setting the parameter L = 13.0. The two biggest clusters we obtained were of sizes 1853 and 1816; the smaller ones contained fewer than 4 points each. About 8.0 percent of the points were left unclassified, but all those points that the method did assign to one of the two large clusters were classified in agreement with a Bayes classifier. For comparison we applied the single-linkage algorithm to the same data; at the best classification point, 74 percent of the points were unclassified.

Next we studied the minimal distance, L_c, at which the method is able to recognize that two clusters are present in the data, and to find the dependence of L_c on the dimension D and number of samples N. Note that the
lower bound for the minimal discriminant distance for any nonparametric algorithm is 2 (for any dimension D). Below this distance, the distribution is no longer bimodal; rather, the maximal density of points is located at the midpoint between the gaussian centers. Sets of N = 1000, 2000, 4000, and 8000 samples and space dimensions D = 2, 10, 100, and 1000 were tested. We set the number of neighbors K = 10 and superimposed the minimal spanning tree to ensure that at T = 0, all points belong to the same cluster. To our surprise, we observed that in the range 1000 ≤ N ≤ 8000, the critical distance seems to depend only weakly on the number of samples, N. The second remarkable result is that the critical discriminant distance L_c grows very slowly with the dimensionality of the data points, D. Apparently the minimal discriminant distance L_c increases like the logarithm of the number of dimensions D,

$$L_c \approx \alpha + \beta \log D, \tag{5.2}$$
where α and β do not depend on D. The best fit in the range 2 ≤ D ≤ 1000 yields α = 2.3 ± 0.3 and β = 1.3 ± 0.2. Thus, this example suggests that the dimensionality of the points does not affect the performance of the method significantly. A more careful interpretation is that the method is robust against irrelevant features present in the characterization of the data. Clearly there is only one relevant feature in this problem, which is given by the projection

$$x_0 = \frac{\vec{y}_1 - \vec{y}_2}{\|\vec{y}_1 - \vec{y}_2\|} \cdot \vec{x}.$$

The Bayes classifier, which has the lowest expected error, is implemented by assigning x⃗_i to cluster 1 if x_0^i < 0 and to cluster 2 otherwise.
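In code, with hypothetical names, this classifier is a few lines; since the centers produced by our sampling sketch above are not symmetric about the origin, we subtract their midpoint explicitly so the decision threshold sits at 0.

```python
import numpy as np

def bayes_classify(X, y1, y2):
    """Bayes rule for the mixture of equation 5.1: project every point on the
    single relevant direction and split at the midpoint between the centers."""
    u = (y1 - y2) / np.linalg.norm(y1 - y2)
    x0 = (X - (y1 + y2) / 2.0) @ u       # centered projection x_0
    return np.where(x0 < 0, 1, 2)        # cluster 1 if x_0 < 0 (text's convention)
```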
Therefore we can consider the other D − 1 dimensions as irrelevant features, because they do not carry any relevant information. Thus, equation 5.2 tells us how noise, expressed as the number of irrelevant features present, affects the performance of the method. Adding pure noise variables to the true signal can lead to considerable confusion when classical methods are used (Fowlkes, Gnanadesikan, & Kettering, 1988).

5.4 The Iris Data. The first "real" example we present is the time-honored Anderson-Fisher Iris data, a popular benchmark problem for clustering procedures. It consists of measurements of four quantities, performed on each of 150 flowers. The specimens were chosen from three species of Iris, so the data constitute 150 points in four-dimensional space. The purpose of this experiment is to present a slightly more complicated scenario than that of Figure 1. From the projection on the plane spanned by the first two principal components, presented in Figure 6, we observe that there is a well-separated cluster (corresponding to the Iris setosa species)
Figure 6: Projection of the Iris data on the plane spanned by its first two principal components.
while the clusters corresponding to Iris virginica and Iris versicolor overlap. We determined neighbors in the D = 4 dimensional space according to the mutual K (K = 5) nearest neighbors definition, applied the SPC method, and obtained the susceptibility curve of Figure 7a; it clearly shows two peaks. When heated, the system first breaks into two clusters at T ≈ 0.1. At T_clus = 0.2 we obtain two clusters, of sizes 80 and 40; points of the smaller cluster correspond to the species Iris setosa. At T ≈ 0.6 another transition occurs, where the larger cluster splits in two. At T_clus = 0.7 we identified clusters of sizes 45, 40, and 38, corresponding to the species Iris versicolor, virginica, and setosa, respectively.

As opposed to the toy problems, the Iris data break into clusters in two stages. This reflects the fact that two of the three species are "closer" to each other than to the third one; the SPC method clearly handles such hierarchical organization of the data very well. Among the samples, 125 were classified correctly (as compared with manual classification); 25 were left unclassified. No further breaking of clusters was observed; all three disorder at T_ps ≈ 0.8 (since all three are of about the same density).

The best results of all the clustering algorithms used in this work, together with those of the SPC method, are summarized in Table 2. Among these,
Figure 7: (a) The susceptibility density χT/N as a function of the temperature and (b) the size of the four biggest clusters obtained at each temperature for the Iris data.
the minimal spanning tree procedure obtained the most accurate result, followed by our method, while the remaining clustering techniques failed to provide a satisfactory result.

5.5 LANDSAT Data. Clustering techniques have been very popular in remote sensing applications (Faber, Hochberg, Kelly, Thomas, & White, 1994; Kelly & White, 1993; Kamata, Kawaguchi, & Niimi, 1995; Larch, 1994; Kamata, Eason, & Kawaguchi, 1991). Multispectral scanners on LANDSAT satellites sense the electromagnetic energy of the light reflected by the earth's surface in several bands (or wavelengths) of the spectrum. A pixel represents the smallest area on the earth's surface that can be separated from the neighboring areas. The pixel size and the number of bands vary, depending on the scanner; in this case, four bands are utilized, with a pixel resolution of 80 × 80 meters. Two of the wavelengths are in the visible region, corresponding approximately to green (0.52–0.60 µm) and red (0.63–0.69 µm), and the other two are in the near-infrared (0.76–0.90 µm) and mid-infrared (1.55–1.75 µm) regions. The wavelength interval associated with each band is tuned to a particular cover category. For example, the green band is useful
Table 2: Best Partition Obtained with Clustering Methods.

Method                      Biggest Cluster   Middle Cluster   Smallest Cluster
Minimal spanning tree       50                50               50
Superparamagnetic           45                40               38
Valley seeking              67                42               37
Complete link               81                39               30
Directed graph              90                30               30
K-shared neighbors          90                30               30
Single link                 101               30               19
Mutual neighborhood value   101               30               19
Note: Only the minimal spanning tree and the superparamagnetic method returned clusters where points belonging to different Iris species were not mixed.
for identifying areas of shallow water, such as shoals and reefs, whereas the red band emphasizes urban areas. The data consist of 6437 samples contained in a rectangle of 82 × 100 pixels. Each "data point" is described by thirty-six features that correspond to a 3 × 3 square of pixels. A classification label (ground truth) of the central pixel is also provided. The data are given in random order, and certain samples have been removed, so that one cannot reconstruct the original image. The data were provided by Srinivasan (1994) and are available at the University of California at Irvine (UCI) Machine Learning Repository (Murphy & Aha, 1994).

The goal is to find the "natural classes" present in the data (without using the labels, of course). The quality of our results is determined by the extent to which the clustering reflects the six terrain classes present in the data: red soil, cotton crop, grey soil, damp grey soil, soil with vegetation stubble, and very damp grey soil. This exercise is close to a real problem of remote sensing, where the true labels (ground truth) of the pixels are not available, and therefore clustering techniques are needed to group pixels on the basis of the sensed observations.

We used the projection pursuit method (Friedman, 1987), a dimension-reducing transformation, in order to gain some knowledge about the organization of the data. Among the first six two-dimensional projections that were produced, we present in Figure 8 the one that best reflects the (known) structure of the data. We observe that the clusters differ in their density, there is unequal coupling between clusters, and the density of the points within a cluster is not uniform but rather decreases toward the perimeter of the cluster.

The susceptibility curve in Figure 9a reveals four transitions that reflect
Figure 8: Best two-dimensional projection pursuit among the first six solutions for the LANDSAT data.
the presence of the following hierarchy of clusters (see Figure 10). At the lowest temperature, two clusters, A and B, appear. Cluster A splits at the second transition into A_1 and A_2. At the next transition, cluster A_1 splits into A_1^1 and A_1^2. At the last transition, cluster A_2 splits into four clusters A_2^i, i = 1, . . . , 4. At this temperature the clusters A_1 and B are no longer identifiable; their spins are in a disordered state, since the density of points in A_1 and B is significantly smaller than within the A_2^i clusters. Thus, the superparamagnetic method overcomes the difficulty of dealing with clusters of different densities by analyzing the data at several temperatures.

This hierarchy indeed reflects the structure of the data. The clusters obtained in the temperature range 0.08 to 0.12 coincide with the picture obtained by projection pursuit; cluster B corresponds to the cotton crop terrain class, A_1 to red soil, and the remaining four terrain classes are grouped in the cluster A_2. The clusters A_1^1 and A_1^2 are a partition of the red soil, while A_2^1, A_2^2, A_2^3, and A_2^4 correspond, respectively, to the classes grey soil, very damp grey
Figure 9: (a) Susceptibility density χT/N of the LANDSAT data as a function of the temperature T. The numbers in parentheses indicate the phase transitions. (b) The sizes of the four biggest clusters at each temperature. The jumps indicate that a cluster has been split. Symbols A, B, A_i, and A_i^j correspond to the hierarchy depicted in Figure 10.
soil, damp grey soil, and soil with vegetation stubble.⁴ Ninety-seven percent purity was obtained, meaning that points belonging to different categories were almost never assigned to the same cluster. Only the optimal answer of Fukunaga's valley-seeking method and our SPC method succeeded in recovering the structure of the LANDSAT data. Fukunaga's method, however, yielded grossly different answers for different (random) initial conditions; our answer was stable.

⁴ This partition of the red soil is not reflected in the "true" labels. It would be of interest to reevaluate the labeling and try to identify the features that differentiate the two categories of red soil that were discovered by our method.
Figure 10: The LANDSAT data reveal a hierarchical structure. The numbers in parentheses correspond to the phase transitions indicated by a peak in the susceptibility (see Figure 9).
5.6 Isolated-Letter Speech Recognition. In the isolated-letter speech-recognition task, the "name" of a single letter is pronounced by a speaker. The resulting audio signal is recorded for all letters of the English alphabet for many speakers. The task is to find the structure of the data, which is expected to be a hierarchy reflecting the similarity that exists between different groups of letters, such as {B, D} or {M, N}, which differ only in a single articulatory feature. This analysis could be useful, for instance, to determine to what extent the chosen features succeed in differentiating the spoken letters.

We used the ISOLET database of 7797 examples created by Ron Cole (Fanty & Cole, 1991), which is available at the UCI Machine Learning Repository (Murphy & Aha, 1994). The data were recorded from 150 speakers balanced for sex and representing many different accents and English dialects. Each speaker pronounced each of the twenty-six letters twice (there are three examples missing). Cole's group has developed a set of 617 features describing each example. All attributes are continuous and scaled into the range −1 to 1. The features include spectral coefficients, contour features,
and sonorant, presonorant, and postsonorant features. The order of appearance of the features is not known.

We applied the SPC method and obtained the susceptibility curve shown in Figure 11a and the cluster size versus temperature curve presented in Figure 11b. The resulting partitioning obtained at different temperatures can be cast in hierarchical form, as presented in Figure 12a. We also tried the projection pursuit method, but none of the first six two-dimensional projections succeeded in revealing any relevant characteristic of the structure of the data.

In assessing the extent to which the SPC method succeeded in recovering the structure of the data, we built a "true" hierarchy by using the known labels of the examples. To do this, we first calculate the center of each class (letter) by averaging over all the examples belonging to it. Then a 26 × 26 matrix of the distances between these centers is constructed. Finally, we apply the single-link method to construct a hierarchy, using this proximity matrix. The result is presented in Figure 12b. The purity of the clustering was again very high (93 percent), and 35 percent of the samples were left as unclassified points. The cophenetic correlation coefficient validation index (Jain & Dubes, 1988) is equal to 0.98 for this graph, which indicates that this hierarchy fits the data very well. Since our method does not have a natural length scale defined at each resolution, we cannot use this index for our tree. Nevertheless, the good quality of our tree, presented in Figure 12a, is indicated by the good agreement between it and the tree of Figure 12b. Note that in order to construct the reference tree depicted in Figure 12b, the correct label of each point must be known.

6 Complexity and Computational Overhead

Nonparametric clustering is performed in two main stages:

Stage 1: Determination of the geometrical structure of the problem. Basically, a number of nearest neighbors of each point has to be found, using any reasonable algorithm, such as identifying the points lying inside a sphere of a given radius or a given number of closest neighbors (as in the SPC algorithm).

Stage 2: Manipulation of the data. Each method is characterized by a specific processing of the data.

For almost all methods, including SPC, complexity is determined by the first stage, because it requires more computational effort than the data manipulation itself. Finding the nearest neighbors is an expensive task; the complexity of branch and bound algorithms (Kamgar-Parsi & Kanal, 1985) is of order O(N^ν log N) (1 < ν < 2). Since this operation is common to all nonparametric clustering methods, any extra computational overhead our algorithm may have over some other nonparametric method must be due to the difference between the costs of the manipulations performed beyond
Figure 11: (a) Susceptibility density as a function of the temperature for the isolated-letter speech-recognition data. (b) Size of the four biggest clusters returned by the algorithm for each temperature.
Figure 12: Isolated-letter speech-recognition hierarchy obtained by (a) the superparamagnetic method and (b) using the labels of the data and assuming each letter is well represented by a center.
this stage. The second stage in the SPC method consists of equilibrating a system at each temperature. In general, the complexity is of order N (Binder & Heermann, 1988; Gould & Tobochnik, 1988).

Scaling with N. The main reason for choosing an unfrustrated ferromagnetic system, versus a spin glass (where negative interactions are allowed), is that ferromagnets reach thermal equilibrium very fast. Very efficient Monte
Carlo algorithms (Wang & Swendsen, 1990; Wolff, 1989; Kandel & Domany, 1991) were developed for these systems, in which the number of sweeps needed for thermal equilibration is small at the temperatures of interest. The number of operations required for each SW Monte Carlo sweep scales linearly with the number of edges; it is of order K × N (Hoshen & Kopelman, 1976). In all the examples of this article, we used a fixed number of sweeps (M = 1000). Therefore, the fact that the SPC method relies on a stochastic algorithm does not prevent it from being efficient.

Scaling with D. The equilibration stage does not depend on the dimension of the data, D. In fact, it is not necessary to know the dimensionality of the data as long as the distances between neighboring points are known. Since the complexity of the equilibration stage is of order N and does not scale with D, the complexity of the method is determined by the search for the nearest neighbors. Therefore, we conclude that the complexity of our method does not exceed that of the most efficient deterministic nonparametric algorithms.

For the sake of concreteness, we present the running times, corresponding to the second stage, on an HP-9000 (series K200) machine for two problems: the LANDSAT and ISOLET data. The corresponding running times were 1.97 and 2.53 minutes per temperature, respectively (0.12 and 0.15 seconds per sweep per temperature). Note that there is good agreement with the discussion presented above; the ratio of the CPU times is close to the ratio of the corresponding total numbers of edges (18,388 in the LANDSAT and 22,471 in the ISOLET data set), and there is no dependence on the dimensionality. Typical runs involve about twenty temperatures, which leads to 40 and 50 minutes of CPU time. This number of temperatures can be significantly reduced by using the Monte Carlo histogram method (Swendsen, 1993), where a set of simulations at a small number of temperatures suffices to calculate thermodynamic averages for the complete temperature range of interest.

Of all the deterministic methods we used, the most efficient one is the minimal spanning tree. Once the tree is built, it requires only 19 and 23 seconds of CPU time, respectively, for each set of clustering parameters. However, the actual running time is determined by how long one spends searching for the optimal parameters in the (three-dimensional) parameter space of the method. The other nonparametric methods presented in this article were not optimized, and therefore comparison of their running times could be misleading. For instance, we used Johnson's algorithm for implementing the single and complete linkage, which requires O(N³) operations for recovering the whole hierarchy, but faster versions, based on minimal spanning trees, require fewer operations. Running Friedman's projection pursuit algorithm (we thank Jerome Friedman for allowing public use of his program), whose results are presented in Figure 8, required 55 CPU minutes for LANDSAT. For the case of the ISOLET data (where D = 617),
the difference was dramatic: projection pursuit required more than a week of CPU time, while SPC required about 1 hour. The reason is that our algorithm does not scale with the dimension of the data D, whereas the complexity of projection pursuit increases very fast with D.

7 Discussion

This article proposes a new approach to nonparametric clustering, based on a physical, magnetic analogy. The mapping onto the magnetic problem is very simple. A Potts spin is assigned to each data point, and short-range ferromagnetic interactions between spins are introduced. The strength of these interactions decreases with distance. The thermodynamic system defined in this way presents different self-organizing regimes, and the parameter that determines the behavior of the system is the temperature. As the temperature is varied, the system undergoes many phase transitions. The idea is that each phase reflects a particular data structure related to a particular length scale of the problem. Basically, the clustering obtained at one temperature that belongs to a specific phase should not differ substantially from the partition obtained at another temperature in the same phase. On the other hand, the clustering obtained at two temperatures corresponding to different phases must be significantly different, reflecting different organizations of the data.

These ordering properties are reflected in the susceptibility χ and the spin-spin correlation function G_ij. The susceptibility turns out to be very useful for signaling the transitions between different phases of the system. The correlation function G_ij is used as a similarity index, whose value is determined both by the distance between sites v_i and v_j and by the density of points near and between these sites. The separation of the spin-spin correlations G_ij into strong and weak, as evident in Figure 3b, reflects the existence of two categories of collective behavior. In contrast, as shown in Figure 3a, the frequency distribution of distances d_ij between neighboring points of Figure 1 does not even hint that a natural cutoff distance, which separates neighboring points into two categories, exists. Since the double-peaked shape of the correlations' distribution persists at all relevant temperatures, the separation into strong and weak correlations is a robust property of the proposed Potts model.

This procedure is stochastic, since we use a Monte Carlo procedure to measure the different properties of the system, but it is completely insensitive to initial conditions. Moreover, the cluster distribution as a function of the temperature is known. Basically, there is a competition between the positive interaction, which encourages the spins to be aligned (the energy, which appears in the exponential of the Boltzmann weight, is minimal when all points belong to a single cluster), and the thermal disorder, which assigns a "bonus" that grows exponentially with the number of uncorrelated spins and, hence, with the number of clusters.

This method is robust in the presence of noise and is able to recover
the hierarchical structure of the data without enforcing the presence of clusters. The superparamagnetic method is also successful in real-life problems, where existing methods failed to overcome the difficulties posed by the existence of different density distributions and many characteristic lengths in the data.

Finally, we wish to reemphasize the aspect we view as the main advantage of our method: its generic applicability. It is likely and natural to expect that for just about any underlying distribution of data, one will be able to find a particular method, tailor-made to handle the particular distribution, whose performance will be better than that of SPC. If, however, there is no advance knowledge of this distribution, one cannot know which of the existing methods fits best and should be trusted. SPC, on the other hand, will find any lumpiness (if it exists) of the underlying data, without any fine-tuning of its parameters.

Appendix: Clusters and the Potts Model

The Potts model can be mapped onto a random cluster problem (Fortuin & Kasteleyn, 1972; Coniglio & Klein, 1981; Edwards & Sokal, 1988). In this formulation, clusters are defined as connected graph components governed by a specific probability distribution. We present this alternative formulation here in order to give another motivation for the superparamagnetic method, as well as to facilitate its comparison with graph-based clustering techniques.

Consider the following graph-based model whose basic entities are bond variables n_ij = 0, 1 residing on the edges ⟨i, j⟩ connecting neighboring sites v_i and v_j. When n_ij = 1, the bond between sites v_i and v_j is "occupied," and when n_ij = 0, the bond is "vacant." Given a configuration N = {n_ij}, random clusters are defined as the vertices of the connected components of the occupied bonds (where a vertex connected to "vacant" bonds only is considered a cluster containing a single point). The random cluster model is defined by the probability distribution

$$W(\mathcal{N}) = \frac{q^{C(\mathcal{N})}}{Z} \prod_{\langle i,j \rangle} p_{ij}^{n_{ij}} \, (1 - p_{ij})^{(1 - n_{ij})}, \tag{A.1}$$
where C(N) is the number of clusters of the given bond configuration, the partition sum Z is a normalization constant, and the parameters p_ij fulfill 0 ≤ p_ij ≤ 1.

The case q = 1 is the percolation model, where the joint probability (see equation A.1) factorizes into a product of independent factors for each n_ij. Thus, the state of each bond is independent of the state of any other bond. This implies, for example, that the most probable state is found simply by setting n_ij = 1 if p_ij > 0.5 and n_ij = 0 otherwise. By choosing q > 1, the weight of a bond configuration N is no longer the product of local independent factors. Instead, the weight of a configuration is also influenced by the spatial
distribution of the occupied bonds, since configurations with more random clusters are given a higher weight. For instance, it may happen that a bond n_ij is likely to be vacant while a bond n_kl is likely to be occupied, even though p_ij = p_kl. This can occur if the vacancy of n_ij enhances the number of random clusters, while sites v_k and v_l are connected through other (than n_kl) occupied bonds.

Surprisingly, there is a deep connection between the random cluster model and the seemingly unrelated Potts model. The basis for this connection (Edwards & Sokal, 1988) is a joint probability distribution of Potts spins and bond variables:

$$P(S, \mathcal{N}) = \frac{1}{Z} \prod_{\langle i,j \rangle} \left[ (1 - p_{ij})(1 - n_{ij}) + p_{ij} \, n_{ij} \, \delta_{s_i, s_j} \right]. \tag{A.2}$$
The marginal probability W(N) is obtained by summing P(S, N) over all Potts spin configurations. On the other hand, by setting

$$p_{ij} = 1 - \exp\left(-\frac{J_{ij}}{T}\right) \tag{A.3}$$
and summing P(S, N) over all bond configurations, the marginal probability (see equation 2.3) is obtained. The mapping between the Potts spin model and the random cluster model implies that the superparamagnetic clustering method can be formulated in terms of the random cluster model. One way to see this is to realize that the SW clusters are actually the random clusters. That is, the prescription given in section 3 for generating the SW clusters is defined through the conditional probability P(N|S) = P(S, N)/P(S). Therefore, by sampling the spin configurations obtained in the Monte Carlo sequence (according to probability P(S)), the bond configurations obtained are generated with probability W(N). In addition, recall that the Potts spin-spin correlation function G_ij is measured by using equation 4.3 and relying on the statistics of the SW clusters. Since the clusters are obtained through the spin-spin correlations, they can be determined directly from the random cluster model.

One of the most salient features of the superparamagnetic method is its probabilistic approach, as opposed to the deterministic one taken in other methods. Such deterministic schemes can indeed be recovered in the zero-temperature limit of this formulation (see equations A.1 and A.3); at T = 0, only the bond configuration N₀ corresponding to the ground state appears with nonvanishing probability. Some of the existing clustering methods can be formulated as deterministic-percolation models (T = 0, q = 1). For instance, the percolation method proposed by Dekel and West (1985) is obtained by choosing the coupling between spins J_ij = θ(R − d_ij); that is, the interaction between spins s_i and s_j is equal to one if their separation is smaller
than the clustering parameter R, and zero otherwise. Moreover, the single-link hierarchy (see, for example, Jain & Dubes, 1988) is obtained by varying the clustering parameter R. Clearly, in these processes the reward on the number of clusters is ruled out, and therefore only pairwise information is used. Jardine and Sibson (1971) attempted to list the essential characteristics of useful clustering methods and concluded that the single-link method was the only one that satisfied all the mathematical criteria. However, in practice it performs poorly, because single-link clusters easily chain together and are often straggly; only a single connecting edge is needed to merge two large clusters. To some extent, the superparamagnetic method overcomes this problem by introducing a bonus on the number of clusters, which is reflected in the fact that the system prefers to break apart clusters that are connected by a small number of bonds.

Fukunaga's (1990) valley-seeking method is recovered in the case q > 1 with interaction between spins J_ij = θ(R − d_ij). In this case, the Hamiltonian (see equation 2.1) is just the class separability measure of this algorithm, where a Metropolis relaxation at T = 0 is used to minimize it. The relaxation process terminates at some local minimum of the energy function, and points with the same spin value are assigned to a cluster. This procedure depends strongly on the initial conditions and is likely to stop at a metastable state that does not correspond to the correct answer.

Acknowledgments

We thank I. Kanter for many useful discussions. This research has been supported by the Germany-Israel Science Foundation.

References

Ahuja, N. (1982). Dot pattern processing using Voronoi neighborhood. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-4, 336–343.

Ball, G., & Hall, D. (1967). A clustering technique for summarizing multivariate data. Behavioral Science, 12, 153–155.

Baraldi, A., & Parmiggiani, F. (1995). A neural network for unsupervised categorization of multivalued input patterns: An application to satellite image clustering. IEEE Transactions on Geoscience and Remote Sensing, 33(2), 305–316.

Baxter, R. J. (1973). Potts model at the critical temperature. Journal of Physics, C6, L445–L448.

Binder, K., & Heermann, D. W. (1988). Monte Carlo simulations in statistical physics: An introduction. Berlin: Springer-Verlag.

Billoire, A., Lacaze, R., Morel, A., Gupta, S., Irback, A., & Petersson, B. (1991). Dynamics near a first-order phase transition with the Metropolis and Swendsen-Wang algorithms. Nuclear Physics, B358, 231–248.

Blatt, M., Wiseman, S., & Domany, E. (1996a). Super-paramagnetic clustering of data. Physical Review Letters, 76, 3251–3255.

Blatt, M., Wiseman, S., & Domany, E. (1996b). Clustering data through an analogy to the Potts model. In D. Touretzky, M. Mozer, & M. Hasselmo (Eds.), Advances in Neural Information Processing Systems (Vol. 8, p. 416). Cambridge, MA: MIT Press.

Blatt, M., Wiseman, S., & Domany, E. (1996c). Method and apparatus for clustering data. U.S. patent application.

Buhmann, J. M., & Kühnel, H. (1993). Vector quantization with complexity costs. IEEE Transactions on Information Theory, 39, 1133.

Chen, S., Ferrenberg, A. M., & Landau, D. P. (1992). Randomness-induced second-order transitions in the two-dimensional eight-state Potts model: A Monte Carlo study. Physical Review Letters, 69(8), 1213–1215.

Coniglio, A., & Klein, W. (1981). Thermal phase transitions at the percolation threshold. Physics Letters, A84, 83–84.

Cranias, L., Papageorgiou, H., & Piperidis, S. (1994). Clustering: A technique for search space reduction in example-based machine translation. In Proceedings of the 1994 IEEE International Conference on Systems, Man, and Cybernetics. Humans, Information and Technology (Vol. 1, pp. 1–6). New York: IEEE.

Dekel, A., & West, M. J. (1985). On percolation as a cosmological test. Astrophysical Journal, 288, 411–417.

Duda, R. O., & Hart, P. E. (1973). Pattern classification and scene analysis. New York: Wiley-Interscience.

Edwards, R. G., & Sokal, A. D. (1988). Generalization of the Fortuin-Kasteleyn-Swendsen-Wang representation and Monte Carlo algorithm. Physical Review, D38, 2009–2012.

Faber, V., Hochberg, J. G., Kelly, P. M., Thomas, T. R., & White, J. M. (1994). Concept extraction, a data-mining technique. Los Alamos Science, 22, 122–149.

Fanty, M., & Cole, R. (1991). Spoken letter recognition. In R. Lippmann, J. E. Moody, & D. S. Touretzky (Eds.), Advances in Neural Information Processing Systems (Vol. 3, pp. 220–226). San Mateo, CA: Morgan Kaufmann.

Foote, J. T., & Silverman, H. F. (1994). A model distance measure for talker clustering and identification. In Proceedings of the 1994 IEEE International Conference on Acoustics, Speech and Signal Processing (Vol. 1, pp. 317–320). New York: IEEE.

Fortuin, C. M., & Kasteleyn, P. W. (1972). On the random-cluster model. Physica (Utrecht), 57, 536–564.

Fowlkes, E. B., Gnanadesikan, R., & Kettering, J. R. (1988). Variable selection in clustering. Journal of Classification, 5, 205–228.

Friedman, J. H. (1987). Exploratory projection pursuit. Journal of the American Statistical Association, 82, 249–266.

Fu, Y., & Anderson, P. W. (1986). Applications of statistical mechanics to NP-complete problems in combinatorial optimization. Journal of Physics A: Math. Gen., 19, 1605–1620.

Fukunaga, K. (1990). Introduction to statistical pattern recognition. San Diego: Academic Press.

Gdalyahu, Y., & Weinshall, D. (1997). Local curve matching for object recognition without prior knowledge. In Proceedings of the DARPA Image Understanding Workshop, New Orleans, May 1997.

Gould, H., & Tobochnik, J. (1988). An introduction to computer simulation methods, part II. Reading, MA: Addison-Wesley.

Gould, H., & Tobochnik, J. (1989). Overcoming critical slowing down. Computers in Physics, 29, 82–86.

Hennecke, M., & Heyken, U. (1993). Critical dynamics of cluster algorithms in the dilute Ising model. Journal of Statistical Physics, 72, 829–844.

Hertz, J., Krogh, A., & Palmer, R. (1991). Introduction to the theory of neural computation. Redwood City, CA: Addison-Wesley.

Hoshen, J., & Kopelman, R. (1976). Percolation and cluster distribution. I. Cluster multiple labeling technique and critical concentration algorithm. Physical Review, B14, 3438–3445.

Iokibe, T. (1994). A method for automatic rule and membership function generation by discretionary fuzzy performance function and its application to a practical system. In R. Hall, H. Ying, I. Langari, & O. Yen (Eds.), Proceedings of the First International Joint Conference of the North American Fuzzy Information Processing Society Biannual Conference, the Industrial Fuzzy Control and Intelligent Systems Conference, and the NASA Joint Technology Workshop on Neural Networks and Fuzzy Logic (pp. 363–364). New York: IEEE.

Jain, A. K., & Dubes, R. C. (1988). Algorithms for clustering data. Englewood Cliffs, NJ: Prentice Hall.

Jardine, N., & Sibson, R. (1971). Mathematical taxonomy. New York: Wiley.

Kamata, S., Eason, R. O., & Kawaguchi, E. (1991). Classification of Landsat image data and its evaluation using a neural network approach. Transactions of the Society of Instrument and Control Engineers, 27(11), 1302–1306.

Kamata, S., Kawaguchi, E., & Niimi, M. (1995). An interactive analysis method for multidimensional images using a Hilbert curve. Systems and Computers in Japan, 27, 83–92.

Kamgar-Parsi, B., & Kanal, L. N. (1985). An improved branch and bound algorithm for computing K-nearest neighbors. Pattern Recognition Letters, 3, 7–12.

Kandel, D., & Domany, E. (1991). General cluster Monte Carlo dynamics. Physical Review, B43, 8539–8548.

Karayiannis, N. B. (1994). Maximum entropy clustering algorithms and their application in image compression. In Proceedings of the 1994 IEEE International Conference on Systems, Man, and Cybernetics. Humans, Information and Technology (Vol. 1, pp. 337–342). New York: IEEE.

Kelly, P. M., & White, J. M. (1993). Preprocessing remotely-sensed data for efficient analysis and classification. In SPIE Applications of Artificial Intelligence 1993: Knowledge Based Systems in Aerospace and Industry (pp. 24–30). Washington, DC: International Society for Optical Engineering.

Kirkpatrick, S., Gelatt Jr., C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220, 671–680.

Kosaka, T., & Sagayama, S. (1994). Tree-structured speaker clustering for fast speaker adaptation. In Proceedings of the 1994 IEEE International Conference on Acoustics, Speech and Signal Processing (Vol. 1, pp. 245–248). New York: IEEE.

Larch, D. (1994). Genetic algorithms for terrain categorization of Landsat. In Proceedings of the SPIE—The International Society for Optical Engineering (pp. 2–6). Washington, DC: International Society for Optical Engineering.

MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symp. Math. Stat. Prob. (Vol. I, pp. 281–297). Berkeley: University of California Press.

Mézard, M., & Parisi, G. (1986). A replica analysis of the traveling salesman problem. Journal de Physique, 47, 1285–1286.

Miller, D., & Rose, K. (1996). Hierarchical, unsupervised learning with growing via phase transitions. Neural Computation, 8, 425–450.

Moody, J., & Darken, C. J. (1989). Fast learning in neural networks of locally-tuned processing units. Neural Computation, 1, 281–294.

Murphy, P. M., & Aha, D. W. (1994). UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html. Irvine, CA: University of California, Department of Information and Computer Science.

Niedermayer, F. (1990). Improving the improved estimators in O(N) spin models. Physics Letters, B237, 473–475.

Nowlan, J. S., & Hinton, G. E. (1991). Evaluation of adaptive mixtures of competing experts. In R. P. Lippmann, J. E. Moody, & D. S. Touretzky (Eds.), Advances in Neural Information Processing Systems (Vol. 3, pp. 774–780). San Mateo, CA: Morgan Kaufmann.

Phillips, W. E., Velthuizen, R. P., Phuphanich, S., Hall, L. O., Clarke, L. P., & Silbiger, M. L. (1995). Application of fuzzy c-means segmentation technique for tissue differentiation in MR images of a hemorrhagic glioblastoma multiforme. Magnetic Resonance Imaging, 13, 277–290.

Rose, K., Gurewitz, E., & Fox, G. C. (1990). Statistical mechanics and phase transitions in clustering. Physical Review Letters, 65, 945–948.

Srinivasan, A. (1994). UCI repository of machine learning databases (University of California, Irvine), maintained by P. Murphy and D. Aha.

Suzuki, M., Shibata, M., & Suto, Y. (1995). Application to the anatomy of the volume rendering: Reconstruction of sequential tissue sections for rat's kidney. Medical Imaging Technology, 13, 195–201.

Swendsen, R. H. (1993). Modern methods of analyzing Monte Carlo computer simulations. Physica, A194, 53–62.

Swendsen, R. H., Wang, S., & Ferrenberg, A. M. (1992). New Monte Carlo methods for improved efficiency of computer simulations in statistical mechanics. In K. Binder (Ed.), The Monte Carlo Method in Condensed Matter Physics (pp. 75–91). Berlin: Springer-Verlag.

Wang, S., & Swendsen, R. H. (1990). Cluster Monte Carlo algorithms. Physica, A167, 565–579.

Wiseman, S., Blatt, M., & Domany, E. (1996). Unpublished manuscript.

Wolff, U. (1989). Comparison between cluster Monte Carlo algorithms in the Ising spin model. Physics Letters, B228, 379–382.

Wong, Y-F. (1993). Clustering data by melting. Neural Computation, 5, 89–104.

Wu, F. Y. (1982). The Potts model. Reviews of Modern Physics, 54(1), 235–268.

Yuille, A. L., & Kosowsky, J. J. (1994). Statistical physics algorithms that converge. Neural Computation, 6, 341–356.

Received July 26, 1996; accepted January 15, 1997.