Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2388
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Seong-Whan Lee Alessandro Verri (Eds.)
Pattern Recognition with Support Vector Machines First International Workshop, SVM 2002 Niagara Falls, Canada, August 10, 2002 Proceedings
Series Editors Gerhard Goos, Karlsruhe University, Germany Juris Hartmanis, Cornell University, NY, USA Jan van Leeuwen, Utrecht University, The Netherlands Volume Editors Seong-Whan Lee Korea University, Department of Computer Science and Engineering Anam-dong, Seongbuk-ku, Seoul 136-701, Korea E-mail:
[email protected] Alessandro Verri Università di Genova Dipartimento di Informatica e Scienze dell’Informazione Via Dodecaneso 35, 16146 Genova, Italy E-mail:
[email protected] Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Pattern recognition with support vector machines : first international workshop ; proceedings / SVM 2002, Niagara Falls, Canada, August 10, 2002. Seong-Whan Lee ; Alessandro Verri (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in computer science ; Vol. 2388) ISBN 3-540-44016-X
CR Subject Classification (1998): I.5, I.4, F.1.1, F.2.2, I.2.6, I.2.10, G.3 ISSN 0302-9743 ISBN 3-540-44016-X Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany Typesetting: Camera-ready by author, data conversion by DA-TeX Gerd Blumenstein Printed on acid-free paper SPIN: 10870546 06/3142 543210
Preface

With their introduction in 1995, Support Vector Machines (SVMs) marked the beginning of a new era in the learning from examples paradigm. Rooted in the Statistical Learning Theory developed by Vladimir Vapnik at AT&T, SVMs quickly gained attention from the pattern recognition community due to a number of theoretical and computational merits. These include, for example, the simple geometrical interpretation of the margin, uniqueness of the solution, statistical robustness of the loss function, modularity of the kernel function, and overfitting control through the choice of a single regularization parameter. Like all really good and far-reaching ideas, SVMs raised a number of interesting problems for both theoreticians and practitioners. New approaches to Statistical Learning Theory are under development, and new and more efficient methods for computing SVMs with a large number of examples are being studied.

Being interested in the development of trainable systems ourselves, we decided to organize an international workshop as a satellite event of the 16th International Conference on Pattern Recognition, emphasizing the practical impact and relevance of SVMs for pattern recognition. By March 2002, a total of 57 full papers had been submitted from 21 countries. To ensure the high quality of the workshop and proceedings, the program committee selected and accepted 30 of them after a thorough review process. Of these papers, 16 were presented in 4 oral sessions and 14 in a poster session. The papers span a variety of topics in pattern recognition with SVMs, from computational theories to their implementations. In addition to these excellent presentations, there were two invited papers by Sayan Mukherjee (MIT) and Yoshua Bengio (University of Montreal).

SVM 2002 was organized by the Center for Artificial Vision Research at Korea University and by the Department of Computer and Information Science at the University of Genova. We wish to thank all the members of the Program Committee and the additional reviewers who managed to review the papers in a very short time. We are also grateful to Sang-Woong Lee for developing and maintaining the wonderful web-based paper submission/review system. Finally we thank our sponsors, the Center for Biological and Computational Learning at MIT, the Brain Science Research Center at KAIST, the Statistical Research Center for Complex Systems at Seoul National University, and WatchVision, Inc. for their support.

We hope that all presenters and attendees have an enjoyable SVM 2002, with ample time for discussion inside and outside the workshop hall and plenty of opportunity to make new acquaintances. Last but not least, we would like to express our gratitude to all the contributors, reviewers, program committee members, and sponsors, without whom the workshop would not have been possible.

May 2002
Seong-Whan Lee Alessandro Verri
Workshop Co-chairs S.-W. Lee Korea University, Korea A. Verri University of Genova, Italy
Program Committee H. Bischof Vienna University of Technology, Austria C.J.C. Burges Lucent Technologies, USA H.-R. Byun Yonsei University, Korea G. Cauwenberghs Johns Hopkins University, USA N. Cristianini University of London, UK X. Ding Tsinghua University, China R.P.W. Duin Delft University of Technology, The Netherlands S. R. Gunn University of Southampton, UK I. Guyon ClopiNet, USA B. Heisele Honda R&D America, USA S. S. Keerthi National University of Singapore, Singapore J. Kittler University of Surrey, UK A. Leonardis National Taiwan University, Taiwan S. Mukherjee MIT, USA J. Park Korea University, Korea P. J. Philips NIST, USA P. Perner IBal Leipzig, Germany I. Pitas University of Thessaloniki, Greece J. Platt Microsoft Research, USA H. Shimodaira JAIST, Japan A. J. Smola Australian National University, Australia H. Taira NTT, Japan K. Tsuda AIST, Japan V. Vapnik AT&T, USA Q. Zhao University of Florida, USA
Organized by Center for Artificial Vision Research, Korea University Computer Science Department, University of Genova
Sponsored by Brain Science Research Center, KAIST Center for Biological and Computational Learning, MIT Statistical Research Center for Complex Systems, Seoul National University WatchVision, Inc.
Organization
In Cooperation with IAPR TC-1 IAPR TC-11 IAPR TC-17
Table of Contents
Invited Papers Predicting Signal Peptides with Support Vector Machines . . . . . . . . . . . . . . . . . . . 1 Neelanjan Mukherjee and Sayan Mukherjee Scaling Large Learning Problems with Hard Parallel Mixtures . . . . . . . . . . . . . . . 8 Ronan Collobert, Yoshua Bengio, and Samy Bengio
Computational Issues On the Generalization of Kernel Machines . . . 24 Pablo Navarrete and Javier Ruiz del Solar Kernel Whitening for One-Class Classification . . . 40 David M. J. Tax and Piotr Juszczak A Fast SVM Training Algorithm . . . 53 Jian-xiong Dong, Adam Krzyżak, and Ching Y. Suen Support Vector Machines with Embedded Reject Option . . . 68 Giorgio Fumera and Fabio Roli
Object Recognition Image Kernels . . . 83 Annalisa Barla, Emanuele Franceschi, Francesca Odone, and Alessandro Verri Combining Color and Shape Information for Appearance-Based Object Recognition Using Ultrametric Spin Glass-Markov Random Fields . . . 97 B. Caputo, Gy. Dorkó, and H. Niemann Maintenance Training of Electric Power Facilities Using Object Recognition by SVM . . . 112 Chikahito Nakajima and Massimiliano Pontil Kerneltron: Support Vector ‘Machine’ in Silicon . . . 120 Roman Genov and Gert Cauwenberghs
Pattern Recognition Advances in Component-Based Face Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 Stanley M. Bileschi and Bernd Heisele
Support Vector Learning for Gender Classification Using Audio and Visual Cues: A Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 L. Walawalkar, Mohammad Yeasin, Anand M. Narasimhamurthy, and Rajeev Sharma Analysis of Nonstationary Time Series Using Support Vector Machines . . . .160 Ming-Wei Chang, Chih-Jen Lin, and Ruby C. Weng Recognition of Consonant-Vowel (CV) Units of Speech in a Broadcast News Corpus Using Support Vector Machines . . . . . . . . . . . . . 171 C. Chandra Sekhar, Kazuya Takeda, and Fumitada Itakura
Applications Anomaly Detection Enhanced Classification in Computer Intrusion Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .186 Mike Fugate and James R. Gattiker Sparse Correlation Kernel Analysis and Evolutionary Algorithm-Based Modeling of the Sensory Activity within the Rat’s Barrel Cortex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .198 Mariofanna Milanova, Tomasz G. Smolinski, Grzegorz M. Boratyn, Jacek M. Zurada, and Andrzej Wrobel Applications of Support Vector Machines for Pattern Recognition: A Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213 Hyeran Byun and Seong-Whan Lee Typhoon Analysis and Data Mining with Kernel Methods . . . . . . . . . . . . . . . . .237 Asanobu Kitamoto
Poster Papers Support Vector Features and the Role of Dimensionality in Face Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 Fabrizio Smeraldi, Josef Bigun, and Wulfram Gerstner Face Detection Based on Cost-Sensitive Support Vector Machines . . . . . . . . . 260 Yong Ma and Xiaoqing Ding Real-Time Pedestrian Detection Using Support Vector Machines . . . . . . . . . . 268 Seonghoon Kang, Hyeran Byun, and Seong-Whan Lee Forward Decoding Kernel Machines: A Hybrid HMM/SVM Approach to Sequence Recognition . . . . . . . . . . . . . . . . .278 Shantanu Chakrabartty and Gert Cauwenberghs
Color Texture-Based Object Detection: An Application to License Plate Localization . . . 293 Kwang In Kim, Keechul Jung, and Jin Hyung Kim Support Vector Machines in Relational Databases . . . 310 Stefan Rüping Multi-Class SVM Classifier Based on Pairwise Coupling . . . 321 Zeyu Li, Shiwei Tang, and Shuicheng Yan Face Recognition Using Component-Based SVM Classification and Morphable Models . . . 334 Jennifer Huang, Volker Blanz, and Bernd Heisele A New Cache Replacement Algorithm in SMO . . . 342 Jianmin Li, Bo Zhang, and Fuzong Lin Optimization of the SVM Kernels Using an Empirical Error Minimization Scheme . . . 354 Nedjem-Eddine Ayat, Mohamed Cheriet, and Ching Y. Suen Face Detection Based on Support Vector Machines . . . 370 Dihua Xi and Seong-Whan Lee Detecting Windows in City Scenes . . . 388 Björn Johansson and Fredrik Kahl Support Vector Machine Ensemble with Bagging . . . 397 Hyun-Chul Kim, Shaoning Pang, Hong-Mo Je, Daijin Kim, and Sung-Yang Bang A Comparative Study of Polynomial Kernel SVM Applied to Appearance-Based Object Recognition . . . 408 Eulanda Miranda dos Santos and Herman Martins Gomes Author Index . . . 419
Predicting Signal Peptides with Support Vector Machines
Neelanjan Mukherjee¹,² and Sayan Mukherjee¹,³
¹ Center for Biological and Computational Learning, Massachusetts Institute of Technology, 45 Carleton St., Cambridge, MA 02139
[email protected] [email protected]
² University of California San Diego, Dept. of Biology
³ Center for Genome Research, Whitehead Institute, Massachusetts Institute of Technology
Abstract. We examine using a Support Vector Machine to predict secretory signal peptides. We predict signal peptides for both prokaryotic and eukaryotic organisms. Signal peptides versus non-signal peptides, as well as cleavage sites, were predicted from a sequence of amino acids. Two types of kernels (each corresponding to a different metric) were used: one based on the Hamming distance, and one based on the percent accepted mutation (PAM) score trained on the same signal peptide data.
1 Introduction
For both prokaryotic and eukaryotic cells, proteins are transported from their site of synthesis to other sites either inside or outside the cell. A basic step in the transportation process is to mark proteins for translocation across membranes: e.g. cell membrane, outer membrane, and endoplasmic reticulum (ER). The protein destination depends on the sequence of amino acids located at the N-terminus of the nascent protein chain bound to the ribosome. This sequence or targeting signal is called a signal peptide (SP). Discriminating a signal peptide from a non-signal peptide or finding the location of the cleavage site between the two is of practical importance because of the need to find more effective vehicles for protein production in recombinant systems. It is thought that cells recognize signal peptides with almost 100% selectivity and specificity [1]. Signal peptides do have particular characteristics that are consistent for eukaryotic and prokaryotic cells. One characteristic is that signal peptides can typically be separated into three regions. Other characteristics relate to the frequency of occurrence of particular amino acids at particular locations along the sequence. However, because the signal peptides do not have unique consensus sequences, these biological characterizations do not provide an accurate enough classification rule. Pattern recognition algorithms may be appropriate for this problem since there exists a large set of examples from which to infer a set of rules which
discriminate between two patterns, either signal peptides vs. non-signal peptides or cleavage site vs. non-cleavage site. In the past, neural networks, Hidden Markov Models (HMMs), and neural networks coupled with HMMs [2,1,3] were used for the discrimination. In this paper we explore using SVMs for the discrimination. The reasons for using an SVM are as follows: for a variety of problems SVMs have performed very well [4]; unlike a neural network, the SVM might give some interesting biological feedback upon examining the protein sequences of the margin SVs (the examples that determine the discrimination boundary); and an HMM can be embedded in an SVM [5], avoiding the ad hoc algorithms used to couple neural networks and HMMs. The paper is organized as follows. Section 2 gives some background about what is known about signal peptides for prokaryotes and eukaryotes and describes the data. Section 3 introduces SVMs and the two types of kernels or distance metrics used. Section 4 describes the results of our prediction algorithms and compares them to other studies.
2 Signal Peptide Properties and Datasets
In this section we summarize some characteristics of signal peptides for eukaryotic and prokaryotic cells and the datasets used. In both types of cells the signal sequence can be separated into three regions: a positively charged n-region followed by a hydrophobic h-region and a neutral but polar c-region. The (−3, −1) rule states that the residues at positions −3 and −1, relative to the cleavage site, must be small and neutral for cleavage to occur. We will look at both eukaryotic and prokaryotic cells. The dataset was the same as that used by [1], which was taken from SWISS-PROT version 29 [6]. The dataset consisted of Gram-positive and Gram-negative bacteria as examples of prokaryotic cells. For eukaryotic cells we looked at the entire dataset as well as the human subset of the eukaryotic data. The sequence of the signal peptide and the first 30 amino acids of the mature protein from the secretory protein were used to construct positive examples. The first 70 amino acids of each cytoplasmic protein (and, for eukaryotes, also each nuclear protein) were used to construct negative examples of signal peptides. The actual positive and negative samples were constructed by running a moving window of a particular size (21 amino acids in the case of eukaryotic cells and 17 amino acids in the case of prokaryotic cells). Each amino acid was encoded as a real number between 1 and 20. Table 1 states how many signal peptides and non-secretory proteins were used in the various datasets. Table 2 states how many positive (signal peptide) and negative samples (non-secretory proteins) this translates into after processing using the running window.
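The window construction can be made concrete with a short sketch (illustrative only, not the authors' code); the alphabet ordering and the toy sequence below are arbitrary assumptions:

```python
import numpy as np

# Residues are mapped to integers in 1..20 and a moving window of fixed width
# produces one example per position, as described above.
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"
AA_TO_INT = {aa: i + 1 for i, aa in enumerate(ALPHABET)}

def encode(sequence):
    """Encode each amino acid as an integer between 1 and 20."""
    return np.array([AA_TO_INT[aa] for aa in sequence], dtype=np.int8)

def sliding_windows(sequence, width=21):
    """All contiguous windows of `width` residues, one row per window."""
    codes = encode(sequence)
    return np.stack([codes[i:i + width] for i in range(len(codes) - width + 1)])

# A made-up protein fragment: windows drawn from a cytoplasmic protein would be
# labelled -1, windows drawn from a signal peptide region +1.
toy = "MKKTAIAIAVALAGFATVAQAAPKDNTWYTGAKLGWSQYHDTGFINNNGPTHENQLGAGAF"
print(sliding_windows(toy, width=21).shape)
```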
Table 1. Datasets used and number of sequences in datasets

Source       Signal peptides   Non-secretory proteins
Human             416                  251
Eukaryote        1011                  820
E. coli           105                  119
Gram−             266                  186
Gram+             141                   64
Table 2. The effective number of positive and negative examples after processing

Source       Signal peptides   Non-secretory proteins
Human            6293                10793
Eukaryote       14755                43460
Gram−            4541                 9858
Gram+            3380                 3392
3 Support Vector Machine Overview and Kernels Used
We are given examples (x_1, y_1), . . . , (x_ℓ, y_ℓ), with x_i ∈ R^n and y_i ∈ {−1, 1} for all i. The problem of learning a function that will generalize well on new examples is ill-posed. The classical approach to restoring well-posedness to learning is regularization theory [9]. This leads to the following regularized learning problem:

    min_{f∈H}  (1/ℓ) Σ_{i=1}^{ℓ} V(y_i, f(x_i)) + λ ‖f‖²_K .                  (1)

Here, ‖f‖²_K is the norm in a Reproducing Kernel Hilbert Space H defined by a positive definite kernel function K, V is a loss function indicating the penalty we pay for guessing f(x_i) when the true value is y, and λ is a regularization parameter quantifying our willingness to trade off accuracy of classification for a function with small norm in the RKHS H. The classical SVM arises by considering the specific loss function

    V(f(x), y) ≡ (1 − y f(x))_+ ,                                             (2)

where

    (k)_+ ≡ max(k, 0) .                                                       (3)

So the problem becomes:

    min_{f∈H}  (1/ℓ) Σ_{i=1}^{ℓ} ξ_i + λ ‖f‖²_K                               (4)
    subject to:  y_i f(x_i) ≥ 1 − ξ_i ,   i = 1, . . . , ℓ ,                  (5)
                 ξ_i ≥ 0 ,                i = 1, . . . , ℓ .                  (6)
Under quite general conditions it can be shown that the solution f* to the above regularization problem has the form

    f*(x) = Σ_{i=1}^{ℓ} c_i K(x, x_i) .                                       (7)

This can be written as the following dual program:

    max_{α∈R^ℓ}  Σ_{i=1}^{ℓ} α_i − α^T Q α                                    (8)
    subject to:  Σ_{i=1}^{ℓ} y_i α_i = 0 ,                                    (9)
                 0 ≤ α_i ≤ C ,   i = 1, . . . , ℓ ,                           (10)

where C = 1/(2λ). Here, Q is the matrix defined by the relationship

    Q = Y K Y  ⟺  Q_ij = y_i y_j K(x_i, x_j) .                                (11)
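For illustration, the quantities above can be assembled directly once multipliers are available from any QP solver; the following sketch (not from the paper) builds the Gram matrix, the matrix Q of (11), and evaluates the expansion (7) with arbitrary coefficients:

```python
import numpy as np

# Illustrative only: the coefficients c below are placeholders; a real SVM would
# obtain them (equivalently, the alphas) from the dual program (8)-(10).
def gram_matrix(X, kernel):
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(X[i], X[j])
    return K

def q_matrix(K, y):
    # Q = Y K Y with Y = diag(y), i.e. Q_ij = y_i y_j K(x_i, x_j)
    return (y[:, None] * K) * y[None, :]

def f_star(x, X, c, kernel):
    # f*(x) = sum_i c_i K(x, x_i)
    return sum(ci * kernel(x, xi) for ci, xi in zip(c, X))

linear = lambda a, b: float(np.dot(a, b))
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([1.0, -1.0, 1.0])
Q = q_matrix(gram_matrix(X, linear), y)
print(Q.shape, np.sign(f_star(np.array([0.5, 0.9]), X, 0.1 * y, linear)))
```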
A geometric interpretation of the RKHS norm

    ‖f‖²_K = Σ_{i,j=1}^{ℓ} y_i y_j α_i α_j K(x_i, x_j)

is the margin M, where M = 1/(2‖f‖²_K). For the case where the data can be perfectly separated, Figure 1 illustrates how minimizing the norm maximizes the margin.
Fig. 1. Two hyperplanes with different margin. Intuitively, the large margin hyperplane (b) seems likely to perform better on future examples than the much smaller margin hyperplane (a)
We will use two types of kernels: one based upon Hamming distances and one based upon a similarity matrix, the Percent Accepted Mutation (PAM) matrix. The input vectors x_i are not in R^n but are in the discrete space {1, . . . , 20}^n. The following kernels based upon Hamming distances were used:

    K(x_i, x_j) = h(x_i, x_j) ,                                               (12)
    K(x_i, x_j) = (h(x_i, x_j) + 1)² ,                                        (13)
where h(x_i, x_j) is a count of how many elements in each position of the two sequences are identical. The PAM matrix can be thought of as the probability that one amino acid replaces another, so the similarity between two amino acids, for example Leucine (L) and Serine (S), is K(L, S) = M_LS = P(Leucine is replaced by Serine). We used the PAM250 matrix [7]. Note that this is not a valid kernel and also not a distance metric. However, we used this kernel anyway.
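A minimal sketch of these kernels follows, assuming the windows are already encoded as integer vectors in {1, ..., 20}^n; the extension of the PAM score to whole windows as a sum of per-position similarities is our reading of the text, and the 20×20 matrix `pam250` is assumed to be loaded elsewhere:

```python
import numpy as np

def hamming_kernel(a, b):
    """Equation (12): number of positions where the two sequences agree."""
    return float(np.sum(a == b))

def poly_hamming_kernel(a, b):
    """Equation (13): (h(a, b) + 1)^2."""
    return (hamming_kernel(a, b) + 1.0) ** 2

def pam_kernel(a, b, pam250):
    """PAM-based similarity; as noted above, not a valid kernel nor a metric."""
    return float(np.sum(pam250[a - 1, b - 1]))

a = np.array([3, 7, 7, 1, 12])
b = np.array([3, 7, 2, 1, 12])
print(hamming_kernel(a, b), poly_hamming_kernel(a, b))  # 4.0 25.0
```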
4 Results
We compare our classification results for the various datasets to results reported in [1,3] using neural networks (NNs) and Hidden Markov Models (HMMs). The results reported are 5-fold cross-validation results (see Tables 3 and 4). The designations SVM1, SVM2, and SVM3 correspond to SVMs trained with the linear Hamming distance, polynomial Hamming distance, and linear PAM250 matrix kernels. For most of the datasets the SVM results are at least as accurate as those of neural networks and HMMs. For the general eukaryotic dataset our results are no better than those of the HMM; this is probably due to the fact that this is a large dataset and any reasonable algorithm will perform accurately. A cleavage site prediction is defined as correct if the cleavage site falls anywhere in the sliding window. One interesting observation is that SVM3, which does not use a valid kernel, performs very well.
Table 3. Classification accuracy for SVMs, NNs, and HMMs for signal peptide versus non-secretory proteins

Algorithm   Eukaryotic   Human   Gram+   Gram−   E. coli
NN             97%        96%     96%     88%     89%
HMM            94%         –      96%     93%      –
SVM1           96%        96%     97%     94%     91%
SVM2           97%        97%     97%     94%     91%
SVM3           98%        97%     97%     95%     92%
Table 4. Classification accuracy for SVMs, NNs, and HMMs for predicting cleavage sites

Algorithm   Eukaryotic   Human   Gram+   Gram−   E. coli
NN             70%        68%     68%     79%     84%
HMM            70%         –      65%     81%      –
SVM1           73%        69%     82%     80%     83%
SVM2           75%        72%     84%     79%     84%
5 Conclusions and Future Work
We are able to predict cleavage sites and also discriminate signal peptides from non-secretory peptides using SVM classifiers. Our results are at least as accurate as those obtained using HMMs and NNs on the same tasks. It would be interesting to examine the support vectors selected in the training phase, analyze them as prototype signaling peptides, and look at their statistical structure. It would also be of interest to apply a feature selection algorithm [8] to select which features/positions in the sequence are most relevant in making the above discriminations. Studying the above might yield some interesting biology. It would also be of interest to embed the HMMs used in these classification tasks into an SVM using the Fisher kernel [5]. An interesting note is that the kernel based upon the PAM250 matrix performed well even though it is not a valid kernel.
Acknowledgments This research was sponsored by grants from: Office of Naval Research (DARPA) Contract No. N00014-00-1-0907, National Science Foundation (ITR/IM) Contract No. IIS-0085836, National Science Foundation (ITR) Contract No. IIS0112991, National Science Foundation (KDI) Contract No. DMS-9872936, and National Science Foundation Contract No. IIS-9800032. Sayan Mukherjee would like to acknowledge the Sloan/DOE Fellowship in Computational Molecular Biology.
References
1. Nielsen, H., Brunak, S., von Heijne, G., Protein Engineering, vol. 12, no. 1, pp. 3-9, 1999.
2. Baldi, P., Brunak, S., Bioinformatics: The Machine Learning Approach, MIT Press, Cambridge, MA, 1999.
3. Nielsen, H., Engelbrecht, J., Brunak, S., von Heijne, G., Protein Engineering, vol. 10, no. 1, pp. 1-6, 1997.
4. Vapnik, V., Statistical Learning Theory, J. Wiley, 1998.
5. Jaakkola, T. and Haussler, D., Exploiting Generative Models in Discriminative Classifiers, NIPS 11, Morgan Kaufmann, 1998.
6. Bairoch, A. and Boeckmann, B., Nucleic Acids Research, 22, pp. 3578-3580.
7. Schwartz, R. M. and Dayhoff, M. O., Atlas of Protein Sequence and Structure, pp. 353-358, 1979.
8. Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S., Choosing Many Kernel Parameters for Support Vector Machines, Machine Learning, 2002.
9. Tikhonov, A. N. and Arsenin, V. Y., Solutions of Ill-Posed Problems, W. H. Winston, Washington D. C., 1977.
Scaling Large Learning Problems with Hard Parallel Mixtures
Ronan Collobert¹*, Yoshua Bengio¹, and Samy Bengio²
¹ Université de Montréal, DIRO, CP 6128, Succ. Centre-Ville, Montréal, Canada
{collober,bengioy}@iro.umontreal.ca
² IDIAP, CP 592, rue du Simplon 4, 1920 Martigny, Switzerland
[email protected] Abstract. A challenge for statistical learning is to deal with large data sets, e.g. in data mining. Popular learning algorithms such as Support Vector Machines have training time at least quadratic in the number of examples: they are hopeless to solve problems with a million examples. We propose a “hard parallelizable mixture” methodology which yields significantly reduced training time through modularization and parallelization: the training data is iteratively partitioned by a “gater” model in such a way that it becomes easy to learn an “expert” model separately in each region of the partition. A probabilistic extension and the use of a set of generative models allows representing the gater so that all pieces of the model are locally trained. For SVMs, time complexity appears empirically to locally grow linearly with the number of examples, while generalization performance can be enhanced. For the probabilistic version of the algorithm, the iterative algorithm provably goes down in a cost function that is an upper bound on the negative log-likelihood.
1 Introduction
As organizations collect more and more data, the interest in extracting useful information from these data sets with data mining algorithms is pushing much research effort toward the challenges that these data sets bring to statistical learning methods. One of these challenges is the sheer size of the data sets: many learning algorithms require training time that grows too fast with respect to the number of training examples. This is for example the case with Support Vector Machines [11] (SVM) and Gaussian processes [12], both being non-parametric learning methods that can be applied to classification, regression, and conditional probability estimation. Both require O(T 3 ) training time (for T examples) in the worst case or with a poor implementation. Empirical computation time measurements on state-of-the-art SVM implementations show that training time grows much closer to O(T 2 ) than O(T 3 ) [2]. It has also been
Part of this work has been done while Ronan Collobert was at IDIAP, CP 592, rue du Simplon 4, 1920 Martigny, Switzerland.
conjectured [3] that training of Multi-Layer Perceptrons (MLPs) might also scale between quadratic and cubic with the number of examples.¹ It would therefore be extremely useful to have general-purpose algorithms which allow the learning problem to be decomposed in such a way as to drastically reduce the training time, so that it grows closer to O(T). Another motivation for our work is the availability of cheap parallelism with PC clusters (e.g. Linux clusters). If a decomposition algorithm could separate the work into tasks involving little or rare communication between tasks, then training time could be reduced by one or two orders of magnitude with such loosely-coupled clusters.

The basic idea of this paper is to use an iterative divide-and-conquer strategy to learn a partition of the data such that, ideally, (1) the partition is “simple”, i.e. it can be learned with good generalization by a classifier with a limited capacity, which we will call the gater, and (2) the learning task in each region of the partition is “simple”, i.e. it can be learned with good generalization by an expert model trained only on the examples of that region. In the end, the prediction on a test point can be obtained by mixing the predictions of the different experts, weighting their predictions with the output of the gater. One therefore obtains a mixture of experts [6], but it will not have been trained in the usual ways (maximum likelihood, mean squared error, etc.).

The idea of an SVM mixture is not new, although previous attempts such as Kwok’s paper on Support Vector Mixtures [7] trained each SVM on the whole data set. We instead advocate SVM mixtures in which each SVM is trained only on part of the data set, to overcome the time complexity problem for large data sets. We propose here simple methods to train such mixtures, and we will show that in practice these methods are much faster than training only one SVM, and have experimentally led to results that are at least as good as one SVM. We conjecture that the training time complexity of the proposed approach with respect to the number of examples is sub-quadratic for large data sets. Moreover this mixture can be easily parallelized, which could again improve the training time significantly.

The organization of the paper goes as follows: in the next section, we briefly introduce the SVM model for classification. In section 4 we present two versions of the hard mixture (a non-probabilistic and a probabilistic one), followed in section 5 by some comparisons to related models. In section 6 we show experimental results on two large real-life data sets. One of the drawbacks of the first version of the algorithm is that it is tied to the mean squared error loss function. Another possible drawback is that the gater must be trained on the whole data set, and this operation could be the bottleneck of the whole procedure. To address these two issues, we present a probabilistic version of the hard mixture model in section 4.2. One advantage of the probabilistic formulation is that it
¹ This is more debatable and may strongly depend on the data distribution. Although we did not formally test this hypothesis, we conjecture that on very large data sets, with properly tuned stochastic gradient descent, training time of MLPs is much closer to linear than to quadratic in the number of examples.
generalizes the approach to other tasks (such as conditional probability estimation). The other is that it can eliminate the bottleneck by splitting the task of the gater into multiple local gaters, one per expert. Each of these local gaters is actually a generative model that gives a high score to input vectors that belong to the region of the associated expert model, and this local expert need only be trained with the examples from that region. This probabilistic decomposition is similar to the MOSAIC [4] model but it is used to form a hard partition and not trained by maximum likelihood. We show that the iterative partitioning algorithm actually minimizes an upper bound on the negative log-likelihood (which corresponds to the loss occurring when having to pick a single expert to make the prediction). Experimental results with the probabilistic version of the hard mixture model are presented in section 7.
2 Introduction to Support Vector Machines
Support Vector Machines (SVMs) [11] have been applied to many classification problems, generally yielding good performance compared to other algorithms. For classification tasks, the decision function is of the form

    y = sign ( Σ_{i=1}^{T} y_i α_i K(x, x_i) + b )                            (1)
where x ∈ R^d is the d-dimensional input vector of a test example, y ∈ {−1, 1} is a class label, x_i is the input vector for the ith training example, y_i is its associated class label, T is the number of training examples, K(x, x_i) is a positive definite kernel function, and α = {α_1, . . . , α_T} and b are the parameters of the model. Training an SVM consists in finding α that minimizes the objective function

    Q(α) = − Σ_{i=1}^{T} α_i + (1/2) Σ_{i=1}^{T} Σ_{j=1}^{T} α_i α_j y_i y_j K(x_i, x_j)    (2)

subject to the constraints

    Σ_{i=1}^{T} α_i y_i = 0                                                   (3)

and

    0 ≤ α_i ≤ C   ∀i .                                                        (4)

The kernel K(x, x_i) can have different forms, such as the Radial Basis Function (RBF):

    K(x_i, x_j) = exp ( −‖x_i − x_j‖² / σ² )                                  (5)

with parameter σ. Therefore, to train an SVM, one must solve a quadratic optimization problem, where the number of parameters is T. This makes the use of SVMs for
large data sets difficult: computing K(x_i, x_j) for every training pair would require O(T²) computation, and solving may take up to O(T³). Note however that current state-of-the-art algorithms appear to have training time complexity scaling much closer to O(T²) than O(T³) [2].
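As a small illustration (not the paper's Torch implementation), the RBF kernel of equation (5) can be computed for a whole training set at once; σ = 1.7 below matches the value later selected in Section 6.1, and the random inputs are placeholders:

```python
import numpy as np

def rbf_gram(X, sigma):
    """K_ij = exp(-||x_i - x_j||^2 / sigma^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / sigma ** 2)

X = np.random.randn(6, 54)        # e.g. 54 normalized Forest features
K = rbf_gram(X, sigma=1.7)
print(K.shape, bool(np.allclose(np.diag(K), 1.0)))
```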
3 Standard Mixture of Experts

3.1 Probabilistic Framework
The idea of Mixtures of Experts [5] is simple to explain in a probabilistic framework: given two random variables X ∈ R^n and Y ∈ R^d, one would like to represent a conditional distribution P(Y|X) as a decomposition of several simpler conditional distributions called experts. For that, first consider a discrete variable E, the identity of the expert most appropriate for (X, Y). Thus the conditional distribution is rewritten:

    P(Y|X) = Σ_{i=1}^{N} P(E = i|X) P_i(Y|X)

where the distribution P(E|X) is called the gater, because it probabilistically assigns each example to an expert. Usually this kind of mixture is trained using a log-likelihood maximization technique, that is, by minimizing − Σ_{t=1}^{T} log P(y_t|x_t) over a training set D = {(x_t, y_t), t = 1..T}.

3.2 Non-probabilistic Framework
Here, one would like to represent a function y = f(x) (instead of the conditional distribution P(Y|X)) as a combination of simpler functions which are called again “experts”. More formally, given a training example (x, y) ∈ D, the following decomposition is built:

    f(x) = Σ_{i=1}^{N} w_i(x) s_i(x)                                          (6)
where si (.) is the output function for expert i, and w(.) is the gater, which gives a weight for each expert, given an input x. In general one would like to find the gater w(.) and the experts si (.) that minimize the expected value of a loss L(f, (x, y)). The probabilistic and non-probabilistic versions are quite similar, and both could be used in many applications.
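A toy sketch of the decomposition in equation (6) may help fix ideas; the softmax gater and linear experts below are placeholders rather than the models used later in the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def mixture_output(x, gater_matrix, experts):
    w = softmax(gater_matrix @ x)                      # w_i(x), one weight per expert
    s = np.array([expert(x) for expert in experts])    # s_i(x), expert predictions
    return float(w @ s)

rng = np.random.default_rng(0)
experts = [lambda x, a=a: float(a @ x) for a in rng.standard_normal((3, 4))]
gater_matrix = rng.standard_normal((3, 4))
print(mixture_output(rng.standard_normal(4), gater_matrix, experts))
```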
4 A New Conditional Mixture
A standard mixture of experts represents a soft decomposition of the data into subsets, thus both the gater and each expert must be trained on the whole data
set. Because we want to train complex models on large data sets, we would like instead to take advantage of such a decomposition to split up the training task into small pieces. That’s the key point of the new models. The kind of mixture of experts that is presented here could be applied with any kind of expert learner, but, as our first goal was to apply it with SVMs, let us begin with a non-probabilistic framework, where SVMs fit more easily.

4.1 Hard Non-probabilistic Mixture
The output prediction associated with an input vector x for the hard non-probabilistic mixture that we propose is similar to that in (6) and is computed as follows:

    f(x) = h ( Σ_{i=1}^{N} w_i(x) s_i(x) )                                    (7)
where one has simply added a transformation of the output with a transfer function h, for example the hyperbolic tangent for classification tasks (which we have found to improve results). In the proposed model, the mixture is trained to minimize the cost function which is the sum of squared losses:

    C = Σ_{t=1}^{T} [ f(x_t) − y_t ]² .                                       (8)
To train this model, we propose a very simple algorithm:

Algorithm 1 Hard non-probabilistic mixture
1. Divide the training set D into N random subsets D_i of size near T/N.
2. Train each expert s_i separately over one of these subsets.
3. Keeping the experts fixed, train the gater w to minimize (8) on the whole training set.
4. Reconstruct N subsets: for each example (x_t, y_t),
   – sort the experts in descending order according to the values w_i(x_t),
   – assign the example to the first expert in the list which has less than (T/N + 1) examples, in order to ensure a balance between the experts.
5. If a termination criterion is not fulfilled (such as a given number of iterations or a validation error going up), go to step 2.
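The balanced re-assignment of step 4 is the only non-standard ingredient; the following sketch (our illustration, not the authors' code) shows one way to implement it given a matrix of gater weights w_i(x_t):

```python
import numpy as np

def reassign(gater_weights):
    """gater_weights: (T, N) array of w_i(x_t); returns an expert index per example."""
    T, N = gater_weights.shape
    capacity = T // N + 1
    counts = np.zeros(N, dtype=int)
    assignment = np.empty(T, dtype=int)
    for t in range(T):
        for i in np.argsort(-gater_weights[t]):   # experts by decreasing weight
            if counts[i] < capacity:              # first expert that still has room
                assignment[t] = i
                counts[i] += 1
                break
    return assignment

assignment = reassign(np.random.rand(1000, 10))
print(np.bincount(assignment, minlength=10))       # close to T/N examples each
```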
Note that step 2 of this algorithm can be easily implemented in parallel as each expert can be trained separately on a different computer. Note also that step 3 can be an approximate minimization (as usually done when training MLPs), that can continue from the solution (parameters) found at the end of the previous outer loop iteration. The idea of this mixture is intuitively obvious: one iterates to discover a good partition of the training set, which ideally could represent in a better way the
structure of the training set. As this mixture is non-probabilistic, one can apply it directly with SVMs as experts. In the experiments, we have chosen an MLP for the gater, as for the usual non-probabilistic mixture of experts.

4.2 Hard Probabilistic Mixture
One possible drawback of the previous model is that the gater must be trained over the whole data set, and this could be the training time bottleneck of the whole procedure. Thus, the second idea that we propose here, is that in a probabilistic context, one can break up the gater itself into sub-models, one per expert, that can be trained separately. The idea is similar to that exposed for example in MOSAIC [4]: each expert is associated with a generative model P (X|E = i) that can be trained solely on the subset Di . But unlike MOSAIC, the proposed algorithm forms a hard partition of the data to train the experts. With this new idea in mind, one can easily adapt the previous algorithm as follows:
Algorithm 2 Hard probabilistic mixture
1. Divide the training set into N random subsets D_i of size near T/N.
2. Train each expert P_i(Y|X) separately over D_i.
3. Train each local gater P(X|E = i) separately over D_i.
4. Estimate the priors P(E = i) by normalizing |D_i|, and combine the generative models to obtain the function
       P(E = i|X) = P(X|E = i) P(E = i) / Σ_{j=1}^{N} P(X|E = j) P(E = j).
5. Reconstruct N subsets: for each example (x_t, y_t),
   – sort the experts in descending order according to the posterior
       P(E = i|x_t, y_t) = P_i(y_t|x_t) P(E = i|x_t) / Σ_{j=1}^{N} P_j(y_t|x_t) P(E = j|x_t),
   – assign the example to the first expert in the list which has less than (T/N + 1) examples, in order to ensure a balance between the experts.
6. If a termination criterion is not fulfilled, go to step 2.
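The posteriors used in steps 4 and 5 reduce to simple normalized products; the sketch below (illustrative, with made-up numbers) computes them for a single example:

```python
import numpy as np

def gater_posterior(p_x_given_e, p_e):
    """P(E = i | x) from the local generative models and the priors (step 4)."""
    joint = p_x_given_e * p_e
    return joint / joint.sum()

def expert_posterior(p_y_given_x, p_x_given_e, p_e):
    """P(E = i | x, y), the quantity used to rank experts in step 5."""
    joint = p_y_given_x * gater_posterior(p_x_given_e, p_e)
    return joint / joint.sum()

post = expert_posterior(np.array([0.9, 0.2, 0.6]),     # P_i(y_t | x_t)
                        np.array([1e-3, 5e-2, 1e-2]),  # P(x_t | E = i)
                        np.array([0.3, 0.3, 0.4]))     # P(E = i)
print(post, int(post.argmax()))
```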
This algorithm is very nice in the sense that it’s a hard version of the standard mixture of experts model. Unfortunately, standard SVMs don’t output probabilities. In the case of a classification problem with several classes, we decided to train one SVM per class (one class against the others) and then to apply a logistic regression on the outputs of the SVMs to obtain probabilities, following [8].

4.3 What Criterion is Minimized?
The above algorithm iteratively modifies parameters θ to go down on a criterion which is an upper bound on the negative joint log-likelihood:

    J(θ) = − max_e J(θ, e)                                                    (9)

where

    J(θ, e) = Σ_t Σ_i e_ti log [ P_θ(y_t, x_t|E = i) P_θ(E = i) ]             (10)
where e_ti ∈ {0, 1} is a binary variable that selects the i-th expert for example t, with the selection constraints ∀t, Σ_i e_ti = 1 and balancing constraints ∀i, Σ_t e_ti ≈ T/N. Note that the joint likelihood for expert i is P_θ(y_t, x_t|E = i) = P_θ(y_t|x_t, E = i) P_θ(x_t|E = i) (i.e. the product of the expert output probability and the local gater likelihood). To relate this to Algorithm 2, note that we are trying to perform the double maximization

    max_e max_θ J(θ, e) .
The idea is to perform a “coordinate descent” on J(θ, e), in which at the first stage of each iteration e is fixed and θ is modified to increase J(θ, e), and at the second stage θ is fixed and e (the assignment of examples to experts) is modified to increase J(θ, e). Note that when e is fixed, the above two probabilities (for the expert and local gater) decouple, so they can be maximized separately, as in steps 2 and 3 of the algorithm. In a second stage of each iteration, θ is fixed, and e is modified with step 5 in order to increase J(θ, e) (here it is an approximate heuristic optimization, to save computations). Furthermore, the criterion J(θ) is an upper bound on the joint negative log-likelihood:

    C(θ) = − Σ_t log P_θ(y_t, x_t) = − Σ_t log ( Σ_i P_θ(y_t, x_t|E = i) P_θ(E = i) )    (11)

since, with the constraints on e,

    log ( Σ_i P_θ(y_t, x_t|E = i) P_θ(E = i) ) ≥ log ( Σ_i e_ti P_θ(y_t, x_t|E = i) P_θ(E = i) )
                                               = Σ_i e_ti log ( P_θ(y_t, x_t|E = i) P_θ(E = i) ) .
The idea of minimizing an upper bound on a more desirable cost function is already found in variational learning methods. Note that both cost functions (the negative log-likelihood and J(θ)) will take close values when the gater manages to compute a harder partition.
5 Other Mixtures of SVMs
The idea of mixture models is quite old and has given rise to very popular algorithms, such as the well-known Mixture of Experts [6] where the cost function is similar to equation (8) but where the gater and the experts are trained, using gradient descent or EM, on the whole data set (and not subsets) and their parameters are trained simultaneously. Hence such an algorithm is quite demanding in
terms of resources when the data set is large, if training time scales like O(T p ) with p > 1. In the more recent Support Vector Mixture model [7], the author shows how to replace the experts (typically MLPs) by SVMs and gives a learning algorithm for this model. Once again the resulting mixture is trained jointly on the whole data set, and hence does not solve the quadratic barrier when the data set is large. In another divide-and-conquer approach [9], the authors propose to first divide the training set using an unsupervised algorithm to cluster the data (typically a mixture of Gaussians), then train an expert (such as an SVM) on each subset of the data corresponding to a cluster, and finally recombine the outputs of the experts. Here, the algorithm does indeed train separately the experts on small data sets, like the present algorithm, but there is no notion of an iterative re-assignment of the examples to experts according to the prediction made by the gater of how well each expert performs on each example. Our experiments suggest that this element is essential to the success of the algorithm. Finally, the Bayesian Committee Machine [10] is a technique to partition the data into several subsets, train SVMs or Gaussian Processes on the individual subsets and then use a specific combination scheme based on the covariance of the test data to combine the predictions. This method scales linearly in the number of training data, but is in fact a transductive method as it cannot operate on a single test example. Again, this algorithm assigns the examples randomly to the experts (but the Bayesian framework would in principle allow to find better assignments).
6 Experiments: Hard Non-probabilistic Mixture
In this section are presented two sets of experiments comparing the new non-probabilistic mixtures of SVMs to other machine learning algorithms. Note that all these experiments have been performed with the Torch library.² The computers that were used had Athlon 1.2 GHz CPUs.

6.1 A Large-Scale Realistic Problem: Forest
We did a series of experiments on part of the UCI Forest data set.³ We modified the 7-class classification problem into a binary classification problem where the goal was to separate class 2 (the most numerous) from the other 6 classes. Each example was described by 54 input features, each normalized by dividing by the maximum found on the training set. The data set had more than 500,000 examples and this allowed us to prepare a series of experiments as follows:

² Available at http://www.torch.ch.
³ The Forest data set is available on the UCI website at the following address: ftp://ftp.ics.uci.edu/pub/machine-learning-databases/covtype/covtype.info.
– A separate test set of 50,000 examples was used to compare algorithms.
– A validation set of 10,000 examples was used to select among SVM hyper-parameters, number of experts, number of gater hidden units, and gater training epochs.
– Training set size varied from 100,000 to 400,000.
– The hard non-probabilistic mixtures had from 10 to 50 expert SVMs with Gaussian kernel; the MLP gater had between 25 and 500 hidden units. Since the number of examples was quite large, the same hyper-parameters were selected for all iterations of the algorithm and for all the SVM experts.

We compared our models to
– a single MLP trained with a mean-squared error criterion, and where the number of hidden units was selected on the validation set (from 25 to 250 units),
– a single SVM, where the parameter of the kernel was also selected on the validation set,
– a mixture of SVMs where the gater was replaced by a constant vector, assigning the same weight value to every expert.

Table 1 gives the results of a first series of experiments with a fixed training set of 100,000 examples. To select among the variants of the hard SVM mixture we considered performance over the validation set as well as training time. All the SVMs used σ = 1.7. The selected model had 50 experts and a gater with 150 hidden units. A model with 500 hidden units would have given a performance of 8.1% over the test set but would have taken 310 minutes on one machine (and 194 minutes on 50 machines). The hard SVM mixture outperformed all models in terms of training and test error. Note that the training error of the single SVM is high because its hyper-parameters were selected to minimize error on the validation set (other values could yield much lower training error but larger test error). It was also much faster, even on one machine, than the single SVM, and since the mixture could easily be parallelized (each expert can be trained separately), we also reported
Table 1. Comparison of performance between an MLP (100 hidden units), a single SVM, a uniform SVM mixture where the gater always output the same value 1/N for each expert, and finally the hard non-probabilistic mixture of SVMs (Algorithm 1)

Model used              Train      Test       Time (minutes)       Iteration
                        Error (%)  Error (%)  (1 CPU)  (50 CPUs)
single MLP              17.56      18.15      6        –           25
single SVM              16.03      16.76      1616     –           –
uniform SVM mixture     19.69      20.31      43       1           1
hard mixture of SVMs    5.91       9.28       119      37          5
17
Error as a function of the number of training iterations
Training time as a function of the number of train examples 220
14
200
13
180
12
160
11 Error (%)
Time (min)
the time it took to train on 50 machines. In a first attempt to understand these results, one can at least say that the power of the model does not lie only in the MLP gater, since a single MLP was pretty bad, it is neither only because we used SVMs, since a single SVM was not as good as the hard mixture, and it was not only because we divided the problem into many sub-problems since the uniform mixture also performed badly. It seems to be a combination of all these elements. In order to find how the algorithm scaled with respect to the number of examples, we then compared the same mixture of experts (50 experts, 150 hidden units in the gater) on different training set sizes. Figure 1 shows the validation error of the mixture of SVMs with training set sizes from 100,000 to 400,000. It seems that, at least in this range and for this particular data set, the mixture of SVMs scales linearly with respect to the number of examples, and not quadratically as a classical SVM. It is interesting to see for instance that the mixture of SVMs was able to solve a problem of 400,000 examples in less than 4 hours (on 50 computers) while it would have taken more than one month to solve the same problem with a single SVM. Finally, figure 2 shows the evolution of the training and validation errors of a hard mixture of 50 SVMs gated by an MLP with 150 hidden units, during 5 iterations of the algorithm. This should convince that the iterative partitionning is essential in order to obtain good performance. It is also clear that the empirical convergence of the outer loop is extremely rapid.
140 120
10 9
100
8
80
7
60
6
40 1
1.5
2 2.5 3 Number of train examples
3.5
4 5
x 10
Fig. 1. Comparison of the training time of the same mixture of SVMs (50 experts, 150 hidden units in the gater) trained on different training set sizes, from 100,000 to 400,000
Train error Validation Error
5 1
2 3 4 Number of training iterations
5
Fig. 2. Comparison of the training and validation errors of the mixture of SVMs as a function of the number of training iterations
18
6.2
Ronan Collobert et al.
Verification on Another Large-Scale Problem
To verify that the results obtained on Forest were replicable on other large-scale problems, we tested the SVM mixture on a speech task, the Numbers95 data set [1], turned it into a binary classification problem (separate silence frames from non-silence frames, from a total of 540,000 frames). The training set contains 100,000 randomly chosen frames out of the first 400,000 frames. The disjoint validation set contains 10,000 randomly chosen frames out of the first 400,000. The test set contains 50,000 randomly chosen frames out of the last 140,000. The validation set was used to select the number of experts, the number of hidden units in the gater, and σ. Each frame was parameterized using standard methods (j-rasta coefficients, with first and second temporal derivatives) yielding 45 coefficients times 3 frames (= 135 inputs). Table 2 shows a comparison between a single SVM and a non-probabilistic hard mixture of SVMs, with 50 experts, 50 hidden units in the gater, and σ = 3. The mixture of SVMs was again many times faster than the single SVM (even on a single CPU) but yielded similar generalization performance.
Table 2. Comparison of performance between a single SVM and a mixture of SVMs on the speech data set Model used
Train Test Time (minutes) Error (%) (1 CPU) (50 CPUs) one SVM 0.98 7.57 3395 hard non-prob. mixture of SVMs 4.41 7.32 426 33
7
Experiments: Hard Probabilistic Mixture
The second set of experiments concerns the probabilistic version of the algorithm. As standard SVMs don’t output probabilities, we first present in the first subsection results with Multi-Layered-Perceptrons (MLP) as experts, to confirm that the approach works well with gradient-based learning algorithms. Then we present some results with SVMs as experts, with the SVM outputs being fed to a logistic regressor in order to obtain conditional probabilities, as in [8]. 7.1
MLP Experts
The experiments described here are again with the Forest data set described earlier. The setup is the same as previously for the non-probabilistic mixture. Thus, we just have to specify the probabilistic model architecture: we used Gaussians mixtures for the generative models and one-hidden-layer MLP trained with a log-likelihood maximization criterion for the experts (i.e. maximizing
Scaling Large Learning Problems with Hard Parallel Mixtures
19
t log Pθ (yt |xt , E = i) where the output of the MLP has softmax units which sum to 1 to represent these probabilities). We compared the hard probabilistic mixture with a standard (not hard) probabilistic mixture (with MLPs as experts and an MLP as gater), trained by stochastic gradient ascent on the log-likelihood t log Pθ (yt |xt ). We also compared with a single MLP (also trained to maximize the log-likelihood). Note that with this training criterion, the single MLP gives better results than those obtained with a mean-squared error criterion (which was used in the experiments previously reported in section 6.1). The results are summarized in Table 3. For MLPs and standard mixtures, the iteration column indicates the number of training epochs, whereas for hard mixtures it is the number of outer loop iterations. Note that for hard mixtures, the number of inner loop epochs to train MLP experts was fixed to a maximum of 100 (This number was chosen according to the validation set. Moreover, training was stopped earlier if training error did not decrease significantly.) The hard probabilistic mixture appears to work very well. On this data set, we can obtain better generalization than an MLP, in a reasonable time if we use sequential training (on only one computer), and impressively short time if we use parallelization. If we take the time to do more iterations, the generalization can be impressive too, as shown on a training set size of 400,000 examples. Figure 3
Table 3. Comparison of performance on the Forest data set between one MLP, a standard mixture, and the hard probabilistic mixture proposed in this paper Model used
Error (%) Time (minutes) Iteration Train Valid Test one CPU parallel 100,000 training examples single MLP (500 hidden units) 9.50 11.09 10.96 121 – 150 standard mixture (10 experts, 50 10.57 11.05 11.56 124 – 65 hidden units per expert and 150 units for the gater) 7.01 9.30 9.10 290 – 150 hard prob. mixture (20 experts, 25 7.89 10.76 10.90 21 1.3 15 hidden units per expert and 20 Gaussians per P (X|E = i)) 400,000 training examples single MLP (500 hidden units) 8.39 8.38 8.69 461 – 150 hard prob. mixture (40 experts, 25 6.90 7.74 8.09 126 3.6 7 units per expert and 10 Gaussians per P (X|E = i)) 4.63 5.64 6.24 344 10 20 hard prob. mixture (40 experts, 50 6.68 7.54 8.05 195 5.3 6 units per expert and 10 Gaussians per P (X|E = i)) 3.37 5.60 5.61 624 17 20
20
Ronan Collobert et al.
Log−Likelihood as a function of the number of iterations 0.5 Train Validation
0.45
Log−Likelihood
0.4 0.35 0.3 0.25 0.2 0.15 0.1 0
5
10
15
20
25
Iteration
Fig. 3. Evolution of the log-likelihood with the number of iteration for the hard probabilistic mixture, on 100,000 training examples. The mixture had 20 experts (25 hidden units, 20 Gaussians)
shows the importance of the iterative process of our model for the training as well as generalization error, as previously shown for the non-probabilistic model. We did one more experiment to compare the hard non-probabilistic and probabilistic mixtures, in terms of training time. The experiment is performed with 100,000 training examples (obtaining similar generalization results in both cases). The hard probabilistic mixture has 20 experts, 25 hidden units per expert and 20 Gaussians. The hard non-probabilistic mixture has 20 experts, and an MLP gater with 150 hidden units. The hard non-probabilistic mixture took more than 30 minutes to train, whereas the hard probabilistic mixture took only 1.3 minutes! It seems that the training time bottleneck due to the gater has been broken with the hard probabilistic mixture. 7.2
Dimensionality Reduction for the Gaussian Mixture Models
We used Gaussians Mixture Models (GMM) to estimate P (X|E = i) in the hard probabilistic mixture, and one might think that GMM don’t work well with high dimensional data. Thus, we compare results obtained with and without reducing the dimensionality of the GMM observations, as a preprocessing before applying the mixture. To reduce the dimensionality, we trained as a classifier (with conditional maximum likelihood) an MLP with a small tanh hidden layer and softmax outputs. The hidden layer learns a transformation that has low dimension and is useful to predict the output classes. A single training epoch is performed, on only
Scaling Large Learning Problems with Hard Parallel Mixtures
21
Table 4. The effect of dimensionality reduction for GMMs in the hard probabilistic mixture, on 400,000 examples with 40 experts, 50 hidden units for experts and 10 Gaussians for each P (X|E = i) Model used
Error (%) Train Valid Test Without Dim. Reduction 4.45 5.95 6.25 With Dim. Reduction 3.37 5.60 5.61
a part of the training set if this one is very large (100,000 examples was sufficient on Forest in any case). This is quick, and surprisingly, sufficient to obtain good results. Finally, the hidden layer outputs of the MLP (for each input vector xt ) are given as observations for the GMM. As shown in Table 4, it appears that the dimensionality reduction improves the generalization error, as well as the training error. The dimensionality reduction reduces capacity, but we suspect that the GMMs are so poor in high dimensional spaces that the dimensionality reduction improves results even on the training set, by making it easier to carve the input space in ways that lead to easy training of the experts. 7.3
7.3 SVM Experts
Similar experiments were performed on the Forest database with the hard probabilistic mixture, but using SVMs with a logistic output as probabilistic experts, rather than MLPs. Table 5 shows the results obtained on the 100,000-example training set, with different numbers of experts and different choices of gaters. The first experiment uses the methodology already introduced and used with MLP experts, but with 20 SVM experts. Note that the training time is much larger than with MLP experts (Table 3, 1.3 min. in parallel), and much larger than with the hard non-probabilistic mixture (Table 1, 37 min. in parallel). One explanation is that convergence is much slower, but we do not understand why. One clue is that, when replacing the GMMs by a single MLP gater⁴ (as in the two other experiments in Table 5), much faster convergence is obtained (down to 21 min. in parallel, i.e. faster than the hard non-probabilistic mixture), but still slower than with MLP experts.
⁴ Here, the number of inner-loop epochs for training the gater was chosen using the validation set, and fixed to 3.
8 A Note on Training Complexity
Table 5. Comparison of performance of the hard probabilistic mixture, for several setups, on the Forest data set with 100,000 training examples

                                               Error (%)            Time (minutes)
Model used                                 Train  Valid  Test    one CPU  parallel   Iteration
20 SVM experts and 10 Gaussians
  per P(X|E = i)                            5.39  10.93  10.70     2240      157         16
20 SVM experts and an MLP gater
  with 150 hidden units                     2.63   8.86   8.93      291       30          9
50 SVM experts and an MLP gater
  with 150 hidden units                     3.22   8.92   9.15      118       21          9
For both the probabilistic and non-probabilistic mixtures, suppose that we choose the number of experts N such that the number of examples per expert, M = T/N, is a fixed fraction of the total number of examples. Then, if we suppose that the training time for one expert is polynomial of order p in the number of examples, the training time for training the experts in one outer-loop iteration of the hard mixtures is N M^p = T M^{p-1} = O(T). If the gater is not localized (e.g. as in the hard non-probabilistic mixture when using a single model as gater, and in the hard probabilistic mixture), then it may be a bottleneck of the algorithm. In the case of the non-probabilistic mixture, we do not know exactly the cost of training the gater. As it is an MLP, it is probably more than O(T). But for the probabilistic mixture, it appears empirically that O(T) training time is sufficient for the gater, at each iteration of Algorithm 2!
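Spelled out, the count used in the argument above is (with c the constant of the per-expert polynomial cost; this is only a restatement of the step above):

```latex
\mathrm{cost}_{\mathrm{experts}}
  \;=\; N \cdot c\,M^{p}
  \;=\; \frac{T}{M}\cdot c\,M^{p}
  \;=\; c\,M^{p-1}\,T
  \;=\; O(T) \qquad \text{for a fixed number of examples } M \text{ per expert.}
```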
9 Conclusion
In this paper we have presented new divide-and-conquer, parallelizable hard mixture algorithms to reduce the training time of algorithms such as SVMs. Very good results were obtained compared to classical SVMs, in terms of training time and generalization performance, on two large-scale, difficult databases. Moreover, the algorithms appear to scale linearly with the number of examples, at least between 100,000 and 400,000 examples. Both a probabilistic and a non-probabilistic version of the algorithm were presented, with a demonstration that the probabilistic version actually minimizes a well-defined criterion (which corresponds to the error made by a single chosen expert of the mixture). These results are extremely encouraging and suggest that the proposed method could allow training SVM-like models for very large, multi-million-example data sets in a reasonable time. Two types of "gater" models were proposed, one based on a single MLP, and one based on local Gaussian Mixture Models. The latter have the advantage of being trained very quickly and locally to each expert, thereby guaranteeing linear training time for the whole system (per iteration). However, the best results (even in training time) are often obtained with the
MLP gater (which needs few epochs and reaches a good partition in fewer iterations). Surprisingly, even faster results (with equally good generalization) are obtained if the SVM experts are altogether replaced by MLP experts. If training the MLP gater with stochastic gradient takes time that grows much less than quadratically, as we conjecture to be the case for very large data sets (to reach a "good enough" solution), then the whole method is clearly sub-quadratic in training time with respect to the number of training examples.
Acknowledgments
The authors thank Léon Bottou for stimulating discussions. RC would like to thank the Swiss NSF for financial support (project FN2100-061234.00). YB would like to thank NSERC, MITACS and IRIS for funding and support.
On the Generalization of Kernel Machines
Pablo Navarrete and Javier Ruiz del Solar
Department of Electrical Engineering, Universidad de Chile
{jruizd,pnavarre}@cec.uchile.cl
Abstract. Taking advantage of the linear properties of high-dimensional spaces, a general kind of kernel machine is formulated under a unified framework. These methods include KPCA, KFD and SVM. The theoretical framework shows a strong connection between KFD and SVM. The main practical result under the proposed framework is the solution of KFD for an arbitrary number of classes. The framework also allows the formulation of multiclass-SVM. The main goal of this article is to find new solutions rather than to optimize them.
1 Introduction
Learning problems consist in estimating the values of a function at given points [19], so that a so-called learning machine can predict the correct values associated with a new set of points from the known values of a given set of training points. Classically, the first approach to these problems is to use linear methods, i.e. linear machines, because their simple mathematical form allows the development of simple training algorithms and the study of detailed properties. In the field of Pattern Recognition, one of the most successful methods of this kind has been the optimal separating hyperplane, or Support Vector Machine (SVM), based on the concept of Structural Risk Minimization (SRM) [19]. Beyond the merits of that approach, the way in which it has been generalized to non-linear decision rules, using kernel functions, has generated great interest in its application to other linear methods, such as Principal Components Analysis (PCA) [12] and the Fisher Linear Discriminant (FLD) [7]. The extension of linear methods to non-linear ones, using the so-called kernel trick, is what we call kernel machines. The generalization to non-linear methods using kernels works as follows: if the algorithm to be generalized uses the training vectors only in the form of Euclidean dot-products, then all dot-products x^T y can be replaced by a so-called kernel function K(x, y). If K(x, y) fulfills Mercer's condition, i.e. the operator K is semi-positive definite [3], then the kernel can be expanded into a series $K(x, y) = \sum_i \phi_i(x)\, \phi_i(y)$. In this way the kernel represents the Euclidean dot-product in a different space, called the feature space F, into which the original vectors are mapped using the eigenfunctions Φ: ℝ^N → F. Depending on the kernel function, the feature space F can even be of infinite dimension, as in the case of the Radial Basis Function (RBF) kernel, but we never work in such a space. If the kernel function
does not fulfill Mercer's condition, the problem can probably still be solved, but the geometrical interpretation and its associated properties will not apply in that case. The most common way in which kernel machines have been obtained [12] [7] is based on the results of Reproducing Kernel Hilbert Spaces (RKHS) [11], which establish that any vector in F that has been obtained from a linear system using the training vectors must lie in the span of all the training samples in F. Although this step can be used to obtain many kernel machines, it does not give a general solution for them. For instance, in the case of the Fisher Discriminant there is a restriction to working with two classes only. However, there are other ways to formulate the same problem. In particular, this study is focused on a very simple system, which we call the Fundamental Correlation Problem (FCP), from which we can derive the solution of a general kind of kernel machines. This method is well known in PCA [5], and it has already been mentioned in order to derive the Kernel-PCA (KPCA) algorithm [12] [17]. In this work we show the importance of the FCP by solving the Kernel Fisher Discriminant (KFD) problem for an arbitrary number of classes, using intermediate FCPs. Taking advantage of these results, we can obtain a kernel machine from any linear method that optimizes some quadratic objective function, such as the SVMs. In this way, we can also explain the relation between the objective functions and the statistical properties of KFD and SVM. The article is structured as follows. In section 2, a detailed analysis of the FCP is presented together with important results that are going to be used in the following sections. In section 3, a general formulation of kernel machines is shown, and known systems (KPCA, KFD, and SVM) are written in this form. In section 4, the solution of multiclass discriminants (KFD and SVM) is obtained. In section 5, some toy experiments are shown, mainly focused on the operation of multiclass discriminants. Finally, in section 6 some conclusions are given.
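A minimal sketch of this substitution (an illustrative toy, not code from the paper): any algorithm that only ever needs the Gram matrix of dot-products x_i^T x_j can be handed the matrix produced by a Mercer kernel instead.

```python
import numpy as np

def linear_gram(X):
    """Euclidean dot-products x_i^T x_j."""
    return X @ X.T

def poly_gram(X, degree=2):
    """Polynomial kernel K(x, y) = (x^T y + 1)^degree."""
    return (X @ X.T + 1.0) ** degree

def rbf_gram(X, sigma=1.0):
    """RBF kernel K(x, y) = exp(-||x - y||^2 / sigma^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / sigma**2)

X = np.random.RandomState(0).randn(50, 3)
for K in (linear_gram(X), poly_gram(X), rbf_gram(X)):
    # Mercer kernels give (numerically) positive semi-definite Gram matrices.
    print(np.linalg.eigvalsh(K).min() >= -1e-8)
```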
2 Fundamental Correlation Problem - FCP
This section is focused on a simple and well-known problem that is the basis of all the methods analyzed in this paper. For this reason the problem is said to be fundamental. The problem is very similar to the one of Principal Component Analysis (PCA) when the dimensionality of the feature vectors is higher than the number of vectors [5]. The problem is well known in applications like Face Recognition [18] [9], and is also the key to the formulation of Kernel-PCA [12] [17]. It is important to understand the properties and results of the correlation problem because it is going to be used intensively in the following sections. Given a set of vectors x_1, ..., x_NV ∈ ℝ^N, the set is mapped into a feature space F by a set of functions {φ(x)}_j = φ_j(x), j = 1,...,M, which we want to be the eigenfunctions of a given kernel (i.e., satisfying Mercer's condition). For the following analysis, we are going to suppose that M > NV. In fact, this is an important purpose of kernel machines in order to give a good generalization ability to the system [19]. In most cases, the dimensionality of the feature space, M = dim(F), is prohibitive for computational purposes, and it could even be infinite (e.g. using an RBF kernel). For this reason, any vector or matrix that has at least one of its dimensions equal to M is said to be uncomputable. Otherwise the vector or matrix is said to be computable.
The aim of kernel machines is to work with the set of mapped vectors φ_i = φ(x_i) ∈ F instead of the original set of input vectors. So we define Φ = [φ_1 ... φ_NV] ∈ M_{M×NV}. Then, the correlation matrix of the vectors Φ is defined as:
$$ R = \frac{1}{NV-1}\, \Phi\, \Phi^T , \qquad (1) $$
where NV−1 is used to take the average among the mapped vectors so that the estimator is unbiased. The matrix R is semi-positive definite because $v^T R\, v = \sum_{i=1}^{NV} (v \cdot \phi_i)^2 / (NV-1) \ge 0$, with v ∈ F, which also shows that its rank is equal to the number of linearly independent (l.i.) vectors φ_i. Note that any semi-positive definite symmetric matrix can be written as (1), which is similar to the Cholesky decomposition, but in this case Φ is not a square matrix. Then (1) is going to be called the correlation decomposition of R. The Fundamental Correlation Problem (FCP) for the matrix R, in its Primal form, consists in solving the eigensystem:
$$ R\, w_R^k = \lambda_k\, w_R^k , \quad w_R^k \in F , \quad k = 1,\dots,M , \quad \| w_R^k \| = 1 . \qquad (2) $$
As the rank of R is much smaller than M, (2) becomes an ill-posed problem in the Hadamard sense [4], and therefore it demands some kind of regularization. However, R is an uncomputable matrix, so (2) cannot even be solved. In this situation we need to introduce the Dual form of the Fundamental Correlation Problem for R:
$$ K_R\, v_R^k = \lambda_k\, v_R^k , \quad v_R^k \in \mathbb{R}^{NV} , \quad k = 1,\dots,NV , \quad \| v_R^k \| = 1 , \qquad (3) $$
where K_R is the so-called inner-product matrix of the vectors Φ:
$$ K_R = \frac{1}{NV-1}\, \Phi^T \Phi . \qquad (4) $$
Note that K_R is computable, and that the sums over M elements represent dot-products between the vectors φ_i that can be computed using the kernel function. Just as indicated in the notation of (2) and (3), the eigenvalues of (3) are equal to a subset of NV eigenvalues of (2). This can be shown by pre-multiplying (3) by Φ and using (4). As we want to compute the solutions for which λ_k ≠ 0, k = 1,...,q (q ≤ NV), we can go further and write the expression:
$$ \underbrace{\tfrac{1}{NV-1}\,\Phi\,\Phi^T}_{R}\; \underbrace{\frac{\Phi\, v_R^k}{\sqrt{(NV-1)\,\lambda_k}}}_{w_R^k} \;=\; \lambda_k\; \underbrace{\frac{\Phi\, v_R^k}{\sqrt{(NV-1)\,\lambda_k}}}_{w_R^k} , \quad k = 1,\dots,q , \qquad (5) $$
where the q vectors w_R^k fulfill the condition ||w_R^k|| = 1. This directly implies that K_R is also semi-positive definite. Moreover, as tr(R) = tr(K_R), K_R has all the non-zero eigenvalues of R. We are going to see that the solution of a general kind of kernel machines can be written in terms of K_R, and then we are going to call it the Fundamental Kernel Matrix (FKM). The following notation will be used in further analysis:
$$ \Lambda_R = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_M \end{bmatrix} , \qquad V_R = [\, v_R^1 \ \cdots \ v_R^{NV} \,] , \qquad W_R = [\, w_R^1 \ \cdots \ w_R^M \,] , \qquad (6) $$
and, as it is going to be important to separate the elements associated with non-zero and zero eigenvalues, we also introduce the notation:
$$ \Lambda_R = \begin{bmatrix} \tilde{\Lambda}_R^{\,q\times q} & 0^{\,q\times(M-q)} \\[2pt] 0^{\,(M-q)\times q} & 0^{\,(M-q)\times(M-q)} \end{bmatrix} , \qquad V_R = \big[\, \underbrace{\tilde{V}_R}_{NV\times q} \ \ \underbrace{V_R^0}_{NV\times(NV-q)} \,\big] , \qquad W_R = \big[\, \underbrace{\tilde{W}_R}_{M\times q} \ \ \underbrace{W_R^0}_{M\times(M-q)} \,\big] , \qquad (7) $$
in which Λ̃_R is the diagonal matrix with non-zero eigenvalues, Ṽ_R and W̃_R are, respectively, the dual and primal eigenvectors associated with non-zero eigenvalues, and V_R^0 and W_R^0 those associated with null eigenvalues. Therefore, by solving the Dual FCP (3) we can compute all the non-zero eigenvalues Λ̃_R and the set of dual eigenvectors Ṽ_R. It must be noted that, as q can still be much smaller than NV, the Dual FCP is also an ill-posed problem and requires some kind of regularization as well. For the same reason the eigenvalues of R will decay gradually to zero, and then we need to use some criterion in order to determine q. An appropriate criterion is to choose q such that the sum of the unused eigenvalues is less than some fixed percentage (e.g. 5%) of the sum of the entire set (residual mean square error) [15]. Then, using (5), the set of primal eigenvectors W̃_R ∈ M_{M×q} can be written as:
$$ \tilde{W}_R = \frac{1}{\sqrt{NV-1}}\, \Phi\, \tilde{V}_R\, \tilde{\Lambda}_R^{-1/2} . \qquad (8) $$
Expression (8) explicitly shows that the set of vectors W̃_R lies in the span of the training vectors Φ, in accordance with the theory of reproducing kernels [11]. Even if W̃_R is uncomputable, the projection of a mapped vector φ(x) ∈ F onto the subspace spanned by W̃_R, i.e. W̃_R^T φ(x), is computable. However, it must be noted that this requires a sum of NV inner products in the feature space F, which could be computationally very expensive if NV is a large number. Finally, it is also important to show that the diagonalization of R can be written as:
$$ \tilde{\Lambda}_R = \tilde{W}_R^T\, R\, \tilde{W}_R . \qquad (9) $$
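A small numerical sketch of the Dual FCP and of expression (8); the RBF kernel, its width and the random data are arbitrary assumptions made only to have something concrete to run.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

rng = np.random.RandomState(0)
X = rng.randn(40, 2)                        # NV = 40 training vectors
NV = X.shape[0]

# Fundamental Kernel Matrix K_R = Phi^T Phi / (NV - 1), eq. (4)
K = np.array([[rbf(a, b) for b in X] for a in X])
K_R = K / (NV - 1)

# Dual FCP, eq. (3): eigen-decomposition of the symmetric matrix K_R
lam, V = np.linalg.eigh(K_R)
keep = lam > 1e-10                          # non-zero eigenvalues only (q of them)
lam_t, V_t = lam[keep], V[:, keep]

# Projection of a new vector x onto the primal eigenvectors W_R, never formed
# explicitly: w_R^k . phi(x) = sum_i v_i^k K(x_i, x) / sqrt((NV-1) lambda_k)
def project(x):
    kx = np.array([rbf(xi, x) for xi in X])
    return (V_t.T @ kx) / np.sqrt((NV - 1) * lam_t)

print(project(rng.randn(2)).shape)          # q-dimensional feature vector
```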
3 General Formulation of Kernel Machines
3.1 The Statistical Representation Matrix - SRM
The methods on which we are focusing our study can be formulated as the maximization or minimization (or a mix of both) of positive objective functions that have the following general form:
$$ f(w) = \frac{1}{NV-1} \sum_{n=1}^{NS} \left\{ (\Phi\, b_E^n)^T w \right\}^2 = w^T\, \frac{\Phi_E\, \Phi_E^T}{NV-1}\, w = w^T\, \frac{\Phi\, B_E\, B_E^T\, \Phi^T}{NV-1}\, w = w^T E\, w , \qquad (10) $$
where B_E = [b_E^1 ... b_E^{NS}] ∈ M_{NV×NS} is the so-called Statistical Representation Matrix (SRM) of the estimation matrix E, NS is the number of statistical measures (e.g. NS = NV for the correlation matrix), and Φ_E = Φ B_E ∈ M_{M×NS} forms the correlation decomposition of E. Note that f(w) represents the magnitude of a certain statistical property in the projections onto the w axis. This statistical property is estimated by NS linear combinations of the mapped vectors, Φ b_E^n, n = 1,...,NS. Then, it is the matrix B_E that defines the statistical property, and it is independent of the mapped vectors Φ. Therefore, the SRM is going to be useful in order to separate the dependence on the mapped vectors from the estimation matrices. In the following sub-sections, several SRMs are explicitly shown for different kinds of estimation matrices.

Kernel Principal Component Analysis - KPCA
In this problem, the objective function, to be maximized, represents the projection variance:
$$ \sigma^2(w) = \frac{1}{NV-1} \sum_{n=1}^{NV} \left\{ (\phi_n - m)^T w \right\}^2 = w^T\, \frac{\Phi_C\, \Phi_C^T}{NV-1}\, w = w^T C\, w , \qquad (11) $$
where $m = \frac{1}{NV} \sum_{i=1}^{NV} \phi_i$ is the mean mapped vector, and C is the covariance matrix. Then it is simple to obtain:
$$ \Phi_C = \big[\, (\phi_1 - m) \ \cdots \ (\phi_{NV} - m) \,\big] \in \mathcal{M}_{M\times NV} , \qquad (12) $$
and this can be directly written as Φ_C = Φ B_C, with:
$$ (B_C)_{ij} = \delta_{ij} - \frac{1}{NV} \;\in\; \mathcal{M}_{NV\times NV} , \qquad (13) $$
where δ_{ij} is the Kronecker delta. The rank of B_C is NV−1 because its column vectors have zero mean. It is well known that the maximization of (11) is obtained by solving the FCP of C for non-zero eigenvalues. Then, we can write the solution directly by using expression (8):
$$ \tilde{W}_C = \frac{1}{\sqrt{NV-1}}\, \Phi\, B_C\, \tilde{V}_C\, \tilde{\Lambda}_C^{-1/2} . \qquad (14) $$
As in (8), (14) shows that the set of vectors W̃_C lies in the span of the training vectors Φ, but in this case this is due to the presence of the SRM B_C.
Kernel Fisher Discriminant - KFD
In this problem the input vectors (and mapped vectors) are distributed in NC classes, in which class number i has n_i associated vectors. We denote by φ_(i,j) the mapped vector number j (1 ≤ j ≤ n_i) in class number i (1 ≤ i ≤ NC). Then we have two objective functions:
$$ s_b(w) = \frac{1}{NV-1} \sum_{i=1}^{NC} n_i \left\{ (m_i - m)^T w \right\}^2 , \qquad (15) $$
$$ s_w(w) = \frac{1}{NV-1} \sum_{i=1}^{NC} \sum_{j=1}^{n_i} \left\{ (\phi_{(i,j)} - m_i)^T w \right\}^2 , \qquad (16) $$
where $m = \frac{1}{NV} \sum_{i=1}^{NV} \phi_i$ is the mean mapped vector, and $m_i = \frac{1}{n_i} \sum_{j=1}^{n_i} \phi_{(i,j)}$ is the class mean number i. The problem consists in maximizing γ(w) = s_b(w)/s_w(w), so that the separation between the individual class means with respect to the global mean (15) is maximized, and the separation between the mapped vectors of each class with respect to their own class mean (16) is minimized. As we want to avoid the problem in which s_w(w) becomes zero, a regularization constant µ can be added in (16) without changing the main objective criterion and obtaining the same optimal w [16]. Then, the estimation matrices of (15) and (16) are:
$$ S_b = \frac{1}{NV-1} \sum_{i=1}^{NC} n_i\, (m_i - m)(m_i - m)^T = \frac{1}{NV-1}\, \Phi_b\, \Phi_b^T , \qquad (17) $$
$$ S_w = \frac{1}{NV-1} \sum_{i=1}^{NC} \sum_{j=1}^{n_i} (\phi_{(i,j)} - m_i)(\phi_{(i,j)} - m_i)^T = \frac{1}{NV-1}\, \Phi_w\, \Phi_w^T . \qquad (18) $$
The correlation decomposition in (17) and (18) is obtained using:
$$ \Phi_b = \big[\, \sqrt{n_1}\,(m_1 - m) \ \cdots \ \sqrt{n_{NC}}\,(m_{NC} - m) \,\big] \in \mathcal{M}_{M\times NC} , \qquad (19) $$
$$ \Phi_w = \big[\, (\phi_1 - m_{C_1}) \ \cdots \ (\phi_{NV} - m_{C_{NV}}) \,\big] \in \mathcal{M}_{M\times NV} , \qquad (20) $$
where we denote by C_i the class of vector number i (1 ≤ i ≤ NV). Therefore, the corresponding SRMs, such that Φ_b = Φ B_b and Φ_w = Φ B_w, are:
$$ (B_b)_{ij} = \sqrt{n_j}\left( \frac{1}{n_j}\,\delta_{C_i\, j} - \frac{1}{NV} \right) , \quad i = 1,\dots,NV;\ j = 1,\dots,NC , \qquad (B_w)_{ij} = \delta_{ij} - \frac{1}{n_{C_i}}\,\delta_{C_i\, C_j} , \quad i = 1,\dots,NV;\ j = 1,\dots,NV , \qquad (21) $$
where δ_{ij} is the Kronecker delta. The rank of B_b is NC−1 because its column vectors have zero mean, and the rank of B_w is NV−NC since the column vectors associated with each class have zero mean. Since it is necessary to solve a general eigensystem for S_b and S_w, the solution of this problem requires more work than KPCA, i.e. it cannot be directly solved as an FCP. Moreover, the solution of this problem with more than two classes seems to have remained unsolved until now.
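The SRMs of (13) and (21) are straightforward to build from class labels. The sketch below (the labels, sizes and the helper name srm_matrices are illustrative assumptions) constructs B_C, B_b and B_w and checks the ranks stated above.

```python
import numpy as np

def srm_matrices(labels):
    """B_C (eq. 13), B_b and B_w (eq. 21) for integer class labels."""
    labels = np.asarray(labels)
    NV = len(labels)
    classes, counts = np.unique(labels, return_counts=True)
    NC = len(classes)

    B_C = np.eye(NV) - 1.0 / NV                               # covariance SRM

    B_b = np.zeros((NV, NC))                                   # between-class SRM
    for j, (c, n_j) in enumerate(zip(classes, counts)):
        B_b[:, j] = np.sqrt(n_j) * ((labels == c) / n_j - 1.0 / NV)

    B_w = np.eye(NV)                                           # within-class SRM
    for c, n_c in zip(classes, counts):
        idx = np.where(labels == c)[0]
        B_w[np.ix_(idx, idx)] -= 1.0 / n_c

    return B_C, B_b, B_w

labels = np.repeat([0, 1, 2, 3], [10, 15, 8, 12])              # NV = 45, NC = 4
B_C, B_b, B_w = srm_matrices(labels)
print(np.linalg.matrix_rank(B_C), np.linalg.matrix_rank(B_b),
      np.linalg.matrix_rank(B_w))                              # NV-1, NC-1, NV-NC
```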
Using the formulation stated in this section, we have solved the general problem and, due to the importance of this result, that solution is shown in the next section.

Support Vector Machine - SVM
The original problem of SVM [19] [1] is to find the optimal hyperplane for the classification of two classes. Its discriminative nature leads us to think about its relation with KFD. Thus, our main goal now is to unify their theoretical frameworks and, as a practical consequence, this will give us a method for multiclass-SVM. The SVM finds the largest margin between two classes (which measures the distance between their frontiers), and then it places the optimal hyperplane somewhere in the margin. Now we want to formulate this problem in terms of quadratic objective functions like (10). The question is: how can we express the margin using positive statistical measures? In order to answer this question, we need to go back to the KFD formulation. If we call c_i = w^T m_i we can rewrite expressions (15) and (16) as:
$$ s_b(w) = \frac{1}{NV-1} \sum_{i=1}^{NC} n_i \left\{ w^T m - c_i \right\}^2 , \qquad (22) $$
$$ s_w(w) = \frac{1}{NV-1} \sum_{i=1}^{NC} \sum_{j=1}^{n_i} \left\{ w^T \phi_{(i,j)} - c_i \right\}^2 . \qquad (23) $$
classes up to obtain one-class groups. Since KFD is originally formulated for NC classes, this lead to a generalization of the concept of margin for multiclass-SVM. At this point it seems like KFD and SVM are equivalent methods. However, the concept of Support Vectors states a great difference between them. We know that KFD needs all the training points in order to work, and SVM needs only a subset of points, at the frontier of each class, called Support Vectors (SVs). Moreover, we know also that SVM searches for the SVs using Lagrange multipliers, and then it directly maximizes the margin [19]. Nevertheless, we can change the solution of KFD into SVM with the following procedure: first, we solve the KFD of a given problem; second, we find the optimal decision tree and, for each two-group separation, we select the training points whose projections lie between the group means; and third, we train a two-class KFD using only these training points. If we repeat this procedure, the group means of the selected training vectors will move toward the margin. Then, the Fisher’s hyperplanes (see Figure 1-a) will arrive at the frontier of each class, or a negative margin for non-separable cases, using only the training points of these zones (see Figure 1-b), and thus, obtaining the solution of SVM.
[Figure 1: (a) Fisher Discriminant — parallel hyperplanes c1 = w^T m1 and c2 = w^T m2 through the class means, with w^T m in between; (b) Support Vector Machine — c1 = w^T m_SV1, c2 = w^T m_SV2 and the optimal hyperplane.]
Fig. 1. Comparison between KFD and SVM in a two-class problem
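For the 1D sub-problem mentioned above — choosing the scalar c on a given projection axis by minimizing the number of training errors — a small illustrative sketch (the simple linear scan and the toy projections are assumptions, not the authors' code):

```python
import numpy as np

def best_threshold(p_pos, p_neg):
    """Scalar c minimizing the training errors of the rule
    'class 1 if w^T phi >= c', given 1D projections of the two classes."""
    candidates = np.sort(np.concatenate([p_pos, p_neg]))
    best_c, best_err = None, np.inf
    for c in candidates:
        err = np.sum(p_pos < c) + np.sum(p_neg >= c)
        if err < best_err:
            best_c, best_err = c, err
    return best_c, best_err

rng = np.random.RandomState(0)
proj_pos = rng.normal(2.0, 1.0, 100)   # projections w^T phi of class 1
proj_neg = rng.normal(-2.0, 1.0, 100)  # projections w^T phi of class 2
print(best_threshold(proj_pos, proj_neg))
```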
Of course, that procedure represents a very complex algorithm in practice. Furthermore, solutions for multiclass-SVM have already been proposed using different and more efficient frameworks [14] [2]. Thus, the advantage of our approach lies in the fact that we can better understand the relation between KFD and SVM. As we have seen, the advantage of KFD is that its definition of the margin is statistically more precise than that of SVM, and the advantage of SVM is that it only needs the SVs in order to work.
4 Solution of Multi-class Kernel Methods
In this section we are going to solve the KFD problem for an arbitrary number of classes. This solution can be immediately used to solve the SVM problem, applying the iteration procedure mentioned in section 3. The main problem of the KFD in the feature space F is to solve the general eigensystem:
$$ S_b\, w^k = \gamma_k\, S_w\, w^k , \quad w^k \in F , \quad k = 1,\dots,M , \quad \| w^k \| = 1 , \qquad (24) $$
with γ_k(w^k) = s_b(w^k)/s_w(w^k), and S_b and S_w the scatter matrices defined in (17) and (18). Unfortunately, this system is not computable and it cannot be solved in this formulation. The problem was originally introduced in [7], where it was solved using kernels, i.e. not working explicitly in F. The main step of that formulation was to use the fact that the solution can be written as linear combinations of the mapped vectors [7]. In this way a computable general eigensystem can be obtained, but it was necessary to constrain the problem to two classes only. Using the concepts introduced in sections 2 and 3, we are going to solve the KFD with an arbitrary number of classes. This method is an adaptation of the solution for Fisher Linear Discriminants (FLDs) shown in [15], using FCPs. The key of this solution is to solve two problems instead of one, using the properties of (24). It is easy to show that the solution of (24), W_KFD = [w^1 ... w^M], is not orthogonal but fulfills the following diagonalization properties:
$$ W_{KFD}^T\, S_w\, W_{KFD} = \mathrm{diag}\big( s_w(w^1), \dots, s_w(w^M) \big) = D , \qquad (25) $$
$$ W_{KFD}^T\, S_b\, W_{KFD} = \mathrm{diag}\big( s_b(w^1), \dots, s_b(w^M) \big) = D\,\Gamma , \qquad (26) $$
where Γ is the diagonal matrix of general eigenvalues γ_k. Moreover, using equations (25) and (26) we can recover the system (24): if we replace (25) in (26) and pre-multiply the result by W_KFD, we obtain the system $W_{KFD} W_{KFD}^T S_b W_{KFD} = W_{KFD} W_{KFD}^T S_w W_{KFD}\, \Gamma$. Then, as the rank of $W_{KFD} W_{KFD}^T$ (a correlation-like matrix) is equal to the number of l.i. columns (full rank), it can be inverted, recovering (24). Thus, (25) and (26) are necessary and sufficient conditions to solve the KFD problem. Now, as we want only those w^k for which γ_k ≠ 0, that is $\tilde{W}_{KFD} = [\, w^1 \ \cdots \ w^q \,]$ where q is the number of non-zero γ_k, the following conditions hold:
$$ \tilde{W}_{KFD}^T\, S_w\, \tilde{W}_{KFD} = \tilde{D} , \qquad (27) $$
$$ \tilde{W}_{KFD}^T\, S_b\, \tilde{W}_{KFD} = \tilde{D}\,\tilde{\Gamma} , \qquad (28) $$
where Γ̃ is a diagonal matrix with the corresponding non-zero γ_k, and D̃ is the diagonal sub-matrix of D associated with the non-zero γ_k. In order to find a W̃_KFD that fulfills conditions (27) and (28), and whose columns have unit norm, we are going to solve the following problems:
• First, in order to fulfill condition (27), we solve the FCP of S_w, for which we know its correlation decomposition (18). In this way we obtain the non-zero eigenvalues Λ̃_w ∈ M_{p×p} (computable), with p ≤ (NV−NC), and their associated eigenvectors (uncomputable), by using expression (8):
$$ \tilde{W}_w = \frac{1}{\sqrt{NV-1}}\, \Phi\, B_w\, \tilde{V}_w\, \tilde{\Lambda}_w^{-1/2} . \qquad (29) $$
• Next, in order to fulfill condition (28) while maintaining the diagonalization of S_w, we are going to diagonalize a "hybrid" matrix:
$$ H = ( W_w\, \hat{\Lambda}_w^{-1/2} )^T\, S_b\, ( W_w\, \hat{\Lambda}_w^{-1/2} ) = W_H\, \Lambda_H\, W_H^T \;\in\; \mathcal{M}_{M\times M} , \qquad (30) $$
where $\hat{\Lambda}_w = \Lambda_w + \mu I$ is the regularized matrix of eigenvalues of S_w, so that it becomes invertible. Note that, as we use all the eigenvalues and eigenvectors of S_w, H is uncomputable. It is important to consider all the eigenvalues of S_w, including the uncomputable number of zero eigenvalues (regularized), since the smallest eigenvalues of S_w are associated with the largest general eigenvalues γ_k. Although at this point the problem seems difficult, because we are applying a regularization to the uncomputable matrix S_w, we are going to show that (29) contains all the information that we need. As we know the correlation decomposition of S_b (17), we can write the correlation decomposition of H as:
$$ H = \frac{1}{NV-1}\; \underbrace{\big( \hat{\Lambda}_w^{-1/2}\, W_w^T\, \Phi\, B_b \big)}_{\Phi_H \,\in\, \mathcal{M}_{M\times NC}} \; \underbrace{\big( B_b^T\, \Phi^T\, W_w\, \hat{\Lambda}_w^{-1/2} \big)}_{\Phi_H^T \,\in\, \mathcal{M}_{NC\times M}} . \qquad (31) $$
Solving the Dual FCP of H, we obtain its non-zero eigenvalues Λ̃_H ∈ M_{q×q} (computable), with q ≤ (NC−1), and its associated eigenvectors (computable), by using expression (8):
$$ \tilde{W}_H = \frac{1}{\sqrt{NV-1}}\, \Phi_H\, \tilde{V}_H\, \tilde{\Lambda}_H^{-1/2} = \frac{1}{\sqrt{NV-1}}\, \hat{\Lambda}_w^{-1/2}\, W_w^T\, \Phi\, B_b\, \tilde{V}_H\, \tilde{\Lambda}_H^{-1/2} . \qquad (32) $$
In order to solve the Dual FCP of H, we need to compute K_H:
$$ K_H = \frac{1}{NV-1}\, \Phi_H^T\, \Phi_H = \frac{1}{NV-1}\, B_b^T\, \Phi^T\, W_w\, \hat{\Lambda}_w^{-1}\, W_w^T\, \Phi\, B_b . \qquad (33) $$
Using the notation introduced in (7), we can see that:
$$ W_w\, \hat{\Lambda}_w^{-1}\, W_w^T = \tilde{W}_w\, (\tilde{\Lambda}_w + \mu I)^{-1}\, \tilde{W}_w^T + \tfrac{1}{\mu}\, W_w^0\, (W_w^0)^T , \qquad (34) $$
and, as W_w is an orthonormal matrix, i.e. $W_w^T W_w = W_w W_w^T = I$, we have that $W_w^0 (W_w^0)^T = I - \tilde{W}_w \tilde{W}_w^T$ establishes the relation between the projection matrices. If we replace this expression in (34) and then in (33), using also (29), we obtain the following expression for computing K_H:
$$ K_H = B_b^T K_R B_w \tilde{V}_w \tilde{\Lambda}_w^{-1/2} \left\{ (\tilde{\Lambda}_w + \mu I)^{-1} - \tfrac{1}{\mu} I \right\} \tilde{\Lambda}_w^{-1/2} \tilde{V}_w^T B_w^T K_R B_b + \tfrac{1}{\mu}\, B_b^T K_R B_b , \qquad (35) $$
with K_R the FKM defined in (4).
• Finally, from (30) it is easy to see that the matrix
$$ \tilde{W} = W_w\, \hat{\Lambda}_w^{-1/2}\, \tilde{W}_H \qquad (36) $$
holds condition (28) and, with some straightforward algebra, it can be seen that it also holds condition (27), and therefore it solves the problem. Now, if we replace (32) in (36), then replace (34), use $W_w^0 (W_w^0)^T = I - \tilde{W}_w \tilde{W}_w^T$, and finally replace (29), we obtain the complete expression for W̃:
$$ \tilde{W} = \frac{1}{\sqrt{NV-1}}\, \Phi \Big( B_w \tilde{V}_w \tilde{\Lambda}_w^{-1/2} \big\{ (\tilde{\Lambda}_w + \mu I)^{-1} - \tfrac{1}{\mu} I \big\} \tilde{\Lambda}_w^{-1/2} \tilde{V}_w^T B_w^T K_R B_b \tilde{V}_H \tilde{\Lambda}_H^{-1/2} + \tfrac{1}{\mu}\, B_b \tilde{V}_H \tilde{\Lambda}_H^{-1/2} \Big) . \qquad (37) $$
As with (8) and (14), (37) shows that the set of vectors W̃ lies in the span of the training vectors Φ, and it can be written as W̃ = Φ A. However, its column vectors are not normalized. We therefore need to post-multiply W̃ by a normalization matrix N. The norms of the column vectors of W̃, which must be inverted in the diagonal of N, are computable and contained, squared, in the diagonal elements of $\tilde{W}^T \tilde{W} = (NV-1)\, A^T K_R A$. In this way, the conditions (27) and (28) are completely fulfilled by:
$$ \tilde{W}_{KFD} = \frac{1}{\sqrt{NV-1}}\, \Phi \Big( B_w \tilde{V}_w \tilde{\Lambda}_w^{-1/2} \big\{ (\tilde{\Lambda}_w + \mu I)^{-1} - \tfrac{1}{\mu} I \big\} \tilde{\Lambda}_w^{-1/2} \tilde{V}_w^T B_w^T K_R B_b \tilde{V}_H \tilde{\Lambda}_H^{-1/2} + \tfrac{1}{\mu}\, B_b \tilde{V}_H \tilde{\Lambda}_H^{-1/2} \Big)\, N , \qquad (38) $$
$$ \tilde{\Gamma} = N^T\, \tilde{\Lambda}_H\, N . \qquad (39) $$
Then, due to the solution of the FCP of Sw and the FCP of H, that are explicitly present in (38), we have solved the KFD for an arbitrary number of classes.
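As a numerical cross-check, the same multiclass criterion can also be posed directly as a generalized eigenproblem in the span of the training data, writing w = Φa and using the SRMs of (21). This is a swapped-in direct formulation for illustration only, not the two-FCP construction derived above; the RBF kernel, μ and the toy data are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.RandomState(0)
# Toy data: three Gaussian blobs in 2D, NC = 3 classes of 20 points each.
X = np.vstack([rng.randn(20, 2) + c for c in ([0, 0], [4, 0], [0, 4])])
labels = np.repeat([0, 1, 2], 20)
NV, NC, mu = len(labels), 3, 1e-3

def rbf_gram(A, B, sigma=2.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / sigma**2)

K = rbf_gram(X, X)                      # Gram matrix Phi^T Phi = (NV-1) K_R

# SRMs B_b and B_w of eq. (21)
B_b = np.zeros((NV, NC))
B_w = np.eye(NV)
for c in range(NC):
    idx = labels == c
    n_c = idx.sum()
    B_b[:, c] = np.sqrt(n_c) * (idx / n_c - 1.0 / NV)
    members = np.where(idx)[0]
    B_w[np.ix_(members, members)] -= 1.0 / n_c

# With w = Phi a, s_b(w) is proportional to a^T K B_b B_b^T K a and s_w(w) to
# a^T K B_w B_w^T K a, so gamma is maximized by a generalized eigenproblem
# (mu regularizes the within-class part).
Sb = K @ B_b @ B_b.T @ K
Sw = K @ B_w @ B_w.T @ K + mu * np.eye(NV)
gamma, A = eigh(Sb, Sw)                 # ascending generalized eigenvalues
A = A[:, ::-1][:, :NC - 1]              # keep the NC-1 leading directions
proj = K @ A                            # discriminant projections of the training set
print(gamma[::-1][:NC - 1], proj.shape)
```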
5 Toy Experiments
In order to see how the multiclass-KFD works with more than two classes, Figure 2-a shows an artificial 2D problem with 4 classes, which we solved using KFD with a polynomial kernel of degree two and a regularization parameter µ = 0.001.
[Figure 2 panels: (a) 4-class problem; (b) first feature, γ1 = 548.79; (c) second feature, γ2 = 41.02; (d) third feature, γ3 = 0.04.]
Fig. 2. (a) Artificial example of a 4-class problem. Features of the KFD using a polynomial kernel of degree two: (b) first feature, (c) second feature, and (d) third feature
Figure 2-b shows the first feature found by KFD, in which we see that the two outer classes (the biggest ones) become well separated. Figure 2-c shows the second KFD feature, in which the two inner classes and the outer classes (as a group) become well separated. With these two features KFD can discriminate between all the classes, and these two features have the largest general eigenvalues. The third feature has a small general eigenvalue, showing that its discrimination of the classes is low, as can be seen in Figure 2-d. In Figure 3-a we show another artificial 2D problem with 4 classes, and this time we apply KFD with an RBF kernel, k(x,y) = exp(−||x−y||²/0.1), and a regularization parameter µ = 0.001. As shown in Figures 3-b, 3-c, and 3-d, in this case all three axes are important for the classification of new vectors, and this is also shown in the
significant values of the general eigenvalues. It is also interesting to see that the RBF solution does not show a clear decision in the shared center area, but it tries to take the decision as close to it as it can. If we decrease the variance of the RBF kernel, the discrimination in this zone will be improved, but the discrimination far from this zone will worsen. Therefore, the kernel function must be adjusted in order to obtain better results.
[Figure 3 panels: (a) 4-class problem; (b) first feature, γ1 = 16.44; (c) second feature, γ2 = 15.09; (d) third feature, γ3 = 5.77.]
Fig. 3. (a) Artificial example of a 4-class problem. Features of the KFD using a RBF kernel: (b) first feature, (c) second feature, and (d) third feature
6 Conclusions
Nowadays the study of kernel machines is mainly focused on the implementation of efficient numerical algorithms. KFD and SVM have been formulated as QP problems [8] [19] [14], for which several optimizations are possible [10] [6]. The main idea in these kernel methods is to avoid the storage of large matrices (NV² elements) and the complexity of the associated eigensystems, O(NV³) [13]. These trends are far from the focus of this study. Our main efforts were focused on understanding the
relation between different kernel machines, and also on taking advantage of linear analysis (using eigensystems) in the feature space F. Nevertheless, the understanding of different kernel machines in a general framework represents an important advance for further numerical algorithms. For instance, we have seen that the main problem to be solved when training kernel machines is the FCP. Then, if we improve the solution of the FCP we are optimizing the training algorithms of many kernel machines. Moreover, even if the computation of a matrix with NV² elements can be undesirable, we have seen that the only computation that needs kernel functions (an expensive computation) is the computation of the FKM K_R, which can be used to solve all the kernel machines formulated here. An important theoretical consequence of the general formulation presented here is the connection between KFD and SVM. Since the first formulation of KFD [7], it has been speculated that some of the superior performance of KFD over SVM may be based on the fact that KFD uses all the training vectors and not only the SVs. In fact, we have seen that KFD uses a measure of the margin that is statistically more precise than that of SVM. Therefore, we face the trade-off of using all the training vectors to improve the statistical measures, or using a minimum subset of vectors to improve the operation of the kernel machine. In this way, the iteration procedure introduced in section 3 represents a good alternative in order to obtain an intermediate solution. Besides this practical advantage, the formulation of SVM using the KFD margin concept allows us to extend SVM to problems with more than two classes. As we have solved the KFD for an arbitrary number of classes, we can implement multiclass-KFD and multiclass-SVM as well. The main practical result of this study is the solution of multiclass discriminants by means of the solution of two FCPs. The Fisher Linear Discriminant (FLD) is originally formulated for an arbitrary number of classes, and it has shown very good performance in many applications (e.g. face recognition [9]). KFD has since shown important improvements as a non-linear Fisher discriminant, but the limitation to two-class problems impeded its original performance in many problems of pattern recognition. In the examples shown in section 5, we have seen that the multiclass-KFD can discriminate more than two classes with high accuracy, even in complex situations. Finally, the general formulation allows us to use the results of this study with other objective functions, written as (10). The only change must be applied to the SRMs that encode the desired statistical measures. Therefore, our results are applicable to a general kind of kernel machines. These kernel machines use second-order statistics in a high-dimensional space, so that in the original space high-order statistics are used. As the algorithms formulated here present several difficulties in practice, further work must be focused on their optimization.
Acknowledgements
This research was supported by the DID (U. de Chile) under Project ENL-2001/11 and by the joint "Program of Scientific Cooperation" of CONICYT (Chile) and BMBF (Germany).
References
1. Burges C. J. C., "A Tutorial on Support Vector Machines for Pattern Recognition", Data Mining and Knowledge Discovery, vol. 2, pp. 121-167, 1998.
2. Crammer K., and Singer Y., "On the Algorithmic Implementation of Multi-class Kernel-based Vector Machines", J. of Machine Learning Research, vol. 2, pp. 265-292, MIT Press, 2001.
3. Courant R., and Hilbert D., "Methods of Mathematical Physics", vol. I, Wiley Interscience, 1989.
4. Hadamard J., "Lectures on Cauchy's problem in linear partial differential equations", Yale University Press, 1923.
5. Kirby M., and Sirovich L., "Application of the Karhunen-Loève procedure for the characterization of human faces", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, 1990.
6. Mika S., Smola A., and Schölkopf B., "An Improved Training Algorithm for Kernel Fisher Discriminant", Proc. of the Int. Conf. on A.I. and Statistics 2001, pp. 98-104, 2001.
7. Mika S., Rätsch G., Weston J., Schölkopf B., and Müller K., "Fisher Discriminant Analysis with Kernels", Neural Networks for Signal Processing IX, pp. 41-48, 1999.
8. Mika S., Rätsch G., and Müller K., "A Mathematical Programming Approach to the Kernel Fisher Analysis", Neural Networks for Signal Processing IX, pp. 41-48, 1999.
9. Navarrete P., and Ruiz-del-Solar J., "Comparative Study Between Different Eigenspace-based Approaches for Face Recognition", Lecture Notes in Artificial Intelligence 2275 (AFSS 2002), pp. 178-184, Springer, 2002.
10. Platt J., "Fast Training of SVMs using Sequential Minimal Optimization", in Schölkopf B., Burges C., and Smola A. (eds.), Advances in Kernel Methods - Support Vector Learning, pp. 185-208, MIT Press, 1999.
11. Saitoh S., "Theory of Reproducing Kernels and its Applications", Longman Scientific and Technical, Harlow, England, 1988.
12. Schölkopf B., Smola A., and Müller K., "Nonlinear Component Analysis as a Kernel Eigenvalue Problem", Neural Computation, vol. 10, pp. 1299-1319, 1998.
13. Smola A., and Schölkopf B., "Sparse Greedy Matrix Approximation for Machine Learning", Proc. of the Int. Conf. on Machine Learning 2000, pp. 911-918, 2000.
14. Suykens J., and Vandewalle J., "Multiclass Least Squares Support Vector Machines", Int. Joint Conf. on Neural Networks IJCNN'99, Washington D.C., 1999.
15. Swets D. L., and Weng J. J., "Using Discriminant Eigenfeatures for Image Retrieval", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 831-836, 1996.
16. Tikhonov A., and Arsenin V., "Solution of Ill-posed Problems", H.W. Winston, Washington D.C., 1977.
17. Tipping M., "Sparse Kernel Principal Component Analysis", Advances in Neural Information Processing Systems, vol. 13, MIT Press, 2001.
18. Turk M., and Pentland A., "Eigenfaces for Recognition", J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
19. Vapnik V., "The Nature of Statistical Learning Theory", Springer Verlag, New York, 1999.
Kernel Whitening for One-Class Classification
David M. J. Tax¹ and Piotr Juszczak²
¹ Fraunhofer Institute FIRST.IDA, Kekuléstr. 7, D-12489 Berlin, Germany
[email protected]
² Pattern Recognition Group, Faculty of Applied Science, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands
[email protected]

Abstract. In one-class classification one tries to describe a class of target data and to distinguish it from all other possible outlier objects. Obvious applications are areas where outliers are very diverse or very difficult or expensive to measure, such as in machine diagnostics or in medical applications. In order to have a good distinction between the target objects and the outliers, a good representation of the data is essential. The performance of many one-class classifiers critically depends on the scaling of the data and is often harmed by data distributions in (nonlinear) subspaces. This paper presents a simple preprocessing method which actively tries to map the data to a spherically symmetric cluster and is almost insensitive to data distributed in subspaces. It uses techniques from Kernel PCA to rescale the data in a kernel feature space to unit variance. This transformed data can then be described very well by the Support Vector Data Description, which basically fits a hypersphere around the data. The paper presents the methods and some preliminary experimental results.
1 Introduction
In almost all machine learning and pattern recognition research, it is assumed that a (training) dataset is available which reflects well what can be expected in practice. On this data a classifier or regressor should be fitted such that good generalization over future instances will be achieved [1]. Unfortunately, it is very hard to guarantee that the training data is a truly identically distributed sample from the real application. In the data gathering process certain events can easily be missed, because of their low probability of occurrence, their measuring costs or because of changing environments. In order to detect these 'unexpected' or 'ill-represented' objects in new, incoming data, a classifier should be fitted which detects the objects that do not resemble the bulk of the training data in some sense. This is the goal of one-class classification [2,3], novelty detection [4], outlier detection [5] or concept learning [6]. Here, one class of objects, the target class, has to be distinguished from all other possible objects, the outlier objects.
A common solution for outlier or novelty detection is to fit a probability density on the target data [7,5,8], and to classify an object as outlier when the object falls into a region with density lower than some threshold value. This works well in cases where the target data is sampled well, that is, when the sample size is sufficient and the distribution is representative. But density estimation requires large sample sizes. When the boundary of a target class with limited sample size is to be estimated, it might be better to fit the boundary directly instead of estimating the complete target density. This is Vapnik's principle of avoiding the solution of a more general problem than is actually needed [9]. Using this principle, the problem is changed from density estimation to domain description. The support vector data description (SVDD, [2]) is a method which tries to fit a boundary with minimal volume directly around the target data without performing density estimation. It is inspired by the (two-class) support vector classifier [9]. All objects inside the hypersphere will be 'accepted' and classified as target objects. All other objects are labeled outliers. By minimizing the volume of the hypersphere, it is hoped that the chance of accepting outliers is minimized. In [10] a linear one-class classifier is presented, based on the idea of separating the data with maximal margin from the origin. In [11] again a linear classifier was used, but here the problem was posed as a linear programming problem instead of a quadratic programming problem. In general, the hypersphere model is not flexible enough to give a tight description of the target class and, analogous to the Support Vector Classifier (SVC), the SVDD is made more flexible by transforming the objects from the input space representation to a representation in a kernel space. It appears that not all kernels that were proposed for the SVC can be used by the SVDD. In most cases the data classes become elongated, which is useful for discrimination between two classes, but is harmful for one-class classification. An exception is the Gaussian kernel, where good performances can be obtained. Unfortunately, even when using the Gaussian kernel a homogeneous input feature space is still assumed, which means that distances in all directions in the space should be comparable. In practice, data is often distributed in subspaces, resulting in very small typical distances between objects in directions perpendicular to the subspace. Moving inside the subspace will change the objects just slightly, but moving out of the subspace will result in an illegal object, or an outlier. Although comparable distances are traveled, the class memberships of the objects differ drastically. This inhomogeneity of the distances does not just harm the SVDD, but in principle all one-class methods which rely on distances or similarities between the objects. In this paper we propose a rescaling of the data in the kernel feature space, which is robust against large differences in the scaling of the input data. It rescales the data in a kernel space such that the variances of the data are equal in all directions. We will use the techniques of Kernel-PCA [12]. In section 2 we will present the SVDD, followed by an example where it fails. In section 3 the rescaling of the data is presented, followed by some experiments and conclusions.
2 SVDD
To describe the domain of a dataset, we enclose the data by a hypersphere with minimum volume (minimizing the chance of accepting outlier objects). Assume we have a d-dimensional data set containing n data objects, $X^{tr}: \{x_i,\ i = 1,\dots,n\}$, and the hypersphere is described by center a and radius R. We will assume throughout the paper that a sum over i runs over all training objects, i.e. $\sum_i$ means $\sum_{i=1}^{n}$. To allow the possibility of outliers in the training set, the distance from x_i to the center a need not be strictly smaller than R², but larger distances should be penalized. An extra parameter ν is introduced for the trade-off between the volume of the hypersphere and the errors. Thus, an error function L, containing the volume of the hypersphere and the distances, is minimized. The solution is constrained with the requirement that (almost) all data is within the hypersphere. The constraints can be incorporated in the error function by applying Lagrange multipliers [1]. This yields the following function to maximize with respect to α (for details see [2]):
$$ L = \sum_i \alpha_i\, (x_i \cdot x_i) - \sum_{i,j} \alpha_i \alpha_j\, (x_i \cdot x_j) \quad \text{with} \quad 0 \le \alpha_i \le \frac{1}{n\nu} , \quad \sum_i \alpha_i = 1 , \qquad (1) $$
and
$$ a = \sum_i \alpha_i\, x_i . \qquad (2) $$
The last constraint in (1) influences the effective range of the hyperparameter ν. For ν > 1 this constraint cannot be met, and therefore in practice 0 ≤ ν ≤ 1. (This hyperparameter plays the same role as ν in a comparable one-class classifier, the ν-SVC [10].) Now (1) is a standard quadratic optimization problem. Because of the box constraints, the free parameters α_i can be in two situations after optimization. Most objects x_i will satisfy ‖x_i − a‖² < R² and α_i = 0, and just a few objects x_i have α_i > 0. Analogous to [9], these objects are called the support objects, because they determine the (center of the) hypersphere via (2). A new object z is accepted by the description (or classified as a target object) when:
$$ f(z) = \| z - a \|^2 = (z \cdot z) - 2 \sum_i \alpha_i\, (z \cdot x_i) + \sum_{i,j} \alpha_i \alpha_j\, (x_i \cdot x_j) \;\le\; R^2 . \qquad (3) $$
The radius R is determined by calculating the distance from the center a to any support vector xi on the boundary. The hyperspherical shape for the boundary of a dataset is very restricting and will not be satisfied in the general case. Analogous to the method of Vapnik [9], we can replace the inner products (x · y) in Equations (1) and in (3) by kernel functions K(x, y) = Φ(x) · Φ(y) (where K is a positive definite kernel, or Mercer kernel). By this replacement of the inner product by K, the data is implicitly
mapped to a new feature space. Ideally, this mapping would map the data into a spherically constrained domain, such that the assumptions for the SVDD are fulfilled. Several kernels have been proposed [9], mainly for the application of Support Vector Classifiers. A popular choice is the polynomial kernel, (x · y) → K(x, y) = (x · y + 1)^p, which maps the data to a feature space spanned by all monomial features up to degree p. For one-class classification this kernel works poorly, because it tends to transform the data into elongated, flat structures instead of spherical clusters. Especially for larger degrees p, taking the power will stress the differences in the variances in different feature directions. For large p the direction with the largest variance in input space will overwhelm all smaller variances in kernel space. For another popular kernel, the Gaussian kernel, this is not the case:
$$ (x \cdot y) \;\to\; K(x, y) = \exp\!\big( -\| x - y \|^2 / \sigma^2 \big) . \qquad (4) $$
The width parameter σ in the kernel (from definition (4)) determines the scale or resolution at which the data is considered in input space. Although here the data is implicitly mapped to an infinite-dimensional space F [13], the inner products (or kernel outputs) are between 0 and 1. Furthermore, K(x, x) = 1, indicating that all objects have length 1, placing the objects effectively on a hypersphere with radius 1. For good performance of the SVDD with the Gaussian kernel, properly scaled distances are still required. The new inner product (4) now depends on the distance ‖x − y‖². Very inhomogeneous distances will still result in elongated clusters and large empty areas around the target class in input feature space that are still accepted. In figure 1 a scatterplot of an artificial 2-dimensional dataset is shown. The SVDD is trained to fit a boundary around it such that about 25% of the target data is on the boundary. Although the SVDD follows the curve in the data, it does not tightly fit the subspace structure in the data. A large strip inside the curve is classified as target, but does not contain target training objects.
Fig. 1. Decision boundary of an SVDD trained on an artificial 2D dataset
This is caused by the large scale difference of the data parallel and perpendicular to the subspace. In the approach of [10] a linear hyperplane is used instead of a hyperspherically shaped boundary. This plane should separate the target data with maximal margin from the origin of the feature space. Although in input space this is incomparable with the hypersphere approach, the method can be "kernelized" and, using the Gaussian kernel, this method appears to be identical to the SVDD [2].
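A toy sketch of the SVDD dual (1)-(3) with the Gaussian kernel (4), solved here with a general-purpose SLSQP routine; dedicated QP/SMO solvers are used in practice, and the data, σ and ν below are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
X = rng.randn(60, 2)                      # target (training) objects
n, nu, sigma = len(X), 0.1, 2.0

def rbf_gram(A, B):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / sigma**2)

K = rbf_gram(X, X)

def neg_L(a):
    # maximize L = sum_i a_i K_ii - sum_ij a_i a_j K_ij  (eq. 1)
    return -(a @ np.diag(K)) + a @ K @ a

res = minimize(neg_L, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0 / (n * nu))] * n,
               constraints=[{"type": "eq", "fun": lambda a: np.sum(a) - 1.0}])
alpha = res.x

def dist2(Z):
    # squared distance to the center a = sum_i alpha_i phi(x_i), eq. (3)
    Kz = rbf_gram(Z, X)
    return np.diag(rbf_gram(Z, Z)) - 2 * Kz @ alpha + alpha @ K @ alpha

# Radius from an unbounded support vector (0 < alpha_i < 1/(n*nu)) if one exists.
sv = np.where((alpha > 1e-6) & (alpha < 1.0 / (n * nu) - 1e-6))[0]
ref = sv[0] if len(sv) else int(np.argmax(alpha))
R2 = dist2(X[ref:ref + 1])[0]
print("accepted fraction of the target set:", np.mean(dist2(X) <= R2 + 1e-9))
```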
3 Kernel Whitening
Instead of directly fitting a hypersphere in the kernel space, we propose to rescale the data to have equal variance. Fitting a hypersphere in the rescaled space F will be identical to fitting an ellipsoid in the original kernel space. The rescaling is easily done, using the derivation of Kernel PCA [12]. The data is basically mapped onto the principal components (those with the largest eigenvalues) of the data covariance matrix and then rescaled by the corresponding eigenvalues. Therefore the eigenvectors and eigenvalues of the covariance matrix in the kernel space have to be estimated. The eigenvectors with eigenvalues close or equal to zero will be disregarded. Assume the data $X^{tr}$ is mapped to the kernel space F by some (possibly nonlinear) mapping Φ: ℝ^d → F. When we also assume that the data is centered in this space, i.e. $\sum_i \Phi(x_i) = 0$, the covariance matrix C of the mapped dataset can be estimated by:
$$ C = \frac{1}{n} \sum_i \Phi(x_i)\, \Phi(x_i)^T . \qquad (5) $$
The eigenvectors v and eigenvalues λ satisfy:
$$ C v = \frac{1}{n} \sum_j \big( \Phi(x_j) \cdot v \big)\, \Phi(x_j) = \lambda v . \qquad (6) $$
Equation (6) shows that the eigenvectors with non-zero eigenvalue must lie in the span of the mapped data {Φ(x_i)}, which means that v can be expanded as:
$$ v = \sum_i \alpha_i\, \Phi(x_i) . \qquad (7) $$
Multiplying Equation (6) from the left with Φ(x_k) and using (7) gives:
$$ \frac{1}{n} \sum_j \big( \Phi(x_k) \cdot \Phi(x_j) \big) \Big( \Phi(x_j) \cdot \sum_i \alpha_i \Phi(x_i) \Big) = \lambda \sum_i \alpha_i \big( \Phi(x_k) \cdot \Phi(x_i) \big) \quad \forall k . \qquad (8) $$
When the kernel matrix $K_{ij} = \Phi(x_i) \cdot \Phi(x_j)$ is again introduced, it appears that the coefficients α from Equation (7) can be obtained directly by solving the eigenvalue problem:
$$ \lambda\, \alpha = K \alpha . \qquad (9) $$
For normal kernel-PCA the eigenvectors should be normalized to unit length, and this means that for each eigenvector v^k the α^k are rescaled to:
$$ \lambda_k\, \alpha^k \cdot \alpha^k = 1 . \qquad (10) $$
We assumed that the data is centered in F. This can be achieved by transforming the original kernel matrix. Assume K is the n × n kernel matrix of the training data and $K^{tst}$ the m × n matrix of some new data (or possibly the same training data). The centered kernel matrix is computed by:
$$ \tilde{K} = K^{tst} - 1^{*}_{n} K - K^{tst} 1_n + 1^{*}_{n} K 1_n , \qquad (11) $$
where $1_n$ is an n × n matrix and $1^{*}_{n}$ is an m × n matrix, both with all entries 1/n [12]. We will assume that we have always centered the kernel matrices using (11). When the coefficients α are obtained, a new object z can be mapped onto eigenvector v^k in F by:
$$ (\hat{z})_k = ( v^k \cdot \Phi(z) ) = \sum_i \alpha_i^k\, \big( \Phi(x_i) \cdot \Phi(z) \big) = \sum_i \alpha_i^k\, K(x_i, z) , \qquad (12) $$
where $(\hat{z})_k$ means the k-th component of vector $\hat{z}$. To transform the data into a representation with equal variance in each feature direction (for directions with λ_k > 0), the normalization from Equation (10) has to be slightly adapted. The variance of the mapped data along component v^k is:
$$ \mathrm{var}(\hat{X}^{tr}) = \frac{1}{n} \sum_j (\hat{x}_j)_k^2 = \frac{1}{n} \sum_j \Big( \sum_i \alpha_i^k\, k(x_i, x_j) \Big)^2 = \frac{1}{n}\, (\alpha^k)^T K K \alpha^k . \qquad (13) $$
Using Equation (9), this is constant for all features when, instead of (10), we use the normalization:
$$ \lambda_k^2\, \alpha^k \cdot \alpha^k = 1 \quad \text{for all considered components } k . \qquad (14) $$
The dataset $\hat{X}^{tr}$, transformed using the mapping (12) with normalization (14), can now be used by any one-class classifier. The dimensionality of this dataset depends on how many principal components v^k are taken into account. Not only do all the features have equal variances; because the data is mapped onto the principal components of the covariance matrix, the data is also uncorrelated. The fact that the data is now properly scaled makes it ideal for estimating a normal distribution, or for using the SVDD, which in the linear case just fits a hypersphere. In figure 2 an artificial 2D dataset is shown, where the data is distributed in a sinusoidal subspace. In the left subplot the output of the SVDD is shown, in the right subplot the output of the SVDD with the data scaled to unit variance. In order to model this data well, a one-class classifier has to be very flexible, and large amounts of data should be available to follow both the large sinusoidal structure and be tight around the subspace. The SVDDs are optimized to have about 30% error on the target set. The decision boundaries are given by the white lines. It is clear that it does not model the subspace structure in the data.
Fig. 2. The data description of a sinusoidal distributed dataset. Left shows an SVDD trained in the input space, the right shows the decision boundary of the hypersphere in kernel space. In both cases a Gaussian kernel with σ = 4 is used
4
Characteristics Whitening
How efficient the mapping of the data to the new representation with unit variance is depends on the choice of the kernel and its parameters. When this feature extraction captures the data structure, it is easy to train a one-class classifier on this data and obtain good classification performance. In Table 1 decision boundaries for the artificial data are shown for different choices of the kernel. The left column shows the results for the polynomial kernel of degree d = 1, d = 2 and d = 3 (from top to bottom). The right column shows the results for the Gaussian kernel, for σ = 5, 15 and 50. The results show a large dependence on the choice of the free parameter. The rescaling tends to overfit for high values of the degree d and low values of σ. Visually it can be judged that for the polynomial kernel d = 3 is reasonable, while for the Gaussian kernel a σ between 15 and 50 can be used. Applying an ill-fitting kernel results in spurious areas in the input space. Many one-class classifiers rely on the distances between the objects in the input space. When the data is whitened in the kernel space, and all significant eigenvectors are taken into account, the influence of rescaling (one of the) features is eliminated. In Table 2 the results of rescaling one of the features are shown. In the middle row a scatterplot of the original data is shown. On this dataset an SVDD, an SVDD on the whitened data with all non-zero principal components, and an SVDD using just the first 5 principal components are trained. It appears that for this data there are just 8 non-zero principal components. In the upper row of the table, the horizontal feature was rescaled to 10% of its original size, while in the lower row the feature was enlarged 10 times. The SVDD on the (kernel-) whitened data not only gives a tight description, but is also robust against rescaling the single feature. The SVDD in input space heavily suffers from rescaling the data. Using just a few principal components from the mapped data also results in poorer descriptions and in spurious areas. The fact that the data has unit variance with uncorrelated features makes the normal distribution a good choice for describing the dataset in the kernel space. In figure 3 the sinusoidal data set is shown again, now with one prominent outlier present. Furthermore, typical decision boundaries of the fitted normal distribution (left) and the support vector data description (right) are shown. In most cases the difference in decision boundary between the SVDD and the Gaussian
Table 1. The influence of the choice of the kernel. The left column shows the results using the polynomial kernel with varying degrees, the right column the Gaussian kernel with varying σ
[Table 1 panels (plots only in the original): polynomial kernel with d = 1, d = 2, d = 3 in the left column and Gaussian kernel with σ = 5, σ = 15, σ = 50 in the right column.]
model are minor. In case the training data contains some significant outliers, the SVDD tends to obtain tighter descriptions, because it can effectively ignore prominent outliers in the data. The normal distribution is still influenced by them, and starts to accept superfluous areas in feature space. This is also visible in figure 3. In both cases the decision boundary was optimized such that 10% of the training data is rejected.
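Because the whitened data has (approximately) zero mean and identity covariance, fitting the normal distribution reduces to thresholding a squared Euclidean distance, and the threshold can be set directly from the desired rejection rate on the target set. The sketch below is illustrative only (it is not the classifiers used in the experiments); the 10% rejection mirrors the setting used for figure 3.

```python
import numpy as np

def fit_gaussian_occ(X_whitened, target_reject=0.10):
    """One-class Gaussian model on kernel-whitened data.

    With identity covariance the Mahalanobis distance is just the squared
    Euclidean norm; the threshold is chosen so that roughly `target_reject`
    of the training objects fall outside the description.
    """
    mean = X_whitened.mean(axis=0)
    d2 = np.sum((X_whitened - mean) ** 2, axis=1)
    threshold = np.quantile(d2, 1.0 - target_reject)
    return mean, threshold

def predict_target(X_new, mean, threshold):
    """Return True for objects accepted as targets, False for outliers."""
    d2 = np.sum((X_new - mean) ** 2, axis=1)
    return d2 <= threshold
```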
5
Experiments
To show the results on real world datasets, we use the standard Concordia dataset [14], in which the digits are stored in 32 × 32 black-and-white images. Each of the digit classes can be designated to be the target class. Then all other digits are considered outliers. For training 400 objects per class, and for testing 200 objects per class are available. In figure 4 typical images of rejected objects are shown. The one-class classifier was trained on class ’2’ and ’3’ respectively. The 32 × 32 images are first
Table 2. Influence of the scaling of the features. The left column shows the decision boundary of the SVDD, the middle column the results of the data description using the whitening with all non-zero variance directions, and the right column shows the output using the first five principal components. The middle row shows the original data. In the upper row the horizontal feature is shrunk by a factor of 10; in the lower row it is enlarged by a factor of 10. For display purposes the data is scaled to show comparable scales
preprocessed to retain 80% of the variance, to remove all pixels with (almost) zero variance over the whole dataset. Then the data was (kernel-) whitened using a polynomial kernel of degree 2. The first 20 principal components were chosen; the eigenvalues of the remaining principal components were always a factor 10^-6 or more smaller than the largest eigenvalue. On this data a normal SVDD was fitted, such that about 5% of the target data is rejected. The results show that rejected objects are often skewed, written with very thick strokes, or contain big curls. In figure 5 the results are shown to compare the outliers obtained by using normal PCA and the kernel whitening, using the polynomial kernel with degree 3. In the normal PCA 12 objects are rejected. Some of them look reasonable by human interpretation. In the kernel whitening processing, 10 objects are rejected.
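The whole experimental pipeline just described can be outlined as follows. This is only a sketch under stated assumptions: it reuses the hypothetical helpers `kernel_whitening` and `fit_gaussian_occ` sketched earlier, `X_train` is a placeholder for the 1024-dimensional pixel vectors of one digit class, and the paper itself fits an SVDD rather than the simple Gaussian threshold at the last step.

```python
import numpy as np
from sklearn.decomposition import PCA

def poly_kernel(A, B, degree=2):
    """Polynomial kernel (a . b)^d, as used for the Concordia experiments."""
    return (A @ B.T) ** degree

def whitened_digit_model(X_train, degree=2, n_components=20, reject=0.05):
    """Pipeline sketch: pixel PCA (80% variance), kernel whitening, 5% rejection."""
    pca = PCA(n_components=0.80)            # keep 80% of the pixel variance
    X_red = pca.fit_transform(X_train)
    K = poly_kernel(X_red, X_red, degree)
    Kc, alphas, one_n = kernel_whitening(K, n_components)   # hypothetical helper
    X_whitened = Kc @ alphas
    mean, thr = fit_gaussian_occ(X_whitened, target_reject=reject)  # hypothetical helper
    return pca, alphas, one_n, X_red, mean, thr
```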
Fig. 3. Typical decision boundaries of the normal distribution (left) and the support vector data description (right), trained on the normalized data in F. The SVDD tends to be tighter, especially when some outliers are present in the training data
Some of the objects are rejected by both methods, for instance the upper left object in the PCA and the second object in the kernel whitening. Other objects are specifically rejected because they do not fit the particular model, for instance the lower right object in both the PCA and the kernel whitening. In figure 6 results on all Concordia digit classes are shown. On each of the digit classes one-class classifiers are trained and the ROC curve is computed. The ROC gives the error on the outlier data for varying values of the error on the target class [15]. From the ROC curve an error measure is derived, called the Area Under the ROC curve (AUC). Low values of the AUC indicate a good separation between the target and outlier data; a sketch of this computation is given below. On each of the classes 6 one-class classifiers have been trained. The first two methods are density models: the Normal Density and the Mixture of Gaussians (with 5 clusters). The third is the basic SVDD directly trained in the input space, optimized such that about 10% of the target class is rejected. In the last three classifiers the data is mapped using the kernel whitening (polynomial kernel, d = 1, 2 and 3). Again the first 20 principal components were considered, retaining about 75% of the variance. In the left subplot, the data is not preprocessed. The density methods are not capable of estimating the density and give the highest AUC error of 0.5. In most cases the best performance is obtained by applying the whitening procedure with d = 2. The SVDD can perform poorly, due to the relatively low sample size and the complexity of following the boundary in the high dimensional feature space. Whitening with higher polynomial degrees also suffers from low sample size effects. In the right subplot, the data is preprocessed by normal PCA to retain again 80% of the variance. By the reduction of the dimensionality, in some cases some overlap between the classes is introduced and the performance of the best whitening procedures deteriorates. The density methods now work well and often outperform the poorer whitening versions. The actual performance increase or decrease is mainly determined by how well the model fits the data. That means for the whitening procedure that good performance is obtained when the data is distributed in some (nonlinear) subspace.
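The AUC error of Figure 6 can be computed directly from the two error rates; the snippet below is an illustrative sketch (not the authors' evaluation code), assuming a higher classifier output means more target-like.

```python
import numpy as np

def occ_auc_error(scores_target, scores_outlier):
    """Area-under-ROC error for a one-class classifier.

    Sweep a threshold on the classifier output and record, for each error on
    the target class, the error on the outlier class; the AUC error is the
    area under that curve (0 = perfect separation, 0.5 = no separation).
    """
    thresholds = np.sort(np.concatenate([scores_target, scores_outlier]))
    target_err, outlier_err = [], []
    for t in thresholds:
        target_err.append(np.mean(scores_target < t))     # rejected targets
        outlier_err.append(np.mean(scores_outlier >= t))  # accepted outliers
    order = np.argsort(target_err)
    return np.trapz(np.array(outlier_err)[order], np.array(target_err)[order])
```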
Fig. 4. Examples of rejected handwritten digits from the Concordia dataset. An SVDD trained on class 2 and 3, kernel whitened using a polynomial kernel with d=2
Fig. 5. An SVDD trained on digit class ’4’. On the left the data was preprocessed using normal PCA, on the right kernel whitening with polynomial d = 3 is used
6
Conclusions
This paper presents a simple whitening preprocessing for one-class classification problems. It uses the idea of Kernel PCA to extract the non-linear principal features of the dataset. After mapping the data to this new feature space (implicitly defined by the kernel function), feature directions with (almost) zero variance are removed and the other features are rescaled to unit variance. By the Kernel PCA and rescaling, the resulting data is zero mean with an identity covariance
Fig. 6. AUC errors on the 10 classes of the Concordia handwritten digits. Left shows the AUC error on all 10 classes for a simple Gaussian density, a Mixture of Gaussians (k = 5), SVDD, whitening with polynomial degree 1, 2 and 3
matrix. Finally, this data can in principle be described by any one-class classifier. By this preprocessing step, one-class classifiers can be trained on data which contains large differences in scale in the input space. In particular, data in (non-linear) subspaces can be described well. For most one-class classifiers, data distributed in subspaces is problematic, because the data contains large differences in typical scale within the subspace and perpendicular to the subspace. By using a suitable kernel in the kernel PCA, these scale differences in the data are recognized and modeled in the mapping. The transformed data then has equal variance in each feature direction. This subspace modeling comes at a price, though. The mapping requires a reasonable sample size in order to extract the more complex non-linear subspaces. Using too complex mappings and too many principal components in combination with small sample sizes will result in overfitting on the data and in poor results on independent test data. A drawback of this rescaling on the Kernel PCA basis is that the expansion in (12) is in general not sparse. This means that for each projection of a test point onto a principal direction, all training objects have to be taken into account. For large training sets this can become very expensive. Fortunately, approximations can be made which reduce the number of objects in the expansion (7) drastically [10]. Finally, the problem of how to choose the kernel function and values for the hyperparameters is still open. When test data is available, both from the target and the outlier class, it can be used for evaluation of the model (which then includes both the whitening and the classifier in the kernel space). In the general case of one-class classification, we have just a very poorly represented outlier class, and estimating the performance on this dataset will give a bad indication
of the expected performance. In these cases we have to rely on, for instance, artificially generated outlier data.
Acknowledgements This research was supported through a European Community Marie Curie Fellowship. The author is solely responsible for information communicated and the European Commission is not responsible for any views or results expressed.
References 1. Bishop, C.: Neural Networks for Pattern Recognition. Oxford University Press, Walton Street, Oxford OX2 6DP (1995) 40, 42 2. Tax, D.: One-class classification. PhD thesis, Delft University of Technology, http://www.ph.tn.tudelft.nl/˜davidt/thesis.pdf (2001) 40, 41, 42, 44 3. Moya, M., Hush, D.: Network contraints and multi-objective optimization for oneclass classification. Neural Networks 9 (1996) 463–474 40 4. Ritter, G., Gallegos, M.: Outliers in statistical pattern recognition and an application to automatic chromosome classification. Pattern Recognition Letters 18 (1997) 525–539 40 5. Bishop, C.: Novelty detection and neural network validation. IEE Proceedings on Vision, Image and Signal Processing. Special Issue on Applications of Neural Networks 141 (1994) 217–222 40, 41 6. Japkowicz, N., Myers, C., Gluck, M.: A novelty detection approach to classification. In: Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence. (1995) 518–523 40 7. Tarassenko, L., Hayton, P., Brady, M.: Novelty detection for the identification of masses in mammograms. In: Proc. of the Fourth International IEE Conference on Artificial Neural Networks. Volume 409. (1995) 442–447 41 8. Surace, C., Worden, K., Tomlinson, G.: A novelty detection approach to diagnose damage in a cracked beam. In: Proceedings of SPIE. (1997) 947 – 943 41 9. Vapnik, V.: Statistical Learning Theory. Wiley (1998) 41, 42, 43 10. Sch¨ olkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J.: SV estimation of a distribution’s support. In: Advances in Neural Information Processing Systems. (1999) 41, 42, 44, 51 11. Campbell, C., Bennett, K. P.: A linear programming approach to novelty detection. In: NIPS. (2000) 395–401 41 12. Scholkopf, B., Smola, A. J., M¨ uller, K. R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation 10 (1998) 1299–1319 41, 44, 45 13. Smola, A.: Learning with kernels. PhD thesis, Technischen University Berlin (1998) 43 14. Cho, S. B.: Recognition of unconstrained handwritten numerals by doubly selforganizing neural network. In: International Cconference on Pattern Recognition. (1996) 47 15. Metz, C.: Basic principles of ROC analysis. Seminars in Nuclear Medicine VIII (1978) 49
A Fast SVM Training Algorithm Jian-xiong Dong1 , Adam Krzy˙zak2 , and Ching Y. Suen1 1
Centre for Pattern Recognition and Machine Intelligence, Concordia University Montreal Quebec, Canada H3G 1M8 {jdong,suen}@cenparmi.concordia.ca 2 Department of Computer Science, Concordia University 1455 de Maisonneuve Blvd. W., Montreal Quebec, Canada H3G 1M8
[email protected] Abstract. A fast support vector machine (SVM) training algorithm is proposed under the decomposition framework of SVM’s algorithm by effectively integrating kernel caching, digest and shrinking policies and stopping conditions. Extensive experiments on MNIST handwritten digit database have been conducted to show that the proposed algorithm is much faster than Keerthi et al.’s improved SMO, about 9 times. Combined with principal component analysis, the total training for ten one-against-the-rest classifiers on MNIST took just 0.77 hours. The promising scalability of the proposed scheme can make it possible to apply SVM to a wide variety of problems in engineering.
1
Introduction
In the past few years, support vector machines (SVM) have generated great interest in the machine learning community due to their excellent generalization performance in a wide variety of learning problems, such as handwritten digit recognition (see [19] [5]), classification of web pages [11] and face detection [17]. Some classical problems such as multiple local minima, the curse of dimensionality and overfitting in neural networks [1] seldom occur in support vector machines. Support vector machines originated from statistical learning theory (see [20]), a theoretical base of statistical inference. However, training support vector machines is still a bottleneck, especially for large-scale learning problems [5]. Therefore, it is important to develop a fast training algorithm for SVM in order to apply it to various engineering problems in other fields. Currently there are three important algorithms for training SVM: Chunking [17], Sequential Minimum Optimization (SMO) (see [18] [13]) and SVMlight by Joachims [10]. The Chunking algorithm starts with an arbitrary subset of data. After a general optimizer is used to train the SVM on that subset, support vectors are kept in the working set and the rest are replaced by the data from the training set that violate the Karush-Kuhn-Tucker conditions [14]. The Chunking algorithm first introduced the “working set” (the decomposition idea) to SVM training, which makes large-scale learning possible [17]. But this algorithm is still slow due to an inefficient selection of the working set. Joachims [10] further
developed the decomposition algorithm for SVM in SVMlight. He used the nonzero elements of the steepest feasible direction, based on a strategy from Zoutendijk’s method, to select a good working set. The shrinking strategy and a Least Recently Used (LRU) caching policy were also used to speed up training. But one of the inefficiencies of SVMlight comes from its caching policy. It caches some rows of the total kernel Hessian matrix, so that it takes up a huge amount of memory, which becomes intractable for a large-scale learning problem. Further, the failure of the shrinking strategy will result in the re-optimization of SVM’s learning problem. Platt [18] proposed Sequential Minimum Optimization, which makes great progress in SVM training. He selects the size of the working set to be two and gives an explicit analytical solution to the two-variable optimization problem without using any optimization package. Several heuristics are suggested to select the working set. Keerthi et al. [13] further enhanced the performance of SMO by pointing out the inefficiency of updating one thresholded parameter in Platt’s algorithm and replacing it with two thresholded parameters. In addition, it is important to note that Keerthi and Gilbert proved finite-step termination under a stopping condition [12]. DeCoste et al. [5] found that the number of candidate support vectors during the early stage of training is much greater than the number of final support vectors, while many experimental results seem to show that SMO’s time complexity can be approximated by about O(L̄ · n), where n is the size of the training set and L̄ is the average number of candidate support vectors during the iterations. Effectively reducing L̄ will have an important impact on the performance of SMO. Therefore, DeCoste et al. [5] introduced the “digest” idea to avoid this inefficiency: jump out of the full SMO iteration early as the candidate support vectors grow by a large number, and switch SMO into an “inbound” iteration to “digest” these candidate SV sets. The heuristics reduce the number of kernel re-evaluations. However, DeCoste et al.’s heuristics contain a lot of ad-hoc parameters, and caching entire rows of the kernel matrix is still inefficient. Flake [8] found that SMO accesses the kernel matrix in an irregular way. A simple caching policy like LRU may result in the degradation of SMO’s performance. In this paper, an effective integration of the decomposition idea, caching, digest and shrinking is applied to Keerthi et al.’s improved SMO algorithm to achieve very promising performance, at least five to ten times faster than the previous algorithm, where caching plays the most important role. The dimension of the kernel cache square matrix is the same as the size of the working set, which is not less than the final number of support vectors. Digest restricts the growth of candidate support vectors such that the number of candidate SV sets during the training period is always smaller than the size of the working set. As a result, we can maximize the reuse of the cached kernel matrix, which is a key step. In addition, selection of the working set, stopping rules of the decomposition method and an effective shrinking policy are suggested. This paper is organized as follows: the basic formulae for support vector machines are first introduced. In Section 3, a fast training algorithm for SVM is described in detail and several heuristics are suggested. In Section 4, some
experiments have been conducted to investigate the training speed of the proposed method on MNIST handwritten digit database. Finally, we summarize the results and draw a conclusion.
2
Support Vector Machine
This section introduces some basic formulas for SVM in order to facilitate the explanation presented in a later section. For details we refer to Burges’ tutorial [2]. Given training samples {X_i, y_i}, i = 1, · · · , N, y_i ∈ {−1, 1}, X_i ∈ R^n, where y_i is the class label, the support vector machine first maps the data to another Hilbert space H (also called the feature space), using a mapping Φ:

\Phi : R^n \rightarrow H .   (1)
The mapping Φ is implemented by a kernel function K that satisfies Mercer’s conditions [4], such that K(X_i, X_j) = Φ(X_i) · Φ(X_j). Then, in the high-dimensional feature space H, we find an optimal hyperplane by maximizing the margin and bounding the number of training errors. The decision function can be given by

f(X) = \theta(W \cdot \Phi(X) - b) = \theta\Big(\sum_{i=1}^{N} y_i \alpha_i \Phi(X_i) \cdot \Phi(X) - b\Big) = \theta\Big(\sum_{i=1}^{N} y_i \alpha_i K(X_i, X) - b\Big) ,   (2)

where

\theta(u) = \begin{cases} 1 & \text{if } u > 0 \\ -1 & \text{otherwise.} \end{cases}   (3)
If α_i is nonzero, the corresponding data X_i is called a support vector. Training an SVM is to find α_i, i = 1, · · · , N, which can be achieved by maximizing the following quadratic cost function:

\text{maximize} \quad L_D(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j K(X_i, X_j)
\text{subject to} \quad 0 \le \alpha_i \le C, \; i = 1, \dots, N, \qquad \sum_{i=1}^{N} \alpha_i y_i = 0 ,   (4)
where C is a parameter chosen by the user; a larger C corresponds to a higher penalty allocated to the training errors. Since the kernel K is positive semi-definite and the constraints define a convex set, the above optimization reduces to a convex quadratic programming problem. The weight W is uniquely determined, but with respect to the threshold b, there exist several solutions in special cases (see [3] [13] [15]). Further, an interesting fact is that the solution is not changed if any non-support vector is removed from eq. (4).
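Once the α_i and b are known, evaluating the decision function of Eq. (2) only requires the support vectors. The snippet below is a minimal illustration (names and the `kernel` callback are assumptions, not part of the paper):

```python
import numpy as np

def svm_decision(X, support_vectors, support_labels, alphas, b, kernel):
    """Evaluate the SVM decision function of Eq. (2) on the rows of X.

    `kernel(A, B)` is assumed to return the kernel matrix between the rows
    of A and B; only points with non-zero alpha contribute to the sum.
    """
    K = kernel(X, support_vectors)                 # shape (m, n_sv)
    scores = K @ (support_labels * alphas) - b
    return np.where(scores > 0, 1, -1)             # theta(u) of Eq. (3)
```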
3
A General Decomposition Framework for Training SVM
In the real world, dividing and conquering1 is a good principle for solving complex problems. That is, a complex problem is first decomposed into a series of sub-problems which can be easily solved with existing techniques; then the solutions to these sub-problems can be combined to solve the original problem. This principle is also suitable for SVM’s training algorithm. The decomposition algorithm for SVM2 can be briefly summarized as follows [17]:

A Decomposition Framework for Training SVM
Input: training set S and the fixed size of the working set l, where l ≤ N and N is the size of the training set.
Output: αi , i = 1, · · · , N .
Initialization: αi are set to zero and an arbitrary working set B is selected such that B ⊆ S.
Optimization: Repeat
1. Solve the optimization of a sub-problem in working set B and update αi .
2. Select a new working set with some principles.
Until the specified stopping rules are satisfied.

The above framework can be applied to the existing training algorithms for SVM such as Chunking, SMO and SVMlight. Also, in order to apply it, the following factors, which affect the performance of an algorithm, have to be taken into account (a minimal sketch of the resulting training loop is given after this list):
– Optimizer for the sub-problems in the working set.
– The size of the working set.
– The principle for the selection of a new working set.
– The stopping rules.
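To make these four design choices concrete, the following is a minimal sketch of the generic decomposition loop. It is not the algorithm of Section 4; `solve_subproblem`, `select_working_set` and `stop` are assumed placeholders for the optimizer, the selection rule and the stopping rule.

```python
import numpy as np

def decomposition_training(X, y, l, solve_subproblem, select_working_set, stop):
    """Generic decomposition (working set) framework for SVM training."""
    N = len(y)
    alpha = np.zeros(N)
    working = np.arange(l)                       # arbitrary initial working set
    while True:
        alpha[working] = solve_subproblem(X, y, alpha, working)
        if stop(alpha, working):
            break
        working = select_working_set(X, y, alpha, working)
    return alpha
```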
Among the existing techniques, SMO is a good choice as the optimizer for the sub-problems, since an explicit analytical solution for two α variables can be provided without using any optimization package. Moreover, kernel evaluations dominate the computational cost in SMO (see [18][7]). Therefore, if we can cache the kernel matrix of the working set, the computational cost can be dramatically reduced. The size of the working set plays an important role in the above framework. It affects not only the selection of a good cache size, but also the convergence speed. SMO sets the size of the working set to 2. Usually, a larger working set, allowing maximal re-use of the kernel cache, is more likely to lead to faster convergence. However, in order to cache the whole kernel matrix of the working set, the cache size must be restricted due to limited computational resources such as the available memory. Numerous experiments (see [19][18][7])
1 A good example is quick sort. 2 It is also called the “working set” method.
have shown that the number of support vectors is much smaller than the size of the training set. Thus, the support vectors can always be kept in the working set if we restrict the growth of the candidate support vectors using the “digest” strategy. Good selection of a new working set has an important impact on the performance of decomposition algorithms for SVM, which has been remarked on by Osuna et al. regarding the chunking algorithm: “During the execution of the algorithm, as much as 40% of the computational effort is dedicated to the evaluation of the optimality conditions. At final stages, it is common to have all the data points evaluated, yet only to collect very few of them to incorporate them to the working set.” where Osuna et al. used the violation of the Karush-Kuhn-Tucker conditions to select a new working set. The above comment suggests that selecting a new working set using this rule is not efficient. Instead, in our algorithm a queue with the FIFO (First In First Out) property is used, which substantially reduces the computational cost. With respect to the stopping rules, most algorithms add a step in which all training examples are evaluated. If none violate the KKT conditions, the optimization is finished and the optimal solution is guaranteed theoretically, but this sacrifices computational cost. Instead, in our algorithm a heuristic rule is given, and it works very well in several learning problems.
4
A Fast Algorithm for Training SVM
Based on the framework presented in Section 3, a new algorithm for training SVM is proposed, which effectively integrates useful techniques such as kernel caching and the “digest” and shrinking strategies. In our algorithm, Keerthi et al.’s improved SMO [13] is used as the optimizer of sub-problems in the working set. The proposed algorithm can be summarized as follows:

A Fast Algorithm for Training SVM
Input: training set S and the fixed size of the working set l, where l ≤ N and N is the size of the training set. Also, a kernel caching matrix with dimension l is provided.
Output: αi , i = 1, · · · , N .
Initialization: Shuffle the training set; set αi to zero and select a working set B such that B ⊆ S.
Optimization: Repeat
1. Apply Keerthi et al.’s SMO to solve the optimization of a sub-problem3 in working set B, in combination with some effective techniques such as kernel caching and the “digest” and shrinking strategies, then update αi .
3 Dr. C.J. Lin pointed out that this strategy was also used in the SVM package “mySVM”.
2. Select a new working set with a queue technique.
Until the specified stopping conditions are satisfied.

Many issues, such as the selection of the working set and the stopping conditions, which are related to the above algorithm, are discussed next in detail.
4.1 Kernel Caching
Usually, a cache is a portion of fast memory used to overcome the mismatch between fast CPU and slow memory access in computer systems. Here it means a part of memory that stores the kernel matrix4 of the working set. The size of the working set should be large enough to contain all support vectors of the whole training set and small enough to fit within the limited memory size. The dimension of the kernel cache matrix is the same as the size of the working set (l), thus each element of the kernel matrix needs to be evaluated only once during the optimization of a sub-problem. The computational cost can be dramatically reduced.
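One possible realization of such a cache (illustrative only, not the authors' implementation) stores the upper triangle of the working-set kernel matrix and fills entries lazily; the hit/miss counters correspond to the ratios reported later in the experiments.

```python
import numpy as np

class WorkingSetKernelCache:
    """Lazily filled kernel cache over the current working set.

    Only the upper triangle is stored (the kernel matrix is symmetric), and
    each entry is computed at most once per sub-problem optimization.
    """
    def __init__(self, X_working, kernel):
        self.X = X_working
        self.kernel = kernel
        l = len(X_working)
        self.values = np.full((l, l), np.nan)
        self.misses = 0
        self.hits = 0

    def get(self, i, j):
        a, b = (i, j) if i <= j else (j, i)      # use the upper triangle
        if np.isnan(self.values[a, b]):
            self.values[a, b] = self.kernel(self.X[a], self.X[b])
            self.misses += 1
        else:
            self.hits += 1
        return self.values[a, b]
```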
4.2 Selection of a New Working Set
A queue is a useful data structure which has been widely used in numerous computer applications. The FIFO (First In First Out) property holds for it, and it can be used to select a new working set in our method. Two operations associated with a queue data structure are defined:
– Enqueue(Q, Xi ): Append training sample Xi at the tail of the queue Q.
– Dequeue(Q): Remove a training sample from the head of the queue Q and return it.
Each operation takes O(1) time. The queue initially stores the sequential numbers of all training samples as identities. One important step before that is to shuffle the training set into a random order such that the distribution of training samples from the different classes is balanced. The initial working set can be set by applying the “Dequeue” operation l times. When the optimization of a sub-problem ends and a new working set needs to be selected, support vectors in the old working set are kept, while non-support vectors are appended at the tail of the queue by the “Enqueue” operation and replaced by training samples removed from the head of the queue by the “Dequeue” operation (a sketch of this update is given below). Compared with previous working-set selection algorithms, the cost of our method is much lower since no KKT conditions are evaluated at this stage. Moreover, it can be observed in our experiments that only a small number of support vectors enter and exit the new working set, relative to the number of support vectors in the last working set. Therefore, we can use an upper-triangle status matrix5 to track the variation of support vectors at the beginning and ending stages of the optimization of each
4 In fact, due to its symmetry only the upper-triangle kernel matrix needs to be stored. 5 Each element in the status matrix is a boolean variable (0/1), which can be represented by 1 bit in a computer byte.
sub-problem so that some kernel elements like K(Xi , Xj ), where Xi and Xj are support vectors, need not be re-evaluated. Besides the above kernel caching, this strategy further reduces the cost, especially when the size of the working set is slightly larger than the final number of support vectors.
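A minimal sketch of the queue-based working-set update described above (illustrative only; sample indices stand in for the samples themselves):

```python
import random
from collections import deque

def initial_working_set(num_examples, l, seed=0):
    """Shuffle all sample indices into a FIFO queue and dequeue the first l."""
    order = list(range(num_examples))
    random.Random(seed).shuffle(order)
    queue = deque(order)
    working = [queue.popleft() for _ in range(l)]
    return working, queue

def next_working_set(working, alpha, queue, eps=1e-12):
    """Keep support vectors; swap non-support vectors for queued samples."""
    new_working = []
    for i in working:
        if alpha[i] > eps:                        # support vector: stays in the set
            new_working.append(i)
        else:                                     # non-support vector: recycle it
            queue.append(i)                       # Enqueue(Q, x_i)
            new_working.append(queue.popleft())   # Dequeue(Q)
    return new_working
```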
4.3 Digest Strategy
The digest strategy in our algorithm is mainly used to limit the growth of the candidate support vectors by switching SMO from the full iteration into the inbound iteration. It not only reduces the computational cost, but also protects the candidate support vectors from growing too fast, so that the working set has enough space to contain these support vectors. Figure 1 shows how the “digest” strategy is applied in SMO’s full iteration.
[Figure 1 flowchart, full iteration: starting from k = LastK, while k < numExamples + LastK the loop applies shrinking, executes numChanged += examineExample(k % numExamples) and increments k; when numChanged % 10 == 0 and |sv − LastSv| >= 10, it sets examineAll = 0, numChanged = 0, LastSv = sv, LastK = k % numExamples and switches to the in-bound iteration; otherwise the loop exits when k reaches numExamples + LastK.]
Fig. 1. Flowchart corresponding to the full iteration of SMO algorithm
In Figure 1, the variables LastK and LastSv specify the sequential number of the training sample and the number of support vectors from the last time. Both are initially set to zero. The variable numExamples denotes the size of the working set and sv denotes the number of current support vectors. SMO tracks the number of KKT violations. Once the support vectors grow by 10 (or another threshold), we switch the SMO full iteration to the inbound iteration, where the inbound iteration can be regarded as a fast optimizer.
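The flowchart of Figure 1 translates into roughly the following loop. This is a sketch under assumptions: `state` is a hypothetical object holding LastK, LastSv and the SMO bookkeeping, `state.examine_example(i)` performs one SMO examination step and returns 1 if the multipliers changed, `state.num_sv()` counts the current candidate support vectors, and `state.inbound_iteration()` optimizes only those candidates.

```python
def full_iteration_with_digest(state, num_examples, digest_threshold=10):
    """One pass of SMO's full iteration with the digest switch of Figure 1."""
    num_changed = 0
    k = state.LastK
    while k < num_examples + state.LastK:
        # shrinking is applied inside examine_example in the actual algorithm
        num_changed += state.examine_example(k % num_examples)
        sv = state.num_sv()
        if num_changed % 10 == 0 and abs(sv - state.LastSv) >= digest_threshold:
            # candidate SVs grew too fast: "digest" them before continuing
            state.LastSv = sv
            state.LastK = k % num_examples
            state.inbound_iteration()
            return
        k += 1
```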
4.4 Shrinking
During SMO’s optimization, only a small number of training samples in the working set become support vectors. If we knew which samples are support vectors, we could train SMO on these samples only and get the same result. This makes the SMO optimization smaller and faster to solve. Joachims [11] first used this strategy in SVMlight and found that it was an efficient method to speed up SVM training. He used the estimate of the Lagrange multiplier of a bound constraint to remove some optimization variables during his decomposition algorithm. However, as we observed in the experiments, the estimates of these Lagrange multipliers were not stable, which often resulted in erroneous predictions. Instead, the α values are directly used to predict which samples will become non-support vectors. Considering the history of these α values, if an α value is close to zero in the last two consecutive full iterations, it is likely that the training sample corresponding to this α is a non-support vector that can be eliminated during the sub-problem optimization. In order to track the history of these α values, a “count” variable6 is associated with each training sample. The pseudo code for shrinking is given below:

IF the value of the count variable for the current sample is not less than 2
    Skip the iteration and continue the next loop.
ELSE
    IF αk < eps
        Increase the count variable associated with the current sample.
    ELSE
        Set the count variable to zero.
    END IF
END IF

where eps is the float precision of the computer.
4.5 Stopping Conditions
With respect to the stopping conditions, most algorithms have to add an extra step evaluating the KKT conditions on all examples before the algorithm is terminated. If none of them violate the KKT conditions, the algorithm stops. In practice, its cost is
6 Dr. C.J. Lin pointed out that counting strategies were used in libsvm and svmtorch.
high when the training set is very large, although this method is acceptable theoretically. Instead, heuristic stopping rules based on the working set are suggested. We track the variation of the two thresholded parameters7 in Keerthi et al.’s SMO and of the number of support vectors over the last two working sets. If these variations are all small enough and enough training samples have been learned, the algorithm terminates:

∆SV < 5 and ∆b_up < threshold and ∆b_low < threshold and the number of learned samples > N,

where N is the size of the training set. Although the optimal solution cannot be guaranteed theoretically using the above stopping rules, usually the solution is close to the optimal one, especially for a large training set, because it contains many redundant samples.
5
Experiments
The experimental code was written in C++ and compiled with Microsoft Visual C++ 6.0. The algorithm ran on a 1.7 GHz P4 processor and the operating system was Windows 2000 Professional. The size of the main memory was about 1.5 Gigabytes. The MNIST database is available from the website8. It contains 60,000 training samples and 10,000 testing samples, which originate from the NIST database. The preprocessing was done by LeCun’s research group: a linear transform was performed such that all patterns were centered into a 28 × 28 box while keeping the aspect ratio. The resulting image was gray-level, scaled and translated to fall within the range from -1.0 to 1.0. In addition, a 576-dimensional directional feature based on the gradient of the gray-level image [9] was extracted using the Roberts edge operator. For the support vector machine, the one-against-the-rest method was used to construct the ten-class classifier. That is, each classifier was constructed by separating one class from the rest. The classification decision was made by choosing the class with the largest classifier output value. In our method, the two parameters “eps” and “threshold” are set to 1e-6 and 0.01, respectively. The parameter C is set to 10.
6
Comparisons of Training Performance
Polynomial Kernel In order to compare the proposed method with benchmark ones, the first experiment was conducted on 28 × 28 pixel images. Since patterns on MNIST are not truly located at the center, the preprocessing was performed by first enclosing the pattern with a rectangle, and then translating this rectangle
7 The variables b_up and b_low in Keerthi et al.’s pseudo code. 8 http://www.research.att.com/˜yann/exdb/mnist/index.html
into the center of a 28 × 28 box. Then patterns were blurred using the following mask:

\frac{1}{16}\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix} .   (5)

After that, we used DeCoste’s [5] idea to normalize each sample by its Euclidean-norm scalar value such that the dot product was always within [−1, 1]. The polynomial kernel was (X_1 · X_2)^7. The dimension of the input vectors was 784 (28 × 28). The size of the working set was set to 5000. The performances of the proposed method and Keerthi et al.’s improved SMO are reported in Tables 1 and 2.
Table 1. Performance of the proposed method with polynomial kernel

Class  cache miss (×10^8)  hit ratio  SV    BSV  Margin  Bias  CPU(s)
0      0.90                0.81       1799  0    0.057   0.87  551
1      0.45                0.79       843   38   0.046   1.40  327
2      1.76                0.87       2844  3    0.041   0.67  1112
3      1.99                0.87       2865  12   0.037   0.85  1307
4      1.12                0.88       2224  33   0.036   0.84  770
5      1.55                0.87       2668  8    0.037   0.79  1002
6      0.99                0.80       1604  9    0.046   1.04  611
7      0.98                0.88       1963  49   0.035   0.96  716
8      1.56                0.93       3411  11   0.032   0.95  1275
9      1.14                0.91       2499  62   0.029   1.33  875
Total time: 2.37 hours
Table 2. Performance of Keerthi et al.’s improved SMO with polynomial kernel

Class  Kernel evaluations (×10^8)  SV    BSV  Margin  Bias  CPU(s)
0      10.36                       2021  0    0.057   0.88  4449
1      12.89                       973   35   0.046   1.43  5720
2      16.76                       3234  2    0.041   0.67  6868
3      19.95                       3228  11   0.037   0.86  8206
4      19.92                       2586  32   0.036   0.85  8336
5      18.32                       3092  7    0.037   0.80  7594
6      11.50                       1822  8    0.046   1.05  4946
7      21.19                       2316  47   0.035   0.98  8934
8      26.18                       4006  14   0.032   0.97  10541
9      27.89                       2939  65   0.029   1.38  11604
Total time: 21.44 hours
Table 3. Performance of the proposed method with RBF kernel

Class  cache miss (×10^8)  hit ratio  SV    BSV  Margin  Bias  CPU(s)
0      1.04                0.74       2050  0    0.084   0.69  472
1      0.79                0.71       1381  3    0.071   0.90  364
2      1.41                0.82       2481  0    0.063   0.60  666
3      1.39                0.81       2639  0    0.061   0.86  657
4      1.25                0.81       2445  0    0.058   0.70  608
5      1.33                0.81       2695  0    0.062   0.61  628
6      0.96                0.77       1898  0    0.072   0.79  438
7      1.25                0.82       2447  3    0.055   0.72  617
8      1.40                0.85       2931  0    0.053   0.94  709
9      1.15                0.85       2408  2    0.047   0.96  614
Total time: 1.60 hours
In Table 1, the caching hit ratio is calculated by

\text{hit ratio} = \frac{\text{cache hit}}{\text{cache hit} + \text{cache miss}} .

The memory for caching in the proposed method is just 50 Mbytes9, much smaller than DeCoste’s method, which used an 800 Mbytes cache [5]. It can be seen from Table 1 that the cache hit ratio is very high, with an average value of 0.80. The proposed method is much faster than Keerthi et al.’s improved SMO. In addition, we can see that the number of kernel evaluations is a good measure of training performance, since it is independent of the specific platform.

RBF Kernel Training SVM on a good feature set usually achieves better performance than on the original pixel values. A good feature set has more capability to discriminate patterns, so that SVM training can be sped up. To facilitate the feature extraction, each pattern was centered into a 22 × 22 box. Then we blurred the image three times using the mask in (5). After that, the default feature extractor was applied to obtain a 576-dimensional feature vector. The bandwidth σ² of the RBF kernel and the size of the working set were set to 0.3 and 5000, respectively. Some performance measures of the proposed method and Keerthi et al.’s are shown in Table 3 and Table 4. The proposed method10 took about 6 hours. Moreover, comparing the CPU times in Tables 1 and 3, training based on the discriminant feature set is much faster. The size of the working set in the proposed method affects SVM’s training speed and generalization performance. It is necessary to check whether both are sensitive to different sizes of the working set. Figure 2 shows their relationships.
9 1/2 × 5000 × 5000 × sizeof(float). 10 With the same parameters, the proposed method ran on NT 4.0, PIII 500 MHz, 128 MB RAM.
Table 4. Performance of Keerthi et al.’s improved SMO with RBF kernel

Class  Kernel evaluations (×10^8)  SV    BSV  Margin  Bias  CPU(s)
0      9.11                        2311  0    0.084   0.69  3054
1      11.45                       1587  4    0.071   0.89  3899
2      11.34                       3155  0    0.063   0.60  3669
3      12.05                       3019  0    0.061   0.86  3887
4      13.46                       2706  0    0.058   0.71  4420
5      11.43                       3697  0    0.062   0.61  3697
6      9.24                        2136  0    0.072   0.79  3071
7      14.40                       2728  4    0.055   0.72  4682
8      13.51                       3316  0    0.053   0.94  4301
9      16.43                       2765  2    0.047   0.97  2765
Total time: 10.4 hours
It can be seen from Figure 2 that the total CPU time increases as the size of the working set grows. However, the error rate and the number of merged support vectors for the ten-class classifiers are almost insensitive over a large range, which is a very good property of the proposed method. The user does not need to explicitly tune this parameter. The generalization performance deteriorates when the size of the working set is 2000. The reason is that the working set is then not large enough to contain all support vectors. In addition, the number of merged support vectors determines the testing speed. Although the training time increases, the testing time does not grow.

Feature Compression PCA is a good tool for feature compression. We have also proved [6] that the solution to the SVM in the original feature space can be well approximated in the reduced low-dimensional subspace obtained by means of principal component analysis. The computational cost for training and testing SVM on low-dimensional transformed feature sets can be greatly reduced. We draw this conclusion based on the strong assumption that training samples are i.i.d. It is necessary to check whether this conclusion holds for real world problems. The feature extraction and training parameter settings are the same as above. The original 576-dimensional feature vector was compressed to 200-dimensional and 120-dimensional ones, respectively. The performance measures for the 200- and 120-dimensional transformed feature vectors are shown in Tables 5 and 6. Comparing Table 5 with Table 3, it can be observed that the margin and bias (threshold) are almost equal for the corresponding classifiers, which indicates that their solutions are equivalent11. After the SVMs generated from both feature sets were tested, their error rates were the same as that on the 576-dimensional feature vector (0.6%) [6]. This phenomenon can be further explained by
11 For original and reduced-dimensional feature vectors, the outputs of the SVM (f (X)) are almost identical.
Fig. 2. The above three graphs illustrate the relationship between the size of working set and Total CPU time, error rate and the number of merged support vectors
Mercer’s theorem [16]. The kernel mapping Φ in the eigen-function space can be written as:

\Phi(X) = \big( \sqrt{\lambda_1}\,\psi_1(X), \sqrt{\lambda_2}\,\psi_2(X), \cdots \big)   (6)

K(X_1, X_2) = \sum_{i=1}^{\infty} \lambda_i \psi_i(X_1)\,\psi_i(X_2) .   (7)
The features ψi are pairwise orthonormal and the eigenvalues λi specify the importance of the corresponding feature. The features with very small eigenvalues can be removed. So the kernel is a filter in function space. Choosing a kernel is analogous to filter design in the frequency domain. PCA is a filter in the space domain. Discovering the theoretical link between them is a potentially interesting topic.
7
Conclusion
A fast SVM training algorithm has been proposed under the framework of SVM decomposition algorithm by effectively integrating kernel caching, digest idea,
Table 5. Training performance when 200 dimensional features were selected

Class  cache miss (×10^8)  hit ratio  SV    BSV  Margin  Bias  CPU(s)
0      1.02                0.73       1921  0    0.085   0.69  270
1      0.75                0.70       1259  5    0.071   0.89  214
2      1.33                0.81       2638  0    0.063   0.60  391
3      1.25                0.81       2450  0    0.061   0.87  377
4      1.16                0.81       2266  2    0.057   0.70  356
5      1.24                0.81       2526  0    0.062   0.61  363
6      0.63                0.76       1610  0    0.080   0.79  187
7      1.22                0.81       2277  8    0.054   0.72  375
8      1.32                0.84       2705  1    0.053   0.97  431
9      1.18                0.83       2251  4    0.046   0.98  392
Total time: 0.93 hours
Table 6. Training performance when 120 dimensional features were selected

Class  cache miss (×10^8)  hit ratio  SV    BSV  Margin  Bias  CPU(s)
0      0.90                0.72       1727  0    0.086   0.69  211
1      0.70                0.69       1084  0    0.070   0.90  174
2      1.24                0.81       2413  0    0.064   0.60  325
3      1.14                0.80       2189  0    0.061   0.88  305
4      1.06                0.80       2019  4    0.057   0.71  286
5      1.14                0.80       2291  0    0.062   0.60  303
6      0.82                0.76       1573  0    0.071   0.78  211
7      1.06                0.81       2088  9    0.054   0.71  301
8      1.22                0.83       2421  1    0.053   1.01  357
9      1.00                0.84       2029  9    0.045   1.00  324
Total time: 0.77 hours
shrinking policy and convergence rules. Extensive experiments have been conducted to show that the proposed SVM algorithm is about 9 times as fast as Keerthi et al.’s improved SMO. Combined with principal component analysis, the total training for ten one-against-the-rest classifiers on MNIST took just 0.77 hours. The promising scalability of the proposed scheme makes it possible to apply SVM to a wide variety of problems in engineering.
Acknowledgements We thank Dr. C. J. Lin for his valuable comments on the preliminary version of this paper.
References 1. C. M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, 1995. 53 2. C. J. C. Burges. A tutorial on support vector machines for pattern recognition. In Data mining and Knowledge Discovery, pages 121–167. 1998. 55 3. C. J. C. Burges and D. J. Crisp. Uniqueness of the svm solution. In Advances in Neural Information Processing Systems, 2001. to appear in NIPS 12. 55 4. N. Cristianini and J. S. Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, 2000. 55 5. D. DeCoste and B. Sch¨ olkopf. Training invariant support vector machines. Machine Learning, 46(1-3):161–190, 2002. 53, 54, 62, 63 6. J. X. Dong, C. Y. Suen, and A. Krzy˙zak. A fast svm training algorithm. Technical report, CENPARMI, Concordia University, Montr´eal, Canada, December 2002. 64 7. J. X. Dong, C. Y. Suen, and A. Krzy˙zak. A practical smo algorithm. In Proceedings of the International Conference on Pattern Recognition, Quebec City, Canada, August 2002. 56 8. G. W. Flake and S. Lawrence. Efficient svm regression training with smo. Machine Learning, 46(1-3):271–290, March 2002. 54 9. Y. Fujisawa, M. Shi, T. Wakabayashi, and F. Kimura. Handwritten numeral recognition using gradient and curvature of gray scale image. In Proceedings of International Conference on Document Analysis and Recognition, pages 277–280, India, August 1999. 61 10. T. Joachims. Making large-scale support vector machine learning practical. In B. Sch¨ olkopf, C. Burges, and A. Smola, editors, Advances in kernel methods: Support Vector Machines. MIT Press, Cambridge, MA, December 1998. 53 11. T. Joachims. Text categorization with support vector machine. In Proceedings of European Conference on Machine Learning(ECML), 1998. 53, 60 12. S. S. Keerthi and E. G. Gilbert. Convergence of a generalized smo algorithm for svm classifier design. Machine Learning, 46(3):351–360, March 2002. 54 13. S. S. Keerthi, S. K. Shevade, C. Bhattachayya, and K. R. K. Murth. Improvements to platt’s smo algorithm for svm classifier design. Neural Computation, 13:637–649, March 2001. 53, 54, 55, 57 14. H. Kuhn and A. Tucker. Nonlinear programming. In Proceedings of 2nd Berkeley Symposium on Mathematical Statistics and Probabilistics, pages 481–492. University of California Press, 1951. 53 15. C.-J. Lin. Formulations of support vector machines: A note from an optimization point of view. Neural Computation, 13:307–317, 2001. 55 16. J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London, A(209):415–446, 1909. 65 17. E. Osuna, R. Freund, and F. Girosi. Training support vector machines: An application to face detection. In Proceedings of the 1997 conference on Computer Vision and Pattern Recognition(CVPR’97), Puerto Rico, June 17-19 1997. 53, 56 18. J. C. Platt. Fast training of support vector machines using sequential minimial optimization. In B. Sch¨ olkopf, C. Burges, and A. Smola, editors, Advances in kernel methods: Support Vector Machines. MIT Press, Cambridge, MA, December 1998. 53, 54, 56 19. B. Sch¨ olkopf, C. J. C. Burges, and V. Vapnik. Extracting support data for a given task. In U. M. Fayyad and R. Uthurusamy, editors, Proceedings, First International Conference on Knowledge Discovery and Data Mining, pages 252–257. AAAI Press, Menlo Park, CA, 1995. 53, 56 20. V. N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998. 53
Support Vector Machines with Embedded Reject Option Giorgio Fumera and Fabio Roli Department of Electrical and Electronic Engineering, University of Cagliari Piazza d’Armi, 09123 Cagliari, Italy {fumera,roli}@diee.unica.it
Abstract. In this paper, the problem of implementing the reject option in support vector machines (SVMs) is addressed. We started by observing that methods proposed so far simply apply a reject threshold to the outputs of a trained SVM. We then showed that, under the framework of the structural risk minimisation principle, the rejection region must be determined during the training phase of a classifier. By applying this concept, and by following Vapnik’s approach, we developed a maximum margin classifier with reject option. This led us to a SVM whose rejection region is determined during the training phase, that is, a SVM with embedded reject option. To implement such a SVM, we devised a novel formulation of the SVM training problem and developed a specific algorithm to solve it. Preliminary results on a character recognition problem show the advantages of the proposed SVM in terms of the achievable error-reject trade-off.
1
Introduction
The reject option is very useful to safeguard against excessive misclassifications in pattern recognition applications that require high classification reliability. In the framework of the minimum risk theory, Chow defined the optimal classification rule with reject option [1]. In the simplest case where the classification costs do not depend on the classes, Chow’s rule consists in rejecting a pattern if its maximum a posteriori probability is lower than a given threshold [2]. The optimality of this rule relies on the exact knowledge of the a posteriori probabilities. However, in practical applications, the a posteriori probabilities are usually unknown [19]. Some classifiers, like neural networks and the k-nearest neighbours classifier, provide approximations of the a posteriori probabilities [3,4,19]. In such cases, Chow’s rule is commonly used, despite its non-optimality [19]. Other classifiers, like support vector machines (SVMs), do not provide probabilistic outputs. In this case, a rejection technique targeted to the particular classifier must be used. So far, no work in the literature has addressed the problem of defining a specific rejection technique for SVM classifiers. The reject option is currently implemented by using two approaches. The first one uses as a measure of classification reliability the distance d(x) of an input pattern x from the optimal separating hyperplane (OSH), in the feature space induced by the chosen kernel. The rejection rule consists in rejecting
patterns for which d ( x ) is lower than a predefined threshold [5]. Since the absolute value f ( x ) of the output of a SVM is proportional to d ( x ) , this rule is implemented by applying a reject threshold to f ( x ) . The second approach for implementing the reject option in SVMs consists in mapping their outputs to posterior probabilities, so that Chow’s rule can be applied. Usually, for distance classifiers (like Fisher’s linear discriminant) the mapping is implemented using a sigmoid function [6]. This method was also proposed for SVMs in [7], using the following form for the sigmoid function:
P(y = +1 \mid x) = \frac{1}{1 + \exp(a f(x) + b)} ,   (1)
where the class labels are denoted as y = +1, −1 , while a and b are constant terms to be defined on the basis of sample data. A similar method was proposed in [8]. In this case the constants a and b are chosen so that P ( y = +1 | x ) = 0.5 , if f ( x ) > 0 , for patterns lying at a distance 1/||w|| from the OSH. An approximation of the class-conditional densities p ( f ( x ) | y = +1) and p ( f ( x ) | y = −1) with Gaussian densities
having the same variance was proposed in [9]. The corresponding estimate of P ( y = +1 | x ) is again a sigmoid function. A more complex method based on a Bayesian approach, the so-called evidence framework, was proposed in [10]. Nonetheless, also in this case the resulting estimate of P ( y = +1 | x ) is a sigmoid-like function. We point out that all the mentioned methods provide estimates of the posterior probabilities that are monotonic functions of the output f ( x ) of a SVM. This implies that Chow’s rule applied to such estimates is equivalent to the rejection rule obtained by directly applying a reject threshold on the absolute value of the output f ( x ) . Indeed, both rules provide a rejection region whose boundaries consist of a pair of hyperplanes parallel to the OSH and equidistant from it. The distance of such hyperplanes from the OSH depends on the value of the reject threshold. Accordingly, we can say that all the rejection techniques proposed so far for SVM classifiers consist in rejecting patterns whose distance from the OSH is lower than a predefined threshold. The above approaches are based on a reasonable assumption, namely, the classification reliability increases for increasing values of the distance of an input pattern from the class boundary constructed by a given classifier. However, this heuristic approach is not coherent with the theoretical foundations of SVMs, which are based on the structural risk minimisation (SRM) induction principle [11]. In this paper, we propose a different approach for introducing the reject option in the framework of SVM classifiers. Our approach is based on the observation that, under the framework of the SRM principle, the rejection region must be determined during the training phase of a classifier. On the basis of this observation, and by following Vapnik’s maximum margin approach to the derivation of standard SVMs, we derive a SVM with embedded reject option (Section 2). In Section 3 we propose a formulation of the training problem for such a SVM, and a training algorithm. In Section 4, we
report the results of a preliminary experimental comparison between our SVM with embedded reject option and the “external” rejection techniques proposed in the literature for standard SVMs. The experiments were conducted on a large set of two-class character recognition problems. Conclusions are drawn in Section 5.
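To make the equivalence discussed above concrete, the sketch below implements both existing approaches: Chow's rule on the sigmoid estimate of Eq. (1) and direct thresholding of |f(x)|. It is illustrative only (names, thresholds and the sign convention of a are assumptions, not part of the paper); since the sigmoid is monotonic in f(x), for suitable thresholds both rules reject the same band of patterns around the OSH.

```python
import numpy as np

def sigmoid_posterior(f, a, b):
    """Sigmoid mapping of the SVM output to P(y=+1 | x), Eq. (1)."""
    return 1.0 / (1.0 + np.exp(a * f + b))

def reject_by_chow(f, a, b, p_threshold):
    """Chow's rule on the sigmoid estimate: reject if the max posterior is below the threshold."""
    p = sigmoid_posterior(f, a, b)
    return np.maximum(p, 1.0 - p) < p_threshold

def reject_by_distance(f, t):
    """Direct thresholding of |f(x)| (proportional to the distance from the OSH)."""
    return np.abs(f) < t
```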
2
Support Vector Machines with Reject Option
In Sect. 2.1 we address the problem of classification with reject option under the framework of the SRM principle. It turns out that the SRM principle requires the rejection region to be determined during the training phase of a classifier. We then apply this concept to the development of an SVM classifier with embedded reject option. To this aim, we exploit Vapnik’s maximum margin approach to the derivation of standard SVMs. In Sect. 2.2 we propose a formulation of the training problem for such a classifier.
2.1 Classification with Reject Option in the Framework of the SRM Principle
The SRM principle was derived from a result of statistical learning theory, consisting in the definition of an upper bound for the expected risk of a given classifier. In statistical learning theory, a classifier is characterised by the set of decision functions it can implement, f ( x, α ) , where α is a parameter denoting one particular function of the set. For a c-class problem without reject option, decision functions f ( x, α ) take on exactly c values, corresponding to the c class labels. Given a loss function L ( x, y, α ) (where y denotes the class label of pattern x), the expected risk R ( α ) obtained by using any function f ( x, α ) is:

R(\alpha) = \sum_{j=1}^{c} \int L(x, y_j, \alpha)\, p(x, y_j)\, dx .   (2)
The corresponding empirical risk, R_emp(α), is an approximation of R(α) constructed on the basis of a given sample (x_1, y_1), …, (x_l, y_l):

R_emp(α) = (1/l) ∑_{i=1}^{l} L(x_i, y_i, α) .   (3)
It has been shown that for any real-valued bounded loss function 0 ≤ L(x, y, α) ≤ B, the following inequality holds true for any function f(x, α), with probability at least 1 − η:

R(α) ≤ R_emp(α) + (Bε/2) ( 1 + √( 1 + 4 R_emp(α) / (Bε) ) ) ,   (4)

where

ε = 4 ( h (ln(2l/h) + 1) − ln η ) / l ,   (5)
and h denotes the VC dimension of the classifier [11]. The SRM principle is aimed at controlling the generalisation capability of a classifier (that is, minimising the expected risk R(α)) by minimising the right-hand side of inequality (4). To this aim, a trade-off between the VC dimension of the classifier and the empirical risk is required. Therefore, training a classifier in the framework of the SRM principle consists in finding the decision function f(x, α) which provides the best trade-off between the VC dimension and the empirical risk. Consider now the problem of classification with reject option. For a c-class problem, decision functions f(x, α) take on c + 1 values: c of them correspond to the c class labels, while the (c + 1)-th one corresponds to the reject decision. Moreover, loss functions take on at least three distinct values, corresponding to correct classification, misclassification, and rejection. Note that the expressions of the expected risk (2) and of the empirical risk (3) are valid also for classification with reject option. It is now easy to see that the upper bound (4) on the expected risk of a classifier holds also for this kind of decision and loss functions. Indeed, inequality (4) was derived under the sole assumption of a bounded real-valued loss function [11]. This means that the SRM principle can also be applied to classification with reject option. We point out that, according to the above definition of classifier training under the SRM principle, the rejection region should be determined during the training phase of the classifier, besides the c decision regions. On the basis of the above discussion, let us now address the problem of constructing a classifier with reject option by using the SRM principle, as an extension of the SVM classifier. The SVM classification technique was originally derived by applying the SRM principle to a two-class problem, using a classifier implementing linear decision functions:

f(x, α) = sign(w ⋅ x + b) ,   (6)

and using the 0/1 (indicator) loss function [11]:

L(x, y, α) = 0 if f(x, α) = y, and 1 if f(x, α) ≠ y .   (7)
The simplest generalisation of linear decision functions (6) to classification with reject option is given by functions defined by means of pairs of parallel hyperplanes, so that the rejection region is the space delimited by such hyperplanes. Formally, let us denote a pair of parallel hyperplanes as:

w ⋅ x + b ± ε = 0 ,  ε ≥ 0 .   (8)
The corresponding decision function is then defined as follows:

f(x, α) = +1 , if w ⋅ x + b ≥ ε ,
f(x, α) = −1 , if w ⋅ x + b ≤ −ε ,   (9)
f(x, α) = 0 ,  if −ε < w ⋅ x + b < ε ,

where α denotes the parameters w, b, ε, while the class labels are denoted with y = +1 and y = −1, and the reject decision is denoted with y = 0. The distance between the hyperplanes, that is, the width of the rejection region, is equal to 2ε / ‖w‖. Analogously, the simplest extension of the indicator loss function (7) to classification with reject option is the following loss function:

L(x, y, α) = 0 , if f(x, α) = y ;  w_R , if f(x, α) = 0 ;  1 , if f(x, α) ≠ y and f(x, α) ≠ 0 ,   (10)
where w_R denotes the cost of a rejection. Obviously 0 ≤ w_R ≤ 1. The corresponding expected risk is [2]:

R(α) = w_R P(reject) + P(error) ,   (11)
where P(reject) and P(error) denote respectively the reject and misclassification probabilities achieved using the function f(x, α). Accordingly, the expression of the empirical risk (3), for a given decision function and a given training set, is:

R_emp(α) = w_R R + E ,   (12)
where R and E denote respectively the reject and misclassification rates achieved by f(x, α) on training samples. According to the SRM principle, training this classifier consists in finding the pair of parallel hyperplanes (8) which provides the best trade-off between the VC dimension and the empirical risk. Let us call such a pair the optimal separating hyperplanes with reject option (OSHR). As pointed out in Sect. 1, the rejection region obtained using the rejection rules proposed in the literature is also delimited by a pair of parallel hyperplanes. Note however that such hyperplanes are constrained to be always parallel to and equidistant from a given hyperplane (the OSH), for any value of the reject rate. Instead, since the empirical risk depends on the parameter wR, the position and orientation of the OSHR can change for varying values of the parameter wR, as a result of the training phase. In order to apply the SRM principle to the classifier with reject option defined by linear decision functions (9) and loss function (10), it would be necessary to evaluate its VC dimension h, and to find subsets of decision functions (9) with VC dimension lower than h. Since this was beyond the scope of our work, we propose an operative definition of the OSHR, based on Vapnik's maximum margin approach to the derivation of standard SVMs. Our approach is suggested by the similarity between the classifier defined by (9,10), and the one without reject option defined by linear
decision functions (6) and indicator loss function (7). For this last classifier, it has been shown that, for linearly separable classes, the VC dimension depends on the margin with which the training samples can be separated without errors. This suggested the concept of optimal separating hyperplane as the one which separates the two classes with maximum margin [12]. The extension of the concept of OSH to the general case of non-linearly separable classes was based on the idea of finding the hyperplane which minimises the number of training errors, and separates the remaining correctly classified samples with maximum margin [13]. By analogy, we assume that the OSHR can be defined as a pair of parallel hyperplanes (8) which minimise the empirical risk (12), and separate with maximum margin the samples correctly classified and accepted. Recall that a pattern x_i is accepted if |w ⋅ x_i + b| ≥ ε. For a pair of parallel hyperplanes (8), we define the margin of an accepted pattern as its distance from the hyperplane w ⋅ x + b = 0.

2.2   The Optimal Separating Hyperplanes with Reject Option
In this section we show that the OSHR, as defined on the basis of the above assumption, is the solution of an optimisation problem similar to that of standard SVMs. To this aim, we first recall how the optimisation problem for standard SVMs was obtained. As said above, the OSH for non-linearly separable classes was defined as the hyperplane which minimises the number of training errors, and separates the remaining correctly classified samples with maximum margin [13]. It was shown that the number of training errors can be minimised by minimising the functional

∑_{i=1}^{l} θ(ξ_i) ,   (13)
under the constraints

y_i (w ⋅ x_i + b) ≥ 1 − ξ_i ,  ξ_i ≥ 0 ,  i = 1, …, l ,   (14)

where θ is the step function defined as

θ(u) = 0 if u ≤ 0, and 1 if u > 0 .   (15)
Note that a training error is defined as a pattern x_i for which ξ_i > 0. A hyperplane which simultaneously separates the correctly classified samples with maximum margin can be found by minimising, under constraints (14), the following functional for sufficiently large values of the constant C:

(1/2) w ⋅ w + C F( ∑_{i=1}^{l} θ(ξ_i) ) ,   (16)

where F(u) is a monotonic convex function.
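For concreteness (this is our sketch, not part of [13]; names are illustrative), functional (16) can be evaluated for a given linear classifier by taking the smallest slack variables compatible with constraints (14):

import numpy as np

def svm_step_objective(w, b, X, y, C, F=lambda u: u):
    """Evaluates functional (16): the slacks xi_i are the smallest values
    satisfying constraints (14), i.e. xi_i = max(0, 1 - y_i (w.x_i + b)),
    and theta counts the patterns with xi_i > 0."""
    xi = np.maximum(0.0, 1.0 - y * (X @ w + b))
    n_violations = np.count_nonzero(xi > 0.0)   # sum_i theta(xi_i)
    return 0.5 * w @ w + C * F(n_violations)

# Tiny example with four 2-D points that are separated without violations
X = np.array([[2.0, 0.0], [1.5, 1.0], [-2.0, 0.0], [-1.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(svm_step_objective(np.array([1.0, 0.0]), 0.0, X, y, C=10.0))  # 0.5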
Let us now consider the problem of finding the OSHR as defined at the end of Sect. 2.1. First, a pair of parallel hyperplanes (8) which minimises the empirical risk (12) can be obtained by minimising the following functional, analogous to (13):

∑_{i=1}^{l} h(ξ_i, ε) = ∑_{i=1}^{l} [ w_R θ(ξ_i − 1 + ε) + (1 − w_R) θ(ξ_i − 1 − ε) ] ,   (17)

under the constraints

y_i (w ⋅ x_i + b) ≥ 1 − ξ_i ,  ξ_i ≥ 0 ,  i = 1, …, l ,
0 ≤ ε ≤ 1 .   (18)

Indeed, consider any given pair of parallel hyperplanes (8). It is easy to see that the corresponding values of ξ_i and ε for which constraints (18) are satisfied and functional (17) is minimised make this functional equal to the empirical risk (12). This can be explained by looking at Fig. 1, where the behaviour of h(ξ_i, ε) is shown. Constraints (18) imply that, if a pattern x_i is accepted and correctly classified by the decision function (9), the minimum of the corresponding term h(ξ_i, ε) in (17) is achieved for 0 ≤ ξ_i ≤ 1 − ε. This means that h(ξ_i, ε) = 0, according to loss function (10). Analogously, if x_i is rejected, then 1 − ε < ξ_i ≤ 1 + ε, and h(ξ_i, ε) = w_R. If x_i is accepted and misclassified, then ξ_i > 1 + ε, and h(ξ_i, ε) = 1. Therefore, minimising functional (17) under constraints (18) is equivalent to minimising the empirical risk (12).
Fig. 1. The behaviour of function h ( ξi , ε ) is shown, for ε = 0.5 and w R = 0.5
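The piecewise behaviour of h(ξ_i, ε) plotted in Fig. 1 can be written down directly; the following sketch (ours, with the values used in the figure) reproduces the three cases discussed above:

def h(xi, eps, w_r):
    """Piecewise term of functional (17): cost 0 for an accepted, correctly
    classified pattern (xi <= 1 - eps), cost w_r for a rejected one
    (1 - eps < xi <= 1 + eps), cost 1 for an accepted error."""
    if xi <= 1.0 - eps:
        return 0.0
    if xi <= 1.0 + eps:
        return w_r
    return 1.0

# Values used in Fig. 1 (eps = 0.5, w_R = 0.5)
print([h(x, 0.5, 0.5) for x in (0.2, 0.7, 1.8)])  # [0.0, 0.5, 1.0]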
In order to simultaneously maximise the margin of samples accepted and correctly classified, it is necessary to use a function h′(ξ_i, ε) slightly different from the h(ξ_i, ε) defined above:

h′(ξ_i, ε) = w_C θ(ξ_i) + (w_R − w_C) θ(ξ_i − 1 + ε) + (1 − w_R) θ(ξ_i − 1 − ε) ,   (19)
where w_C is a constant term such that 0 < w_C < w_R. The only difference with h(ξ_i, ε) is that h′(ξ_i, ε) gives a non-null cost w_C to patterns for which 0 < ξ_i ≤ 1 − ε (see also Fig. 1). It is easy to see that these are accepted and correctly classified patterns which lie at a distance less than 1/‖w‖ from the hyperplane w ⋅ x + b = 0. The corresponding value of ∑_{i=1}^{l} h′(ξ_i, ε) is then an upper bound for the empirical risk (12). Note that functional (13) also gives a non-null cost to correctly classified patterns which lie at a distance less than 1/‖w‖ from the hyperplane w ⋅ x + b = 0. Now the problem of finding the OSHR can be formulated as follows. Minimise the functional:

(1/2) w ⋅ w + C ∑_{i=1}^{l} h′(ξ_i, ε) ,   (20)
under constraints (18). For sufficiently large C and sufficiently small w_R, the solution of the above problem is a pair of hyperplanes (8) which minimises the empirical risk (12) and separates the samples accepted and correctly classified with maximum margin. Let us now consider the computational issues connected to the optimisation problems for the OSH (16,14) and the OSHR (20,18). Both problems are NP-complete. A computationally tractable approximation of problem (16,14) for the OSH was obtained by substituting the step function in (16) with the continuous and convex function ∑_{i=1}^{l} ξ_i^σ, σ ≥ 1. Simple quadratic optimisation problems correspond to the choices F(u) = u and σ = 1, 2. In particular, the solution of the corresponding problems is unique [13]. Unfortunately, a convex approximation of the objective function (20) for the OSHR seems not feasible. Indeed, a convex approximation of the function h′(ξ_i, ε) does not allow the empirical risk (12), that is, the trade-off between errors and rejections, to be adequately represented. Obviously, using a non-convex approximation, the uniqueness of the solution of the corresponding problem would not be guaranteed. Moreover, a non-convex optimisation problem might not exhibit one of the main properties of SVMs, namely the sparseness of the solution. Nevertheless, to compare the error-reject trade-off achievable by the SVM-like classifier with reject option defined in this section and by the rejection techniques for standard SVMs described in Sect. 1, we devised a non-convex approximation for functional (20), described in Sect. 3. We then developed a specific algorithm for solving the corresponding optimisation problem.
3   Formulation of the Training Problem

A good non-convex approximation of h′(ξ_i, ε) (19) can be obtained by substituting the step function θ(u) with a sigmoid function
S_α(u) = 1 / (1 + e^{−αu}) ,   (21)

for sufficiently large values of the constant α. To solve the corresponding optimisation problem, the technique of the Lagrangian dual problem can be used. However, the above approximation would lead to a trivial solution of the dual problem, namely all the Lagrange multipliers would be equal to zero. To avoid this, we introduce in h′(ξ_i, ε) a term equal to aξ_i², where a is a constant value. We then obtain:

h′′(ξ_i, ε) = w_C S_α(ξ_i) + (w_R − w_C) S_α(ξ_i − 1 + ε) + (1 − w_R) S_α(ξ_i − 1 − ε) + aξ_i² .   (22)
For sufficiently small a, the behaviour of h′′(ξ_i, ε) adequately represents the trade-off between errors and rejections, as h′(ξ_i, ε) does (see Figs. 1 and 2).

Fig. 2. The behaviour of function h′′(ξ_i, ε) is shown, for ε = 0.5 and w_R = 0.5, as in Fig. 1, w_C = 0.1, α = 100 and a = 0.05

Note that the introduction of the term aξ_i² makes the constraint ξ_i ≥ 0 not necessary. We have therefore approximated the problem of finding the OSHR as follows. Minimise the functional:

(1/2) w ⋅ w + C ∑_{i=1}^{l} h′′(ξ_i, ε) ,   (23)

under constraints

y_i (w ⋅ x_i + b) ≥ 1 − ξ_i ,  i = 1, …, l ,
0 ≤ ε ≤ 1 .   (24)
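The following sketch (ours; the parameter values are only the ones reported in Fig. 2) evaluates the sigmoid S_α of (21), the smooth loss h′′ of (22) and the primal objective (23) for given slacks:

import numpy as np

def S(u, alpha=100.0):
    """Sigmoid approximation (21) of the step function."""
    return 1.0 / (1.0 + np.exp(-alpha * u))

def h2(xi, eps, w_r=0.5, w_c=0.1, alpha=100.0, a=0.05):
    """Smooth loss term h''(xi, eps) of (22); the a*xi^2 term keeps the
    Lagrangian dual from collapsing to the trivial solution."""
    return (w_c * S(xi, alpha)
            + (w_r - w_c) * S(xi - 1.0 + eps, alpha)
            + (1.0 - w_r) * S(xi - 1.0 - eps, alpha)
            + a * xi ** 2)

def primal_objective(w, xi, eps, C, **kw):
    """Functional (23) for given weight vector w, slacks xi and half-width eps."""
    return 0.5 * np.dot(w, w) + C * sum(h2(x, eps, **kw) for x in xi)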
Let us now consider the Lagrangian dual problem. The corresponding Lagrange function, leaving out the constraints 0 ≤ ε ≤ 1, is:

L(w, b, ξ, ε; α) = (1/2) w ⋅ w + C ∑_{i=1}^{l} h′′(ξ_i, ε) − ∑_{i=1}^{l} α_i [ y_i (w ⋅ x_i + b) − 1 + ξ_i ] .   (25)
The solution of problem (23,24) can be found by minimising the Lagrange function with respect to w, b, ξ and ε, under the constraints 0 ≤ ε ≤ 1, and then maximising it with respect to the non-negative Lagrange multipliers α [14]. Note that the Lagrange function is the sum of a convex function of w and b, and a non-convex function of ξ and ε. Accordingly, its minimum with respect to w and b can be found by imposing stationarity:

∂L(w, b, ξ, ε; α) / ∂w = w − ∑_{i=1}^{l} α_i y_i x_i = 0 ,  so that  w = ∑_{i=1}^{l} α_i y_i x_i ,
∂L(w, b, ξ, ε; α) / ∂b = − ∑_{i=1}^{l} α_i y_i = 0 .   (26)
We point out that the first of the above equations implies that the weight vector w has the same expansion on training vectors as in standard SVMs. The minimum of the Lagrange function with respect to ξ and ε, under the constraints 0 ≤ ε ≤ 1, cannot be found analytically. This implies that the dual objective function is not known in analytical form. Substituting back the relations (26) into the Lagrangian, we obtain the following expression for the dual objective function:

W(α_1, …, α_l) = ∑_{i=1}^{l} α_i − (1/2) ∑_{i=1}^{l} ∑_{j=1}^{l} y_i y_j α_i α_j (x_i ⋅ x_j) + C min_{ξ_i, 0 ≤ ε ≤ 1} ∑_{i=1}^{l} [ h′′(ξ_i, ε) − (α_i / C) ξ_i ]   (27)
The dual problem then consists in maximising the above function under the constraints

∑_{i=1}^{l} y_i α_i = 0 ,
α_i ≥ 0 ,  i = 1, …, l .   (28)

The dual problem is similar to the one of standard SVMs. Note that it allows non-linear decision surfaces to be dealt with by using kernels. Indeed, both in the objective function (27) and in the expression of the weight vector w (26), training points appear only in the form of inner products. Formally, the differences are the additional term

C min_{ξ_i, 0 ≤ ε ≤ 1} ∑_{i=1}^{l} [ h′′(ξ_i, ε) − (α_i / C) ξ_i ]   (29)
in the objective function (27), and the absence of the constraints αi ≤ C, i = 1,…, l . From the computational viewpoint, the drawbacks of the above dual problem are due to the fact that the primal objective function is not convex. As pointed out above, this implies that the dual objective function (27) is not known in analytical form. To evaluate it for given values of the Lagrange multipliers α, the constrained optimisation problem (29) must be solved. Moreover, the sparsity and the uniqueness of the solution are not guaranteed. Furthermore, no necessary and sufficient conditions exist, analogous to the Karush-Kuhn-Tucker ones for standard SVMs, to characterise the solution of the primal (23,24) and dual (27,28) problems. Nevertheless, since the
objective function (23) of the primal problem is continuous, the objective function of the dual problem (27) is concave, and therefore has no local maxima other than global ones [14]. We exploited this last characteristic to develop an algorithm for solving the dual problem (27,28). Our algorithm is derived from the sequential minimal optimisation (SMO) algorithm, developed for standard SVMs [15]. Basically, SMO iteratively maximises the dual objective function of the standard SVM dual problem by updating at each step only two Lagrange multipliers, while enforcing the constraints. It is worth noting that the corresponding optimisation problem can be solved analytically. The choice of the two multipliers at each step is determined by a heuristic, and the KKT conditions are used as the stopping criterion. Since the objective function of problem (27,28) is concave, although it is not known in analytical form, it can be maximised using the same iterative strategy as SMO. It is easy to see that constraints (28) force any pair of multipliers to lie on a line segment, or on a half-line. The maximum of the concave objective function with respect to a given pair of multipliers can then be found by using the golden section method. To evaluate the objective function, a specific algorithm was developed for solving the optimisation problem (29). To select a pair of multipliers at each iteration, and to implement a stopping criterion, specific heuristics were used, which exploit the characteristics of problem (27,28). More details about our algorithm can be found in [16].
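The full training algorithm is only summarised here (details are in [16]); purely as an illustration of the golden-section step it relies on, the following sketch (ours) maximises a concave function of a single variable on an interval, as is done along the feasible segment of each pair of multipliers:

import numpy as np

def golden_section_max(f, lo, hi, tol=1e-6):
    """Maximises a concave 1-D function f on [lo, hi] by golden-section search."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):      # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                 # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Example: the maximiser of a concave parabola on [0, 4]
print(golden_section_max(lambda t: -(t - 1.3) ** 2, 0.0, 4.0))  # ~1.3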
4   Experimental Results
The aim of our experiments was to compare the performance of our SVM with embedded reject option with that of standard SVMs with the “external” reject technique described in Sect. 1. We remark that, when using the loss function (10), the performance of a classifier with reject option can be represented by the classification accuracy achieved for any value of the reject rate (the so-called Accuracy-Reject curve). It has been shown that minimising the expected risk (11) is equivalent to maximising the classification accuracy for any value of the reject probability [2]. The trade-off between errors and rejections depends on the cost of a rejection wR. This implies that different points of the A-R curve correspond to different values of wR. The experiments were carried out with the Letter data set, taken from the University of California at Irvine machine learning database repository (http://www.ics.uci.edu/~mlearn/MLRepository.html). It consists of 20,000 patterns representing the 26 capital letters in the English alphabet, based on 20 different fonts. Each pattern is characterised by 16 features. In our experiments, we considered all possible pairs of classes as two-class problems. We focused only on non-linearly separable problems, since these are the most significant for testing the performance of rejection techniques. The non-linearly separable problems are 193 out of 325, as identified in [17]. For each two-class problem, we randomly subdivided the patterns of the corresponding classes into a training set and a test set of equal size. As explained in Sect. 1, the main rejection technique proposed in the literature consists in training a standard SVM and rejecting the patterns x for which |f(x)| < D,
where D is a predefined threshold. To implement this technique, we trained SVMs using the software SVMlight [18], available at http://svmlight.joachims.org. The value of the parameter C was automatically set by SVMlight. In our experiments, we used a linear kernel. The A-R curve achievable using this technique was obtained by computing the values of D which minimise the empirical risk (12), evaluated on the training set, for different values of the rejection cost wR. The corresponding values of D were then used to classify the test set. We considered values of the reject rate up to 30%, since usually these are the values of interest in practical applications. However, for 115 out of the 193 non-linearly separable problems, only one point of the A-R curve with a rejection rate lower than 30% was found, due to the particular distribution of training samples in the feature space. We considered therefore only the remaining 78 problems, which are reported in Table 1. To implement our method we used the training algorithm summarised in Sect. 3, with a linear kernel, and a value of the C parameter equal to 0.1. The A-R curve was obtained by training a classifier for each different value of wR. For any given value of wR, the result of the training phase was a pair of parallel hyperplanes (8), which were used to classify the test set using decision function (9). Our algorithm took an average time of about five minutes to carry out a training phase, for about 800 training patterns, on a PIII 800 workstation running a Linux OS. SVMlight took about one minute. However, it is worth noting that the algorithm we used was not optimised in terms of computational efficiency. Table 1. The 78 two-class non-linearly separable problems obtained from the Letter data set and considered in our experiments. Each problem refers to a pair of letters
AH FY UV AU BR ST
BE GT XZ ET DB DN
BJ HY BI TX FS EQ
BK IP BS BF FX ER
BP JR DH DR GO ES
BV JS GK GM GV FT
CE KO FI HT JQ HK
CL KT HJ HW MU HU
CU LO IZ KS PQ JZ
DJ OR KV LX PS KX
DO DX EK EX EZ OV PR PV RS TY MV NU QX CO KM PY RV SX
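As an illustration of how the baseline A-R curve described above is obtained (our sketch; variable names and the toy data are purely illustrative), one can select, for each rejection cost w_R, the threshold D minimising the empirical risk (12) on the training outputs of a standard SVM:

import numpy as np

def best_threshold(f_train, y_train, w_r):
    """Picks the reject threshold D minimising the empirical risk (12),
    w_r * (reject rate) + (error rate), on the training outputs f_train."""
    candidates = np.concatenate(([0.0], np.abs(f_train) + 1e-9))
    best_d, best_risk = 0.0, np.inf
    for d in candidates:
        rejected = np.abs(f_train) < d
        errors = (np.sign(f_train) != y_train) & ~rejected
        risk = w_r * rejected.mean() + errors.mean()
        if risk < best_risk:
            best_d, best_risk = d, risk
    return best_d

f = np.array([1.2, -0.8, 0.1, -0.3, 2.0, -1.5])
y = np.array([1, -1, -1, 1, 1, -1])
print(best_threshold(f, y, w_r=0.2))  # approximately 0.3 for this toy data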
The results for the 78 problems of Table 1 can be summarised as follows. For 40 problems (51% of the considered problems, reported in the first row of Table 1), our technique achieved on the test set a higher classification accuracy for any value of the reject rate. Four examples are shown in Fig. 3(a)-(d). For 27 problems (35% of the considered problems, reported in the second row of Table 1) neither of the two techniques outperformed the other one. Indeed, both techniques exhibited on the test set higher accuracy values for different ranges of the reject rate. Examples are shown in Fig. 3(e),(f). The technique proposed in the literature outperformed our technique only for 12 problems out of 78 (14% of the considered problems, reported in the third row of Table 1), as in the example shown in Fig. 3(g). The above results show that our SVM with embedded reject option achieves a better error-reject trade-off than standard SVMs in the majority of cases (51% of the considered problems). For 35% of the considered two-class problems, there was no clear winner. The superiority of one method over the other depends on the
reject rate required. As pointed out in Sect. 2, the rejection region obtained using both methods is delimited by a pair of parallel hyperplanes. However, our method allows for a greater flexibility in defining their position and orientation, which can change for different values of the cost of a rejection wR. The preliminary experimental results reported above suggest that this greater flexibility is useful to achieve a better error-reject trade-off.
Fig. 3. The A-R curves for seven two-class problems are shown. The A-R curves obtained using the proposed method are denoted with SVM-reject, while the ones obtained using the rejection technique proposed in the literature are denoted with SVM-light
To effectively exploit the advantages of our method in terms of the achievable error-reject trade-off, even for problems with larger data sets, the issues related to its computational cost must be addressed. In particular, the optimisation problem we proposed in Sect. 3 is more complex than the one of standard SVMs, due to the non-
convexity of its objective function. Either a different formulation of this problem, or a more efficient algorithm, can make its computational cost comparable to that of algorithms for standard SVMs.
5   Conclusions
In this paper, we proposed an extension of SVMs that directly embeds a reject option. This extension was derived by taking into account a theoretical implication of the SRM principle when applied to classification with reject option, and by following Vapnik's maximum margin approach to the derivation of standard SVMs. We devised a novel formulation of the training task as a non-convex optimisation problem, and developed a specific algorithm to solve it. A pair of parallel hyperplanes delimits the rejection region provided by our SVM. The same holds for the rejection region provided by the commonly used rejection technique. However, our method allows for a greater flexibility in defining the position and orientation of such hyperplanes, which can change for different values of the cost of rejections wR. The experimental results show that this enhanced flexibility allows a better error-reject trade-off to be achieved. On the basis of these results, further work should focus on defining a formulation of the training problem with lower computational complexity, and on developing an efficient optimisation algorithm for solving it.
References

1. Chow, C.K.: An optimum character recognition system using decision functions. IRE Trans. on Electronic Computers 6 (1957) 247-254
2. Chow, C.K.: On optimum error and reject tradeoff. IEEE Trans. on Information Theory 16 (1970) 41-46
3. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press (1995)
4. Ruck, D.W., Rogers, S.K., Kabrisky, M., Oxley, M., Suter, B.: The multilayer perceptron as an approximation to a Bayes optimal discriminant function. IEEE Trans. on Neural Networks 4 (1990) 296-298
5. Mukherjee, S., Tamayo, P., Slonim, D., Verri, A., Golub, T., Mesirov, J.P., Poggio, T.: Support vector machine classification of microarray data. Tech. report. Massachusetts Institute of Technology (1998)
6. Duin, R.P.W., Tax, D.M.J.: Classifier conditional posterior probabilities. In: Amin, A., Dori, D., Pudil, P., Freeman, H. (eds.): Advances in Pattern Recognition. Lecture Notes in Computer Science 1451, Springer, Berlin (1998) 611-619
7. Platt, J.C.: Probabilistic outputs for support vector machines and comparison to regularised likelihood methods. In: Smola, A.J., Bartlett, P., Schölkopf, B., Schuurmans, D. (eds.): Advances in Large Margin Classifiers. MIT Press (1999)
8. Madevska-Bogdanova, A., Nikolic, D.: A new approach on modifying SVM outputs. Proc. of the IEEE-INNS-ENNS Int. Joint Conference on Neural Networks, Vol. 6 (2000) 395-398
9. Hastie, T., Tibshirani, R.: Classification by pairwise coupling. Technical Report. Stanford University and University of Toronto (1996)
10. Kwok, J.T.-Y.: Moderating the outputs of support vector machines. IEEE Transactions on Neural Networks 10 (1999) 1018-1031
11. Vapnik, V.N.: Statistical Learning Theory. Wiley, New York (1998)
12. Vapnik, V.N.: Estimation of Dependencies Based on Empirical Data, Addendum 1. Springer-Verlag, New York (1982)
13. Cortes, C., Vapnik, V.N.: Support vector networks. Machine Learning 20 (1995) 1-25
14. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming. Theory and Algorithms. Wiley (1992)
15. Platt, J.C.: Fast training of support vector machines using sequential minimal optimisation. In: Schölkopf, B., Burges, C.J.C., Smola, A.J. (eds.): Advances in Kernel Methods - Support Vector Learning. MIT Press (1999)
16. Fumera, G.: Advanced Methods for Pattern Recognition with Reject Option. Ph.D. Thesis. University of Cagliari (2002)
17. Basu, M., Ho, T.K.: The learning behavior of single neuron classifiers on linearly separable or nonseparable input. Proc. of the 1999 Int. Joint Conference on Neural Networks. Washington, DC (1999)
18. Joachims, T.: Making large-scale SVM learning practical. In: Schölkopf, B., Burges, C.J.C., Smola, A.J. (eds.): Advances in Kernel Methods - Support Vector Learning. MIT Press (1999) 169-184
19. Fumera, G., Roli, F., Giacinto, G.: Reject option with multiple thresholds. Pattern Recognition 33 (2000) 165-167
Image Kernels

Annalisa Barla, Emanuele Franceschi, Francesca Odone, and Alessandro Verri
INFM - DISI, Università di Genova, Genova, Italy

Abstract. In this paper we discuss the mathematical properties of a few kernels specifically constructed for dealing with image data in binary classification and novelty detection problems. First, we show that histogram intersection is a Mercer’s kernel. Then, we show that a similarity measure based on the notion of Hausdorff distance and directly applicable to raw images, though not a Mercer’s kernel, is a kernel for novelty detection. Both kernels appear to be well suited for building effective vision-based learning systems.
1   Introduction
A main issue of statistical learning approaches like Support Vector Machines [19] for solving classification problems is which kernel to use for which problem. A number of general purpose kernels are available in the literature but there is little doubt that the use of an appropriate kernel function can lead to a substantial increase in the generalization ability of the developed learning system. The aim of this paper is to study a few kernels well-suited to solve image-based classification problems with Support Vector Methods [20,5]. We concentrate on the two learning problems of binary classification and novelty detection – the latter as described in [2]. Building kernel functions might be difficult owing to the mathematical requirements a function needs to satisfy in order to be safely used as a kernel. In this paper we clarify this issue pointing out that these mathematical requirements change with the learning problem. It turns out that the so called Mercer’s conditions can be weakened to a different extent for both novelty detection and binary classification. Consequently, building kernels for novelty detection is substantially easier than for binary classification. All the kernels we discuss are motivated by computer vision considerations. The first kernel, histogram intersection, is a similarity measure well known in the computer vision literature as an effective indexing technique for color based recognition. This function is actually a Mercer’s kernel and we prove this result by finding explicitly the feature mapping which makes histogram intersection an inner product. The second kernel is derived from a similarity measure loosely inspired by the notion of Hausdorff distance [13] which appears to be appropriate for a number of computer vision tasks. In this case we show how this measure, which is a legitimate kernel for novelty detection only, can be modified to become a Mercer’s kernel. The topic of kernels for images is relatively new. A number of studies have been reported about the use of general purpose kernels for image-based classification problems [16,14,7,10,8,9,12], while a family of functions which seem to
be better suited than Gaussian kernels for dealing with image histograms has been studied in [3]. In essence the kernels described in [3] ensure heavier tails than Gaussian kernels, in an attempt to counteract the well known phenomenon of diagonally dominant kernel matrices in the case of high dimensional inputs. This paper is organized as follows. In section 2 we discuss the issue of the mathematical requirements of kernel functions. Histogram intersection is discussed in section 3, Hausdorff-related kernels in section 4, and we draw the conclusions of our work in section 5.
2   Kernel Functions

In this section we summarize the mathematical requirements a function needs to satisfy in order to be a legitimate kernel for SVMs. We distinguish between the problems of binary classification and novelty detection. We first establish the notation.

2.1   Support Vector Machines
Following [6], many problems of statistical learning [19,20] can be cast in an optimization framework in which the goal is to determine a function minimizing a functional I of the form

I[f] = (1/ℓ) ∑_{i=1}^{ℓ} V(f(x_i), y_i) + λ ‖f‖²_K   (1)

where the pairs {(x_1, y_1), (x_2, y_2), ..., (x_ℓ, y_ℓ)}, the examples, are i.i.d. random variables drawn from the space X × Y according to some fixed but unknown probability distribution, V is a loss function measuring the fit of the function f to the data, ‖·‖_K the norm of f induced by a certain function K, named kernel, controlling the smoothness – or capacity – of f, and λ > 0 a trade-off parameter. For several choices of the loss function V, the minimizer of the functional in (1) takes the general form

∑_{i=1}^{ℓ} α_i K(x, x_i) ,   (2)
where the coefficients α_i depend on the examples. The mathematical requirements on K must ensure the convexity of (1) and hence the uniqueness of the minimizer (2). SVMs for classification [19], for example, correspond to choices of V like V(f(x_i), y_i) = |1 − y_i f(x_i)|_+, with |t|_+ = t if t ≥ 0, and 0 otherwise, and lead to a convex QP problem with linear constraints in which many of the α_i vanish. The points x_i for which α_i ≠ 0 are termed support vectors and are the only examples needed to determine the solution (2). Before discussing the mathematical requirements on K we briefly consider the two cases we are interested in: binary classification and novelty detection.
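As a small illustration of expression (2) (and of (4) below), the following sketch (ours; names and values are illustrative) evaluates the kernel expansion for given coefficients, using only the support vectors:

import numpy as np

def svm_output(x, support_x, alpha, b, kernel):
    """Evaluates sum_i alpha_i K(x, x_i) + b; only the support vectors
    (alpha_i != 0) contribute to the sum."""
    return sum(a * kernel(x, xi) for a, xi in zip(alpha, support_x) if a != 0) + b

# Linear kernel example with two support vectors
lin = lambda u, v: float(np.dot(u, v))
print(svm_output(np.array([1.0, 2.0]),
                 [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                 alpha=[0.5, -0.5], b=0.1, kernel=lin))  # 0.5*1 - 0.5*2 + 0.1 = -0.4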
2.2   Binary Classification
In the case of binary classification [19] we have y_i ∈ {−1, 1} for i = 1, ..., ℓ, and the dual optimization problem can be written as

max_{α_i, i=1,...,ℓ}  ∑_{i=1}^{ℓ} |α_i| − (1/2) ∑_{i=1}^{ℓ} ∑_{j=1}^{ℓ} α_i α_j K_ij   (3)

subject to  ∑_{i=1}^{ℓ} α_i = 0 ,
0 ≤ α_i ≤ C   if y_i = 1 ,
−C ≤ α_i ≤ 0  if y_i = −1 ,

with K_ij = K(x_i, x_j) sometimes called the kernel matrix. A new point is classified according to the sign of the expression

∑_{i=1}^{ℓ} α_i K(x, x_i) + b ,   (4)
where the coefficient b can be determined from the Kuhn-Tucker conditions. A closer look at the QP problem (3) reveals that the uniqueness of the solution is ensured by the convexity of the objective function in the feasible region. Ignoring a common scaling factor, this is equivalent to requiring that

∑_{i=1}^{ℓ} ∑_{j=1}^{ℓ} α_i α_j K_ij ≥ 0   (5)

subject to  ∑_{i=1}^{ℓ} α_i = 0 .
2.3   Novelty Detection
In the case of novelty detection described in [2], for all training points we have y_i = 1 and the optimization problem reduces to

max_{α_i, i=1,...,ℓ}  ∑_{i=1}^{ℓ} α_i K_ii − ∑_{i=1}^{ℓ} ∑_{j=1}^{ℓ} α_i α_j K_ij   (6)

subject to  ∑_{i=1}^{ℓ} α_i = 1 ,
0 ≤ α_i ≤ C .

Here again we have that K_ij = K(x_i, x_j). If K(x, x) is constant over the domain X, a novelty is detected if the inequality

∑_{i=1}^{ℓ} α_i K(x, x_i) ≥ τ   (7)
is violated for some fixed value of the threshold parameter τ > 0. Similarly to the case above, the uniqueness of the solution is ensured by requiring the convexity of the quadratic form in the objective function, which in this case, ignoring again a common scaling factor, gives

∑_{i=1}^{ℓ} ∑_{j=1}^{ℓ} α_i α_j K_ij ≥ 0   (8)

subject to α_i ≥ 0, for i = 1, ..., ℓ.

2.4   Kernel Requirements
Typically, the convexity of the functionals (3) and (6) is guaranteed by requiring the positive definiteness of the function K. We recall that a function K : X × X → IR is positive definite if, for all n and all possible choices of x_1, x_2, ..., x_n ∈ X and α_1, α_2, ..., α_n ∈ IR,

∑_{i=1}^{n} ∑_{j=1}^{n} α_i α_j K(x_i, x_j) ≥ 0 .   (9)
Inequality (9) says that all kernel matrices K_ij of all orders built from a positive definite function K are positive semidefinite. Consequently, if K is positive definite, inequalities (5) and (8) are always satisfied with no restriction on the coefficients α_i. This raises the question of whether it is possible to find weaker conditions on K such that inequality (9) is satisfied subject to certain conditions on the coefficients α_i. Before answering this question we briefly discuss a useful characterization of positive definite functions. A theorem of functional analysis due to Mercer [4] allows us to write a positive definite function as an expansion of certain functions φ_k(x), k = 1, ..., N, with N possibly infinite, or

K(x, x′) = ∑_{k=1}^{N} φ_k(x) φ_k(x′) .   (10)
A positive definite function is also called a Mercer’s kernel, or simply a kernel. For a rigorous statement of Mercer’s theorem the interested reader is referred to [4,15]. Whether or not this expansion is explicitly computed, it suffices to note that for any kernel K, K(x, x′) can be thought of as the inner product between the vectors φ(x) = (φ_1(x), φ_2(x), ..., φ_N(x)) and φ(x′) = (φ_1(x′), φ_2(x′), ..., φ_N(x′)). Conditions (5) and (8) are always satisfied by a Mercer’s kernel. This can be readily seen by verifying that for a function K which can be written as in (10), with x = x_i and x′ = x_j, inequality (9) is true with no restriction on the α_i. We are now in a position to deal with the question raised before. In the case of binary classification, for example, it is well known [15] that a weaker condition on K is given by the notion of conditionally positive definiteness. We recall that
a function K is conditionally positive definite (of order 1) if K satisfies inequality (9) under the constraint ∑_i α_i = 0. A simple example of a conditionally positive definite function which is not positive definite is

K(x, x′) = −‖x − x′‖² .   (11)

Plugging this expression in (9) and using the fact that the average of the α_i vanishes, after simple algebraic manipulations one obtains

− ∑_{i=1}^{n} ∑_{j=1}^{n} α_i α_j ‖x_i − x_j‖² = 2 ‖ ∑_{i=1}^{n} α_i x_i ‖² ≥ 0 .
A conditionally positive definite function can thus be used as a legitimate kernel for binary classification without being a Mercer’s kernel, because it always leads to a kernel matrix satisfying (5). For a thorough discussion of the relation between conditionally positive definite functions, positive definite functions, and Mercer’s theorem see [15]. More on conditionally positive definite functions and the relation with distance matrices, instead, can be found in [11]. In the case of novelty detection, a condition on K weaker than positive definiteness can be easily obtained by noticing that any function K which takes on only nonnegative values leads to a kernel matrix trivially satisfying (8). This observation widens considerably the spectrum of the functions which can be chosen as a kernel for novelty detection by including all similarity measures taking on nonnegative values. In summary, a legitimate kernel for binary classification can be obtained by requiring conditionally positive definiteness, whilst for novelty detection it is sufficient to require point-wise nonnegativity. We can now begin our discussion on kernels for images.
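The algebraic identity above can also be checked numerically; the following sketch (ours, purely illustrative) draws zero-sum coefficient vectors at random and verifies that the quadratic form of condition (5) never becomes negative for K(x, x′) = −‖x − x′‖²:

import numpy as np

def quadratic_form_under_zero_sum(points, trials=1000, seed=0):
    """Monte-Carlo check of condition (5) for K(x, x') = -||x - x'||^2:
    returns the smallest value of sum_ij alpha_i alpha_j K_ij over random
    coefficient vectors with zero sum."""
    rng = np.random.default_rng(seed)
    X = np.asarray(points, dtype=float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # ||x_i - x_j||^2
    K = -sq
    worst = np.inf
    for _ in range(trials):
        a = rng.standard_normal(len(X))
        a -= a.mean()                                      # enforce sum(alpha) = 0
        worst = min(worst, a @ K @ a)
    return worst

print(quadratic_form_under_zero_sum(np.random.rand(6, 3)))  # nonnegative up to rounding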
3   Histogram Intersection Kernel
In this section we investigate a kernel function which might be very useful in histogram based image classification problems. Possibly the simplest way to represent color information in images is provided by histograms. Different color spaces (RGB, HSV, HSI, etc.) give rise to different representations. Histogram intersection is a technique proposed in [17] for color indexing with application to object recognition. From the reported results and subsequent works we know that histogram intersection is an effective representation which makes it possible to build reasonably effective color-based recognition systems. Intuitively, the color histogram intersection Kint measures the degree of similarity between two color histograms. This similarity measure is well suited to deal with image color and scale changes and can be successfully used even in the presence of non-segmented objects. Color histogram intersection can be computed efficiently and adapted to search for partially occluded objects in images.
3.1   Is Histogram Intersection a Mercer’s Kernel?
In this section we demonstrate that histogram intersection is a Mercer’s kernel. We prove it by showing that histogram intersection is an inner product in a suitable feature space. Without loss of generality we consider the simpler case of 1-D images of N pixels. We denote with A and B the histograms of images Aim and Bim. Both histograms consist of m bins, and the i-th bin for i = 1, ..., m is denoted with A_i and B_i respectively. By construction we have

∑_{i=1}^{m} A_i = N  and  ∑_{i=1}^{m} B_i = N .

The intersection Kint between histograms A and B can be written as

Kint(A, B) = ∑_{i=1}^{m} min{A_i, B_i}   (12)
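A direct implementation of (12) is immediate; the following sketch (ours) assumes the two histograms have the same number of bins and equal total mass:

import numpy as np

def histogram_intersection(A, B):
    """Histogram intersection kernel (12): sum over bins of min(A_i, B_i)."""
    A, B = np.asarray(A), np.asarray(B)
    return np.minimum(A, B).sum()

# Two 4-bin histograms of images with N = 10 pixels each
print(histogram_intersection([4, 3, 2, 1], [2, 2, 5, 1]))  # 2 + 2 + 2 + 1 = 7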
We now represent A with an N × m-dimensional binary vector A defined as

A = ( 1, ..., 1, 0, ..., 0, 1, ..., 1, 0, ..., 0, ..., 1, ..., 1, 0, ..., 0 ) ,   (13)

where, for each bin i = 1, ..., m, the corresponding block contains A_i ones followed by N − A_i zeros,
Comparison with Classical Kernels
We now compare the classification results obtained using the histogram intersection kernel against classic off-the-shelf kernels on an image classification problem: indoor-outdoor image classification, a problem with interesting applications in the film development industry and color image printing. We trained a number of
Image Kernels
89
Table 1. Recognition rates (r.r.) for SVMs with different kernels on the indooroutdoor classification problem kernel r.r. (%) histogram intersection 93.1 linear 88.8 2-nd deg polynomial 89.2 3-rd deg polynomial 89.4 4-th deg polynomial 88.1 Gaussian kernel (σ = 0.1) 89.1 Gaussian kernel (σ = 0.3) 86.5 Gaussian kernel (σ = 0.5) 87.8 Gaussian kernel (σ = 0.9) 88.9
SVMs for binary classification on a set of 300 indoor and 300 outdoor images1 of various size – typically of 104 ÷ 105 pixels. Each image was used to construct a color histogram in the HSV (HueSaturation-Value) color space consisting of 15 × 15 × 15 bins. The recognition rates on a test set of 123 indoor images and 260 outdoor images, obtained using the sign of expression (4) for various kernels, are shown in Table 1. As it can be seen by inspection of Table 1, the recognition rate of the color histogram intersection kernel, slightly above 93%, appears to be 3% ÷ 4% better than the best general purpose kernels (either polynomial or Gaussian). These results can be compared to those reported in [18] in which a fairly complex classification procedure was proposed and tested on a different database reaching a recognition rate slightly better than 90%. Overall, the good recognition rates obtained by all kernels are presumably due to the fact that the color histogram representation captures an essential aspect of color appearance for this classification problem.
4
Hausdorff Kernels
In this section we study a type of kernels specifically developed to deal with grey level images. In several applications the training set might consist of image sequences in which the spatio-temporal intensity differences between close frames is rather small. At run time, the problem could be to classify a new image as a novel view of the same scene possibly partly occluded or slightly changed due to a number of different reasons. As proposed in [1] one way to tackle this problem is through the use of the novelty detection approach proposed in [2]. 1
Images were downloaded from http://www.benchathlon.net/img/todo/index.html and http://www.cs.washington.edu/research/imagedatabase/groundtruth/.
90
Annalisa Barla et al.
A relatively large number of views of the same object or scene are gathered and used to build the sphere in feature space enclosing the examples. With virtually no preprocessing and without computing the correspondence between image features a kernel able to capture the image similarity between frames is essential. In the remainder of this section we start by describing the essence of the similarity measure proposed in [13], and show how it can be used as a kernel, KHau , for novelty detection. We also show, by means of a counterexample, that this measure does not origin a Mercer’s kernel. Next, we present some experimental results of KHau at work on a real application. This section concludes with the description of a variant of KHau which is a Mercer’s kernel. 4.1
A Hausdorff-Based Kernel for Novelty Detection
Suppose we have two N × N grey level images, Aim and Bim , of which we want to compute the degree of similarity. In order to compensate for small grey level changes or local transformations we define a neighborhood O for each pixel (i, j) and evaluate the expression H(Aim , Bim ) =
N
U ( − |Aim [i, j] − Bim [ˆi, ˆj]|)
(14)
i,j=1
where U is the unit step function, and (ˆi, ˆj) ∈ O denotes the pixel of Bim most similar to Aim [i, j] in O. For each (i, j) the corresponding term in the sum (14) equals 1 if Aim [i, j] − Bim [ˆi, ˆj] ≤ and 0 otherwise. Expression (14) counts the number of pixels (i, j) of Aim the grey value of which differ by no more than > 0 from at least one grey value of Bim over the neighborhood O. The rationale behind (14) is that it evaluates the degree of overlap of two images bypassing the problem of computing pixelwise correspondence. Unless the set O coincides with (i, j), the function H is not symmetric. Symmetry can be restored, for example, by taking the average KHau (Aim , Bim ) =
1 (H(Aim , Bim ) + H(Bim , Aim )) 2
The quantity KHau (Aim , Bim ) can be computed in three steps: 1. Expand the two images Aim and Bim into 3-D binary matrices A and B respectively, the third dimension being the grey value. For example, for A we write 1 if Aim [i, j] = k; (15) A[i, j, k] = 0 otherwise. and similarly for B. Notice that a 3-D matrix like A[i, j, k] for each choice of (i, j) equals 1 only for one value of k.
Image Kernels
91
2. Dilate both matrices by growing their nonzero entries by a fixed amount /2 in the grey value dimension and half the linear size of the neighborhood O ˜ the resulting 3-D dilated binary in the spatial dimensions. Let A˜ and B matrices. This dilation varies according to the degrees of similarity required and the transformations allowed. 3. To obtain KHau (Aim , Bim ) compute the size of the intersections between A ˜ and B and A, ˜ and take the average of the two values obtaining and B, KHau (Aim , Bim ). Thinking of the 3-D binary matrices as (binary) vectors in obvious notation we could also write
1 ˜ +B·A ˜ A·B (16) KHau (Aim , Bim ) = 2 Under appropriate conditions both H and KHau are closely related to the partial directed Hausdorff distance between the binary matrices A and B thought of as 3-D point sets. As a similarity measure, the function KHau in (16) has been shown to have several properties, like tolerance to small changes affecting images due to geometric transformations, viewing conditions, and noise, and robustness to occlusions. Since KHau ≥ 0 on all image pairs, we immediately see that KHau can be used as a kernel for novelty detection. That KHau is not a Mercer’s kernel, instead, can be seen through a counterexample. The three 1-pixel images A1 = 10;
A2 = 12;
A3 = 14,
with = 3 yield KHau (A1 , A2 ) = 1;
KHau (A1 , A3 ) = 0;
KHau (A2 , A3 ) = 1,
and the kernel matrix Kij = KHau (Ai , Aj ), or 110 111 011 has negative determinant. 4.2
Comparison with Classical Kernels
We now proceed to present some experimental comparison between classical kernels and the kernel KHau , on 3-D object representation. The scenario consists of a training stage in which image sequences of a number of 3-D objects are acquired by means of a video camera. The sequences are acquired by moving the camera around each object in the attempt to capture the object’s typical views. For each object the image sequence is used to train an SVM for novelty detection. At run time each novelty detector, for a fixed value of threshold τ of expression (7), can be tested as an object identification system. For a given object choice, a new image is fed to the corresponding novelty detector
92
Annalisa Barla et al.
and – depending on the truth value of (7) – the system decides whether, or not, the new image contains the chosen object. Table 2 presents the results obtained on a problem of identifying a specific object (a statue), given a relatively small training set (148 images) and a test set of 637 positive examples (that is, new images of the same statue) and 3690 difficult negative examples (that is, images of similar statues). Table (2) presents the equal error rates – or the error rates obtained for the value of the threshold τ for which the percentages of false negative and false positive are the same – for several kernels. By inspection, it can clearly be seen that the Hausdorff kernel outperforms the best results obtained with general purpose kernels. This is consistent with the observation that the general purpose kernels are all based on a pointwise match between input points, while the images we used are not in pointwise correspondence. Not surprisingly, the best results for the Hausdorff kernel are obtained allowing for some dilation along the spatial dimensions, dilation which compensates for image misalignment and small grey level changes across the sequence. The last two columns of Table (2) show the maximum and minimum values of each kernel matrix on the training set. The values of the linear and of the sum of square differences kernel of (11) can be used to appreciate the difficulty of the task and the explain the reason of the large gap between the behavior
Table 2. Equal error rates (e.e.r.) for SVMs with different kernels for an object identification task. For the Hausdorff kernel, different values for the size of the dilation ∆ along the grey level and spatial dimensions respectively are shown, and the matrix entries were scaled by a factor n = 10−4 . For the linear and polynomial kernels, the inputs were scaled by a factor n = 10−4 . For the Gaussian kernel, we tried different σs. The last two rows report the maximum and minimum value of each matrix in the training set kernel e.e.r. (%) Kmax Kmin Hausdorff ∆ = (0,0,0) 19 4.176 0.010 Hausdorff ∆ =(1,0,0) 15 4.176 0.033 Hausdorff ∆ =(1,1,1) 4 4.176 0.257 Hausdorff ∆ =(3,1,1) 3 4.176 0.457 Hausdorff ∆ =(4,3,3) 4 4.176 1.489 Sum of sq. diff. 48 0.000 -5231 linear 36 0.734 0.320 2-nd deg polynomial 37 2.009 0.743 3-rd deg polynomial 39 4.219 1.300 Gaussian (σ = 1000) 50 1.000 0.000 Gaussian (σ = 3000) 28 1.000 0.055 Gaussian (σ = 10000) 47 1.000 0.770
Image Kernels
93
of the Hausdorff kernel and the general purpose kernels. The difference between the maximum and minimum values for both kernels tells us that all the training images are strongly correlated. Since each image consists of 58 × 72 pixels, from the minimum value of the sum of square difference kernel matrix, for example, we gather that the average grey value difference between the two most different training images is less than 2 grey values. 4.3
A Hausdorff-Based Mercer’s Kernel
Let us now show that by controlling the amount of dilation it is possible to obtain a modified similarity measure which is a Mercer’s kernel. Neglecting “boundary entries” and denoting with n the number of entries in each neighborhood, the key is to redefine the dilation step as A[i , j , k ] ≥ A[i, j, k] (17) A˜ [i, j, k] = A[i, j, k] + 1/n i ,j ,k
where the sum ranges through the entries the neighborhoods of which contain the entry (i, j, k). Note that if A˜ [i, j, k] = 0 thenA˜ [i, j, k] ≥ 1/n. Unlike the previous definition, the new dilation is obtained combining linearly various contributions with a weight strictly less than 1. We now define ˜ H (Aim , Bim ) = A · B and show that the similarity measure
1 1 ˜ + B · A ˜ (H (Aim , Bim ) + H (Bim , Aim )) = A·B 2 2 (18) is an inner product and thus a Mercer’s kernel. We immediately see that symmetry and linearity follow easily from the fact is computed using standard inner products. For the nonnegativity that KHau we need to show that ˜≥0 (19) M·M (Aim , Bim ) = KHau
for an arbitrary matrix M (not necessarily binary or corresponding to an image) ˜ obtained as in (17). and its dilation M If we denote with Mp the p-th component of the P -dimensional vector M (in some 1-to-1 correspondence with the entry (i, j, k) of M ), plugging (17) in (19) with A = M gives P P n 1 2n Mp2 + 2 Mp Mqp,i 2n p=1 p=1 i=1 where the components Mqp,1 , ..., Mqp,n identify the n components of M corresponding to the entries (i , j , k ) in (17). Neglecting the (1/2n) factor, if we
94
Annalisa Barla et al.
expand and rearrange the terms of the above sums we obtain M12 + ... + M12 + M22 + ... + M22 +... + MP2 + ... + MP2 2n 2n 2n
+ 2M1 Mq1,1 + ... + Mq1,n + 2M2 Mq2,1 + ... + Mq2,n + ...
+ 2MP MqP,1 + ... + MqP,n . Since for each p = 1, .., P there are n mixed products of the type Mp Mqp,i i = 1, ..., n and n of the type Mqp,i Mp , is easy to conclude that the whole expression can be written as a sum of P × n squares, with each square term of the type (Mp + Mq )2 thereby proving that inequality (19) is always true. Two comments are in order. First, it is clear that the number and relative weight of the various terms make it possible to obtain a sum of squares because the dilation contributions originating from the same entry sum up to 1. This is actually the only constraint, as different and space variant dilation contributions (provided their sum does not exceed 1) would not change the substance of the proof. Notice that if the sum exceeds 1 the nonnegativity property is lost. As a second final remark we observe that for relatively large images and dilation neighborhood of small size, even the function KHau , though not a positive definite function, in practice often leads to kernel matrices which are positive semidefinite. We conclude this section by discussing the relation between the Mercer’s kernel of (18) and the Hausdorff similarity measure defined by (14). First we denote with δ(i, j) the Kronecker δAim [i,j]Bim [i,j] , which equals 1 if Aim [i, j] = Bim [i, j] and 0 otherwise. Then, using the same notation of before for H (Aim , Bim ) we have N
wij δ(i, j) + (1 − δ(i, j))U ( − |Aim [i, j] − Bim [ˆi, ˆj]|) (20) i,j=1
with
˜ [i, j, Aim [i, j]]. wij = B
Similarly to (14), also in this case expression (20) counts the number of pixels (i, j) of Aim the grey value of which differ by no more than > 0 from at least one grey value of Bim over the neighborhood O. Two are the main differences between expressions (14) and (20). First, consistently with the new definition of dilation, in (20) the case in which the two images takes on exactly the same value at the same location is treated separately. The second difference is that to each match is also attributed a weight proportional to the value of the dilation. The weight wij for exact matches is never less than 1 because if Aim [i, j] = Bim [i, j] using (17) and (15) with A = B we have wij ≥ B[i, j, Aim [i, j]] = 1.
Image Kernels
95
˜ (or wij ) is at least For nonexact matches, instead, B[i, j, Aim [i, j]] = 0 but B 1/n. The Kronecker δ ensures that for each pair (i, j) only one of the two unit step functions is strictly positive.
5
Conclusions
In this paper we have presented a few kernels specifically designed to deal with image classification problem. While discussing their properties we have clarified a few issues relative to the mathematical requirements a function needs to satisfy in order to be a legitimate kernel for Support Vector methods for binary classification (standard SVMs) and novelty detection. The two main results of this paper are the proof that histogram intersection is a Mercer’s kernel and the fact that the Hausdorff similarity measure proposed in [1], which was correctly used as a kernel for novelty detection, can be modified to become a Mercer’s kernel. Our results confirm the importance of kernel design for exploiting the prior knowledge on the problem domain in the development of effective learning systems. Acknowledgments This work has been partially supported by the EU Project Kermit and by the I.N.F.M. Advanced Research Project MAIA. We thank John Shawe-Taylor, Yoram Singer, Tommy Poggio, and Sayan Mukerjee for many discussions and Gianluca Pozzolo for performing some of the experiments on color histogram intersection.
References 1. A. Barla, F. Odone, and A. Verri. Hausdorff kernel for 3D object acquisition and detection. In Proceedings of the European Conference on Computer Vision, 2002. 89, 95 2. C. Campbell and K. P. Bennett. A linear programming approach to novelty detection. Advances in Neural Information Processing Systems, 13, 2001. 83, 85, 89 3. I. Chapelle, P. Haffner, and V. Vapnik. SVMs for histogram-based image classification. IEEE Transactions on Neural Networks, special issue on Support Vectors, 1999. 84 4. R. Courant and D. Hilbert. Methods of Mathematical Physics, Vol. 2. Interscience, London UK, 1962. 86 5. N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, 2000. 83 6. T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector machines. Advances in Computational Mathematics, 13:1–50, 2000. 84 7. G. Guodong, S. Li, and C. Kapluk. Face recognition by support vector machines. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 196–201, 2000. 83
8. B. Heisele, P. Ho, , and T. Poggio. Face recognition with support vector machines: global versus component-based approach. In Proceedings of the Eigth International Conference on Computer Vision, pages 688–694, 2001. 83 9. B. Heisele, T. Serre, M. Pontil, and T. Poggio. Component-based face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 657–662, 2001. 83 10. K. Jonsson, J. Matas, J. Kittler, and Y. Li. Learning support vectors for face verification and recognition. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, 2000. 83 11. C. A. Micchelli. Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approxive, 2:11–22, 1986. 87 12. A. Mohan, C. Papageorgiou, and T. Poggio. Example-based object detection in images by components. IEEE Trans. Patt. Anal. Mach. Intell., 23:349–361, 2001. 83 13. F. Odone, E. Trucco, and A. Verri. General purpose matching of grey level arbitrary images. In G. Sanniti di Baja C. Arcelli, L. P. Cordella, editor, 4th International Workshop on Visual Forms, Lecture Notes on Computer Science LNCS 2059, pages 573–582. Springer, 2001. 83, 90 14. C. Papageorgiou and T. Poggio. A trainable system for object detection. International Journal of Computer Vision, 38(1):15–33, 2000. 83 15. T. Poggio, S. Mukherjee, R. Rifkin, A. Rakhlin, and A. Verri. b, 2002. 86, 87 16. M. Pontil and A. Verri. Support vector machines for 3d object recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 637–646, 1998. 83 17. M. J. Swain and D. H. Ballard. Color indexing. International Journal of Computer Vision, 7(1):11–32, 1991. 87 18. M. Szummer and R. W. Picard. Indoor-outdoor image classification. In IEEE Intl Workshop on Content-based Access of Image and Video Databases, 1998. 89 19. V. Vapnik. The nature of statistical learning Theory. John Wiley and sons, New York, 1995. 83, 84, 85 20. V. Vapnik. Statistical learning theory. John Wiley and sons, New York, 1998. 83, 84
Combining Color and Shape Information for Appearance-Based Object Recognition Using Ultrametric Spin Glass-Markov Random Fields

B. Caputo¹, Gy. Dorkó², and H. Niemann²

¹ Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, 94115 California, USA
² Department of Computer Science, Chair for Pattern Recognition, Erlangen-Nuremberg University, Martenstrasse 3, 91058 Erlangen, Germany
Abstract. Shape and color information are important cues for object recognition. An ideal system should give the option to use both forms of information, as well as the option to use just one of the two. We present in this paper a kernel method that achieves this goal. It is based on results of statistical physics of disordered systems combined with Gibbs distributions via kernel functions. Experimental results on a database of 100 objects confirm the effectiveness of the proposed approach.
1 Introduction
Object recognition is a challenging topic of research in computer vision [8]. Many approaches use appearance-based methods, which consider the appearance of objects using two-dimensional image representations [9,15,23]. Although it is generally acknowledged that both color and geometric (shape) information are important for object recognition [11,22], few systems employ both. This is because no single representation is suitable for both types of information. Traditionally, the solution proposed in the literature consists of building a new representation containing both color and shape information [11,22,10]. Systems using this kind of approach show very good performance [11,22,10]. This strategy solves the problems related to the common representation; a major drawback is that the introduction of a new representation does not permit the use of just color or just geometric information alone, depending on the task considered. A huge literature shows that color-only or shape-only representations work very well for many applications (see for instance [8,9,21,23]). Thus, the goal should be a system that uses both forms of information while keeping them distinct, allowing the flexibility to use the information sometimes combined, sometimes separate, depending on the application considered. Another important point is the dimension of the feature vector relative to the new representation. If it carries as much information about color and shape as separate representations do, then we must expect the novel representation to have more parameters than each separate representation alone, with all the risks
of a curse of dimensionality effect. If the dimension of the new representation vector is kept under control, this means that the representation contains less color and shape information than the single representations. In this paper we propose a new strategy for this problem. Given a shape-only and a color-only representation, we focus attention on how they can be combined together as they are, rather than on defining a new representation. To this end, we use a new kernel method: Spin Glass-Markov Random Fields (SG-MRF) [2]. SG-MRFs are a new class of MRFs that integrate results of the statistical physics of disordered systems with Gibbs probability distributions via a nonlinear kernel mapping. The resulting model, using a Hopfield energy function [1], has been shown to be very effective for appearance-based object recognition and to be remarkably robust to noise and occlusion. Here we extend SG-MRF to a new SG-like energy function, inspired by the ultrametric properties of the SG phase space. We will show that this energy can be kernelized as the Hopfield one and thus can be used in the SG-MRF framework. The structure of this energy provides a natural framework for combining shape and color representations together, without any need to define a new representation. There are several advantages to this approach:
– it permits us to use existing and well tested representations both for shape and color information;
– it permits us to use this knowledge in a flexible manner, depending on the task considered.
To the best of our knowledge, there are no previous similar approaches to the problem of combining shape and color information for object recognition. Experimental results show the effectiveness of the newly proposed kernel method. The paper is organized as follows: after a review of the existing literature (Section 2), we will define the general framework for appearance-based object recognition (Section 3) and Spin Glass-Markov Random Fields (Section 4). Section 5 will present the new ultrametric energy function, show how it can be used in a SG-MRF framework (Section 5.1) and how it can be used for combining shape and color representations for appearance-based object recognition (Section 5.2). Experiments are presented in Section 6; the paper concludes with a summary discussion.
2 Related Work
Appearance-based object recognition is an alternative approach to geometry-based methods [8]. In an appearance-based approach [17] the objects are modeled by a set of images, and recognition is performed by matching the input image directly to the model set. Swain and Ballard [23] proposed representing an object by its color histogram. The matching is performed using histogram intersection. The method is robust to changes in orientation, scale, partial occlusion and changes of the viewing position. Its major drawbacks are its sensitivity to lighting conditions, and that many object classes cannot be described only by color. Therefore, color histograms have been combined with geometric information
(see for instance [22,10]). In particular, the SEEMORE system [11] uses 102 different feature channels which are each subsampled and summed over a pre-segmented image region. The 102 channels comprise color, intensity, corner, contour shape and Gabor-derived texture features. Strikingly good experimental results are given on a database of 100 pre-segmented objects of various types. Most interestingly, a certain ability to generalize outside the database has been observed. Schiele and Crowley [21] generalized this method by introducing multidimensional receptive field histograms to approximate the probability density function of local appearance. The recognition algorithm calculates probabilities for the presence of objects based on a small number of vectors of local neighborhood operators such as Gaussian derivatives at different scales. The method obtained good object hypotheses from a database of 100 objects using a small number of vectors. Principal component analysis has been widely applied for appearance-based object recognition [24,14,7,19]. The attractiveness of this approach is due to the representation of each image by a small number of coefficients, which can be stored and searched efficiently. However, methods from this category have to deal with the sensitivity of the eigenvector representation to changes of individual pixel values, due to translation, scale changes, image plane rotation or light changes. Several extensions have been investigated in order to handle complete parameterized models of objects [14], to cope with occlusion [7,19] and to be robust to outliers and noise [9]. Recently, Support Vector Machines (SVM) have gained in interest for appearance-based object recognition [5,16]. Pontil and Verri [18] examined the robustness of SVM to noise, bias in the registration and a moderate amount of partial occlusion, obtaining good results. Roobaert et al. [20] examined the generalization capability of SVM when just a few views per object are available.
3 Probabilistic Appearance-Based Object Recognition
Appearance-based object recognition methods consider images as feature vectors. Let x ≡ [x_ij], i = 1, ..., N, j = 1, ..., M, be an M × N image. We will consider each image as a feature vector x ∈ G ≡ ℝ^m, with m = M·N. Assume we have k different classes Ω_1, Ω_2, ..., Ω_k of objects, and that for each object a set of n_j data samples, d_j = {x_{j1}, x_{j2}, ..., x_{jn_j}}, j = 1, ..., k, is given. We will assign each object to a pattern class Ω_1, Ω_2, ..., Ω_k. The object classification procedure will be a discrete mapping that assigns a test image, showing one of the objects, to the pattern class the presented object corresponds to. How the object class Ω_j is represented, given a set of data samples d_j (relative to that object class), varies for different appearance-based approaches: it can consider shape information only, color information only, or both (see Section 2 for a review). Here we will concentrate on probabilistic appearance-based methods. The probabilistic approach to appearance-based object recognition considers the image views of a given object Ω_j as random vectors. Thus, given the set
of data samples d_j and assuming they are a sufficient statistic for the pattern class Ω_j, the goal will be to estimate the probability distribution P_{Ω_j}(x) that has generated them. Then, given a test image x, the decision will be made using a Maximum A Posteriori (MAP) classifier:

j^* = \arg\max_j P_{\Omega_j}(x) = \arg\max_j P(\Omega_j \mid x),

and, using Bayes' rule,

j^* = \arg\max_j P(x \mid \Omega_j)\, P(\Omega_j),   (1)

where P(x | Ω_j) are the Likelihood Functions (LFs) and P(Ω_j) are the prior probabilities of the classes. In the rest of the paper we will assume that the prior P(Ω_j) is the same for all object classes; thus the Bayes classifier (1) simplifies to

j^* = \arg\max_j P(x \mid \Omega_j).   (2)
Many probabilistic appearance-based methods do not model the pdf on raw pixel data, but on features extracted from the original views. The extension of equation (2) to this case is straightforward: consider a set of features {h_{j1}, h_{j2}, ..., h_{jn_j}}, j = 1, ..., k, where each feature vector h_{jn_j} is computed from the image x_{jn_j}, h_{jn_j} = T(x_{jn_j}), h_{jn_j} ∈ G ≡ ℝ^m. The Bayes classifier (2) will be in this case

j^* = \arg\max_j P(h \mid \Omega_j).   (3)
Probabilistic methods for appearance-based object recognition have the double advantage of being theoretically optimal from the point of view of classification, and of being robust to degradation of the data due to noise and occlusions [21]. A major drawback of these approaches is that the functional form of the probability distribution of an object class Ω_j is not known a priori. Assumptions have to be made regarding the parametric form of the probability distribution, and parameters have to be learned in order to tailor the chosen parametric form to the pattern class represented by the data d_j. Thus, the performance will depend on the goodness of the assumption on the parametric form, and on whether the data set d_j is a sufficient statistic for the pattern class Ω_j and thus permits us to estimate properly the distribution's parameters.
4 Spin Glass-Markov Random Fields
A possible strategy for modeling the parametric form of the probability function is to use Gibbs distributions within a Markov Random Field framework. MRF provides a probabilistic foundation for modeling spatial interactions on lattice systems or, more generally, on interacting features. It considers each element of the random vector h as the result of a labeling of all the sites representing h,
with respect to a given label set. The MRF joint probability distribution is given by

P(h) = \frac{1}{Z} \exp(-E(h)), \qquad Z = \sum_{\{h\}} \exp(-E(h)).   (4)
The normalizing constant Z is called the partition function, and E(h) is the energy function. P(h) measures the probability of the occurrence of a particular configuration h; the more probable configurations are those with lower energies. Thus, using MRF modeling for appearance-based object recognition, eq. (2) will become

j^* = \arg\max_j P(h \mid \Omega_j) = \arg\min_j E(h \mid \Omega_j).   (5)
Only a few MRF approaches have been proposed for high level vision problems such as object recognition [26,13], due to the modeling problem for MRFs on irregular sites (for a detailed discussion about this point, we refer the reader to [2]). Spin Glass-Markov Random Fields overcome this limitation and can be effectively used for appearance-based object recognition [2]. To the best of our knowledge, SG-MRF is the first and only successful MRF-based approach to appearance-based object recognition. The rest of this Section will review SG-MRFs (Section 4.1) and how they can be derived from results of the statistical physics of disordered systems (Section 4.2). Section 5 will show how these results can be extended to a new class of energy function and how this extension makes it possible to use this approach for appearance-based object recognition using shape and color features combined together.

4.1 Spin Glass-Markov Random Fields: Model Definition
Spin Glass-Markov Random Fields (SG-MRFs) [2] are a new class of MRFs which connect SG-like energy functions (mainly the Hopfield one [1]) with Gibbs distributions via a non linear kernel mapping. The resulting model overcomes many difficulties related to the design of fully connected MRFs, and enables us to use the power of kernels in a probabilistic framework. Consider k object classes Ω_1, Ω_2, ..., Ω_k, and for each object a set of n_j data samples, d_j = {x_{j1}, ..., x_{jn_j}}, j = 1, ..., k. We will suppose to extract, from each data sample d_j, a set of features {h_{j1}, ..., h_{jn_j}}. For instance, h_{jn_j} can be a color histogram computed from x_{jn_j}. The SG-MRF probability distribution is given by

P_{SG-MRF}(h \mid \Omega_j) = \frac{1}{Z} \exp\left[-E_{SG-MRF}(h \mid \Omega_j)\right], \qquad Z = \sum_{\{h\}} \exp\left[-E_{SG-MRF}(h \mid \Omega_j)\right],   (6)

with
E_{SG-MRF}(h \mid \Omega_j) = - \sum_{\mu=1}^{p_j} \left[ K(h, \tilde{h}^{(\mu_j)}) \right]^2,   (7)
where the function K(h, h̃^{(µ_j)}) is a Generalized Gaussian kernel [27]:

K(x, y) = \exp\{-\rho\, d_{a,b}(x, y)\}, \qquad d_{a,b}(x, y) = \sum_i |x_i^a - y_i^a|^b.   (8)

The {h̃^{(µ_j)}}_{µ=1}^{p_j}, j ∈ [1, k], are a set of vectors selected (according to a chosen ansatz [2]) from the training data that we call prototypes. The number of prototypes per class must be finite, and they must satisfy the condition

K(\tilde{h}^{(i)}, \tilde{h}^{(l)}) = 0,   (9)
for all i, l = 1, ..., p_j, i ≠ l, and j = 1, ..., k. Note that SG-MRFs are defined on features rather than on raw pixel data. The sites are fully connected, which results in learning the neighborhood system from the training data instead of choosing it heuristically. As we model the probability distribution on feature vectors and not on raw pixels, SG-MRF is not a generative model. Another key characteristic of the model is that in SG-MRF the functional form of the energy is given by construction. This is achieved using results from the statistical physics of Spin Glasses. The next Section sketches the theoretical derivation of the model. The interested reader will find a more detailed discussion in [2].
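As a concrete illustration (not the authors' implementation), the following Python/NumPy sketch puts together the generalized Gaussian kernel of (8), the energy of (7) and the argmin rule of (5); the function names, parameter values and dictionary-based data layout are our own assumptions.

```python
import numpy as np

def gen_gauss_kernel(x, y, rho=0.1, a=1.0, b=2.0):
    # generalized Gaussian kernel of (8): K(x, y) = exp(-rho * sum_i |x_i^a - y_i^a|^b)
    return np.exp(-rho * np.sum(np.abs(x ** a - y ** a) ** b))

def sgmrf_energy(h, prototypes, **kernel_params):
    # energy of (7): minus the sum of squared kernel values to the class prototypes
    return -sum(gen_gauss_kernel(h, p, **kernel_params) ** 2 for p in prototypes)

def sgmrf_classify(h, class_prototypes, **kernel_params):
    # MAP rule of (5): the predicted class is the one with the lowest energy
    energies = {label: sgmrf_energy(h, protos, **kernel_params)
                for label, protos in class_prototypes.items()}
    return min(energies, key=energies.get)
```

Under the naive ansatz introduced in the next subsection, class_prototypes[label] would simply contain the feature vectors of all training views of that class.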
4.2 Spin Glass-Markov Random Fields: Model Derivation
Consider the following energy function:

E = - \sum_{(i,j)} J_{ij} s_i s_j, \qquad i, j = 1, \dots, N,   (10)
Fig. 1. Gaussian kernels map the data to an infinite dimension hyper-sphere of radius unity. Thus, with a proper choice of ρ, it is possible to orthogonalize all the training data in that space
where the s_i are random variables taking values in {±1}, s = (s_1, ..., s_N) is a configuration, and J = [J_{ij}], (i, j) = 1, ..., N, is the connection matrix, J_{ij} ∈ {±1}. Equation (10) is the most general Spin Glass (SG) energy function [1,12]; the study of the properties of this energy for different Js has been a lively area of research in the statistical physics community for the last 25 years. An important branch in the research area of the statistical physics of SG is represented by the application of this knowledge to modeling brain functions. The simplest and most famous SG model of an associative memory was proposed by Hopfield; it assumes J_{ij} to be given by

J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)},   (11)
where the p sets {ξ^{(µ)}}_{µ=1}^{p} are given configurations of the system (that we call prototypes) having the following properties: (a) ξ^{(µ)} ⊥ ξ^{(ν)}, ∀µ ≠ ν; (b) p = αN, α ≤ 0.14, N → ∞. Under these assumptions it has been proved that the {ξ^{(µ)}}_{µ=1}^{p} are the absolute minima of E [1]; for α > 0.14 the system loses its storage capability [1]. These results can be extended from the discrete to the continuous case (i.e., s ∈ [−1, +1]^N, see [6]); note that this extension is crucial in the construction of the SG-MRF model. It is interesting to note that the energy (10), with the prescription (11), can be written as

E = -\frac{1}{N} \sum_{i,j} \sum_{\mu} \xi_i^{(\mu)} \xi_j^{(\mu)} s_i s_j = -\frac{1}{N} \sum_{\mu} (\xi^{(\mu)} \cdot s)^2.   (12)

Equation (12) depends on the data through scalar products, thus it can be kernelized, that is, it can be written as

E_{KAM} = -\frac{1}{N} \sum_{\mu} \left[K(\xi^{(\mu)}, s)\right]^2.   (13)

The idea of substituting a kernel function, representing the scalar product in a higher dimensional space, into algorithms that depend only on scalar products between data is the so-called kernel trick [25], which was first used for Support Vector Machines (SVM); in the last few years theoretical and experimental results have increased the interest within the machine learning and computer vision community regarding the use of kernel functions in methods for classification, regression, clustering, density estimation and so on. We call the energy given by equation (13) the Kernel Associative Memory (KAM). KAM energies are of interest in two different research fields: in the formulation given by equation (13) it is a nonlinear and higher order generalization of the Hopfield energy function [4]. The other research field is computer vision, on which we concentrate our attention here. Indeed, we can look at equation (13) as follows:
E = -\frac{1}{N} \sum_{\mu} (\xi^{(\mu)} \cdot s)^2 = -\frac{1}{N} \sum_{\mu} \left[\Phi(h^{\mu}) \cdot \Phi(h)\right]^2 = -\frac{1}{N} \sum_{\mu} \left[K(h^{\mu}, h)\right]^2   (14)
provided that Φ is a mapping such that (see Figure 1)

\Phi : G \equiv \mathbb{R}^m \to H \equiv [-1, +1]^N, \quad N \to \infty,

which in terms of kernels means

K(h, h) = 1, \ \forall h \in \mathbb{R}^m, \qquad \dim(H) = N, \ N \to \infty.   (15)
If we can find such a kernel, then we can use the KAM energy, with all its properties, for MRF modeling. As the energy is fully connected and the minima of the energy are built by construction, the usage of this energy overcomes all the modeling problems relative to irregular sites for MRFs [2]. Conditions (15) are satisfied by generalized Gaussian kernels (8). Regarding the choice of prototypes, given a set of n_κ training examples {x_{κ1}, x_{κ2}, ..., x_{κn_κ}} relative to class Ω_κ, the condition to be satisfied by the prototypes is ξ^{(µ)} ⊥ ξ^{(ν)}, ∀µ ≠ ν, in the mapped space H, which becomes Φ(h̃^{(µ)}) ⊥ Φ(h̃^{(ν)}), ∀µ ≠ ν, in the data space G. The measure of the orthogonality of the mapped patterns is the kernel function (8) that, due to the particular properties of Gaussian kernels, has the effect of orthogonalizing the patterns in the space H (see Figure 2). Thus, the orthogonality condition is satisfied by default: if we do not want to introduce further criteria for the choice of prototypes, the natural conclusion is to take all the training samples as prototypes. This approximation is called the naive ansatz. Note that when a single feature vector is computed from each view, the naive ansatz approximation becomes exact.
Fig. 2. The kernel trick maps the data from a lower dimensional space G ≡ ℝ^m to a higher dimensional space H ≡ [−1, +1]^N, N → ∞. This permits us to use the H-L energy in a MRF framework
5 Ultrametric Spin Glass-Markov Random Fields
SG-MRFs, with the Hopfield energy function (10)-(11), have been successfully applied to appearance-based object recognition. The modeling has been done on
raw pixels [3], on shape representations [2] and on color representations [4]. In all cases, it has been shown to be very effective and to have remarkable robustness properties. A major drawback of the Hopfield energy function is the condition of orthogonality on the set of prototypes. When the modeling is done on the raw pixel data (as in [3]), or when a single feature vector is computed from a single image (as in [2,4]), the naive ansatz approximation becomes exact. But there are many applications in which the number of prototypes (that is, the number of features extracted from a single image) can be > 1. This is the case for example in most texture classification problems; it is also the case if we want to combine shape and color features, as is our purpose here. Two problems arise in this case: first, whether it is possible or not to combine these representations; second, assuming the answer is yes, whether the property of generalized Gaussian kernels is sufficient to ensure the orthogonality of the prototypes. In other words, the naive ansatz can turn out to be in some cases too rough an approximation. The solution we propose consists in kernelizing a new SG energy function that allows us to store non mutually orthogonal prototypes. As this energy was originally derived taking into account the ultrametric properties of the SG configuration space, we will refer to it as the ultrametric energy. The interested reader will find a complete description of ultrametricity and of the ultrametric energy in [1,12]. In the rest of the Section we will present the ultrametric energy and we will show how it can be kernelized (Section 5.1); we will also show how it can be used for appearance-based object recognition using shape and color information contained in different representations.
5.1 Ultrametric Spin Glass-Markov Random Fields: Model Derivation
Fig. 3. Hierarchical structure induced by the ultrametric energy function
Consider the energy function (10)

E = - \sum_{ij} J_{ij} s_i s_j

with the following connection matrix:

J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)} \Big[ 1 + \frac{1}{\Delta(a_\mu)} \sum_{\nu=1}^{q_\mu} (\eta_i^{(\mu\nu)} - a_\mu)(\eta_j^{(\mu\nu)} - a_\mu) \Big]   (16)

with

\xi_i^{(\mu\nu)} = \xi_i^{(\mu)} \eta_i^{(\mu\nu)}, \qquad a_\mu^2 = \frac{1}{N} \sum_{i=1}^{N} \eta_i^{(\mu\nu)} \eta_i^{(\mu\lambda)}.
This energy induces a hierarchical organization of the stored prototypes ([1], see Figure 3). The set of prototypes {ξ^{(µ)}}_{µ=1}^{p} are stored at the first level of the hierarchy and are usually called ancestors. Each of them will have q_µ descendants {ξ^{(µν)}}_{ν=1}^{q_µ}. The parameter η_i^{(µν)} measures the similarity between ancestors and descendants; the parameter a_µ measures the similarity between descendants. ∆(a_µ) is a normalizing parameter that guarantees that the energy per site is finite. In the rest of the paper we will limit the discussion to the case¹ a_µ² = a². The connection matrix thus becomes:

J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)} \Big[ 1 + \frac{1}{1-a^2} \sum_{\nu=1}^{q_\mu} (\eta_i^{(\mu\nu)} - a)(\eta_j^{(\mu\nu)} - a) \Big]
      = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)} + \frac{1}{N(1-a^2)} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)} \sum_{\nu=1}^{q_\mu} (\eta_i^{(\mu\nu)} - a)(\eta_j^{(\mu\nu)} - a)
= Term1 + Term2.

Term1 is the Hopfield energy (10)-(11); Term2 is a new term that allows us to store as prototypes patterns correlated with the {ξ^{(µ)}}_{µ=1}^{p}, and correlated between each other. This energy will have p + \sum_{\mu=1}^{p} q_\mu minima, of which p are absolute (ancestor level) and \sum_{\mu=1}^{p} q_\mu are local (descendant level). When a → 0, the ultrametric energy reduces to a hierarchical organization of Hopfield energies; it is remarkable that in this case the prototypes at each level of the hierarchy must be mutually orthogonal, but they can be correlated between different levels. Note also that we limited ourselves to two levels, but the energy can be easily extended to three or more. For a complete discussion of the properties of this energy, we refer the reader to [1].

¹ Considering the general case would not add anything from the conceptual point of view and would make the notation even heavier.
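As an illustration of this construction, the sketch below (Python/NumPy, our own; the array layouts are assumptions) builds the connection matrix of (16) in the simplified case a_µ = a with ∆(a) = 1 − a².

```python
import numpy as np

def ultrametric_J(ancestors, descendants, a):
    """Connection matrix of (16) with a_mu = a and Delta(a) = 1 - a^2.
    `ancestors` is a (p, N) array of prototypes xi^(mu); `descendants[mu]`
    is a (q_mu, N) array of the variables eta^(mu nu)."""
    p, N = ancestors.shape
    J = np.zeros((N, N))
    for mu in range(p):
        xi = ancestors[mu]
        inner = np.ones((N, N))                          # the "1 + ..." factor
        for eta in descendants[mu]:
            inner += np.outer(eta - a, eta - a) / (1.0 - a ** 2)
        J += np.outer(xi, xi) * inner                    # xi_i xi_j [1 + ...]
    return J / N
```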
Here we are interested in using this energy in the SG-MRF framework shown in Section 4. To this purpose, we show that the energy (10), with the connection matrix (16), can be written as a function of scalar products between configurations:

E = - \sum_{ij} \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)} \Big[ 1 + \frac{1}{1-a^2} \sum_{\nu=1}^{q_\mu} (\eta_i^{(\mu\nu)} - a)(\eta_j^{(\mu\nu)} - a) \Big] s_i s_j
  = - \Big[ \frac{1}{N} \sum_{\mu=1}^{p} (\xi^{(\mu)} \cdot s)^2 + \frac{1}{N(1-a^2)} \sum_{\mu=1}^{p} \sum_{\nu=1}^{q_\mu} (\xi^{(\mu\nu)} \cdot s)^2 - \frac{2a}{N(1-a^2)} \sum_{\mu=1}^{p} \sum_{\nu=1}^{q_\mu} (\xi^{(\mu)} \cdot s)(\xi^{(\mu\nu)} \cdot s) + \frac{a^2}{N(1-a^2)} \sum_{\mu=1}^{p} \sum_{\nu=1}^{q_\mu} (\xi^{(\mu)} \cdot s)^2 \Big].   (17)
If we assume that a → 0, that is, we impose orthogonality between prototypes at each level of the hierarchy, the energy reduces to

E = - \frac{1}{N} \Big[ \sum_{\mu=1}^{p} (\xi^{(\mu)} \cdot s)^2 + \sum_{\mu=1}^{p} \sum_{\nu=1}^{q_\mu} (\xi^{(\mu\nu)} \cdot s)^2 \Big].   (18)

The ultrametric energy, in the general form (17) or in the simplified form (18), can be kernelized as done for the Hopfield energy and thus can be used in a MRF framework. We call the resulting new MRF model Ultrametric Spin Glass-Markov Random Fields (USG-MRF).

5.2 Ultrametric Spin Glass-Markov Random Fields: Model Application
Consider the probabilistic appearance-based framework described in Section 3. Given a view x_{jn_j}, we will extract two feature vectors from it, h^s_{jn_j} containing shape information and h^c_{jn_j} containing color information. USG-MRF provides a straightforward manner to use the Bayes classifier (3) with both these representations kept separate. We will consider h^c_{jn_j} as the ancestor and h^s_{jn_j} as the descendant; for each level there will be a single prototype, thus the naive ansatz approximation will be exact. The USG-MRF energy function will be in this case

E_{USG-MRF} = - \Big\{ \big[K_c(h^c_{jn_j}, h^c)\big]^2 + \big[K_s(h^s_{jn_j}, h^s)\big]^2 \Big\},   (19)

which leads to the Bayes classifier

j^* = \arg\min_j \Big( - \Big\{ \big[K_c(h^c_{jn_j}, h^c)\big]^2 + \big[K_s(h^s_{jn_j}, h^s)\big]^2 \Big\} \Big).   (20)
The indices c and s on the two kernels acting on the two different representations indicate that it is possible to use different kernels at different levels of the hierarchy. We would also like to remark that this newly introduced model can accommodate the combination of any kind of features, similarly to the color and shape features demonstrated here.
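A minimal sketch of the resulting two-level classifier may help make the procedure concrete. It is written in Python/NumPy (our own choice, not the original implementation); the data layout, the kernel parameter values and the summation of the energies over all stored (color, shape) prototype pairs of a class are illustrative assumptions consistent with (19), (20) and the naive ansatz.

```python
import numpy as np

def gauss_kernel(x, y, rho, a, b):
    # generalized Gaussian kernel, as in (8)
    return np.exp(-rho * np.sum(np.abs(x ** a - y ** a) ** b))

def usgmrf_energy(h_c, h_s, proto_pairs, color_params, shape_params):
    # accumulate the per-prototype energies of (19) over all stored
    # (color, shape) prototype pairs of one class
    e = 0.0
    for p_c, p_s in proto_pairs:
        e -= gauss_kernel(h_c, p_c, *color_params) ** 2
        e -= gauss_kernel(h_s, p_s, *shape_params) ** 2
    return e

def usgmrf_classify(h_c, h_s, model, color_params=(0.1, 0.5, 0.4),
                    shape_params=(0.1, 0.4, 1.3)):
    # Bayes classifier of (20): pick the class with the lowest energy;
    # `model[label]` is the list of (color, shape) prototype pairs of a class
    energies = {label: usgmrf_energy(h_c, h_s, pairs, color_params, shape_params)
                for label, pairs in model.items()}
    return min(energies, key=energies.get)
```

In practice, each of the two kernels can be tuned separately (e.g., by leave-one-out, as done in the experiments below), which is exactly the flexibility that the two-level structure is meant to provide.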
6 Experiments
In order to show the effectiveness of USG-MRF for appearance-based object recognition, we performed several sets of experiments. All of them were run on the COIL database [15], which can be seen as a benchmark for object recognition algorithms. It consists of 7200 color images of 100 objects (72 views for each of the 100 objects); each image is 128 × 128 pixels. The images were obtained by placing the objects on a turntable and taking a view every 5°. In all the experiments we performed, the training set consisted of 12 views per object (one every 30°). The remaining views constituted the test set. Among the many representations proposed in the literature, we chose a shape-only and a color-only representation, and we ran experiments using these representations separately, combined together in a common feature vector, and combined together in the USG-MRF. The purpose of these experiments is to prove the effectiveness of the USG-MRF model rather than to select the optimal combination of shape and color representations. Thus, we limited the experiments to one shape-only and one color-only representation. As the color-only representation, we chose the two-dimensional rg Color Histogram (CH), with resolution of the bin axis equal to 8 [23]. The CH was normalized to 1. As the shape-only representation, we chose Multidimensional receptive Field Histograms (MFH). This method was proposed by Schiele in order to extend the color histogram approach of Swain and Ballard; the main idea is to calculate multidimensional histograms of the response of a vector of receptive fields. An MFH is determined once we choose the local property measurements (i.e., the receptive field functions), which determine the dimensions of the histogram, and the resolution of each axis. SG-MRF has been successfully used many times combined with MFH for appearance-based object recognition [2,4]. Here we chose for all the experiments we performed two local characteristics based on Gaussian derivatives:

D_x = -\frac{x}{\sigma^2} G(x, y); \qquad D_y = -\frac{y}{\sigma^2} G(x, y),

where

G(x, y) = \exp\Big(-\frac{x^2 + y^2}{2\sigma^2}\Big)

is the Gaussian distribution. Thus, our shape-only representation consisted of a two-dimensional MFH, D_x D_y, with σ = 1.0 and resolution of the bin axis equal to 8. The histograms were normalized to 1. (A minimal code sketch of these two descriptors is given at the end of this section.) These two representations were used for performing the following sets of experiments:

1. Shape experiments: we ran the experiments using the shape features only. Classification was performed using SG-MRF with the kernelized Hopfield energy (10)-(11). The kernel parameters (a, b, ρ) were learned using a leave-one-out strategy. The results were benchmarked against those obtained with the χ² and ∩ (histogram intersection) similarity measures, which proved to be very effective for this representation.
Fig. 4. Examples of different views
2. Color experiments: we ran the experiments using the color features only. Classification and benchmarking were performed as in the shape experiments.
3. Color-shape experiments: we ran the experiments using the color and shape features combined together to form a unique feature vector. Again, classification and benchmarking were performed as in the shape experiments.
4. Ultrametric experiment: we ran a single experiment using the shape and color representations disjoint in the USG-MRF framework. The kernel parameters relative to each level (a_s, b_s, ρ_s and a_c, b_c, ρ_c) were learned with the leave-one-out technique. Results obtained with this approach cannot be directly benchmarked against other similarity measures; it is, however, possible to compare them with the results of the previous experiments.

Table 1 reports the error rates obtained for the 4 sets of experiments. The results presented in Table 1 show that for all series of experiments, and for all representations, SG-MRF always gave the best recognition result. Moreover, the overall best recognition result is obtained with USG-MRF. USG-MRF improves performance by +2.73% with respect to the best SG-MRF result, and by +5.92% with respect to χ² (the best result obtained with a non-SG-MRF technique).
Table 1. Classification results: error rates obtained for each set of experiments. The kernel parameters learned for SG-MRF were, for the color experiment, a_c = 0.5, b_c = 0.4, ρ_c = 0.1; for the shape experiment, a_s = 0.4, b_s = 1.3, ρ_s = 0.1; for the color-shape experiment, a_cs = 0.3, b_cs = 0.6, ρ_cs = 0.1; and for the ultrametric experiment, a_c = 0.5, b_c = 0.4, ρ_c = 0.016589, a_s = 0.4, b_s = 1.3, ρ_s = 2.46943

             Color (%)   Shape (%)   Color-Shape (%)   Ultrametric (%)
  χ²           23.47        9.47         19.17                -
  ∩            25.68       24.94         21.72                -
  SG-MRF       20.10        6.28          8.43               3.55
The fact that the error rates for the color experiments are all above 20% is an indicator that the color representation we chose is far from being optimal. These results confirm our theoretical expectations and show the effectiveness of USG-MRF for color and shape appearance-based object recognition.
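As announced above, the following sketch (Python with NumPy and SciPy, our own choice) illustrates the two descriptors used in the experiments — the rg color histogram and the D_x D_y multidimensional receptive field histogram. Bin ranges, border handling and function names are illustrative assumptions rather than the original implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rg_histogram(rgb, bins=8):
    # two-dimensional rg colour histogram with 8 bins per axis, normalized to 1
    R, G, B = [rgb[..., c].astype(float).ravel() for c in range(3)]
    s = R + G + B + 1e-9
    hist, _, _ = np.histogram2d(R / s, G / s, bins=bins, range=[[0, 1], [0, 1]])
    return (hist / max(hist.sum(), 1.0)).ravel()

def mfh_dx_dy(grey, sigma=1.0, bins=8):
    # responses of the Gaussian-derivative receptive fields Dx and Dy ...
    dx = gaussian_filter(grey.astype(float), sigma=sigma, order=(0, 1))
    dy = gaussian_filter(grey.astype(float), sigma=sigma, order=(1, 0))
    lim = max(np.abs(dx).max(), np.abs(dy).max(), 1e-9)
    # ... accumulated in a two-dimensional histogram, normalized to 1
    hist, _, _ = np.histogram2d(dx.ravel(), dy.ravel(), bins=bins,
                                range=[[-lim, lim], [-lim, lim]])
    return (hist / max(hist.sum(), 1.0)).ravel()
```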
7 Summary
In this paper we presented a kernel method that permits us to combine color and shape information for appearance-based object recognition. It does not require us to define a new common representation, but uses the power of kernels to combine different representations together in an effective manner. This result is achieved using results of the statistical mechanics of Spin Glasses combined with Markov Random Fields via kernel functions. Experiments confirm the effectiveness of the proposed approach. Future work will explore the possibility of using different representations for color and shape and will benchmark this approach against others presented in the literature.

Acknowledgments. This work has been supported by the "Graduate Research Center of the University of Erlangen-Nuremberg for 3D Image Analysis and Synthesis", and by the Foundation BLANCEFLOR Boncompagni-Ludovisi.
References 1. D. J. Amit, “Modeling Brain Function”, Cambridge University Press, 1989. 98, 101, 103, 105, 106 2. B. Caputo, H. Niemann, “From Markov Random Fields to Associative Memories and Back: Spin Glass Markov Random Fields”, SCTV2001. 98, 101, 102, 104, 105, 108 3. B. Caputo, J. Hornegger, D. Paulus, H. Niemann, “A Spin Glass-Markov Random Field”, Proc ICANN01 workshop on Kernel Methods, Vienna, 2001. 105 4. B. Caputo, “A new kernel method for object recognition and scene modeling: Spin Glass-Markov Random Fields”, PhD thesis, to appear. 103, 105, 108 5. T. Evgeniou, M. Pontil, C. Papageorgiou, T. Poggio, “Image representations for object detection using kernel classifiers” ACCV, 2000. to appear. 99 6. J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons”, Proc. Natl. Acad. Sci. USA, Vol. 81, pp. 3088- 3092, 1984. 103 7. C.-Y. Huang, O. I. Camps, “Object recognition using appearance-based parts and relations”,CVPR’97:877-883, 1997. 99 8. A.Jain,P. J. Flynn, editors. “Three–Dimensional Object Recognition Systems”, Amsterdam, Elsevier, 1993. 97, 98 9. A. Leonardis, H. Bischof, “Robust recognition using eigenimages”, CVIU,78:99118, 2000. 97, 99 10. J. Matas, R. Marik, J. Kittler, “On representation and matching of multi-coloured objects”, Proc ICCV95, 726-732, 1995. 97, 99 11. B. W. Mel, “SEEMORE: combining color, shape and texture histogramming in a neurally-inspired approach to visual object recognition”, NC, 9: 777-804, 1997 97, 99
12. M. Mezard, G. Parisi, M. Virasoro, “ Spin Glass Theory and Beyond”, World Scientific, Singapore, 1987. 103, 105 13. J. W. Modestino, J. Zhang. “A Markov random field model–based approach to image interpretation”. PAMI, 14(6),606–615,1992. 101 14. H. Murase, S. K. Nayar, “Visual Learning and Recognition of 3D Objects from Appearance”, IJCV,14(1):5-24, 1995. 99 15. Nene, S. A., Nayar, S. K., Murase, H., “Columbia Object Image Library (COIL100)”, TR CUCS-006-96, Dept. Comp. Sc., Columbia University, 1996. 97, 108 16. E. Osuna, R. Freund, F. Girosi, “Training support vector machines: An application to face detection”, CVPR’97: 130-136, 1997. 99 17. J. Ponce, A. Zisserman, M. Hebert, “Object Representation in Computer Vision— II”, Nr. 1144 in LNCS. Springer, 1996. 98 18. Pontil, M., Verri, A. “Support Vector Machines for 3D Object Recognition”, PAMI, 20(6):637-646, 1998. 99 19. R. P. N. Rao, D. H. Ballard, “An active vision architecture based on iconic representations”,AI:461– 505, 1995. 99 20. D. Roobaert, M. M. Van Hulle, “View-based 3D object recognition with support vector machines”, Proc. IEEE Workshop. on NNSP, 1999. 99 21. B. Schiele, J. L. Crowley, “Recognition without correspondence using multidimensional receptive field histograms”, IJCV, 36(1),:31- 52, 2000. 97, 99, 100 22. 97, 99 D. Slater, G. Healey, “Combining color and geometric information for the illuminantion invariant recognition of 3-D objects”, Proc ICCV95, 563-568, 1995. 23. M. Swain, D. Ballard, “Color indexing”,IJCV, 7(1):11-32, 1991. 97, 98, 108 24. M. Turk, A. Pentland, “Eigenfaces for recognition”, Journal of Cognitive Neuroscience,3(1):71–86, 1991. 99 25. V. Vapnik, Statistical learning theory, J. Wiley, New York, 1998. 103 26. M. D. Wheeler, K. Ikeuchi. Sensor modeling, probabilistic hypothesis generation, and robust localization for object recognition. PAMI, 17(3):252–265, 1995. 101 27. B. Sch¨ olkopf, A. J. Smola, Learning with kernels, 2002, the MIT Press, Cambridge, MA. 102
Maintenance Training of Electric Power Facilities Using Object Recognition by SVM

Chikahito Nakajima¹ and Massimiliano Pontil²

¹ Central Research Institute of Electric Power Industry, Tokyo 201-8511, Japan
[email protected]
² Department of Information Engineering, University of Siena, Via Roma 56, 53100 Siena, Italy
[email protected]

Abstract. We are developing a support system for maintenance training of electric power facilities using augmented reality. To develop the system, we evaluated the use of the Support Vector Machine (SVM) classifier for object recognition. This paper presents our experimental results on object recognition by combinations of SVMs. The recognition results on over 10,000 images show very high performance rates. The support system that uses the combinations of SVMs works in real time without special marks or sensors.
1 Introduction
We are developing a support system for maintenance training of electric power facilities using augmented reality [1]. Many systems in augmented reality have been developed for a variety of applications such as maintenance, repair, assistance of surgery, and guidance of navigation [2,3,4]. They typically use special marks or sensors on the target objects to facilitate detection and classification of objects. We intend to detect and recognize objects without special marks or sensors in the augmented reality system. To develop the system, we evaluated the use of the Support Vector Machine (SVM) classifier for object recognition. SVM is a technique for learning from examples that is well founded in statistical learning theory. SVM has received a great deal of attention and has been applied to many areas such as pedestrian detection, text categorization and object detection [5,6,7,8]. SVM belongs to the class of maximum margin classifiers and performs pattern recognition between two classes. Recently several methods have been proposed in order to expand the application field of SVM to multi-class problems [9,10,11]. In the system, a method of multi-class SVMs is used to recognize objects. This paper presents our experimental results on object recognition by combinations of SVMs. The recognition results using over 10,000 images show very high performance rates. The system that includes multi-class SVMs works in real time. The paper is organized as follows. Section 2 presents a description
of the system outline. Section 3 describes the experimental results. Section 4 summarizes our work.
2 System Outline
We mainly use a head-mounted display (HMD) device, small cameras and image-processing techniques, such as object recognition and chroma-key image synthesis, for the development of the augmented reality system. The system consists of three modules: Image I/O, Pre-processing and Object Recognition. Fig. 1 shows the system overview. Each image from the camera is forwarded to the Pre-processing module through the Image I/O module. Objects in the images are detected from the background and color features of the objects are calculated in the Pre-processing module. Based on these features, the object identity and pose are determined in the Object Recognition module. An operation method or the dynamic inside movement of the recognized object is displayed on the HMD as a computer graphics overlay by chroma-key image synthesis. Each module works independently on a different computer.
Fig. 1. An outline of the system
2.1 Pre-processing
The Pre-processing module consists of two parts: detection of an object and calculation of the object features.

Object Detection. The system uses two steps to detect an object in an image sequence. The first step, known as background subtraction, computes the difference between the current image and the background image. The background image is calculated over the k most recent images¹ of the sequence. The result of background subtraction usually includes a lot of noise. For this reason we add a second step which extracts the silhouette of the moving object using edge detection methods. Assuming that the object moves slightly between two frames, we perform edge detection by subtracting two consecutive images of the sequence. The left image in Fig. 2 shows an image from the sequence and the right image shows the combined result of the two steps.

¹ In our experiments we chose k = 3.

Fig. 2. An image of sequences and an extracted object

Feature Extraction. Once the object has been detected and extracted from the background, the system calculates object features. A color histogram is one of the most popular features in object recognition and image retrieval applications. In spite of the fact that the histogram is a very simple feature, it has shown good results in practice [11,12]. For this reason we used a color histogram in the experiments with the system. Experimental results using other features are described in [12,14]. Two-dimensional normalized color histograms, r = R/(R + G + B), g = G/(R + G + B), are calculated from the extracted object. We chose 32 bins for each color channel r, g. Overall, the system extracts 1024 (32 × 32) features from a single object.
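A minimal sketch of these two pre-processing steps, written in Python/NumPy (our own; the thresholds and the way the two masks are combined are not specified in the text and are therefore assumptions), may clarify the data flow from an image sequence to the 1024-dimensional feature vector.

```python
import numpy as np

def detect_object(frames, bg_thresh=20, edge_thresh=10, k=3):
    """Two-step detection: background subtraction against the mean of the k
    most recent frames, combined with the difference of two consecutive
    frames.  `frames` is a list of grey-level images; returns a boolean mask."""
    current = frames[-1].astype(float)
    background = np.mean([f.astype(float) for f in frames[-k:]], axis=0)
    fg_mask = np.abs(current - background) > bg_thresh      # background subtraction
    edge_mask = np.abs(current - frames[-2].astype(float)) > edge_thresh
    return fg_mask & edge_mask                               # combined result

def rg_features(rgb, mask, bins=32):
    """1024-dimensional feature vector: two-dimensional normalized colour
    histogram of r = R/(R+G+B) and g = G/(R+G+B) over the detected pixels."""
    R, G, B = [rgb[..., c].astype(float)[mask] for c in range(3)]
    s = R + G + B + 1e-9
    hist, _, _ = np.histogram2d(R / s, G / s, bins=bins, range=[[0, 1], [0, 1]])
    return hist.ravel()                                       # 32 x 32 = 1024 features
```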
2.2 Recognition
Object Recognition. Fig. 3 shows an example of training images for one class. The m in Fig. 3 is the number of training images for the class. Each class is represented by a set of m vectors; each vector consists of the 1024 features extracted above. The system uses a linear SVM [9] that determines the hyperplane w · x + b = 0 which best separates two classes, where w is the weight vector, x is the vector of features, and b is a constant. This hyperplane is the one which maximizes the distance, or margin, between the two classes. The margin, equal to 2/||w||, is an important geometrical quantity because it provides an estimator of the similarity of the two classes. An SVM is computed for each pair of the n classes. Recognition of the object in a new image is based on a decision graph of SVMs [10]. The graph for four classes (A, B, C, D) is shown in Fig. 4. Each node in the graph is associated with a pair of classes. Classification of an input vector starts from the root node (A/D) of the graph. Notice that the classification result depends on the initial position of each class in the graph. A possible heuristic to improve classification
Fig. 3. An example of a training data set for SVM
performance consists of selecting the SVMs with the largest margin in the top node (A/D) of the graph; we use this strategy. A similar method based on a binary decision graph of SVMs is also discussed in [9,13,14]. In both cases, classification of an input vector requires the evaluation of n − 1 SVMs.
Fig. 4. A decision graph of SVMs
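A minimal Python sketch of the decision-graph evaluation (our own illustration; the dictionary layout of the pairwise SVMs and the function name are assumptions) shows how an input vector is classified with n − 1 hyperplane evaluations:

```python
import numpy as np

def ddag_classify(x, classes, svms):
    """`classes` is an ordered list such as ['A', 'B', 'C', 'D'];
    `svms[(a, b)] = (w, bias)` is the linear SVM separating class a
    (positive side of w.x + bias) from class b (negative side)."""
    remaining = list(classes)
    while len(remaining) > 1:                  # n - 1 evaluations in total
        a, b = remaining[0], remaining[-1]     # current node, e.g. A/D at the root
        w, bias = svms[(a, b)]
        if np.dot(w, x) + bias >= 0:
            remaining.pop()                    # decide "not b": discard the last class
        else:
            remaining.pop(0)                   # decide "not a": discard the first class
    return remaining[0]
```

The heuristic mentioned in the text would simply order `classes` so that the pair with the largest margin ends up at the root node.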
3 Experiments
In this section we report recognition results obtained by the SVMs using over 10 thousand images. The images were recorded in focus; background and lighting were almost invariant.
Fig. 5. Images of four objects of the COIL database [15]
Fig. 6. Recognition rate of 100 objects (total number of images: 7200)

Evaluation Using the COIL Database. The recognition methods have been tested on the COIL (Columbia Object Image Library) database [15] consisting of 7,200 images of 100 objects. The 7,200 images are color images of 128 × 128 pixels. The objects are positioned in the center of a turntable and observed from a fixed viewpoint. The turntable is rotated by five degrees per image. Fig. 5 shows image examples of four objects in the COIL database. The COIL is one of the most popular and well-evaluated data sets in the object recognition area. The task in the first experiment is to distinguish 100 different objects by the multi-SVMs. We used 5%, 10%, 20%, 30% or 40% of the images as training examples and the remaining images as test examples. Two kinds of kernels, linear and polynomial, were used for the multi-SVM in the experiment. We also evaluated a different type of classifier, k-Nearest Neighbor (k-NN)². Fig. 6 shows the recognition results for the 100 objects. The recognition results of the multi-SVMs outperform the 1-NN classification.

System Performance. The augmented reality system has to work in real time. The second task in our experiments is to distinguish five objects in real time. In the system, a captured color image from the camera is 640 × 480 pixels. We used 100 images as training examples and 400 images as test examples for each object. In the experiment, the total number of training images is 500 and the total number of test images is 2,000. Fig. 7 shows examples of recognition results by the system. The lower left corner of each image shows the result of the SVM classification. The system achieved perfect recognition for the five objects and the recognition speed was 0.38 seconds³ per image.
² k was 1, 3 and 5; in the k-NN experiments, k = 1 showed the best performance.
³ We used an SGI Octane workstation (dual R10000/250 MHz) for the measurement.
Fig. 7. Examples of recognition results (recognition time for one object is 0.38 seconds)

Applications. We are developing a support system for maintenance training of electric power facilities using augmented reality. The system has to recognize an object from the camera images automatically and show the operation method or the dynamic inside movement of the object using a graphical overlay on the HMD. From the above experiments, the multi-SVM classification can be used for this purpose. Fig. 8 shows application examples of the system. The left images in Fig. 8 show camera images and the right images show the operation methods of the objects. In the right images of Fig. 8, the background of each image is replaced by an image of electric power facilities. Fig. 9 shows further results for a rotated object, a model of a nuclear power plant. The system recognizes poses of the plant by the multi-SVMs and puts words on the detected points of the plant, which are selected automatically.
4 Conclusion
To develop the augmented reality system, we evaluated the use of multi-SVM classification on over 10,000 images. The recognition results showed high performance rates for the multi-SVMs. For this reason, we used the multi-SVMs as the object recognition tool in the system. The system can recognize and detect objects in real time from image sequences without special image marks or sensors, and can show information about the objects through a head-mounted display.
Fig. 8. Recognition and instructions
Fig. 9. Recognition of a rotated object (a model of nuclear power plant)
References 1. C. Nakajima, N. Itoh, M. Pontil and T. Poggio: Object recognition and detection by a combination of support vector machine and Rotation Invariant Phase Only Correlation, Proc. ICPR (2000) 787-790 112 2. J. Rekimoto and K. Nagao: The world through the computer: Computer augmented interaction with real world environments. Proc. of UIST (1995) 112 3. S. Feiner, B. MacIntyre, T. Hollerer, and A. Webster: A touring machine: prototyping 3d mobile augmented reality system for exploring the urban environment. Proc. of ISWC (1997) 112 4. J. Nash: Wiring the Jet Set, WIRED, Vol. 5, No. 10 (1997) 128-135 112 5. C. Papageorgiou and T. Poggio: Trainable pedestrian detection, Proc. of ICIP (1999) 25-28 112 6. B. Heisele, T. Poggio and M. Pontil: Face detection in still gray images, MIT AI Memo (2000) 112 7. A. Mohan, C. Papageorgiou and T. Poggio: Example-based object detection in image by components, IEEE Trans. PAMI, Vol. 23, No.4 (2001) 349-361 112 8. C. Cortes and V. Vapnik: Support vector networks, Machine Learning, Vol. 20 (1995) 1-25 112 9. M. Pontil and A. Verri. Support vector machines for 3-D object recognition. IEEE Trans. PAMI (1998) 637-646 112, 114, 115 10. J. Platt, N. Cristianini, and J. Shawe-Taylor. Large margin DAGs for multiclass classification. MIT Press Advances in Neural Information Processing Systems (2000) 112, 114 11. O. Chapelle, P. Haffner and V. N. Vapnik: Support vector machines for histogram based image classification, IEEE Trans. Neural Networks, Vol.10, No.5 (1999) 1055-1064 112, 114 12. F. Tsutsumi and C. Nakajima: Hybrid approach of video indexing and machine learning for rapid indexing and highly precise object recognition, Proc. ICIP (2001) 114 13. C. Nakajima, M. Pontil and T. Poggio. People recognition and pose estimation in image sequences. Proc. of IJCNN (2000) 115 14. C. Nakajima, M. Pontil, B. Heisele and T. Poggio: People recognition in image sequences by supervised learning MIT A. I. Memo No. 1688, C. B. C.L No. 188 (2000) 114, 115 15. S. A. Nene, S. K. Nayar and H. Murase: Columbia Object Image Library (COIL100) , Technical Report CUCS-006-96 (1996) 115, 116
Kerneltron: Support Vector ‘Machine’ in Silicon Roman Genov and Gert Cauwenberghs Department of Electrical and Computer Engineering, Johns Hopkins University Baltimore, MD, 21218, USA {roman,gert}@jhu.edu http://bach.ece.jhu.edu
Abstract. Detection of complex objects in streaming video poses two fundamental challenges: training from sparse data with proper generalization across variations in the object class and the environment; and the computational power required of the trained classifier running realtime. The Kerneltron supports the generalization performance of a Support Vector Machine (SVM) and offers the bandwidth and efficiency of a massively parallel architecture. The mixed-signal VLSI processor is dedicated to the most intensive of SVM operations: evaluating a kernel over large numbers of vectors in high dimensions. At the core of the Kerneltron is an internally analog, fine-grain computational array performing externally digital inner-products between an incoming vector and each of the stored support vectors. The three-transistor unit cell in the array combines single-bit dynamic storage, binary multiplication, and zero-latency analog accumulation. Precise digital outputs are obtained through oversampled quantization of the analog array outputs combined with bit-serial unary encoding of the digital inputs. The 256 input, 128 vector Kerneltron measures 3 mm × 3 mm in 0.5 µm CMOS, delivers 6.5 GMACS throughput at 5.9 mW power, and attains 8-bit output resolution.
1 Introduction
Support vector machines (SVM) [1] offer a principled approach to machine learning combining many of the advantages of artificial intelligence and neural network approaches. Underlying the success of SVMs are mathematical foundations of statistical learning theory [2]. Rather than minimizing training error (empirical risk), SVMs minimize structural risk which expresses an upper bound on the generalization error, i.e., the probability of erroneous classification on yet-to-be-seen examples. This makes SVMs especially suited for adaptive object detection and identification with sparse training data. Real-time detection and identification of visual objects in video from examples is generally considered a hard problem for two reasons. One is the large degree of variability in the object class, i.e., orientation and illumination of the object or occlusions and background clutter in the surrounding, which usually necessitates a large number of training examples to generalize properly. The
other is the excessive amount of computation incurred during training, and even in run-time. Support vector machines have been applied to visual object detection, with demonstrated success in face and pedestrian detection tasks [3,4,5,6]. Unlike approaches to object detection that rely heavily on hand-crafted models and motion information, SVM-based systems learn the model of the object of interest from examples and work reliably in absence of motion cues. To reduce the computational burden of real-time implementation to a level that can be accommodated with available hardware, a reduced set of features are selected from the data which also result in a reduced number of support vectors [5]. The reduction in implementation necessarily comes at a loss in classification performance, a loss which is more severe for tasks of greater complexity. The run-time computational load is dominated by evaluation of a kernel between the incoming vector and each of the support vectors. For a large class of permissible kernels, which include polynomial splines and radial kernels, this computation entails matrix-vector multiplication in large dimensions. For the pedestrian detection task in unconstrained environments [5], highest detection at lowest false alarm is achieved for very large numbers (thousands) of input dimensions and support vectors, incurring millions of matrix multiply-accumulates (MAC) for each classification. The computation recurs at different positions and scales across each video frame. The Kerneltron offers a factor 100-10,000 improvement in computational efficiency (throughput per unit power) over the most advanced digital signal processors available today. It affords this level of efficiency at the expense of specificity: the VLSI architecture is dedicated to massively parallel kernel computation. Speed can be traded for power dissipation. Lower power is attractive in portable applications of kernel-based pattern recognition, such as visual aids for the blind [7]. Section 2 briefly summarizes feature extraction and SVM classification for object detection in streaming video. Section 3 describes the architecture and circuit implementation of the Kerneltron. Experimental results, scalability issues and application examples are discussed in Section 4.
2 Object Detection with Support Vector Machines
A support vector machine is trained with a data set of labeled examples. For pattern classification in images, relevant features are typically extracted from the training set examples using redundant spatial filtering techniques, such as overcomplete wavelet decomposition [4]. The classifier is trained on these feature vectors. At run time, images representing frames of streaming video are scanned by moving windows of different dimensions. For every unit shift of a moving window, a wavelet feature vector is computed and presented to the SVM classifier to produce a decision. The general block diagram of such a system is outlined in Fig. 1. A brief functional description of the major components follows.
Fig. 1. Functional block diagram of the SVM classifier. The core of the system is a support vector machine processor for general object detection and classification. An overcomplete wavelet decomposition of the incoming sensory data at the input generates redundant input features to the SVM, providing for robust and relatively invariant classification performance.
2.1 Overcomplete Wavelet Decomposition
An overcomplete wavelet basis enables the system to handle complex shapes and achieve a precise description of the object class at adequate spatial resolution for detection [4]. The transformation of the sensory data s into the feature vector X is of the linear form

X_n = \sum_{r=1}^{R} A_{nr} s_r,   n = 1, ..., N,   (1)
where the wavelet coefficients A_{nr} form an overcomplete basis, i.e., N > R. In visual object detection, overcomplete Haar wavelets have been used successfully on pedestrian and face detection tasks [4,5]. Haar wavelets are attractive because they are robust and particularly simple to compute, with coefficients A_{nr} that are either −1 or 1.
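As a numerical illustration of Eq. (1), the sketch below (not part of the original system) computes an overcomplete feature vector as a plain matrix-vector product with a ±1 coefficient matrix. The random ±1 matrix is only a stand-in for an actual Haar dictionary, which would be built from shifted and scaled Haar templates rather than drawn at random.

```python
import numpy as np

# Hypothetical sizes: R raw pixels in the window, N > R overcomplete features.
R, N = 64, 100
rng = np.random.default_rng(0)

# Stand-in for the overcomplete Haar dictionary A (entries are -1 or +1).
A = rng.choice([-1.0, 1.0], size=(N, R))

s = rng.random(R)          # sensory data (e.g., pixel intensities)
X = A @ s                  # Eq. (1): X_n = sum_r A_nr * s_r
print(X.shape)             # (100,) -- N-dimensional feature vector
```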
2.2 Support Vector Classification
Classification of the wavelet-transformed features is performed by a support vector machine (SVM) [1]. From a machine learning theoretical perspective [2], the appealing characteristics of SVMs are:

1. The learning technique generalizes well even with relatively few data points in the training set, and bounds on the generalization error can be directly estimated from the training data.
2. The only parameter that needs tuning is a penalty term for misclassification, which acts as a regularizer [8] and determines a trade-off between resolution and generalization performance [9].
3. The algorithm finds, under general conditions, a unique separating decision surface that maximizes the margin of the classified training data for best out-of-sample performance.

SVMs express the classification or regression output in terms of a linear combination of examples in the training data, in which only a fraction of the data points, called "support vectors," have non-zero coefficients. The support vectors thus capture all the relevant data contained in the training set. In its basic form, an SVM classifies a pattern vector X into class y ∈ {−1, +1} based on the support vectors X_m and corresponding classes y_m as

y = sign( \sum_{m=1}^{M} α_m y_m K(X_m, X) − b ),   (2)
where K(·, ·) is a symmetric positive-definite kernel function which can be freely chosen subject to fairly mild constraints [1]. The parameters α_m and b are determined by a linearly constrained quadratic programming (QP) problem [2,10], which can be efficiently implemented by means of a sequence of smaller-scale subproblem optimizations [3], or by an incremental scheme that adjusts the solution one training point at a time [11]. Most of the training data X_m have zero coefficients α_m; the non-zero coefficients returned by the constrained QP optimization define the support vector set. In what follows we assume that the set of support vectors and coefficients α_m are given, and we concentrate on efficient run-time implementation of the classifier. Several widely used classifier architectures reduce to special valid forms of kernels K(·, ·), such as polynomial classifiers, multilayer perceptrons (with logistic sigmoidal activation function, for particular values of the threshold parameter only), and radial basis functions [13]. The following forms are frequently used:

1. Inner-product based kernels (e.g., polynomial; sigmoidal connectionist):

K(X_m, X) = f(X_m · X) = f( \sum_{n=1}^{N} X_{mn} X_n )   (3)
2. Radial basis functions (L2-norm distance based):

K(X_m, X) = f(‖X_m − X‖) = f( ( \sum_{n=1}^{N} |X_{mn} − X_n|^2 )^{1/2} )   (4)
where f(·) is a monotonically non-decreasing scalar function subject to the Mercer condition on K(·, ·) [2,8].

With no loss of generality, we concentrate on kernels of the inner-product type (3), and devise an efficient scheme of computing a large number of high-dimensional inner-products in parallel. Computationally, the inner-products comprise the most intensive part in evaluating kernels of both types (3) and (4). Indeed, radial basis functions (4) can be expressed in inner-product form:

f(‖X_m − X‖) = f( (−2 X_m · X + ‖X_m‖^2 + ‖X‖^2)^{1/2} ),   (5)

where the last two terms depend only on either the input vector or the support vector. These common terms are of much lower complexity than the inner-products, and can be easily pre-computed or stored in peripheral registers.

The computation of the inner-products takes the form of matrix-vector multiplication (MVM), \sum_{n=1}^{N} X_{mn} X_n, m = 1, ..., M, where M is the number of support vectors. For large-scale problems such as the ones of interest here, the dimensions of the M × N matrix are excessive for real-time implementation even on a high-end processor. As a point of reference, consider the pedestrian and face detection task in [5], for which the feature vector length N is 1,326 wavelets per instance, and the number of support vectors M is in excess of 4,000. To cover the visual field over the entire scanned image at reasonable resolution (500 image window instances through a variable-resolution search method) at video rate (30 frames per second), a computational throughput of 75 × 10^9 multiply-and-accumulate operations per second is needed. The computational requirement can be relaxed by simplifying and further optimizing the SVM architecture for real-time operation, but at the expense of classification performance [4,5].
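To make the run-time cost concrete, a minimal software sketch of the classifier of Eq. (2) with an inner-product kernel is shown below; the support vectors, coefficients, and kernel nonlinearity are placeholders, and the single matrix-vector product `sv @ x` is exactly the operation the Kerneltron is designed to accelerate.

```python
import numpy as np

def svm_decision(sv, alpha_y, b, x, f=np.tanh):
    """Evaluate y = sign(sum_m alpha_m*y_m*f(X_m . X) - b) for an
    inner-product kernel K(X_m, X) = f(X_m . X)."""
    inner = sv @ x                    # M inner products: the dominant MVM cost
    return np.sign(alpha_y @ f(inner) - b)

# Toy sizes (the pedestrian task in [5] has N ~ 1,326 and M > 4,000).
M, N = 4000, 1326
rng = np.random.default_rng(1)
sv = rng.standard_normal((M, N))      # support vectors X_m (placeholders)
alpha_y = rng.standard_normal(M)      # alpha_m * y_m (placeholders)
x = rng.standard_normal(N)            # input feature vector
print(svm_decision(sv, alpha_y, b=0.0, x=x))
```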
3 Kerneltron: Massively Parallel VLSI Kernel Machine
The Kerneltron offers the computational power required for the unabridged SVM architecture to run in real time, for optimal out-of-sample classification performance. The architecture is described next.
3.1 Core Recognition VLSI Processor
At the core of the system is a recognition engine, which very efficiently implements kernel-based algorithms, such as support vector machines, for general pattern detection and classification. The implementation focuses on inner-product computation in a parallel architecture. Both wavelet and SVM computations are most efficiently implemented on the same chip, in a scalable VLSI architecture as illustrated schematically in Fig. 2. The diagram is the floorplan of the Kerneltron, with matrices projected as 2-D arrays of cells, and with input and output vector components crossing in perpendicular directions, alternating from one stage to the next. This style of scalable architecture also supports the integration of learning functions, through local outer-product parameter updates [12], compatible with the recently developed incremental SVM learning rule [11]. The architecture maintains a low input/output data rate. Digital inputs are fed into the processor through a properly sized serial/parallel converter shift register. A unit shift of a scanning moving window in an image corresponds to one shift of a new pixel per classification cycle, while a single scalar decision is produced at the output.
Fig. 2. The architecture of the core recognition processor, combining overcomplete wavelet decomposition with generalized support vector machine classification. Communication with outside modules is through a serial digital input/output interface for maximal flexibility and programmability, while the core internal computations are parallel and analog for optimal efficiency
The classification decision is obtained in the digital domain by thresholding the weighted sum of kernels. The kernels are obtained by mapping the inner-products X · X_m through the function f(·) stored in a look-up table. By virtue of the inner-product form of the kernel, the computation can be much simplified without affecting the result. Since both the wavelet feature extraction and the inner-product computation represent linear transformations, they can be collapsed into a single linear transformation by multiplying the two matrices:

W_{mr} = \sum_{n=1}^{N} X_{mn} A_{nr}.   (6)
Therefore the architecture can be simplified to one that omits the (explicit) wavelet transformation, and instead transforms the support vectors. (Referred to the input prior to the wavelet transformation, the support vectors s_m need to be transformed twice: W_{mr} = \sum_n \sum_p A_{np} A_{nr} s_{mp}.) For simplicity of the argument, we proceed with the inner-product architecture excluding the overcomplete wavelet feature extraction stage, bearing in mind that the approach extends to include wavelet extraction by merging the two matrices.
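The collapse of the two linear stages in Eq. (6) can be checked numerically; in the sketch below the matrices are random placeholders, and the only point is that transforming the support vectors once (W = X_sv A) yields the same inner products as wavelet-transforming every input.

```python
import numpy as np

rng = np.random.default_rng(2)
R, N, M = 64, 100, 8                       # raw, feature, support-vector counts
A = rng.choice([-1.0, 1.0], size=(N, R))   # wavelet matrix (placeholder)
X_sv = rng.standard_normal((M, N))         # support vectors in feature space
s = rng.random(R)                          # raw sensory window

# Explicit two-stage path: features first, then inner products.
two_stage = X_sv @ (A @ s)

# Collapsed path, Eq. (6): W_mr = sum_n X_mn A_nr, applied directly to s.
W = X_sv @ A
collapsed = W @ s

print(np.allclose(two_stage, collapsed))   # True
```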
3.2 Mixed-Signal Computation
Computing inner-products between an input vector X and template vectors W_m in parallel is equivalent to the operation of matrix-vector multiplication (MVM)

Y_m = \sum_{n=0}^{N-1} W_{mn} X_n,   (7)

with N-dimensional input vector X_n, M-dimensional output vector Y_m, and M × N matrix of coefficients W_{mn}. The matrix elements W_{mn} denote the support vectors X_{mn}, or the wavelet-transformed support vectors (6), for convenience of notation. (In the wavelet-transformed case, s should be substituted for X in what follows.)

Internally Analog, Externally Digital Computation. The approach combines the computational efficiency of analog array processing with the precision of digital processing and the convenience of a programmable and reconfigurable digital interface. The digital representation is embedded in the analog array architecture, with matrix elements stored locally in bit-parallel form

W_{mn} = \sum_{i=0}^{I-1} 2^{-i-1} w_{mn}^{(i)}   (8)

and inputs presented in bit-serial fashion

X_n = \sum_{j=0}^{J-1} γ_j x_n^{(j)},   (9)

where the coefficients γ_j are assumed to be in radix two, depending on the form of input encoding used. The MVM task (7) then decomposes into

Y_m = \sum_{n=0}^{N-1} W_{mn} X_n = \sum_{i=0}^{I-1} 2^{-i-1} Y_m^{(i)}   (10)

with MVM partials

Y_m^{(i)} = \sum_{j=0}^{J-1} γ_j Y_m^{(i,j)},   (11)

and

Y_m^{(i,j)} = \sum_{n=0}^{N-1} w_{mn}^{(i)} x_n^{(j)}.   (12)
The binary-binary partial products (12) are conveniently computed and accumulated, with zero latency, using an analog MVM array [14]-[17]. For this purpose we developed a 1-bit multiply-and-accumulate CID/DRAM cell.
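The decomposition (8)-(12) is easy to verify in software. The sketch below uses unsigned I-bit weights and J-bit unary inputs (γ_j = 1), with the bit planes scaled to integers rather than the fractional weighting 2^{-i-1} of Eq. (8); it is only a numerical check of the bookkeeping, not a model of the analog array itself.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, I, J = 4, 16, 4, 15                    # toy sizes (templates, inputs, bits)
W_int = rng.integers(0, 2**I, size=(M, N))   # I-bit unsigned weights
X_int = rng.integers(0, J + 1, size=N)       # inputs representable by J unary bits

# Bit-parallel weight planes w^(i) (Eq. 8, here scaled to integer weights 2^{I-1-i}).
w = np.stack([(W_int >> (I - 1 - i)) & 1 for i in range(I)])   # shape (I, M, N)

# Bit-serial unary (thermometer) input planes x^(j) (Eq. 9 with gamma_j = 1).
x = np.stack([(X_int > j).astype(int) for j in range(J)])      # shape (J, N)

# Binary-binary partials Y_m^{(i,j)} = sum_n w_mn^(i) x_n^(j)   (Eq. 12)
Y_partial = np.einsum('imn,jn->imj', w, x)

# Accumulate over j (Eq. 11) and recombine the bit planes (Eq. 10).
scale = 2.0 ** (I - 1 - np.arange(I))
Y = np.einsum('i,imj->m', scale, Y_partial)

print(np.allclose(Y, W_int @ X_int))          # True: same result as the full MVM
```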
Fig. 3. (a) CID computational cell with integrated DRAM storage: circuit diagram, and charge transfer diagram for active write and compute operations. (b) Micrograph of the Kerneltron prototype, containing an array of 256 × 128 CID/DRAM cells and a row-parallel bank of 128 algorithmic ∆Σ ADCs. Die size is 3 mm × 3 mm in 0.5 µm CMOS technology.

CID/DRAM Cell and Array. The unit cell in the analog array combines a CID (charge injection device [18]) computational element [16,17] with a DRAM storage element. The cell stores one bit of a matrix element w_{mn}^{(i)}, performs a one-quadrant binary-unary (or binary-binary) multiplication of w_{mn}^{(i)} and x_n^{(j)} in (12), and accumulates the result across cells with common m and i indices. The circuit diagram and operation of the cell are given in Fig. 3 (a). It performs non-destructive computation since the transferred charge is sensed capacitively at the output. An array of cells thus performs (unsigned) binary-unary multiplication (12) of matrix w_{mn}^{(i)} and vector x_n^{(j)}, yielding Y_m^{(i,j)}, for values of i in parallel across the array, and values of j in sequence over time. A 256 × 128 array prototype using CID/DRAM cells is shown in Fig. 3 (b). To improve linearity and to reduce sensitivity to clock feedthrough, we use differential encoding of input and stored bits in the CID/DRAM architecture, using twice the number of columns and unit cells, as shown in Fig. 4 (a). This amounts to exclusive-OR (XOR), rather than AND, multiplication on the analog array, using signed, rather than unsigned, binary values for inputs and weights: x_n^{(j)} = ±1 and w_{mn}^{(i)} = ±1.
Fig. 4. (a) Two charge-mode AND cells configured as an exclusive-OR (XOR) multiply-and-accumulate gate. (b) Measured linearity of the computational array configured for signed multiplication on each cell (XOR configuration). Waveforms shown are, top to bottom: the analog voltage output, V_out^{(i)}_m, on the sense line; input data (in common for both the input, x_n^{(j)}, and weight, w_{mn}^{(i)}, shift registers); and the input shift register clock.
In principle, the MVM partials (12) can be quantized by a bank of flash analog-to-digital converters (ADCs), and the results accumulated in the digital domain according to (11) and (10) to yield a digital output resolution exceeding the analog precision of the array and the quantizers [19]. Alternatively, an oversampling ADC accumulates the sum (11) in the analog domain, with inputs encoded in unary format (γ_j = 1). This avoids the need for high-resolution flash ADCs, which are replaced with single-bit quantizers in the delta-sigma loop.

Oversampling Mixed-Signal Array Processing. The precision of computation is limited by the resolution of the analog-to-digital converters (ADCs) digitizing the analog array outputs. The conventional delta-sigma (∆Σ) ADC design paradigm makes it possible to reduce the precision requirements on the analog circuits while attaining high conversion resolution, at the expense of bandwidth. In the presented architecture, a high conversion rate is maintained by combining delta-sigma analog-to-digital conversion with oversampled encoding of the digital inputs, where the delta-sigma modulator integrates the partial multiply-and-accumulate outputs (12) from the analog array according to (11).
Fig. 5. Block diagram of one row of the matrix with binary encoded elements w_{mn}^{(i)}, for a single m and I = 4. Data flow of bit-serial unary encoded inputs x_n^{(j)} and corresponding partial product outputs Y_m^{(i,j)}, with J = 16 bits. The full product for a single row Y_m^{(i)} is accumulated and quantized by a delta-sigma ADC. The final product is constructed in the digital domain according to (10).
Fig. 5 depicts one row of matrix elements W_{mn} in the ∆Σ oversampling architecture, encoded in I = 4 bit-parallel rows of CID/DRAM cells. One bit of a unary-coded input vector is presented each clock cycle, taking J clock cycles to complete a full computational cycle (7). The data flow is illustrated for a digital input series x_n^{(j)} of J = 16 unary bits. Over J clock cycles, the oversampling ADC integrates the partial products (12), producing a decimated output

Q_m^{(i)} ≈ \sum_{j=0}^{J-1} γ_j Y_m^{(i,j)},   (13)

where γ_j = 1 for unary coding of inputs. Decimation for a first-order delta-sigma modulator is achieved using a binary counter.
Table 1. Measured performance

Technology          0.5 µm CMOS
Area                3 mm × 3 mm
Power               5.9 mW
Supply Voltage      5 V
Dimensions          256 inputs × 128 templates
Throughput          6.5 GMACS
Output Resolution   8-bit
Higher precision is obtained in the same number of cycles J by using a residue resampling extended counting scheme [21]. Additional gains in precision can be obtained by exploiting binomial statistics of binary terms in the analog summation (12) [20]. In the present scheme, this would entail stochastic encoding of the digital inputs prior to unary oversampled encoding.
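For intuition only, the following sketch models a first-order delta-sigma loop digitizing the accumulated row output of Eq. (13): each cycle the (noisy) analog partial product is added to an integrator, a single-bit quantizer fires when the integrator exceeds its threshold, and a binary counter of the fired bits serves as the decimator. The noise level and threshold are arbitrary illustration values, not measurements of the chip.

```python
import numpy as np

def delta_sigma_count(partials, full_scale, noise_sigma=0.01, seed=0):
    """First-order delta-sigma quantization of a sequence of analog partial
    products; returns the binary-counter (decimated) output."""
    rng = np.random.default_rng(seed)
    integrator, count = 0.0, 0
    for p in partials:
        integrator += p / full_scale + rng.normal(0.0, noise_sigma)  # analog sum
        if integrator >= 1.0:        # single-bit quantizer
            integrator -= 1.0        # feedback subtracts one full-scale unit
            count += 1               # binary counter acts as the decimator
    return count

# J = 16 unary input cycles; each partial is a count of active cells (0..N).
N, J = 256, 16
rng = np.random.default_rng(4)
partials = rng.integers(0, N + 1, size=J)        # stand-in for Y_m^{(i,j)}
q = delta_sigma_count(partials, full_scale=N)
print(q, round(float(partials.sum()) / N))       # counter approximates sum / N
```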
4 Experimental Results and Discussion
4.1 Measured Performance
A prototype Kerneltron was integrated on a 3 × 3 mm² die and fabricated in 0.5 µm CMOS technology. The chip contains an array of 256 × 128 CID/DRAM cells, and a row-parallel bank of 128 algorithmic ∆Σ ADCs. Fig. 3 (b) depicts the micrograph and system floorplan of the chip. The processor interfaces externally in digital format. Two separate shift registers load the templates (support vectors) along odd and even columns of the DRAM array. Integrated refresh circuitry periodically updates the charge stored in the array to compensate for leakage. Vertical bit lines extend across the array, with two rows of sense amplifiers at the top and bottom of the array. The refresh alternates between even and odd columns, with separate select lines. Stored charge corresponding to matrix element values can also be read and shifted out from the chip for test purposes. All of the supporting digital clocks and control signals are generated on-chip. Fig. 4 (b) shows the measured linearity of the computational array, configured differentially for signed (XOR) multiplication. The case shown is where all complementary weight storage elements are actively set, and an alternating sequence of bits in blocks of N is shifted through the input register (w_{mn}^{(i)} = 1; x_n^{(j)} = −1 for n = 1, ..., N; and x_n^{(j)} = −1 for n = N + 1, ..., 2N). For every shift in the input register, a computation is performed and the result is observed on the output sense line. The array dissipates 3.3 mW for a 10 µs cycle time. The bank of ∆Σ ADCs dissipates 2.6 mW, yielding a combined conversion rate of 12.8 Msamples/s. Table 1 summarizes the measured performance.
4.2 System-Level Performance
Fig. 6 compares template matching performed by a floating-point processor and by the Kerneltron, illustrating the effect of quantization and limited precision in the analog array architecture. An 'eye' template was selected as a 16 × 16 fragment from the Lena image, yielding a 256-dimensional vector. Fig. 6 (c) depicts the two-dimensional cross-correlation (inner-products over a sliding window) of the 8-bit image with the 8-bit template, computed at full precision. The same computation performed by the Kerneltron, with 4-bit quantization of the image and template and 8-bit quantization of the output, is given in Fig. 6 (d). Differences are relatively small, and both methods return peak inner-product values (top matches) at both eye locations in the image (the template acts as a spatial filter on the image, leaking through spectral components of the image at the output; the Lena image was mean-subtracted). The template matching operation is representative of a support vector machine that combines nonlinearly transformed inner-products to identify patterns of interest.
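The comparison of Fig. 6 can be reproduced in spirit with a few lines of NumPy: cross-correlate a template with an image at full precision and again after coarse quantization, and compare where the peaks fall. The arrays below are random stand-ins rather than the Lena data, and the quantizer is a plain uniform rounding to 4 bits.

```python
import numpy as np

def cross_correlate(image, template):
    """Sliding-window inner products (valid region only)."""
    th, tw = template.shape
    H, W = image.shape
    out = np.empty((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+th, j:j+tw] * template)
    return out

def quantize(x, bits):
    levels = 2**bits - 1
    return np.round(x * levels) / levels          # uniform quantization on [0, 1]

rng = np.random.default_rng(5)
image = rng.random((64, 64))                      # stand-in for the 8-bit image
template = rng.random((16, 16))                   # stand-in for the eye template

full = cross_correlate(image, template)
coarse = cross_correlate(quantize(image, 4), quantize(template, 4))

# With mild quantization the location of the peak usually survives.
print(np.unravel_index(full.argmax(), full.shape),
      np.unravel_index(coarse.argmax(), coarse.shape))
```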
4.3 Large-Scale Computation
The design is fully scalable, and can be expanded to any number of input features and support vectors, internally as limited by current fabrication technology, and externally by tiling chips in parallel. The dense CID/DRAM multiply-accumulate cell (18λ × 45λ, where λ is the technology scaling parameter) supports the integration of millions of cells on a single chip in deep submicron technology, for thousands of support vectors in a thousand-dimensional input space, as the line-width of the fabrication technology continues to shrink. In 0.18 µm CMOS technology (with λ = 0.1 µm), 64 computational arrays with 256 × 128 cells each can be tiled on an 8 mm × 8 mm silicon area, with two million cells integrated on a single chip. Distribution of memory and processing elements in a fine-grain multiply-and-accumulate architecture, with local bit-parallel storage of the W_{mn} coefficients, avoids the memory bandwidth problem that plagues the performance of CPUs and DSPs. Because of the fine-grain parallelism, both throughput and power dissipation scale linearly with the number of integrated elements, so every cell contributes one kernel unit operation and one fixed unit of dissipated energy per computational cycle. Let us assume a conservative cycle time of 10 µs. With two million cells, this gives a computational throughput of 200 GOPS, which is adequate for the task described in Section 2.2. The (dynamic) power dissipation is estimated to be less than 50 mW (the parameters of the estimate are λ = 0.1 µm, a 3 V power supply, and a 10 µs cycle time), which is significantly lower than that of a CPU or DSP processor even though the computational throughput is many orders of magnitude higher.
4.4 Applications
The Kerneltron benefits real-time applications of object detection and recognition, particularly in artificial vision and human-computer interfaces.
Fig. 6. Cross-correlation of fragments of Lena (a) and the eye template (b) computed by a 32-bit floating point processor with 8-bit encoded inputs (c) and by Kerneltron with 8-bit quantization and 4-bit encoded inputs (d)
Applications extend from SVMs to any pattern recognition architecture that relies on computing a kernel distance between an input and a large set of templates in large dimensions. Besides throughput, power dissipation is a main concern in portable and mobile applications. Power efficiency can be traded for speed, and a reduced implementation of dimensions similar to the version of the pedestrian classifier running on a Pentium PC (27 input features) [4,5] could be integrated on a chip running at 100 µW of power, easily supported by a hearing-aid-type battery for a lifetime of several weeks. One low-power application that could benefit a large group of users is a navigational aid for visually impaired people. OpenEyes, a system developed for this purpose [7], currently runs its classifier in software on a Pentium PC.
The software solution offers great flexibility to the user and developer, but limits the mobility of the user. The Kerneltron offers the prospect of a low-weight, low-profile alternative.
5 Conclusions
A massively parallel mixed-signal VLSI processor for kernel-based pattern recognition in very high dimensions has been presented. Besides support vector machines, the processor is capable of implementing other architectures that make intensive use of kernels or template matching. An internally analog, externally digital architecture offers the best of both worlds: the density and energetic efficiency of a charge-mode analog VLSI array, and the convenience and versatility of a digital interface. An oversampling configuration relaxes precision requirements in the quantization while maintaining 8-bit effective output resolution, adequate for most vision tasks. Higher resolution, if desired, can be obtained through stochastic encoding of the digital inputs [20]. A 256 × 128 cell prototype was fabricated in 0.5 µm CMOS. The combination of analog array processing, oversampled input encoding, and delta-sigma analog-to-digital conversion yields a computational throughput of over 1 GMACS per milliwatt of power. The architecture is easily scalable and capable of delivering 200 GOPS at 50 mW of power in a 0.18 µm technology, a level of throughput and efficiency by far sufficient for real-time SVM detection of complex objects on a portable platform.

Acknowledgments. This research was supported by ONR N00014-99-1-0612, ONR/DARPA N00014-00-C-0315 and WatchVision Corporation. The chip was fabricated through the MOSIS service.
References

1. Boser, B., Guyon, I., Vapnik, V.: "A training algorithm for optimal margin classifiers," in Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, pp. 144-152, 1992.
2. Vapnik, V.: The Nature of Statistical Learning Theory, Springer-Verlag, 1995.
3. Osuna, E., Freund, R., Girosi, F.: "Training support vector machines: An application to face detection," in Computer Vision and Pattern Recognition, pp. 130-136, 1997.
4. Oren, M., Papageorgiou, C., Sinha, P., Osuna, E., Poggio, T.: "Pedestrian detection using wavelet templates," in Computer Vision and Pattern Recognition, pp. 193-199, 1997.
5. Papageorgiou, C. P., Oren, M., Poggio, T.: "A general framework for object detection," in Proceedings of the International Conference on Computer Vision, 1998.
6. Sahbi, H., Geman, D., Boujemaa, N.: "Face detection using coarse-to-fine support vector classifiers," IEEE Int. Conf. Image Processing (ICIP 2002), Rochester, NY, 2002.
7. Kang, S., Lee, S.-W.: "Handheld computer vision system for the visually impaired," Proc. 3rd International Workshop on Human-Friendly Welfare Robotic Systems, Daejeon, Korea, pp. 43-48, 2002.
8. Girosi, F., Jones, M., Poggio, T.: "Regularization theory and neural networks architectures," Neural Computation, vol. 7, pp. 219-269, 1995.
9. Pontil, M., Verri, A.: "Properties of support vector machines," Neural Computation, vol. 10, pp. 977-996, 1998.
10. Burges, C.: "A tutorial on support vector machines for pattern recognition," in U. Fayyad (ed.), Proceedings of Data Mining and Knowledge Discovery, pp. 1-43, 1998.
11. Cauwenberghs, G., Poggio, T.: "Incremental and decremental support vector machine learning," in Advances in Neural Information Processing Systems (Proc. 2000 IEEE NIPS Conf.), Cambridge, MA: MIT Press, 2001.
12. Cauwenberghs, G., Bayoumi, M.: Learning on Silicon: Analog VLSI Adaptive Systems, Norwell, MA: Kluwer Academic, 1999.
13. Schölkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V.: "Comparing support vector machines with Gaussian kernels to radial basis function classifiers," IEEE Transactions on Signal Processing (to appear, 1997).
14. Kramer, A.: "Array-based analog computation," IEEE Micro, vol. 16(5), pp. 40-49, 1996.
15. Chiang, A.: "A programmable CCD signal processor," IEEE Journal of Solid-State Circuits, vol. 25(6), pp. 1510-1517, 1990.
16. Neugebauer, C., Yariv, A.: "A parallel analog CCD/CMOS neural network IC," Proc. IEEE Int. Joint Conference on Neural Networks (IJCNN'91), Seattle, WA, vol. 1, pp. 447-451, 1991.
17. Pedroni, V., Agranat, A., Neugebauer, C., Yariv, A.: "Pattern matching and parallel processing with CCD technology," Proc. IEEE Int. Joint Conference on Neural Networks (IJCNN'92), vol. 3, pp. 620-623, 1992.
18. Howes, M., Morgan, D. (eds.): Charge-Coupled Devices and Systems, John Wiley & Sons, 1979.
19. Genov, R., Cauwenberghs, G.: "Charge-mode parallel architecture for matrix-vector multiplication," IEEE Trans. Circuits and Systems II, vol. 48(10), 2001.
20. Genov, R., Cauwenberghs, G.: "Stochastic mixed-signal VLSI architecture for high-dimensional kernel machines," in Advances in Neural Information Processing Systems, vol. 14, Cambridge, MA: MIT Press, 2002.
21. Mulliken, G., Adil, F., Cauwenberghs, G., Genov, R.: "Delta-sigma algorithmic analog-to-digital conversion," IEEE Int. Symp. on Circuits and Systems (ISCAS 2002), Scottsdale, AZ, May 2002.
Advances in Component-Based Face Detection

Stanley M. Bileschi (1) and Bernd Heisele (2)

(1) Center for Biological and Computational Learning, M.I.T., Cambridge, MA, USA
(2) Honda R&D Americas, Inc., Boston, MA, USA
{bileschi,heisele}@ai.mit.edu
Abstract. We describe a component-based face detection system trained only on positive examples. On the first level, SVM classifiers detect predetermined rectangular portions of faces in gray-scale images. On the second level, histogram-based classifiers judge the pattern using only the positions of maximization of the first-level classifiers. Novel aspects of our approach are: a) the use of selected parts of the positive pattern as negative training data for the component classifiers, and b) the use of pairwise correlation between facial component positions to bias classifier outputs and achieve superior component localization.
1 Introduction
Object recognition is a well-studied area of computer vision. Face detection, the act of finding the set of faces in an image, is perhaps the most common of visual object detection tasks. In the following we give a brief overview of face detection methods. In [8,5] neural networks are used to discriminate between face and non-face images. In [4] Support Vector Machines (SVMs) using polynomial kernels are trained to detect frontal faces. The above systems have in common that the whole face pattern is represented by a single feature vector which is then fed to a classifier. This global approach works well for frontal faces but is sensitive to rotations in depth and to partial occlusions. More recently, component-based classification methods have been developed to overcome these problems. A naïve Bayesian approach for face detection is presented in [6]. In this system, the empirical probabilities of occurrence of small rectangular intensity patterns within the face are determined. Another probabilistic component-based approach is proposed in [3]. Here, the geometrical configuration of the parts is matched to a model configuration by a conditional search. Most of the above systems are trained on a positive training set of face images and a negative training set of background patterns (non-face images). The negative class contains all possible non-face objects in images, making it difficult to represent it with a reasonably sized training set. We propose a method that circumvents this problem completely by training a face detection system on positive examples only.
As a starting point, we use the face detection system developed in [2]. It consists of a two-level hierarchy of SVM classifiers. On the first level, a set of 14 component classifiers are shifted over every position of a novel input image. Each classifier's maximum output is then propagated to a second-level SVM, which decides whether or not the pattern is a face. Instead of training the SVM component classifiers on face (i.e., nose, mouth, etc.) and non-face patterns, we train them on face patterns only. As negative examples we extract other parts of the face (i.e., not the nose, or not the mouth). Not only do we avoid dealing with background patterns, we also improve the detection accuracy of the component classifier given a face as input image: when shifting a component classifier over a face image, it is more likely to peak at the correct location when other nearby parts of the face were included in the negative training examples. The second-level SVM classifier in [2] bases its decision on the maximum outputs of the component classifiers within predefined search regions. In contrast, we propose a second-level classifier which only uses the positions of the detected components, i.e., the positions of the peaks of the component classifiers within a 58×58 sub-window. For training we determine the empirical distribution of the position of each component in the training data and assume that the positions of the components in non-face images will be approximately uniformly distributed. In a first experiment we implement a naïve Bayesian classifier, assuming that the positions of the components are independent. In a second experiment we develop a classification scheme that takes pairwise position statistics into consideration.
2 Procedure
The discussion of the architecture of our face detection system begins with a description of the processes involved in the system's construction. An outline of the data flow when detecting faces in a novel image follows.
2.1 Construction of the Face Detector
The construction of our component-based face detector consists of three major parts. First we must acquire training images of faces, and from them extract training sets, both positive and negative, for our 14 components. From this data we must also build histograms representing the expected positions of said components. Histograms of expected relative positions must also be recorded at this point. Finally, we train the component classifiers. The training data consists of 2,646 synthetic images of faces at a resolution of 100×100 pixels per image. The images were rendered from 21 different 3D head models under 63 conditions of rotation and illumination, as described in [1]. Figure 1 shows four example training images from the set. Because these images are projections of a 3D head model, dense correspondence in the form of the x, y positions of many face sentinels is also available, as shown in Figure 1 (far right). The 14 components of our system are the same as those defined in [2]. Each component is centered at one of the 25 sentinels.
Fig. 1. Example images from the training data. The image on the far right has the 25 position sentinels visualized.
The component is defined as a rectangle around the sentinel, including all pixels within a fixed distance up, down, to the left, and to the right of the center pixel. For instance, the mouth component includes the pixel defined as the center of the mouth, 15 pixels to the left, 15 pixels to the right, 7 pixels up, and 7 pixels down. Given the face images, the positions of the sentinels, and the definitions of our 14 components, the extraction of the positive training set is simple. Each of the 2,646 images in the positive training set of a component is a cropped portion of a synthetic face image. The negative training data for each component is extracted in a similar manner. Instead of cropping the portion defined to be the component, four random crops are taken from each face image. These cropped portions are of the same size as the positive component and are guaranteed by the extraction algorithm to overlap the defined position of the component by no more than 35% of the area. Thus, the negative training set of each component consists of 10,584 non-component patterns. These patterns are hereafter referred to as facial non-component patterns. Figure 2 depicts a typical mouth pattern, a typical facial non-mouth pattern, and also a non-face non-mouth pattern. In order to construct our second-level histogram classifier, we must construct position histograms for each classifier. The position histogram for component n is simply an image of the same scale as the synthetic input image, but whose pixel value at position i, j represents how many times the center of component n was at position i, j in the window. The 58×58 pixel image at the center of the histogram is then cropped out and the border discarded. This is done in order to more closely represent the part of the face we are interested in detecting, between the brow and chin but not including the hairline or much of the background.
Fig. 2. Left: An example mouth pattern. Middle: An example non-mouth pattern extracted from the face. Right: An example non-mouth pattern extracted from a non-face
Fig. 3. Position histograms; darker areas are more likely to be the center of the component. Left: the expected position of the mouth component in any 58×58 window. Right: the expected position of the left eye given that the bridge of the nose is at the center of the crosshair. The inner square is 58 pixels in length.
This 58×58 square becomes our definition of a face. In none of the training images do any of the sentinels fall outside this 58×58 window. Figure 3 shows the position histogram for the mouth classifier. Darker areas are areas more likely to be the center of the mouth in a given training image. In our second experiment we will be using statistics of the relative positions of pairs of components. During the training phase of the system, a pairwise position histogram is constructed for each pair of components. This histogram is a representation of where a component is likely to be centered given the center of another component. For instance, the left eye center is most likely approximately 15 pixels to the left and one pixel down from the bridge of the nose. Each first- and second-order position histogram is convolved with a small (5-pixel radius) blur in order to account for slight rotations and variations in faces not well modeled by our training data. Also, the histograms are linearly normalized before recording. The position histogram is normalized between one half and one, and the pairwise position histogram is normalized between the 13th root of one half and one. Figure 3, on the right, shows a representation of the expected position of the left eye given that the center of the bridge of the nose is mapped to the center of the crosshairs. For reference, the inner rectangle has a size of 58×58 pixels. Finally, the component classifiers are trained using a linear kernel, and the construction of the system is complete. Before this training takes place, however, each data point is histogram equalized in order to make the system more robust to variations in illumination.
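A sketch of the histogram construction described above is given below. The component positions are synthetic stand-ins, the blur is a simple box filter of 5-pixel radius rather than whatever smoothing kernel the authors used, and the normalization follows the ranges quoted in the text ([1/2, 1] for single-component histograms, [(1/2)^(1/13), 1] for pairwise ones).

```python
import numpy as np

def position_histogram(positions, size=58, radius=5, lo=0.5):
    """Accumulate component-center positions, blur, then linearly rescale
    so the histogram values lie in [lo, 1]."""
    hist = np.zeros((size, size))
    for x, y in positions:
        hist[y, x] += 1.0
    # crude box blur of the given radius (placeholder for the paper's blur)
    blurred = np.zeros_like(hist)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            blurred += np.roll(np.roll(hist, dy, axis=0), dx, axis=1)
    lo_v, hi_v = blurred.min(), blurred.max()
    return lo + (1.0 - lo) * (blurred - lo_v) / (hi_v - lo_v)

rng = np.random.default_rng(6)
# synthetic mouth centers clustered near (29, 45) in the 58x58 face window
mouth_centers = np.clip(rng.normal([29, 45], 2, size=(2646, 2)), 0, 57).astype(int)

H_mouth = position_histogram(mouth_centers, lo=0.5)             # single component
H_pair = position_histogram(mouth_centers, lo=0.5 ** (1 / 13))  # pairwise variant
print(H_mouth.min(), H_mouth.max())   # ~0.5 .. 1.0
```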
2.2 Testing on a Novel Image
Given an image, which may or may not contain a face, one might ask how, given our 14 component classifiers and our position histograms, we detect faces.
We begin the detection by compensating for the scale and position sensitivity via an exhaustive search through the test image at all positions and scales. This windowing method transforms our task to deciding face from non-face in each 58×58 pattern. Given one rescaling of the test image, we check for 58×58 faces in the following way. First, a result image is created for each component. Figure 4 shows the result images from the brow, nose, and mouth classifiers. Each result image is the same scale as the rescaled input image, but has pixel value i, j equal to the value returned when the rectangle centered at position i, j is fed through the component classifier. Once the result images have been computed, we move a 58×58 window through the rescaled test image, at each translation recording the positions of maximization of each result image inside the 58×58 sub-window. In this way, we are able to find the position of the best example of each component in each sub-window of our test image. We refer to this set of positions and values of maximization of the components as the constellation of a sub-window. Ideally, when the sub-window is one exactly surrounding a face, the constellation will correspond to the positions of the 14 components of the face, as defined. In our first experiment, the value returned by the classifier for each sub-window is the product of 14 values calculated by indexing into the position histograms. If classifier n maximizes at position x_n, y_n, then its contribution to the product will be the value in the position histogram stored at position x_n, y_n. Effectively, we are assuming that the position of each classifier is an independent random variable, and the probability of a given constellation, given that the image is a face, is simply the product of the probabilities of finding each component where the corresponding component classifier found it. Since we assume that the maximization of components in non-face images is uniformly distributed, the probability of any constellation is equal, given that the image is a non-face. This is analogous to a naïve Bayes formulation of this detection problem, to which the solution is always a likelihood ratio test. Since the likelihood of the pattern being a non-face is constant for all constellations, to decide face from non-face, given only the positions of maximization of our classifiers, we must only compare the product statistic to a threshold. To decide whether the entire image has a face in it or not, we only use the maximum product over all the 58×58 windows.
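A compact sketch of the first-experiment decision rule follows: look up each component's peak position in its position histogram, multiply the 14 values, and compare the product to a threshold (under the uniform non-face assumption the likelihood ratio is proportional to this product). The histograms, peak positions, and threshold below are placeholders.

```python
import numpy as np

def constellation_score(peak_positions, position_histograms):
    """Product of histogram values at the detected component positions
    (naive-Bayes score under independent component positions)."""
    score = 1.0
    for (x, y), hist in zip(peak_positions, position_histograms):
        score *= hist[y, x]
    return score

def classify_window(peak_positions, position_histograms, threshold):
    return constellation_score(peak_positions, position_histograms) >= threshold

# 14 placeholder histograms with values in [0.5, 1.0] and 14 placeholder peaks.
rng = np.random.default_rng(7)
hists = [0.5 + 0.5 * rng.random((58, 58)) for _ in range(14)]
peaks = [(int(p[0]), int(p[1])) for p in rng.integers(0, 58, size=(14, 2))]

print(constellation_score(peaks, hists))
print(classify_window(peaks, hists, threshold=0.5 ** 14))  # placeholder threshold
```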
Fig. 4. Leftmost: A sample synthetic test image. Right: Three result images from the brow, nose, and mouth classifiers in order
Fig. 5. Leftmost: A sample synthetic test image. Right: Three result images after greedy optimization (brow, nose, and mouth).

In the first experiment, the position of component i in a given sub-window was simply the maximum of result image i in that sub-window. In our second experiment, after finding the positions of maximum stimulation of the classifiers, we employed the following greedy optimization rule to obtain a second estimate of the component position, which utilizes both information from the image and information about the expected pairwise positioning of the components. If classifier i maximizes at position x_i, y_i, then for all result images n not equal to i we multiply result image n by the expected-position map of component n given the position of component i. This has the effect of biasing component n to maximize where it likely is in relation to component i. Ideally, if all the components are in their correct positions except one, then the 13 other components will all constructively bias the mistaken component at the point where we expect to find it. Since the minimum value returnable by the pairwise position histogram is the 13th root of one half, at worst any value in the result image will be cut in half. At best, if all the other components expect the component at the same location, the result image value will not change. After biasing the result image, the new maximum value of the result image in the sub-window is recorded. Afterwards, given this new constellation, the computation continues as per experiment one. Figure 5 shows the same classifiers and the same data as Figure 4, except this time the results are subject to the greedy optimization rule.
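The greedy biasing step can be written down directly: every component's result image is multiplied, pointwise, by the expected-position maps induced by all other components' current peaks, and the peaks are then re-read. The arrays below are random placeholders; the bias maps are assumed to already be shifted into the sub-window frame according to the current peaks and normalized to [(1/2)^(1/13), 1] as in the text.

```python
import numpy as np

def rebias_peaks(result_images, bias_maps):
    """One greedy pass: multiply each result image by the expected-position maps
    contributed by all other components, then re-locate the maxima."""
    n_comp = len(result_images)
    new_peaks = []
    for n in range(n_comp):
        biased = result_images[n].copy()
        for i in range(n_comp):
            if i != n:
                biased *= bias_maps[i][n]   # where comp. n is expected, given comp. i's peak
        new_peaks.append(np.unravel_index(biased.argmax(), biased.shape))
    return new_peaks

rng = np.random.default_rng(8)
K, S = 14, 58
result_images = [rng.random((S, S)) for _ in range(K)]
low = 0.5 ** (1.0 / 13.0)   # minimum value of the pairwise histograms, per the text
bias_maps = [[low + (1 - low) * rng.random((S, S)) for _ in range(K)] for _ in range(K)]

print(rebias_peaks(result_images, bias_maps))
```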
3 Results
In order to test the system, we reserved 1,536 artificial face images and 816 non-face images. We chose to test our system on images of synthetic heads instead of real images of heads in order to save time. In the synthetic face images, we can be sure that the face is of the right size, so we do not have to search across scale. Also, with synthetic heads, we are able to extract the ground-truth components in order to test the component classifiers individually and independently of the face detection task. The component vs. non-face non-component curves were trained on the same positive data, but used random extractions from 13,654 non-face images as the negative training set (viz. the rightmost image in Figure 2).
Fig. 6. A comparison of the complete system using classifiers trained on facial negatives (solid line) and non-facial negatives (dotted line).

Analogous to the classifiers defined in the procedure, the data for these classifiers were histogram equalized before training. Each image, positive or negative, is first resized to 100×100 pixels. As per the discussion of experiment one in the procedure, the best result of all the 58×58 windows is the one that is recorded for the entire image. After every image has been evaluated we compute the ROC curve. Figure 6 shows the ROC curves for both the new system as discussed, and a similar system using component classifiers as trained in [2], viz. using non-faces as the negative training data. Figure 7 also shows two ROC curves, this time comparing the result of our system both using and not using the pairwise position histograms to bias the output. Again, it is worth emphasizing that every test image, positive or negative, is handled identically by the system. Our final set of results, Figure 9, compares the results of our pairwise-position-utilizing system on synthetic and real data. The real data, 100 images from the CMU PIE face detection set [7], does not have the correspondence in position and scale that the synthetic data has. In order to compensate, our system was made to search for faces at 10 different scales from 60×60 up to 100×100 pixels. Figure 8 shows an example image from this test database.
4 Conclusions and Future Work
By using the remainder of the face as the negative training data for a component classifier, we aimed to engineer a classifier at least on par with the corresponding component classifier of [2]. We can say that the first-level classifiers built in this project are at least as powerful as those trained with the method outlined in [2], even with fewer training examples. Replacing the second-level SVM constellation classifier with a classifier based on multiplying the outputs of histograms of positions, we were able to build a face-detecting classifier using only faces as training data.
Fig. 7. A comparison of the complete system without (solid line) and with (dashed line) greedy optimization of position data
Fig. 8. Example faces from the CMU PIE database
Fig. 9. A comparison of the complete system, using greedy optimization, on the synthetic test set (dashed line) and the CMU PIE face test set (solid line)
Our system generalizes to images of real faces even though the training data was synthetic. Also, we show that we can improve the accuracy of our system by utilizing the information in pairwise position statistics. As the system is currently designed, the data returned from each component is utilized equally. Empirical evidence suggests, however, that certain classifiers are more robust than others. We will overlay a weighting scheme on the processing of the constellations to give more weight to the components that have more credence.
References

1. V. Blanz and T. Vetter. A morphable model for synthesis of 3D faces. In Computer Graphics Proceedings SIGGRAPH, pages 187-194, Los Angeles, 1999.
2. B. Heisele, T. Serre, M. Pontil, and T. Poggio. Component-based face detection. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 657-662, Hawaii, 2001.
3. T. K. Leung, M. C. Burl, and P. Perona. Finding faces in cluttered scenes using random labeled graph matching. In Proc. International Conference on Computer Vision, pages 637-644, Cambridge, MA, 1995.
4. E. Osuna. Support Vector Machines: Training and Applications. PhD thesis, MIT, Department of Electrical Engineering and Computer Science, Cambridge, MA, 1998.
5. H. A. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23-38, 1998.
6. H. Schneiderman and T. Kanade. Probabilistic modeling of local appearance and spatial relationships for object recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 45-51, Santa Barbara, 1998.
7. T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination, and expression (PIE) database of human faces. Computer Science Technical Report 01-02, CMU, 2001.
8. K.-K. Sung. Learning and Example Selection for Object and Pattern Recognition. PhD thesis, MIT, Artificial Intelligence Laboratory and Center for Biological and Computational Learning, Cambridge, MA, 1996.
Support Vector Learning for Gender Classification Using Audio and Visual Cues: A Comparison

L. Walawalkar, Mohammad Yeasin, Anand M. Narasimhamurthy, and Rajeev Sharma

Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA-16802
{lwalaval,yeasin,narasimh,rsharma}@cse.psu.edu
Abstract. Computer vision systems for monitoring people and collecting valuable demographics in a social environment will play an increasingly important role in enhancing the user's experience and can significantly improve the intelligibility of a human computer interaction (HCI) system. For example, a robust gender classification system is expected to provide a basis for passive surveillance and access to a smart building using demographic information, or can provide valuable consumer statistics in a public place. The option of an audio cue in addition to the visual cue promises a robust solution with high accuracy and ease-of-use in human computer interaction systems. This paper investigates the use of Support Vector Machines (SVMs) for the purpose of gender classification. Both visual (thumbnail frontal face) and audio (features from speech data) cues were considered for designing the classifier, and the performance obtained by using each cue was compared. The performance of the SVM was compared with that of two simple classifiers, namely the nearest prototype neighbor and the k-nearest neighbor, on all feature sets. It was found that the SVM outperformed the other two classifiers on all datasets. The best overall classification rates obtained using the SVM for the visual and speech data were 95.31% and 100%, respectively.
1 Introduction
As multi-modal human computer interaction (HCI) evolves, computer vision systems for monitoring people and collecting valuable demographics in a social environment will play an increasingly important role in our lives. A robust gender classification system is expected to provide a basis for passive surveillance and smart living environments, and can provide valuable consumer information in public places. Gender classification can significantly improve the intelligibility of, and the user's experience with, HCI systems. In general, studies using both vision and speech have shown that face images and speech features contain important information for classifying gender. Gender classification has received attention from both computer vision and speaker recognition researchers.
In this paper the problem of gender classification using SVMs is addressed, and the performance for both the visual (thumbnail frontal face) and audio cues is compared. The performance of the SVM is compared with that of two other classifiers, namely the k-nearest neighbor and the nearest prototype neighbor. Support Vector Machines (SVMs) have been proposed as an effective pattern recognition tool in the recent past. SVMs have been used in various applications such as text categorization [28,15], face detection [19], and so on. A brief overview is presented in Appendix A. The choice of the SVM [26] as a classifier can be justified by the following facts. The first property that distinguishes the SVM from other nonparametric techniques such as neural networks is that SVMs minimize the structural risk, i.e., the probability of misclassifying a previously unseen data point drawn randomly from a fixed but unknown probability distribution, instead of minimizing the empirical risk, i.e., the misclassification on training data. Thus SVMs have good generalization. For a given learning task, with a given finite amount of training data, the best generalization performance will be achieved if the right balance is struck between the accuracy on that training set and the capacity of the machine to learn any training set without error [3]. Thus capacity control is important for good generalization. SVMs allow for good generalization capability since the number of parameters required for "capacity control" is very small. Hence they provide a trade-off between accuracy and generalization capability. The concept of capacity is not discussed here; the interested reader is referred to Vapnik [27] and Burges [3] for a detailed discussion of capacity. Secondly, an SVM condenses all the information contained in the training set relevant to classification into the support vectors. This effectively reduces the size of the training set, identifies the most important points, and makes it possible to perform classification efficiently. Finally, SVMs are quite naturally designed to perform classification in high-dimensional spaces, especially where the data is generally linearly non-separable. SVMs can be used to construct models which are simple enough to analyze mathematically yet complex enough for real-world applications. The training of an SVM is actually the solution of a quadratic programming problem (see Appendix A). The quadratic problem is usually dense and quite large. However, a number of techniques have been developed to deal with large quadratic programming (QP) problems and have hence made SVMs more accessible. Some of the strategies include "chunking" [25], decomposition into sub-problems [19], and a number of active set methods. Sequential Minimal Optimization (SMO) [21] is an algorithm that can be used for fast training of SVMs. Joachims [14] describes an algorithm for training large SVMs; the software implementing this algorithm is called SVMlight. Both SMO and SVMlight can be classified as active set methods. In this paper SVMlight was used for implementation. The rest of the paper is organized as follows. Section 2 describes the state of the art in gender classification. The proposed classification scheme and the experimental setup are described in Section 3.
Experimental results are presented in Section 4, followed by discussion in Section 5. Finally, Section 6 concludes the paper.
2 Previous Work
The earliest attempts at applying computer vision techniques to gender recognition were reported in 1991. Cottrell and Metcalfe [8] used neural networks for face, emotion and gender recognition. In their network, "Empath", 40 features were extracted from 64 × 64 pixel images via an autoencoder network and were given as inputs to another network for training and recognition. Their experiments on a small data set of 20 individuals for gender recognition reported perfect results. Golomb et al. [11] trained a fully connected two-layer neural network, (Sexnet), to identify gender from 30 × 30 human face images. Their experiment on a set of 90 photographs (45 males and 45 females) gave an average error rate of 8.1%. Similar to the above techniques, Tamura et al. [24] applied a multilayer neural network to classify gender from face images of multiple resolutions, ranging from 32 × 32 to 16 × 16 to 8 × 8 pixels, with error rates of 10%, 10% and 13%, respectively. Brunelli and Poggio [2] used a different approach in which a set of geometrical features (e.g., pupil to eyebrow separation, eyebrow thickness, and nose width) was computed from the frontal view of a face image without hair information. The results on a database of 20 male and 20 female images show an average error rate of 21%. Gutta et al. [13] proposed a hybrid method that consists of an ensemble of neural networks (RBF networks) and inductive decision trees (DTs) with Quinlan's algorithm. Experimental results on the FERET face image database show that the best average error rate of their hybrid classifier was 4%. Gutta et al. [12] further used a mixture of experts consisting of ensembles of radial basis functions and report an accuracy rate of 96%. Inductive decision trees and SVMs are used to decide which of the experts should be used to determine the classification output and to restrict the support of the input space. In a recent work Moghaddam [16] used low-resolution 21 × 12 thumbnail faces processed from 1755 images from the FERET database. SVMs were used for classification of gender and were compared to traditional pattern classifiers like linear, quadratic, Fisher Linear Discriminant, nearest neighbor and Radial Basis Function (RBF) classifiers. The author observes that the SVMs outperform the other methods. A very favorable classification with an error rate of 3.4% was reported in their work. The difference in performance between low- and high-resolution tests with SVMs was only 1%, which shows the degree of robustness and relative scale invariance for classification. The task of machine recognition of speech and speaker identification has been researched extensively. One of the earliest works related to gender recognition was by Childers et al. [5]. The experiments were carried out on clean speech obtained from a very small database of 52 speakers. A follow-up study [5,6] concluded that gender information is time invariant, phoneme invariant, and speaker independent for a given gender.
ing speech to a parametric representation such as LPC, Cepstrum, or reflection coefficients is a more appropriate approach for gender recognition than using fundamental frequency and formant feature vectors. Fussell [10] extracted cepstral coefficients for very short 16 ms segments of speech to perform gender recognition using a simple Gaussian classifier. The accuracy achieved was about 93% when the classifier was trained and tested on a particular phoneme of a particular class, and the performance decreased for different phonemes and further for out-of-class testing. Slomka and Sridharan [22] tried further to optimize gender recognition for language independence. The results show that the combination of mel-cepstral coefficients and an average estimate of pitch gave the best overall accuracy of 94.5%. The past work related to gender recognition points out the major challenges in solving this problem. The individual attempts at gender recognition using either the audio or the visual cue point out the advantages and shortcomings of each approach. For example, the visual gender classifier is not invariant to head orientation and requires fully frontal facial images. While it may function quite well with standard "mugshots" (e.g. passport photos), the inability to recognize gender from different viewpoints is a limitation which can affect its utility in unconstrained imaging environments. This problem can be addressed by taking the speech cue into account, but speech could be very noisy in a social environment or may not always be available to the system.
3
Overview of the Proposed Approach
Gender classification was carried out independently using the visual and speech cues, respectively. The overall objectives of the experimental investigation are: 1) to compare the performance of gender classification based on visual and audio cues, and 2) to compare the performance of the SVM with two simple classifiers, namely, the nearest prototype neighbor and the k-nearest neighbor classifiers. The block diagram depicting the overall scheme for gender classification is shown in Fig. 1. From the training data, features are extracted and subsequently used to train the classifier. The feature vectors extracted from test samples are fed to the classifier, which then classifies each test sample into one of the predefined classes. In the subsequent subsections, the experimental procedures for visual and audio cues are described. In this paper the results of experiments performed on visual and speech data collected from different sources are presented. Since the experiments were performed independently on unrelated datasets, it is difficult to make conclusive statements regarding which feature set (visual data or speech data) provides better discrimination. Even so, valuable insight can still be gained from the experiments performed. For the visual data, two sets of features were experimented with:
– Raw image data. The gray level intensity values of the pixels in the image comprised the feature vector. – Image data projected into an eigenspace.(Principal Component Analysis) A principal components analysis on the training set was performed and the first few eigenvectors were chosen to approximate the data. The projections on the space spanned by these eigenvectors comprised the feature vector. For speech data, cepstral coefficients [18] were extracted. The first 12 coefficients of the Cepstrum formed the feature vector. Three classifiers, namely the SVM, the k-nearest neighbor and the nearest prototype neighbor were used on each of the feature sets. The performance of the SVM was benchmarked against the performance of the k-nearest neighbor and the nearest prototype neighbor. A cross validation approach was used in our experiments. The pitch was also extracted from speech samples. Since the pitch feature is a one-dimensional “vector” only the nearest prototype neighbor and the k-nearest neighbor were used.
Fig. 1. Block diagram: Gender classification system (training data → feature extraction → training; test data → feature extraction → classification → output)
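To make the pipeline of Fig. 1 concrete, a small sketch (toy data, not the paper's database) runs the same train/test split through the three classifiers compared in this work; the "nearest prototype neighbor" is approximated here by scikit-learn's NearestCentroid.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 50)), rng.integers(0, 2, 200)
X_test, y_test = rng.normal(size=(50, 50)), rng.integers(0, 2, 50)

classifiers = {
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "k-nearest neighbor (k=5)": KNeighborsClassifier(n_neighbors=5),
    "nearest prototype neighbor": NearestCentroid(),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)        # feature extraction assumed already done
    acc = clf.score(X_test, y_test)  # classify the test samples
    print(f"{name}: accuracy = {acc:.3f}")
```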
3.1
Vision-Based Gender Classifier
A large database consisting of frontal un-occluded face images was used for our experiments. It is generally accepted that if the training and test data are similar (for instance, if they are drawn from the same database consisting of images collected under very similar conditions), the recognition rate will be high. However, it may not always be possible to control the conditions under which the data is collected. Moreover, for real-world applications the classification must be robust to these variations. In order to explore this aspect, image data was collected from different sources. Thus there was a significant amount of variation within the database in terms of size of images, their resolution, illumination, and lighting conditions. The database consisted of 1640 images (883 male and 757
female). A five-fold cross validation procedure was used, i.e., roughly 4/5th of the data was used for training and the remaining 1/5th for testing. This was followed by four subsequent rotations. The average size of the training set was 1312 (706 male and 606 female) and that of the test set was 328 (177 male and 151 female). A face detector was used to detect faces in the images. The face detector used in this research was the neural network based face detector developed by Yeasin et al. [29]. The faces detected were extracted and rescaled if necessary to a 20 × 20 pixel size. Contrast normalization was performed on these images. The feature vector was a 400 dimensional vector in which each component represented the intensity value of a pixel. Two sets of experiments were performed using the facial images. In the first set of experiments, the intensity values of the pixels in the images were inputs to the classifier. In the second set of experiments these vectors were projected into an eigenspace spanned by the training vectors. The eigenspace based method proposed by Murase and Nayar [17] was used for this purpose. The training vectors are projected into a space spanned by the K eigenvectors (where K ≪ d, the dimension of the training vectors) and these coefficients may be used to train the classifier. In our experiments the value of K was 50. Thus the dimension of the resulting training data was 50 as compared to 400 in the original data. Fig. 2 shows the plot of the percentage of information contained by the most significant eigenvectors versus the number of eigenvectors. The following quantity was used as the measure of the total information contained in the first K eigenvectors:

p = \frac{\sum_{i=1}^{K} \sigma_i^2}{\sum_{i=1}^{d} \sigma_i^2},

where d is the dimension of the training vectors and \sigma_i is the singular value corresponding to the ith eigenvector. From Fig. 2 it is evident that more than 98.8% of the total energy is contained in the first 50 eigenvectors. Moreover, it was observed that there was no significant change in performance when 100 significant eigenvectors were used. Hence K = 50 was determined to be an adequate number of eigenvectors to represent the data. The software SVMlight [14] was used for training and classification in all our experiments. The results are discussed in Section 4. Although in our experiments the recognition rate using the raw data was better than that of the eigenspace based approach, the following are potential advantages of the eigenspace based approach:
– The dimension of the vectors in the eigenspace is much lower than that of the raw feature vectors.
– Real time computation benefits may be obtained.
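A small sketch of the eigenspace projection step, under the assumption that the energy criterion above is computed from a standard PCA; the 400-dimensional vectors and the 98.8% target follow the text, but the data here is synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1312, 400)   # synthetic stand-in for 20x20 face vectors

pca = PCA(n_components=400)
pca.fit(X)

# p = (sum of first K squared singular values) / (sum of all squared singular values)
energy = np.cumsum(pca.explained_variance_) / np.sum(pca.explained_variance_)
K = int(np.searchsorted(energy, 0.988) + 1)   # smallest K capturing >= 98.8% energy
print("chosen K:", K)

X_projected = pca.transform(X)[:, :K]         # K-dimensional training features
```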
Fig. 2. Plot of percentage of total energy contained in the first few eigenvectors versus number of eigenvectors (x-axis: number of eigenvectors, 0 to 100; y-axis: percentage of energy captured, 98 to 100)
3.2
Speech-Based Gender Classifier
Unlike in vision, in speech the dimensionality of the feature vector is low, and hence it was sufficient to use a database of smaller size. Two different sets of features extracted from the speech database were experimented with. The design of the speech-based classifier was accomplished in four main stages: 1) data collection, 2) feature extraction, 3) system training and 4) performance evaluation. The first step of the experiment was to collect enough data for training the classifier. Data Collection: The only major restriction on the selection of the speech data was the balance of male and female samples within the database. The ISOLET Speech Corpus was found to meet this criterion and was chosen for the experiment. ISOLET is a database of letters of the English alphabet spoken in isolation. The database consists of 7800 spoken letters, two productions of each letter by 150 speakers. The database was very well balanced with 75 male and 75 female speakers. A total of 447 utterances (255 male and 192 female) were used. A three-fold cross validation procedure was used, i.e., two-thirds of the data was used for training and the remaining one-third for testing, followed by two subsequent rotations. The average size of the training set was 298 (170 male and 128 female) and that of the test set was 149 (85 male and 64 female). The samples were chosen so that both the training and testing sets had mixed utterances. Once the data was collected the next step was to extract feature parameters from these utterances.
Cepstral Feature-Based Gender Classification: Speech exhibits significant variation from instance to instance for the same speaker and text. The amount of speech generated by short utterances is quite large. Whereas these large amounts of information are needed to characterize the speech waveform, the essential characteristic of the speech process changes relatively slowly, permitting a representation requiring significantly less data. Thus feature extraction for speech aims at reducing data while still retaining unique information for classification. This is accomplished by windowing the speech signal. Speech information is primarily conveyed by the short time spectrum, the spectral information contained in about a 20 ms time period [18]. Previous research in gender classification using speech has shown that gender information in speech is time invariant, phoneme invariant, and speaker independent for a given gender. Also, research has shown that using parametric representations such as LPC or reflection coefficients as features for speech is practically plausible. Hence, cepstral features were used for gender recognition. Different sets of features extracted from the speech database were experimented with. The input speech waveform is divided into frames of duration 16 ms, in this case with an overlap of 10 ms. Each frame is windowed to reduce distortion and zero padded to a power of two. The next important step of feature extraction is to move the speech signal to the frequency domain via a fast Fourier transform (FFT). The cepstrum was computed by taking the FFT inverse of the log magnitude of the FFT [23]. The cepstrum can be considered the spectrum of the log spectrum. The Mel-warped cepstra [18] are obtained by inserting the intermediate step of transforming the frequency scale to place less emphasis on high frequencies before taking the inverse FFT. The first 12 coefficients of the Cepstrum were retained and given as input for training. Pitch-Based Gender Classification: The ISOLET Speech Corpus was also used to extract pitch information. As mentioned previously, the ISOLET database consists of 150 speakers' speech for the utterance of letters A to Z. A mixed training and testing set was prepared to verify the performance of this feature for its suitability to gender classification. The training set consisted of 252 utterances and the testing set consisted of 100 utterances. A combination of utterances was used to comprise the training and testing sets. The next step was to extract pitch information from the utterances. The pitch of the human voice is a prominent feature distinguishing the gender. The range of pitch for the male voice is 60 Hz to 120 Hz while the range of pitch for the female voice is 120 Hz to 200 Hz [20]. The pitch feature from the speech utterances was extracted using the COLEA Matlab software tool for speech analysis [7]. The average lowest frequency value f0 was used as the feature. The training data comprised 252 one dimensional "vectors" and the size of the test data was 100. The nearest prototype and the k-nearest neighbor (with k = 5) classifiers were used. The results are shown in Table 4.
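A rough sketch of the cepstral front end described above (16 ms frames with 10 ms overlap, FFT, log magnitude, inverse FFT, first 12 coefficients). The window choice and sampling rate are assumptions, and no mel warping is applied here.

```python
import numpy as np

def cepstral_features(signal, fs=16000, frame_ms=16, hop_ms=6, n_coeffs=12):
    frame = int(fs * frame_ms / 1000)           # 16 ms frames
    hop = int(fs * hop_ms / 1000)               # 10 ms overlap => 6 ms hop
    nfft = 1 << (frame - 1).bit_length()        # zero-pad to a power of two
    window = np.hamming(frame)                  # assumed window
    feats = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame] * window
        spectrum = np.fft.rfft(x, nfft)
        log_mag = np.log(np.abs(spectrum) + 1e-10)
        cepstrum = np.fft.irfft(log_mag, nfft)  # inverse FFT of the log magnitude
        feats.append(cepstrum[:n_coeffs])       # keep the first 12 coefficients
    return np.array(feats)

features = cepstral_features(np.random.randn(16000))   # 1 s of synthetic audio
print(features.shape)
```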
4
Experimental Results
One of our experiments was to examine the most suitable features for the task of gender classification. A secondary aim was to verify each feature’s performance on a fairly large size database. In addition to exploring these three features there was particular interest in (i) the performance of SVM classification using speech and (ii) the comparison of the performance of SVM against simple classifiers. 4.1
Vision Based Gender Classifier
As mentioned earlier, two sets of features were used. The first one was the raw image data (intensity values of the pixels) and the other was the image data projected into an eigenspace. A five-fold cross validation procedure as described in Section 3.1 was used. The performance of the SVMs was compared with that of the k-nearest neighbor and the nearest prototype neighbor. The best overall accuracy on the test set obtained using the SVM was 95.31% for raw data and 90.94% for data projected into the eigenspace. The corresponding values for the k-nearest neighbor (k = 5) were 84.85% (raw data) and 84.55% (data projected into eigenspace). For the nearest prototype neighbor the values were 74.55% (raw data) and 72.81% (data projected into eigenspace). The best performance obtained using each classifier for each of the three categories, namely male, female and overall, are shown in Tables 1 and 2, respectively. The results corresponding to the raw image data are shown in Table 1 and those corresponding to the eigenspace based method are shown in Table 2, respectively. It was observed that the performance of the SVM was better than that of the nearest prototype neighbor and the k-nearest neighbor on all datasets.
Table 1. Results for vision based gender classification on raw image data

Classifier                              Male     Female   Overall
SVM with Gaussian RBF kernel            94.77%   95.95%   95.31%
SVM with Quadratic Polynomial kernel    89.32%   90.54%   87.19%
k-nearest neighbor                      93.02%   77.63%   84.85%
Nearest prototype neighbor              79.09%   80.92%   74.55%
4.2
Results for Speech-Based Gender Classification
Two different types of features extracted from the speech database, namely the cepstral feature and the pitch were experimented with.
Table 2. Results for vision based gender classification on image data projected to an eigenspace

Classifier                              Male     Female   Overall
SVM with Gaussian RBF kernel            92.13%   87.16%   90.94%
SVM with Quadratic Polynomial kernel    87.79%   93.24%   90.31%
k-nearest neighbor                      92.13%   76.31%   84.55%
Nearest prototype neighbor              74.01%   78.29%   72.81%
Table 3. Results for speech based gender classification using cepstral features

Classifier                              Male      Female    Overall
SVM with Gaussian RBF kernel            100.00%   100.00%   100.00%
SVM with Quadratic Polynomial kernel    100.00%   100.00%   100.00%
k-nearest neighbor                      98.82%    89.06%    94.63%
Nearest prototype neighbor              84.70%    51.56%    70.47%
Results of Classification Using Cepstral Feature: The dimensionality of the feature vectors was significantly lower (12 cepstral coefficients were extracted per utterance and used for experimentation.) as compared to the vision based classification. A total of 447 utterances (255 male and 192 female) were used. A three-fold cross validation procedure was used i.e., two-thirds of the data was used for training and the remaining one-third for testing followed by two subsequent rotations. The best recognition accuracy obtained was found to be 100%. The results are shown in Table 3. Results of Classification Using Pitch: The average lowest frequency value f0 was used as the feature. The training data comprised of 252 one dimensional “vectors” and the size of the test data was 100. The nearest prototype and the k-nearest neighbor (with k = 5) classifiers were used. The results are shown in table 4. The training and test data sets were switched and the experiments were repeated. These experiments also gave equally good results, indicating that even a reasonably small number of training data can provide good classification performance.
Table 4. Results for gender recognition using pitch

Classifier                   Male     Female   Overall
k-nearest neighbor           94.52%   99.06%   96.43%
Nearest prototype neighbor   97.26%   98.11%   97.61%
5
Discussion
Humans can easily tell the difference between men and women. A number of studies have shown that gender judgments are made very fast and that they proceed independently of the perception of identity [1,4]. Human beings also have the ability to learn and remember the patterns they hear and see and to associate a category with each pattern (e.g., age, gender, and emotional state of the person). It is important to note that in the context of machine learning, gender classification using visual data has some inherent limitations. For example, it may function quite well if a frontal view of the face is available to the system, but the inability to recognize gender from different viewpoints can affect its utility in unconstrained imaging environments. Additionally, vision-based gender classification also suffers from the inherent limitations of appearance-based methods. These problems can be partially addressed by taking the speech cue into account. Both visual and audio cues were explored for the purpose of gender classification. The classification performance of each cue (visual and audio) was tested on a large data set. The experimental results suggest that the speech signal could provide better discrimination than the visual signal. The suitability of the SVM classifier for the feature sets used in our experiments was evaluated by comparing the performance using the SVM with that obtained from the k-nearest neighbor and the nearest prototype neighbor. It was found that the SVM-based classifier performed better than the other "simpler" classifiers. This is due to the relatively sparse nature of the high dimensional training data, which accounts for the poor performance of the nearest prototype neighbor and the k-nearest neighbor classifiers for the visual data - a manifestation of the "curse of dimensionality". Even in the case of cepstral features extracted from speech data, where the dimension of the feature vectors was much smaller, the performance of the SVM was significantly better.
6
Conclusions
In this paper, gender classification using SVMs for both audio and visual cues has been investigated. For the visual cue, both raw image data and a principal component analysis based approach were used for the experimentation. Similarly, for
the audio cue cepstral and pitch features were used. Experimental results suggest that the speech signal could provide better discrimination as compared to the visual signal. Another advantage with the speech signal is that the dimension of feature vectors is much smaller than that of the visual signal. This could be potentially exploited in real-time applications. It is to be noted that since the visual and speech data used for the experiments were from different sources and not from the same set of subjects, general conclusions cannot be drawn regarding which feature provides better discrimination. This is proposed to be addressed by experimenting on a multimodal database such as the M2VTS [9]. The performance of the SVM was compared to that of two simple classifiers, namely the nearest prototype neighbor and the k-nearest neighbor. It was observed that the SVM outperformed the other two classifiers, strongly suggesting that it is more suitable for classification in high dimensional spaces. To exemplify the performance of the proposed approach, experimentation was done on a large database using a cross validation approach. The best overall classification rates for the SVM obtained using face images and cepstral features were 95.31%, 100% respectively. It was also observed that pitch of a human voice is a good distinguishing feature for the task of gender classification. In this case a simple classifier could be used for gender classification with a little compromise in the recognition accuracy.
Acknowledgments This work was supported in part by the NSF CAREER Grant IIS-97-33644 and NSF Grant IIS-0081935.
A
Appendix
Overview of Support Vector Machines
The main idea behind the SVM algorithm is to separate two classes with a surface which maximizes the margin. This is equivalent to minimizing a bound on the generalization error. The decision surface can be represented in the form:

g(x) = \sum_{i=1}^{l} y_i \lambda_i K(x, x_i) + b.
The class label of the vector x is sgn(g(x)) where, yi ∈ {-1,1} is the class label of the data point xi , (the set of xi i = 1, . . . , l is a subset of the training set), K(.) is one of the many possible kernel functions, λi is a multiplier (a Lagrange multiplier) corresponding to xi and b is the bias. The Lagrange parameters (Λ = (λ1 , λ2 , . . . , λl )T ) are obtained as the solution of a convex quadratic programming problem. The xi s are the only data points in the training set relevant to classification since the decision surface is expressed in terms of these points alone. These points are referred to as the Support Vectors. We consider three cases as in Osuna et. al. [19, Section 2.3].
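A literal toy rendering of the discriminant above: given support vectors x_i, labels y_i, multipliers λ_i and a bias b (all invented numbers here), the class of x is the sign of g(x).

```python
import numpy as np

def rbf_kernel(x, xi, gamma=1.0):
    return np.exp(-gamma * np.sum((x - xi) ** 2))

def g(x, support_vectors, labels, lambdas, b):
    # g(x) = sum_i y_i * lambda_i * K(x, x_i) + b
    return sum(y * lam * rbf_kernel(x, xi)
               for xi, y, lam in zip(support_vectors, labels, lambdas)) + b

# Invented support set, for illustration only.
sv = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
y = [+1, -1]
lam = [0.7, 0.7]
b = 0.05

x = np.array([0.2, 0.1])
print("class label:", np.sign(g(x, sv, y, lam, b)))
```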
1. Linear Classifier and Linearly Separable Problem
This case corresponds to finding a hyperplane for linearly separable data. The dual problem is considered since this can be easily generalized to include linearly non-separable data. The quadratic programming problem is:

Maximize
F(\Lambda) = \Lambda \cdot \mathbf{1} - \frac{1}{2}\,\Lambda \cdot D\Lambda
subject to \Lambda \cdot y = 0, \quad \Lambda \ge 0,    (1)

where \Lambda = (\lambda_1, \ldots, \lambda_l)^T is the vector of Lagrange multipliers, y = (y_1, y_2, \ldots, y_l)^T is the vector of class labels of the support vectors, and D is a symmetric l × l matrix with elements D_{ij} = y_i y_j\, x_i \cdot x_j. The decision function is:

f(x) = \mathrm{sign}\Big(\sum_{i=1}^{l} y_i \lambda_i^* (x \cdot x_i) + b^*\Big),    (2)

where x is the test vector to be classified into one of the two predefined classes ({1, -1}), l is the number of support vectors, x_i is the ith support vector, \lambda_i^* is the Lagrange parameter of the ith support vector, and b^* is the bias.

2. Linearly Non-Separable Case: Soft Margin Hyperplane
The soft margin hyperplane is a generalization of the first case. A penalty is incurred for every training example which "violates" the separation by the hyperplane, and the aim is to find a separating hyperplane which incurs the least penalty overall. The penalty is expressed in the form of a penalty term C, which is the penalty per unit distance from the separating hyperplane (on the wrong side of it). The dual problem may be formulated as (Eqn. 30 in Osuna et al. [19]):

Maximize
F(\Lambda) = \Lambda \cdot \mathbf{1} - \frac{1}{2}\,\Lambda \cdot D\Lambda
subject to \Lambda \cdot y = 0, \quad \Lambda \le C\mathbf{1}, \quad \Lambda \ge 0.    (3)

The decision function is the same as in equation 2.
3. Nonlinear Decision Surfaces
The SVM problem formulation can be easily generalized to include nonlinear decision surfaces (for linearly non-separable data) with the aid of kernels. The basic idea behind nonlinear decision surfaces may be outlined as below.
(a) Map the input space into a (higher dimensional) feature space such that the data is linearly separable in the feature space. Specifically, an input vector x is mapped into a (possibly infinite) vector of "feature" variables as below:

x \rightarrow \phi(x) = (a_1\phi_1(x), a_2\phi_2(x), \ldots, a_n\phi_n(x), \ldots),    (4)

where \{a_i\}_{i=1}^{\infty} are real numbers and \{\phi_i\}_{i=1}^{\infty} are real functions.
(b) Find the optimal hyperplane in the feature space. The optimal hyperplane is defined as the one with the maximal margin of separation between the two classes. (This has the lowest capacity for the given training set.)
It is to be observed that in all problem formulations and the final decision function, the vectors enter into the expressions only in the form of dot products. It is thus convenient to introduce a kernel function K such that K(x, y) = \phi(x) \cdot \phi(y). Using this quantity the quadratic programming problem and the decision function are as follows:

Maximize
F(\Lambda) = \Lambda \cdot \mathbf{1} - \frac{1}{2}\,\Lambda \cdot D\Lambda
subject to \Lambda \cdot y = 0, \quad \Lambda \le C\mathbf{1}, \quad \Lambda \ge 0,    (5)

where y = (y_1, y_2, \ldots, y_l)^T and D is a symmetric l × l matrix with elements D_{ij} = y_i y_j K(x_i, x_j). The decision function is

f(x) = \mathrm{sign}\Big(\sum_{i=1}^{l} y_i \lambda_i^* K(x, x_i) + b\Big).
With the aid of kernels (and considering the dual problem) the SVM problem formulation for this case is almost identical to that for the linearly separable case. A significant advantage with kernels is that all computations can be
performed in the input space itself rather than in the high dimensional space. It is to be noted that the mapping function φ(.) need not be known at all since all the required manipulations can be performed just by knowing the kernel function. Osuna et al. [19, pages 11-14] provide an overview of kernels. A fairly detailed overview may be found in Burges [3]. Table 5 contains a listing of commonly used kernels.

Table 5. List of commonly used kernels

Name of kernel                    Formula                                      Parameters
Linear                            K(x, y) = x · y                              None
Polynomial kernel                 K(x, y) = (1 + x · y)^d                      d (degree of the polynomial)
Gaussian Radial Basis Function    K(x, y) = exp(−||x − y||^2 / (2σ^2))         σ
Multilayer Perceptron             K(x, y) = tanh(κ x · y − θ)                  κ, θ
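For reference, the kernels of Table 5 can be written out directly as below; the parameter values used here are arbitrary.

```python
import numpy as np

def linear(x, y):
    return np.dot(x, y)

def polynomial(x, y, d=2):
    return (1.0 + np.dot(x, y)) ** d

def gaussian_rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def mlp_kernel(x, y, kappa=1.0, theta=0.0):
    # Note: the sigmoid kernel satisfies Mercer's condition only for some kappa, theta.
    return np.tanh(kappa * np.dot(x, y) - theta)

u, v = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(linear(u, v), polynomial(u, v), gaussian_rbf(u, v), mlp_kernel(u, v))
```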
References 1. V. Bruce, A. Burton, N. Dench, E. Hanna, P. Healey, O. Mason, A. Coombes, R. Fright, and A. Linney. Sex discrimination: How do we tell the difference between male and female faces? Perception, 22, 1993. 154 2. R. Brunelli and T. Poggio. Hyperbf networks for gender classification. IUW, 92:311–314. 146 3. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121–167, 1998. 145, 158 4. A. Burton, V. Bruce, and N. Dench. What’s the difference between men and women? evidence from facial measurement. Perception, 22:153–176, 1993. 154 5. D. Childers, K. Wu, K. Bae, and D. Hicks. Automatic recognition of gender by voice. In Proceedings of the IEEE ICASSP-88, pages 603–606, 1988. 146 6. D. G. Childers and K. Wu. Gender recognition from speech. part 2: Fine analysis. Journal of the Acoustical Society of America, pages 1841–1856, 1991. 146 7. COLEA. A matlab software tool for speech analysis. http://www.utdallas.edu/∼loizou/speech/colea.htm. 151 8. G. Cottrell, J. Metcalfe, and E. Face. Gender and emotion recognition using holons. In R. P. Lippman, J. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems, volume 3, pages 564–571. Morgan Kaufmann., 1991. 146 9. M2VTS database. http://www.tele.ucl.ac.be/projects/m2vts/m2fdb.html. 155 10. J. Fussell. Automatic sex identification from short segments of speech. In Proceedings of the IEEE ICASSP-91, Toronto, Canada, pages 409–412, 1991. 147 11. B. A. Golomb, D. T. Lawrence, and Terrence J. Sejnowksi. Sexnet: A neural network identifies sex from human faces. In Richard P. Lippmann, John E. Moody, and David S. Touretzky, editors, Advances in Neural Information Processing Systems, volume 3, pages 572–579. Morgan Kaufmann Publishers, Inc., 1991. 146
12. S. Gutta, J. R. J. Huang, P. Jonathon, and H. Wechsler. Mixture of experts for classification of gender, ethnic origin, and pose of human faces. IEEE Trans. on Neural Networks, 11(4):948–960, 2000. 146 13. S. Gutta, H. Wechsler, and P. J. Phillips. Gender and ethnic classification of human faces using hybrid classifiers. In Proc. of the IEEE International Conference on Automatic Face and Gesture Recognition, pages 194–199, 1998. 146 14. T. Joachims. Making large-scale svm learning practical. In B. Sch¨ olkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT-Press, 1999. Software downloadable from ftp://ftp-ai.cs.unidortmund.de/pub/Users/thorsten/svm light/current/svm light.tar.gz. 145, 149 15. Thorsten Joachims. Text categorization with support vector machines: learning with many relevant features. In Claire N´edellec and C´eline Rouveirol, editors, Proceedings of ECML-98, 10th European Conference on Machine Learning, number 1398, pages 137–142, Chemnitz, DE, 1998. Springer Verlag, Heidelberg, DE. 145 16. B. Moghaddam and M. Yang. Gender classification with support vector machines. In Proc. of 4th IEEE Intl. Conf. on Automatic Face and Gesture Recognition, 2000. 146 17. S. Nayar, S. Nene, and H. Murase. Real-time 100 object recognition system. Technical Report CUCS-019-95, Columbia University, 1994. 149 18. A. M. Noll. Cepstrum pitch determination. Journal of the Acoustical Society of America, 41:293–309, 1967. 148, 151 19. Edgar E. Osuna, Robert Freund, and Federico Giorsi. Support vector machines:training and applications. Technical report, MIT Artificial Intelligence Laboratory and Center for Biological and Computational Learning Department of Brain and Cognitive Sciences, March 1997. 145, 155, 156, 158 20. Eluned S parris and Michael J Carey. Language independent gender identification. In (ICASSP), volume 2, pages 685–688, 1996. 151 21. John C. Platt. Fast training of svms using sequential minimal optimization. In B. Sch¨ olkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods Support Vector Learning, pages 185–208. MIT Press, 1998. 145 22. S. Slomka and Sridharan. S. Automatic gender identification optimized for language independence. In IEEE TENCON - Speech and Image Technologies for Computing and Telecommunications, pages 145–148, 1997. 147 23. S. Stevens and J. Volkmann. The relation of pitch to frequency: A revised scale. American Journal of Psychology, 53:329–353, 1940. 151 24. S. H. Tamura, Kawai, and H. Mitsumoto. Male/female identification from 8 x 6 very low resolution face images by neural network¨ o. Pattern Recogntion, 29(2):331– 335, 1996. 146 25. V. Vapnik. Estimation of Dependencies Based on Empirical Data. Springer-Verlag, 1982. 145 26. Vladimir N. Vapnik. The nature of statistical learning theory. Springer Verlag, Heidelberg, DE, 1995. 145 27. V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995. 145 28. Y. Yang. An evaluation of statistical approaches to text categorization. Journal on Information Retrieval, 1998. 145 29. M. Yeasin and Y. Kuniyoshi. Detecting and tracking human face and eye using a space-variant vision sensor and an active vision head. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 168–173, 2000. 149
Analysis of Nonstationary Time Series Using Support Vector Machines Ming-Wei Chang1 , Chih-Jen Lin1 , and Ruby C. Weng2 1
Department of Computer Science and Information Engineering National Taiwan University, Taipei 106, Taiwan
[email protected], http://www.csie.ntu.edu.tw/~cjlin 2 Department of Statistics National Chengchi University, Taipei 116, Taiwan
Abstract. Time series from alternating dynamics have many important applications. In [5], the authors propose an approach to solve the drifting dynamics. Their method directly solves a non-convex optimization problem. In this paper, we propose a strategy which solves a sequence of convex optimization problems by using modified support vector regression. Experimental results showing its practical viability are presented and we also discuss the advantages and disadvantages of the proposed approach.
1
Introduction
Time series from dynamical systems arise in different real-world applications. It is therefore important to find algorithms which can solve this kind of problem. In this paper, we are interested in the drifting dynamics model. That is, at some time steps, the series is a mixture of several operating modes. Our task is to find the weight of each mode. This problem has been studied in, for example, [6,7,5]. Among them, [5] is a straightforward approach for the analysis of switching and drifting dynamics, where a non-convex global optimization problem is solved. Here, we consider the same formulation but tackle it by solving a sequence of convex problems. To be more precise, we hope to separate the mixed series into different functions (modes) and obtain their weights that reflect how the series are mixed. Therefore, we propose an iterative procedure which alternates between approximating the function of each mode and updating its weight. To approximate the function of each mode, we consider a modified support vector regression (SVR) formulation. Recently the support vector machine (SVM) [11] has been a promising method for data classification and regression. However, its use in time series modeling has not been exploited much. An earlier work using SVR for time series segmentation is [3], which considers a simpler problem of switching dynamics. This paper is organized as follows. In Section 2, we define the problem and present our approach. Section 3 demonstrates experimental results on some data
sets. Discussion about the advantages and disadvantages of the proposed method is in Section 4.
2
An Algorithm for Drifting Dynamics
Given a stationary time series {y_t}, usually we can model it by using y_t = f(x_t), where x_t = [y_{t-1}, y_{t-2}, \ldots, y_{t-d}]^T, and d is the embedding dimension. However, this is not true if the time series is non-stationary. For example, if the time series is from switching dynamics, we can not determine one unique f to satisfy the whole sequence. Instead, several functions may have to be used. If m is the number of modes in this system, then we find f_1, \ldots, f_m for modeling the time series. That is,

y_t = f_{r_t}(x_t), \quad \forall t = 1, \ldots, l,    (1)

where l is the length of the time series. The index r_t \in \{1, \ldots, m\} means functions alter according to r_t. Many papers have tackled this issue by using neural networks. See [10,9,4] and references therein. Recently an approach using support vector machines is [3]. One more difficult problem is how to model time series from drifting dynamics. For this type of time series, y_t can not be determined by one single function but by a mixture of several functions. We can assume that there are m functions such that

y_t = \sum_{i=1}^{m} p_i^t f_i(x_t), \quad \forall t = 1, \ldots, l,    (2)

with

\sum_{i=1}^{m} p_i^t = 1, \quad \forall t = 1, \ldots, l,

and 0 \le p_i^t \le 1, \forall t = 1, \ldots, l, i = 1, \ldots, m. This kind of problem is very complicated because of the high degrees of freedom in the formula (2). In other words, there are simply too many possible choices of p_i^t and f_i(x_t). In [5], when proposing an approach for drifting dynamics, the authors had to make some assumptions such that the problem is easier. In this paper, we will follow these assumptions; for example, we assume that the number of functions is known a priori. Furthermore, we consider problems where for each i, p_i^t, t = 1, \ldots, l, is slowly changing. If we take this problem as an optimization issue, then it has the following form:

\min_{p_i^t, f_i} \sum_{t=1}^{l} \Big(y_t - \sum_{i=1}^{m} p_i^t f_i(x_t)\Big)^2.    (3)

Unfortunately, (3) is a difficult global optimization problem. In [5], the authors use Nadaraya-Watson kernel estimators to model each f_i:

f_i(x_t) = \frac{\sum_{k=1}^{l} y_k K(x_k, x_t) (p_i^k)^2}{\sum_{k=1}^{l} K(x_k, x_t) (p_i^k)^2},    (4)
where p_i^k become the only variables. K(x_k, x_t) = e^{-\gamma \|x_k - x_t\|^2} is usually called the RBF kernel. Then they solve a complex non-convex problem:

\min_{p_i^t} \sum_{t=1}^{l} \Big(y_t - \sum_{i=1}^{m} p_i^t f_i(x_t)\Big)^2 + \lambda_1 \sum_{t=2}^{l} \sum_{i=1}^{m} \big(p_i^t - p_i^{t-1}\big)^2
subject to \sum_{i=1}^{m} p_i^t = 1, \quad 0 \le p_i^t \le 1, \quad t = 1, \ldots, l, \; i = 1, \ldots, m,    (5)

with f_i defined in (4). Here, \lambda_1 serves as a penalty parameter so that p_i^t and p_i^{t-1} are close. This reflects that p_i^t, t = 1, \ldots, l, are slowly changing. Here, we propose to separate (3) into two easier parts: modeling f_i and updating the weights p_i^t. At first, we initialize each p_i^t with a random value in [0, 1] while keeping \sum_{i=1}^{m} p_i^t = 1. Then, we fix p_i^t and model f_i of the system. After obtaining an approximate f_i, we then fix each f_i(x_t) to find the best p_i^t. We repeat this process until some stopping criteria are satisfied.
2.1
Modeling the Dynamical System
Unlike [5] which used Nadaraya-Watson kernel estimators, in this paper, we use a modified support vector regression (SVR) formulation to model the dynamical system. In particular, we approximate each f_i(x) with a linear function w_i^T \phi(x) + b_i, where x is mapped to a higher dimensional space by \phi. Note that the original formulation of support vector regression is:

\min_{w, b, \xi, \xi^*} \frac{1}{2} w^T w + C \sum_{i=1}^{l} (\xi_i + \xi_i^*)
subject to y_i - (w^T \phi(x_i) + b) \le \epsilon + \xi_i,
(w^T \phi(x_i) + b) - y_i \le \epsilon + \xi_i^*,
\xi_i, \xi_i^* \ge 0, \; i = 1, \ldots, l,    (6)

where \xi_i is the upper training error (\xi_i^* is the lower) subject to the \epsilon-insensitive tube |y - (w^T \phi(x) + b)| \le \epsilon. The constraints imply that we would like to put most data x_i in the tube |y - (w^T \phi(x) + b)| \le \epsilon. If x_i is not in the tube, there is an error \xi_i or \xi_i^* which we would like to minimize in the objective function. For traditional regression, \epsilon is always zero and data are not mapped into higher dimensional spaces. Hence, SVR is a more general and flexible treatment on regression problems. Note that SVR uses a linear loss function \sum_{i=1}^{l} (\xi_i + \xi_i^*) instead of the quadratic one \sum_{i=1}^{l} (\xi_i^2 + (\xi_i^*)^2). The parameters which control the regression quality are the cost of error C, the width of the tube \epsilon, and the mapping function \phi.
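As a concrete reference point for formulation (6), the sketch below fits a standard ε-SVR (scikit-learn's SVR, not the modified formulation developed next) to a toy series, showing the roles of C, ε, and the kernel that plays the part of the implicit map φ.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

# C: cost of errors outside the tube; epsilon: half-width of the insensitive tube.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma=0.5)
model.fit(x.reshape(-1, 1), y)

print("support vectors used:", len(model.support_))
print("prediction at x = pi:", model.predict([[np.pi]]))
```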
Next, we discuss how to modify SVR for our purpose. In (5), if p_i^t are fixed scalars, it reduces to an unconstrained problem:

\min_{f_i} \sum_{t=1}^{l} \Big(y_t - \sum_{i=1}^{m} p_i^t f_i(x_t)\Big)^2.    (7)
With f_i(x) = w_i^T \phi(x) + b_i, \frac{1}{2} w_i^T w_i, i = 1, \ldots, m, as regularization terms, and the linear \epsilon-insensitive loss function, (7) is extended to

\min_{w_i, b_i, \xi, \xi^*} \frac{1}{2} \sum_{i=1}^{m} w_i^T w_i + C \sum_{t=1}^{l} (\xi_t + \xi_t^*)
subject to -\epsilon - \xi_t^* \le \sum_{i=1}^{m} p_i^t (w_i^T \phi(x_t) + b_i) - y_t \le \epsilon + \xi_t,
\xi_t \ge 0, \; \xi_t^* \ge 0, \; t = 1, \ldots, l.    (8)

From (7) to (8), an important difference is the addition of regularization terms. They avoid data overfitting and smooth the functions. Note that \sum_{i=1}^{m} p_i^t b_i can be considered as a single variable b. Then with the condition \sum_{i=1}^{m} p_i^t = 1, we can simply have b_i = b, i = 1, \ldots, m. Usually we solve SVR through its dual. Here, we also derive the dual of (8) through its Lagrangian:

L = \frac{1}{2} \sum_{i=1}^{m} w_i^T w_i + C \sum_{t=1}^{l} (\xi_t + \xi_t^*)
  - \sum_{t=1}^{l} \alpha_t \Big(\epsilon + \xi_t + y_t - \sum_{i=1}^{m} p_i^t w_i^T \phi(x_t) - b\Big)
  - \sum_{t=1}^{l} \alpha_t^* \Big(\epsilon + \xi_t^* - y_t + \sum_{i=1}^{m} p_i^t w_i^T \phi(x_t) + b\Big)
  - \sum_{t=1}^{l} \mu_t \xi_t - \sum_{t=1}^{l} \mu_t^* \xi_t^*,    (9)

where \alpha, \alpha^*, \mu, and \mu^* are nonnegative Lagrangian multipliers. By calculating the gradient of L on w_i, we have

w_i = -\sum_{t=1}^{l} \alpha_t p_i^t \phi(x_t) + \sum_{t=1}^{l} \alpha_t^* p_i^t \phi(x_t) = \sum_{t=1}^{l} (-\alpha_t + \alpha_t^*) p_i^t \phi(x_t).    (10)
In addition,

\frac{\partial L}{\partial \xi_t} = 0 = C - \alpha_t - \mu_t,    (11)
\frac{\partial L}{\partial \xi_t^*} = 0 = C - \alpha_t^* - \mu_t^*,    (12)
\frac{\partial L}{\partial b} = \sum_{t=1}^{l} \alpha_t - \sum_{t=1}^{l} \alpha_t^* = 0.    (13)

Using (10),

w_i^T w_i = (\alpha - \alpha^*)^T \begin{pmatrix} p_i^1 & & \\ & \ddots & \\ & & p_i^l \end{pmatrix} K \begin{pmatrix} p_i^1 & & \\ & \ddots & \\ & & p_i^l \end{pmatrix} (\alpha - \alpha^*),

where K is an l by l matrix and K_{ij} = \phi(x_i)^T \phi(x_j) is the kernel function. By combining (10)-(13) back into (9), the dual problem is

\min_{\alpha, \alpha^*} \frac{1}{2} (\alpha - \alpha^*)^T Q (\alpha - \alpha^*) + \epsilon\, e^T (\alpha + \alpha^*) + y^T (\alpha - \alpha^*)
subject to \sum_{t=1}^{l} \alpha_t - \sum_{t=1}^{l} \alpha_t^* = 0,
0 \le \alpha_t, \alpha_t^* \le C, \; t = 1, \ldots, l,    (14)

where e is the vector of all ones, y \equiv [y_1, \ldots, y_l]^T, and

Q = \sum_{i=1}^{m} \mathrm{diag}(p_i)\, K\, \mathrm{diag}(p_i).

Here, we denote p_i \equiv [p_i^1, \ldots, p_i^l]^T and diag(p_i) is a diagonal matrix with the elements of p_i on the diagonal. Immediately, we also know that (14) is still a convex optimization problem. We can further prove that if K is positive definite, then (14) is a strictly convex problem:

Theorem 1. If K is positive definite, then so is Q.

Proof. Since Q is positive semi-definite, if the result is wrong, then there exists a vector v \ne 0 such that

v^T Q v = 0.    (15)

Using the definition of Q, we assume v = [v^1, \ldots, v^l]^T, so

v^T Q v = \sum_{i=1}^{m} v_i^T K v_i,
where v_i = [v_i^1, \ldots, v_i^l]^T and v_i^t \equiv v^t p_i^t, \forall t = 1, \ldots, l. Since K is positive definite, (15) implies v_i = 0, i = 1, \ldots, m. Therefore,

v^t = v^t \sum_{i=1}^{m} p_i^t = \sum_{i=1}^{m} v_i^t = 0, \quad \forall t = 1, \ldots, l.

Then v = 0 contradicts our assumption.
After \alpha and \alpha^* are obtained, our approximate functions are

f_i(x) = \sum_{t=1}^{l} p_i^t (-\alpha_t + \alpha_t^*) K(x_t, x) + b.

We modify LIBSVM [2], a library for support vector machines, to solve the new formulation (14).
2.2
Updating pti
Based on the current models (i.e., f_i, i = 1, \ldots, m), we then find the best p_i^t. This is via fixing f_i(x_t) in (5) and solving an optimization problem with variables p_i^t. To be more precise, it has the same form as (5) except that in (5), f_i(x_t) is a function of p_i^t but here it is a constant. Then the optimization problem becomes convex. To handle the linear constraint \sum_{i=1}^{m} p_i^t = 1, we add a penalty term in the objective function so an optimization problem with only bounded constraints is obtained:

\min_{p_i^t} \sum_{t=1}^{l} \Big(y_t - \sum_{i=1}^{m} p_i^t f_i(x_t)\Big)^2 + \lambda_1 \sum_{t=2}^{l} \sum_{i=1}^{m} \big(p_i^t - p_i^{t-1}\big)^2 + \lambda_2 \sum_{t=1}^{l} \Big(1 - \sum_{i=1}^{m} p_i^t\Big)^2
subject to 0 \le p_i^t \le 1, \quad \forall t = 1, \ldots, l, \; i = 1, \ldots, m,    (16)
where \lambda_2 is a coefficient for the penalty term. We choose the software TRON by Lin and Moré [8] to find the optimal solution. TRON implements a Newton's method which requires the first order (gradient) and the second order (Hessian) information. Therefore, we work on the AMPL interface [1] of TRON which can automatically generate and use the gradient and Hessian provided the objective function is available.
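An illustrative way (not the authors' TRON/AMPL setup) to carry out this bounded update with a general-purpose solver: the weight matrix is flattened into a vector, and the smoothness and sum-to-one penalties follow (16).

```python
import numpy as np
from scipy.optimize import minimize

def update_weights(F, y, P0, lam1=0.5, lam2=100.0):
    # F[t, i] = f_i(x_t), held fixed; P0 is the current (l, m) weight matrix.
    l, m = F.shape

    def objective(p_flat):
        P = p_flat.reshape(l, m)
        fit = np.sum((y - np.sum(P * F, axis=1)) ** 2)
        smooth = lam1 * np.sum((P[1:] - P[:-1]) ** 2)
        sum_to_one = lam2 * np.sum((1.0 - P.sum(axis=1)) ** 2)
        return fit + smooth + sum_to_one

    res = minimize(objective, P0.ravel(), method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * (l * m))
    return res.x.reshape(l, m)

# Tiny synthetic check: two modes, ten time steps.
rng = np.random.default_rng(2)
F = rng.normal(size=(10, 2))
y = 0.7 * F[:, 0] + 0.3 * F[:, 1]
P = update_weights(F, y, np.full((10, 2), 0.5))
print(P.round(2))
```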
2.3
Stopping Criterion
A stopping criterion is needed for the proposed algorithm. We define the error function as

\sum_{t=1}^{l} \Big(y_t - \sum_{i=1}^{m} p_i^t f_i(x_t)\Big)^2.
Then we consider the following stopping condition:

\frac{|\widehat{\mathrm{err}} - \mathrm{err}|}{|\mathrm{err}|} \le 0.02,

where \widehat{\mathrm{err}} and err are the errors of two consecutive iterations.
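Putting the two sub-steps and the stopping rule together, a high-level sketch of the alternating procedure is given below; fit_modes and update_weights stand for the modified SVR of Section 2.1 and the bounded problem of Section 2.2, and are assumed to be available.

```python
import numpy as np

def drifting_dynamics(X, y, m, fit_modes, update_weights, tol=0.02, max_iter=50):
    l = len(y)
    # Random initial weights, normalized so each row sums to one.
    P = np.random.rand(l, m)
    P /= P.sum(axis=1, keepdims=True)

    prev_err = np.inf
    for _ in range(max_iter):
        models = fit_modes(X, y, P)                  # fix P, fit f_1, ..., f_m
        F = np.column_stack([f(X) for f in models])  # F[t, i] = f_i(x_t)
        P = update_weights(F, y, P)                  # fix f_i, update the weights
        err = np.sum((y - np.sum(P * F, axis=1)) ** 2)
        if abs(err - prev_err) <= tol * abs(err):    # relative-change stopping rule
            break
        prev_err = err
    return models, P
```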
3
Experiments
The same examples [5] are used here to illustrate our approach. We generate time series from drifting dynamics using the Mackey-Glass delay differential equation:

\frac{dx(t)}{dt} = \gamma_{t_d} = -0.1\, x(t) + \frac{0.2\, x(t - t_d)}{1 + x(t - t_d)^{10}}.

The first example consists of five segments with three stationary modes and two non-stationary (mixture) modes. The three stationary modes are established by using the time delay t_d to be 17, 23, and 30, respectively. The mixture is generated as follows:

\frac{dx(t)}{dt} = a\gamma_{17} + b\gamma_{23} + c\gamma_{30}.    (17)

First we obtain 100 points by using the stationary mode with t_d = 17, and then another 100 points as mixtures of three modes by using (17) with a = 0.6, b = 0.3, and c = 0.1. Next, we generate 100 points by using the stationary mode with t_d = 23 and then 100 points by mixing three modes with a = 0.2, b = 0.3, and c = 0.5. Finally, for t = 401, \ldots, 500, it is the stationary mode with t_d = 30. We then apply our iterative method on this data set. For the modified SVR, we set C = 0.5 and \epsilon = 0. The RBF kernel K(x_i, x_j) = e^{-\gamma \|x_i - x_j\|^2} is used with \gamma = 1.0. In (16), we set \lambda_1 = 0.5 and \lambda_2 = 100. The embedding dimension is six. That is, y_t is the one-step ahead value of x_t = [y_{t-1}, \ldots, y_{t-6}]^T. The procedure of our approach is shown in Figure 1. Each sub-figure presents the weights of the three modes. Typically the number of iterations is ten and we show only the first, second, third, and final iterations due to space limitation. We can see that our algorithm works very well for the stationary parts and reasonably well for t = 101 to 200. We fail to precisely solve the period t = 301 to 400 and this situation is the same as in [5]. Nevertheless, the order of the weights of the three modes is still correct. For the second experiment we consider continuous drifting dynamics where there are no stationary parts anymore. Using (17), we generate 500 points by
Fig. 1. First, second, third, and final iterations: three mixture modes in five segments
Fig. 2. Final iteration of the second experiment: continuous drifting systems
setting a = 0.5 + 0.5 sin(\pi t/100), b = 1 - a, and c = 0. That is, two modes are mixed together. To solve this problem, we must modify one parameter of (16) since the changing rate of the weights is higher. We set a smaller \lambda_1 = 0.03 and keep all other parameters the same. The final result is in Figure 2. It can be seen that our method easily separates the mixed series. Interestingly, this continuous drifting case is easier to solve. For the previous example, as the weights are not continuous at t = 100, 200, 300, and 400, the penalty term (p_i^t - p_i^{t-1})^2 of (16) may not be very appropriate at these points.
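A rough sketch of how such a mixed Mackey-Glass series can be generated numerically (simple Euler integration with a fixed step; the step size and initial history are assumptions, not taken from [5]):

```python
import numpy as np

def mixed_mackey_glass(n_points, a, b, c, delays=(17, 23, 30), dt=1.0):
    """Euler-integrate dx/dt = a*gamma_17 + b*gamma_23 + c*gamma_30."""
    max_delay = max(delays)
    x = np.full(n_points + max_delay, 1.2)   # assumed constant initial history
    for t in range(max_delay, n_points + max_delay - 1):
        dx = 0.0
        for w, td in zip((a, b, c), delays):
            xd = x[t - td]
            dx += w * (-0.1 * x[t] + 0.2 * xd / (1.0 + xd ** 10))
        x[t + 1] = x[t] + dt * dx
    return x[max_delay:]

# One mixed segment of the first experiment (a = 0.6, b = 0.3, c = 0.1).
segment = mixed_mackey_glass(100, a=0.6, b=0.3, c=0.1)
print(segment[:5])
```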
4
Discussion
4.1
Avoid Local Minima
The major advantage of our algorithm is that we iteratively solve easier convex optimization problems. Unfortunately, as the goal is still to solve a non-convex problem, sometimes our algorithm can fall into local minima. A possible solution is to consider in the beginning that series are not mixed but only switching dynamics. Then by the competition of experts using approaches in [3,10,9,4], we obtain a better initial set of pti . Then the algorithm is less likely to be trapped into local minima. 4.2
Scaling Models (Functions)
Another possible problem of our approach is that f_i(x) may have values in a larger range than those of the y_t provided. To make f_i(x) in the same range of y_t, t = 1, \ldots, l, we can linearly scale [f_i(x_1), \ldots, f_i(x_l)] back to the interval [y_{min}, y_{max}], where y_{min} \equiv \min_{t=1,\ldots,l} y_t and y_{max} \equiv \max_{t=1,\ldots,l} y_t. Then the scaled values are used for updating p_i^t.
For the above two examples, our approach already works without such mechanisms. It is interesting to see [5] inherently conducts this scaling. In (4), fi (xk ) is actually a weighted average of y1 , . . . , yl . As K(xk , xt )(pki )2 , k = 1, . . . , l are all positive, fi (xt ) must be in [ymin , ymax ]. 4.3
Comparisons with Approaches for Time Series from Switching Dynamics
In an earlier work [3] we consider time series which alter in time. That is, each y_t is related to only one model (see (1)). We also use p_i^t to indicate the weight that the tth point is associated with the ith function. For each time point t, only one of the true p_i^t, i = 1, \ldots, m, is one and the others are zero. Hence, the optimization problem considered is

\min_{p_i^t, f_i} \sum_{i=1}^{m} \sum_{t=1}^{l} p_i^t \big(y_t - f_i(x_t)\big)^2.
When p_i^t are fixed, we employ SVR to approximate f_i:

\min_{w_i, b_i} \frac{1}{2} w_i^T w_i + C \sum_{t=1}^{l} p_i^t (\xi_i^t + \xi_i^{t,*})
subject to -\epsilon - \xi_i^{t,*} \le y_t - (w_i^T \phi(x_t) + b_i) \le \epsilon + \xi_i^t,
\xi_i^t \ge 0, \; \xi_i^{t,*} \ge 0, \; t = 1, \ldots, l.    (18)
Therefore, m SVRs have to be solved. If the changing rate is slow, given \Delta, x_{t-\Delta}, \ldots, x_t, \ldots, x_{t+\Delta} should belong to the same series. Based on this property, there is a method to update p_i^t. Note that (7) can also be considered as the formulation for such time series, as finally we require one p_i^t = 1 and the others to be zero. Then following the approach in this paper, there is an advantage that only one SVR is required in each iteration. Note that the dual problems of (18) and (7) both have 2l variables. If the m SVRs (18) are replaced by (7) and the same rule for updating p_i^t in [3] is used, the solution quality is nearly the same. However, if this particular rule for updating p_i^t is not used and instead (7) and (16), i.e., the whole approach proposed in this paper, are used, the results are less stable. This clearly shows that time series from switching dynamics are much simpler than the drifting case discussed in this paper. For the non-drifting case, the property that eventually only one of p_1^t, \ldots, p_m^t is one provides useful information and makes the problem easier.
4.4
Future Directions
We hope to develop a simple way for updating pti . Then, the optimization problem in Section 2.2 is not needed.
Acknowledgments This work was supported in part by the National Science Council of Taiwan via the grant NSC 90-2213-E-002-111. The authors thank J. Kohlmorgen who kindly provided us the code for generating non-stationary series using the Mackey-Glass equation.
References 1. NEOS AMPL interfaces to TRON. Software available at http://neos.mcs.anl.gov/neos/solvers/BCO:TRON-AMPL/. 165 2. C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm. 165 3. M.-W. Chang, C.-J. Lin, and R. C. Weng. Analysis of switching dynamics with competing support vector machines. In Proceedings of IJCNN, 2002. 160, 161, 168, 169 4. A. Kehagias and V. Petridis. Time-series segmentation using predictive modular neural networks. Neural Computation, 9:1691–1709, 1997. 161, 168 5. J. Kohlmorgen, S. Lemm, G. R¨ atsch, and K.-R. M¨ uller. Analysis of nonstationary time series by mixtures of self-organizing predictors. In Proceedings of IEEE Neural Networks for Signal Processing Workshop, pages 85–94, 2000. 160, 161, 162, 166, 169 6. J. Kohlmorgen, K.-R. M¨ uller, and K. Pawelzik. Segmentation and identification of drifting dynamical systems. pages 326–335, 1997. 160 7. J. Kohlmorgen, K.-R. M¨ uller, J. Rittweger, and K. Pawelzik. Identification of nonstationary dynamics in physiological recordings. Biological Cybernetics, 83:73– 84, 2000. 160 8. C.-J. Lin and J. J. Mor´e. Newton’s method for large-scale bound constrained problems. SIAM J. Optim., 9:1100–1127, 1999. 165 9. K.-R. M¨ uller, J. Kohlmorgen, and K. Pawelzik. Analysis of switching dynamics with competing neural networks. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E78–A(10):1306–1315, 1995. 161, 168 10. K. Pawelzik, J. Kohlmorgen, and K.-R. M¨ uller. Annealed competition of experts for a segmentation and classification of switching dynamics. Neural Computation, 8(2):340–356, 1996. 161, 168 11. V. Vapnik. Statistical Learning Theory. Wiley, New York, NY, 1998. 160
Recognition of Consonant-Vowel (CV) Units of Speech in a Broadcast News Corpus Using Support Vector Machines C. Chandra Sekhar, Kazuya Takeda, and Fumitada Itakura Center for Integrated Acoustic Information Research Dept. of Information Electronics, Nagoya University, Nagoya, Japan
Abstract. This paper addresses the issues in recognition of the large number of subword units of speech using support vector machines (SVMs). In conventional approaches for multi-class pattern recognition using SVMs, learning involves discrimination of each class against all the other classes. We propose a close-class-set discrimination method suitable for large-class-set pattern recognition problems. In the proposed method, learning involves discrimination of each class against a subset of classes confusable with it and included in its close-class-set. We study the effectiveness of the proposed method in reducing the complexity of multi-class pattern recognition systems based on the one-against-the rest and one-against-one approaches. We discuss the effects of symmetry and uniformity in size of the close-class-sets on the performance for these approaches. We present our studies on recognition of 86 frequently occurring Consonant-Vowel units in a continuous speech database of broadcast news.
1
Introduction
Approaches to large vocabulary continuous speech recognition are based on acoustic modeling of subword units of speech such as context-dependent phones (diphones and triphones) [11] and syllables [7]. Recognition of these subword units is a large-class-set pattern classification problem because of the large number (typically, a few thousands) of units. Hidden Markov models and neural network models (such as multilayer perceptrons and recurrent networks) are commonly used for acoustic modeling of subword units [11] [9]. Recently, support vector machine based approaches have been explored for tasks such as recognition of isolated utterances of vowels [3], recognition of a small set of consonants in continuous speech [13], and acoustic modeling of context-independent phone units [6]. Extension of these approaches to recognition of context-dependent units involves large-class-set pattern classification. In many languages, the Consonant-Vowel (CV) units have the highest frequency of occurrence among different forms of subword units. Therefore, recognition of CV units with a good accuracy is crucial for development of a speech recognition system. Confusability (partial acoustic similarity) among several CV units is high mainly because of the similarities in the speech production mechanism of the CV utterances. Recognition
of utterances of letters in E-set of English alphabet is known to be a difficult task [14]. Since the number of confusable classes among CV units is much larger, recognition of CV units is a more challenging task. When the classes are confusable, the emphasis of the classification methods should be on discrimination between classes [4], and the generalization ability of the classifier should be based on the property of closeness between the data of classes [16]. Pattern classification using support vector machines (SVMs) meets these requirements. In this paper, we address the issues in recognition of a large number of CV units using SVMs. When the number of classes is large, each class may be close to only a subset of classes. A close-class-set may be identified for each class based on the confusability between the classes. It is expected that the data points belonging to a particular class and the classes in its close-class-set fall in adjacent regions in the pattern space. The data points of the other classes are expected to be far from the region of the class under consideration, and may not be important in construction of the decision boundary for that class. In such a case, the decision boundary may be constructed by discriminating a class against the classes in its close-class-set only. In the close-class-set discrimination method, we exploit the property of SVMs that the decision boundary for a class is determined mainly by the data points near the boundary. In this paper, we demonstrate that the close-class-set discrimination method can be used to reduce the complexity of recognition systems for a large number of subword units of speech. The organization of the paper is as follows: In Section 2, we review different approaches to multi-class pattern recognition using support vector machines and then present the proposed close-class-set discrimination method. In Section 3, we present our studies on recognition of continuous speech segments of a large number of CV units in a broadcast news database.
2
Multi-class Pattern Recognition Using SVMs
Support vector machines are originally designed for two-class pattern classification. Multi-class pattern recognition problems can be solved using a direct extension of the support vector learning method for multiple classes [15] or using a combination of binary SVMs [1] [5]. The direct method involves solving a large optimization problem and may not be suitable for large-class-set pattern recognition. The combination method uses a number of binary SVMs and a decision strategy to decide the class of the input pattern. Now we present the approaches for decomposition of the learning problem in multi-class pattern recognition into several two-class learning problems. The training data set {(xi , ci )} consists of N examples belonging to M classes. The class label ci ∈ {1, 2, ..., M }. We assume that the number of examples for each class is the same, i.e., N/M .
2.1
Full-Class-Set Discrimination Based Approaches
Conventional approaches for multi-class pattern recognition involve discrimination of a class against all the other classes. Two commonly used approaches are the one-against-the-rest approach and the one-against-one approach [15].

One-Against-the-Rest Approach In this approach, an SVM is constructed for each class by discriminating that class against the remaining (M − 1) classes. The recognition system based on this approach consists of M SVMs. A test pattern x is classified by using the winner-takes-all decision strategy, i.e., the class with the maximum value of the discriminant function D(x) is assigned to it. All the N training examples are used in constructing an SVM for a class. The SVM for class l is constructed using the set of training examples and their desired outputs, {(xi, yi)}. The desired output yi for a training example xi is defined as follows:

    yi = +1,  if ci = l
    yi = −1,  if ci ≠ l                                                  (1)
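As an illustration, the one-against-the-rest decomposition and the winner-takes-all decision can be sketched as follows. The sketch uses scikit-learn's SVC purely for convenience; the function names and parameter handling are assumptions made for illustration, not the configuration used in the experiments reported later.

```python
# Minimal sketch: one-against-the-rest decomposition with the winner-takes-all decision.
import numpy as np
from sklearn.svm import SVC

def train_one_against_rest(X, c, num_classes, **svm_params):
    """Train one binary SVM per class l, with yi = +1 if ci = l and yi = -1 otherwise (Eq. 1)."""
    machines = {}
    for l in range(num_classes):
        y = np.where(c == l, 1, -1)          # all N training examples are used for every class
        machines[l] = SVC(kernel="rbf", **svm_params).fit(X, y)
    return machines

def winner_takes_all(machines, x):
    """Assign the class whose discriminant function value D(x) is the largest."""
    x = np.asarray(x).reshape(1, -1)
    scores = {l: svm.decision_function(x)[0] for l, svm in machines.items()}
    return max(scores, key=scores.get)
```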
An optimal hyperplane is constructed to separate N/M positive examples from N(M − 1)/M negative examples. The much larger number of negative examples leads to an imbalance, resulting in the dominance of negative examples in determining the decision boundary [5]. The extent of imbalance increases with the number of classes and is significantly high for large-class-set pattern recognition.

One-Against-One Approach In this approach, an SVM is constructed for every pair of classes by training it to discriminate the two classes. The number of SVMs used in this approach is M(M − 1)/2. An SVM for a pair of classes (l, m) is constructed using 2N/M training examples belonging to the two classes only. The desired output yi for a training example xi is defined as follows:

    yi = +1,  if ci = l
    yi = −1,  if ci = m                                                  (2)
The decision strategies to determine the class of a test pattern x in this approach are: (1) Maxwins strategy, and (2) Decision directed acyclic graph (DDAG) strategy [10]. In the maxwins strategy, a majority voting scheme is used. If Dlm(x), the value of the discriminant function of the SVM for a pair of classes (l, m), is positive, then class l wins a vote. Otherwise, class m wins a vote. The class with the maximum number of votes is assigned to the test pattern. When there are multiple classes with the maximum number of votes, the class with the maximum value of the average magnitude of discriminant functions (AMDF) is assigned. The average magnitude of discriminant functions for class l that wins n votes is defined as follows:

    AMDFl = (1/n) Σ_m |Dlm(x)|                                           (3)
where the summation is over all m against which class l has a win. The maxwins strategy needs evaluation of discriminant functions of all the SVMs in deciding the class of a test pattern. The DDAG strategy uses a rooted binary acyclic graph with M(M − 1)/2 internal nodes and M leaf nodes [10]. Each internal node corresponds to the SVM for a pair of classes. For deciding the class of a test pattern, the graph is traversed starting at the root node. The path of traversal is based on the values of the discriminant functions of SVMs. If the value at a node is positive, then the left child of the node is visited next. Otherwise, the right child of the node is visited next. The traversal is complete when a leaf node is reached. The class of the leaf node reached is assigned to the test pattern. In this strategy, it is necessary to evaluate the discriminant functions of the SVMs corresponding to the (M − 1) nodes in the traversal path only. The two approaches for decomposition of the M-class learning problem can be viewed as different methods for training the recognition system to discriminate each class against the remaining (M − 1) classes. These approaches that involve learning to discriminate among all the classes are called full-class-set discrimination based approaches. For large-class-set pattern recognition, these approaches have the following limitations: The one-against-the-rest approach has the drawback of high imbalance between the number of positive and negative examples. The one-against-one approach needs construction of a large number of SVMs. We propose close-class-set discrimination based approaches [12] to overcome these limitations.
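Both decision strategies can be sketched as follows. The dictionary of pairwise discriminant functions and the list-elimination form of the DDAG traversal are implementation assumptions consistent with the description above, not the authors' code.

```python
# Sketch of the maxwins (with AMDF tie-break, Eq. 3) and DDAG decision strategies.
# pairwise[(l, m)] is assumed to be the discriminant function Dlm of the SVM for the
# pair (l, m), l < m, with Dlm(x) > 0 taken as evidence for class l.
import numpy as np

def maxwins_decision(pairwise, x, num_classes):
    votes = np.zeros(num_classes, dtype=int)
    magnitude = np.zeros(num_classes)
    for (l, m), Dlm in pairwise.items():
        d = Dlm(x)
        winner = l if d > 0 else m
        votes[winner] += 1
        magnitude[winner] += abs(d)
    tied = np.flatnonzero(votes == votes.max())
    if len(tied) == 1:
        return int(tied[0])
    amdf = magnitude[tied] / votes[tied]      # average magnitude over the winning pairs
    return int(tied[np.argmax(amdf)])

def ddag_decision(pairwise, x, num_classes):
    """DDAG traversal: only (M - 1) discriminant functions are evaluated."""
    candidates = list(range(num_classes))
    while len(candidates) > 1:
        l, m = candidates[0], candidates[-1]
        d = pairwise[(l, m)](x) if (l, m) in pairwise else -pairwise[(m, l)](x)
        if d > 0:
            candidates.pop()      # evidence against class m: eliminate it
        else:
            candidates.pop(0)     # evidence against class l: eliminate it
    return candidates[0]
```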
2.2 Close-Class-Set Discrimination Based Approaches
Construction of decision boundaries in support vector machines is based on the property of closeness among the data points of the classes being separated. For a suitable pattern representation, the closeness among the data points is dependent on confusability among the classes. In large-class-set pattern recognition problems, generally only a small subset of classes, called the close-class-set, has high confusability with a particular class. In such a case, the decision boundary may be constructed by discriminating a class against the classes in its close-class-set only. In the close-class-set discrimination method, we exploit the property of SVMs that the decision boundary for a class is determined mainly by the data points near the boundary. We consider three different criteria for estimating the confusability between a pair of classes: (1) Description of classes, (2) A measure of similarity between the example patterns of classes, and (3) Margin of pairwise classification SVMs for the pair of classes. In the description based method, a class is represented using a set of F descriptive features (f1, f2, ..., fF). Confusability between a pair of classes (l, m) based on their description may be quantified as:

    Cdes(l, m) = Σ_{j=1}^{F} δ(fj(l), fj(m))                             (4)
where fj(l) is the jth feature for class l, and δ(.) is the Kronecker delta function. The estimate Cdes indicates the number of descriptive features that are shared by a pair of classes. For a fixed (user-chosen) value of the minimum degree of partial similarity D, the close-class-set for class l, CCS(l), is defined as:

    CCS(l) = {m | Cdes(l, m) ≥ D},  m ≠ l,  m ∈ {1, 2, ..., M}           (5)
The close-class-sets identified using the description of classes are symmetric, i.e., the close-class-sets of any two classes l and m satisfy the following condition:

    l ∈ CCS(m)  and  m ∈ CCS(l)                                          (6)
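A sketch of the description-based identification of close-class-sets (Eqs. 4-6) is given below; representing each class description as a tuple of feature values is an assumption made for illustration.

```python
# Sketch: description-based confusability (Eq. 4) and close-class-sets (Eqs. 5-6).
def description_confusability(desc_l, desc_m):
    """Cdes(l, m): number of descriptive features shared by the two classes."""
    return sum(1 for fl, fm in zip(desc_l, desc_m) if fl == fm)

def description_close_class_sets(descriptions, min_degree):
    """CCS(l) = {m != l : Cdes(l, m) >= D}; symmetric by construction."""
    classes = list(descriptions)
    ccs = {l: set() for l in classes}
    for i, l in enumerate(classes):
        for m in classes[i + 1:]:
            if description_confusability(descriptions[l], descriptions[m]) >= min_degree:
                ccs[l].add(m)
                ccs[m].add(l)
    return ccs

# Hypothetical usage with (category, place, manner, vowel) descriptions:
# ccs = description_close_class_sets({"ka": ("stop", "velar", "unvoiced", "a"),
#                                     "ga": ("stop", "velar", "voiced", "a")}, min_degree=2)
```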
The size of the close-class-set, denoted by Ncd, indicates the number of classes against which a class is discriminated during learning. For a chosen value of D, the size of the close-class-set may not be the same for all the classes. If there exists a class for every possible combination of F descriptive features, then the value of Ncd is the same for all the classes. Confusability between a pair of classes can also be estimated using a similarity function between the example patterns of the classes [4]. Let s(xi, xj) be a symmetric function that measures similarity between two pattern vectors xi and xj. For SVMs, the similarity measure is meaningful when it is computed in the high-dimensional feature space corresponding to the nonlinear kernel function used. The kernel function K(xi, xj) corresponds to the inner product operation in the high-dimensional feature space. Since the inner product is a meaningful measure of similarity [8], we consider s(xi, xj) = K(xi, xj) as the similarity function. An estimate of confusability between a pair of classes (l, m) is obtained by computing the similarity function between each of n examples of class l and each of n examples of class m, as follows:

    Csim(l, m) = (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} K(xli, xmj)              (7)
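The kernel-based confusability estimate of Eq. (7), together with the fixed-size selection of close-class-sets described next, can be sketched as follows; the Gaussian kernel and its width are placeholders, not the exact configuration used in the experiments.

```python
# Sketch: similarity-measure-based confusability (Eq. 7) and fixed-size close-class-sets.
import numpy as np

def rbf_kernel(x, y, width=50.0):
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * width ** 2))

def similarity_confusability(examples_l, examples_m, kernel=rbf_kernel):
    """Csim(l, m): mean kernel value over all cross-class example pairs."""
    return np.mean([[kernel(xl, xm) for xm in examples_m] for xl in examples_l])

def similarity_close_class_set(target, examples, size, kernel=rbf_kernel):
    """Fixed-size close-class-set: the `size` classes most similar to `target`."""
    scores = {m: similarity_confusability(examples[target], examples[m], kernel)
              for m in examples if m != target}
    return sorted(scores, key=scores.get, reverse=True)[:size]
```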
The close-class-set of a fixed (user-chosen) size, denoted by Ncd, for class l is chosen as follows: For class l, an ordered list of all the other classes is obtained in the decreasing order of the estimates Csim(l, m), m = 1, 2, ..., M, and m ≠ l. The first Ncd classes in the ordered list are included in the close-class-set for class l, CCS(l). It may be noted that this method for identification of close-class-sets is dependent on the kernel function chosen to implement the SVMs. The margin of a pairwise classification SVM also indicates the confusability between a pair of classes. The margin of an SVM is dependent on the norm of its weight vector w. It is not possible to explicitly compute the norm ||w||. However, the value of the discriminant function indicates the distance of a pattern from the hyperplane. The magnitude of the discriminant function is expected to be small for confusable classes. An estimate of confusability is obtained from the values of the discriminant function for n examples of class l. Depending on whether the examples of class l are used as positive examples or negative examples in
training an SVM for a pair of classes (l, m), an estimate of confusability between them is obtained as follows:

    Cdf(l, m) = (1/n) Σ_{i=1}^{n} Dlm(xli),   if yi = +1                 (8)

    Cdf(l, m) = (1/n) Σ_{i=1}^{n} −Dml(xli),  if yi = −1                 (9)
Here Dlm is the discriminant function of an SVM trained with examples of class l as positive examples, and Dml is the discriminant function of an SVM trained with examples of class l as negative examples. The close-class-set of a fixed size is chosen as in the similarity measure based method, except that the ordered list is obtained in the increasing order of the estimates Cdf(l, m), m = 1, 2, ..., M, and m ≠ l. The discriminant function based method for identification of close-class-sets uses the pairwise SVMs trained in the one-against-one approach based on full-class-set discrimination. It may be noted that the similarity measure based method and the discriminant function based method do not guarantee symmetry among the close-class-sets of classes. In the one-against-the-rest approach based on close-class-set discrimination, the decision boundary for class l is constructed using N/M positive examples and (N · Ncd)/M negative examples belonging to the Ncd classes in CCS(l). The desired output yi is defined as:

    yi = +1,  if ci = l
    yi = −1,  if ci ∈ CCS(l)                                             (10)
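As a sketch, the reduced training set of Eq. (10) for the SVM of a single class can be assembled as follows; the array-based interface is an assumption made for illustration.

```python
# Sketch: training set for one class under one-against-the-rest with close-class-set
# discrimination (Eq. 10). Positives are the N/M examples of the target class;
# negatives are only the examples of the Ncd classes in CCS(target), so the
# imbalance is reduced relative to using all (M - 1) other classes.
import numpy as np

def close_class_set_training_set(X, c, target, close_class_set):
    keep = np.isin(c, [target] + list(close_class_set))
    y = np.where(c[keep] == target, 1, -1)
    return X[keep], y
```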
The extent of imbalance between the number of positive and negative examples in construction of an SVM for a class depends on the value of Ncd. When the close-class-sets of classes are not uniform in size, the extent of imbalance varies for each class. Compared to the full-class-set discrimination based approach, the imbalance is reduced by a factor of (M − 1)/Ncd. In the one-against-one approach based on close-class-set discrimination, the pairwise classification SVMs to be used depend on the sizes of the close-class-sets as well as the extent of symmetry among the close-class-sets. When the close-class-sets are symmetric for all pairs of classes and are of the same size, the number of SVMs is (M · Ncd)/2. Otherwise, the number of SVMs is greater than (M · Ncd)/2. However, it is typically much smaller than M(M − 1)/2. The maxwins decision strategy can be used when the close-class-sets are uniform in size. When the size of the close-class-set is not the same for all the classes, the decision strategy is modified as follows: Consider each of the classes for which the number of votes won by it is equal to the size of its close-class-set. Amongst these classes, the class with the maximum value of AMDF, defined in Eq. (3), is assigned to the test pattern. The number of discriminant functions to be evaluated in making the decision is the same as the number of SVMs used in the system. In the close-class-set discrimination method, the DDAG strategy can be considered only when the close-class-sets are symmetric [12].
In the next section, we address the issues in recognition of continuous speech segments of CV units in a corpus of broadcast news. We study the CV recognition performance for different approaches to multi-class pattern recognition.
3 Studies on Recognition of CV Units
Phonemes, the sounds corresponding to consonants and vowels, are the basic speech units of a language. A consonant is mostly followed or preceded by a vowel to form a speech production unit. The Consonant-Vowel (CV) units have a high frequency of occurrence in a text for many languages. Therefore, the CV segments cover a significant portion of continuous speech. The focus of our studies is on recognition of CV units in Indian languages. In continuous speech, the temporal and spectral characteristics of subword units vary significantly depending on the context and the speaking rate. Automatic speech recognition of broadcast news data has become a challenging research topic in recent years [2]. Because of the channel distortion, the broadcast news speech data is noisy and degraded. Discriminative training based approaches are important for acoustic modeling of subword units occurring in noisy and degraded speech data. In this section, we present our studies on recognition of CV segments excised from continuous speech in a broadcast news corpus for Indian languages built at Speech and Vision Laboratory, Indian Institute of Technology, Madras, India. Speech data of television broadcast news in Telugu, a south Indian language, is collected for about five hours over 20 sessions of news reading by 11 male readers and 9 female readers. The speech data is digitized at a sampling frequency of 16 kHz. Boundaries of CV syllables in continuous speech are manually marked. In our studies, the data in 15 sessions of news reading by 8 male speakers and 7 female speakers is used for training. The data in the remaining 5 sessions by 3 male speakers and 2 female speakers is used for testing. The CV units have a varying frequency of occurrence in the database. There are 33 consonants and 10 vowels in Telugu leading to a total of 330 CV units. We consider 86 CV classes that have a frequency of occurrence greater than 100 in our studies. Out of a total of about 39,750 CV segments in the training data, about 35,450 segments (i.e., about 89%) belong to these 86 CV classes. The frequency of occurrence for these classes in the training data varies in the range of 105 to 1892. The test data includes about 12,800 CV segments belonging to the 86 CV classes.
3.1 Description of CV Classes
Three descriptive features for consonants are: (1) Category of consonant, (2) Place of articulation, and (3) Manner of articulation. Five categories of consonants are: (1) Stop consonants, (2) Affricates, (3) Nasals, (4) Semivowels, and (5) Fricatives [11]. Consonants are produced by a total or partial constriction of vocal tract. The point of constriction along the vocal tract specifies the place of articulation of a consonant. The places of articulation for consonants in Indian languages are: (1) Glottal, (2) Velar, (3) Palatal, (4) Alveolar, (5) Dental,
Table 1. List of 86 Consonant-Vowel (CV) classes

Category of consonant  Place of articulation  Manner of articulation  Consonant  CV classes
Stop consonants        Velar                  Unvoiced                k          ka, kA, ki, ku, kO
                       Velar                  Unvoiced-Aspirated      kh         kha
                       Velar                  Voiced                  g          ga, gA, gi, gu
                       Alveolar               Unvoiced                t.         t.a, t.A, t.i, t.I, t.u
                       Alveolar               Voiced                  d.         d.a, d.A, d.i, d.u
                       Dental                 Unvoiced                t          ta, tA, ti, tI, tu, tU, te, tO
                       Dental                 Voiced                  d          da, dA, di, du, dE
                       Dental                 Voiced-Aspirated        dh         dhA, dhi
                       Bilabial               Unvoiced                p          pa, pA, pi, pu, pO
                       Bilabial               Voiced                  b          ba, bA
                       Bilabial               Voiced-Aspirated        bh         bha, bhA
Affricates             Palatal                Unvoiced                c          ca, cA, ci, cE
                       Palatal                Voiced                  j          ja, jA, ju
Nasals                 Alveolar               Voiced                  n.         n.a
                       Dental                 Voiced                  n          na, nA, ni, nu, nE
                       Bilabial               Voiced                  m          ma, mA, mi
Semivowels             Palatal                Voiced                  y          ya, yA, yi, yu
                       Alveolar               Voiced                  r          ra, rA, ri, ru, rO
                       Dental                 Voiced                  l          la, lA, li, lu, lE, lO
                       Bilabial               Voiced                  w          wa, wA, wi, wE
Fricatives             Alveolar               Unvoiced                sh         shA
                       Palatal                Unvoiced                s.         s.a
                       Dental                 Unvoiced                s          sa, sA, si, su
                       Glottal                Unvoiced                h          ha, hA
and (6) Bilabial. The manner of articulation specifies the presence or absence of vibration of vocal cords and aspiration during production of a consonant. The manners of articulation are: (1) Unvoiced, (2) Unvoiced-aspirated, (3) Voiced, and (4) Voiced-aspirated. Consonants of a language typically correspond to a small subset of all possible combinations of these descriptive features. There are 10 vowels consisting of five short vowels /a/, /i/, /u/, /e/, /o/, and five long vowels /A/, /I/, /U/, /E/ and /O/ in Indian languages. The list of 86 CV classes considered in our studies are given in Table 1. Each CV class is described using the three descriptive features of the consonant, and the vowel present in it. Description based close-class-sets for a CV class can be identified using Eq. (4) and Eq. (5). For a chosen value of the minimum degree of partial similarity D, the size of the close-class-sets is not the same for all the classes. This is because the number of CV classes in a language is much smaller than the number of all possible combinations of descriptive features.
3.2 Representation of CV Segment Data
Depending on the context in continuous speech and the speaking rate, the duration of segments of CV units varies significantly. In the broadcast news data, the average duration of segments for a CV class is in the range of 75 milliseconds to 205 milliseconds. For CV recognition using SVMs, patterns extracted from CV segments should be of a fixed length. The method used for extraction of fixed length patterns from varying duration CV segments is as follows: A CV segment is analyzed frame by frame, with a frame size of 25 milliseconds and a frame shift of 10 milliseconds. The length of a segment, SL, specified as the number of frames, is dependent on its duration as follows:

    SL = (Duration − Framesize)/Frameshift + 1                           (11)
Mel-frequency cepstral coefficients have been shown to give the best performance for broadcast news speech data [2]. Therefore, each frame is represented by a parametric vector consisting of 12 mel-frequency cepstral coefficients, energy, their first order derivatives (delta coefficients), and their second order derivatives (acceleration coefficients). Thus the dimension of each frame is 39. For a chosen pattern length PL specified as the number of frames, the fixed length patterns are obtained by linear compaction or elongation of CV segments. If the segment length SL is greater than PL, a few frames of the segment are omitted. If the segment length SL is smaller than PL, a few frames of the segment are repeated. The linear relationship between the index s of a frame in the CV segment and the index p of a frame in the CV pattern is as follows:

    s = (p · SL)/PL,  p = 0, 1, ..., PL − 1,  s = 0, 1, ..., SL − 1      (12)

The length of the CV segment is also included in the pattern as an additional parameter. We study the performance of CV recognition systems for different values of PL.
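The extraction of a fixed-length pattern from a variable-length segment (Eqs. 11 and 12) can be sketched as follows; rounding the frame index down and the NumPy array layout are assumptions, since the paper states only the linear relationship.

```python
# Sketch: fixed-length CV pattern from an SL x 39 matrix of frame vectors.
import numpy as np

def segment_length(duration_ms, frame_size_ms=25, frame_shift_ms=10):
    """SL of Eq. (11), in frames."""
    return int((duration_ms - frame_size_ms) / frame_shift_ms) + 1

def fixed_length_pattern(frames, pattern_length):
    """Linear compaction/elongation via s = p*SL/PL (Eq. 12); frames are omitted
    when SL > PL and repeated when SL < PL. SL is appended as an extra parameter."""
    SL = frames.shape[0]
    indices = (np.arange(pattern_length) * SL) // pattern_length
    pattern = frames[indices].reshape(-1)          # PL * 39 values
    return np.append(pattern, SL)
```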
3.3 Full-Class-Set Discrimination Based Approaches
The performance of 86-class CV recognition systems using hidden Markov models (HMMs) is compared with the performance of Gaussian kernel SVM based systems. A 5-state left-to-right continuous density HMM using multiple mixtures with diagonal covariance matrix is trained for each class. The number of mixtures is 4 for the CV classes with a frequency of occurrence less than 500 in the training data. For the other classes, the number of mixtures is 8. All the frames of a CV segment are used in training and testing the HMM based system. For the SVM based systems, the width of the Gaussian kernel is 50 and the trade-off parameter C is 100. The SVMs are trained using the one-against-the-rest approach. We consider the pattern lengths of 3, 10 and 14 frames for all the classes. When the length is 3, the pattern typically includes a frame in the consonant region, a frame in the vowel region and a frame in the transition region of a CV
Table 2. Performance of CV recognition systems built using the one-against-the-rest approach based on full-class-set discrimination

                       Accuracy (in %)
Models  PL   NSV   1-Best  2-Best  5-Best
HMMs    —    —     59.71   74.58   88.35
SVMs    3    1370  54.10   67.28   81.30
        10   2472  62.00   74.88   87.21
        14   3707  61.64   75.37   87.62
        ASL  2735  61.01   74.41   85.96
segment. The length of 10 frames corresponds to the average length of all the CV segments in the training data. The average segment length for a CV class is in the range of 6 to 19 frames. For most of the 86 CV classes, the average segment length is less than or equal to 14 frames. Therefore, we also consider the pattern length of 14 frames. For a particular value of P L, patterns are derived from each of the segments in the training and test data using the method explained earlier. Depending on the average segment length for the class and the value of P L, the distortion due to compaction or elongation varies for each class. To minimize the distortion for all the classes, we also consider the average segment length of a class, ASL, as the fixed length of patterns used in training and testing the SVM for that class. The 86-class CV recognition performance of the HMM based system and the SVM based systems trained using different values of P L is given in Table 2. The performance is given as the N -best correct classification accuracy for different values of N on the test data. The N -best accuracy corresponds to the case when the correct class of a test pattern is among the top N alternatives given by the system. The SVM based system gives the highest 1-best accuracy when P L is 10. A larger value of 14 also gives about the same performance. However, the complexity of the system, indicated by the average number of support vectors per class, is significantly higher. When the average segment length of a class is used as P L, the accuracy is marginally reduced and the complexity is marginally higher. For all these systems, the 1-best accuracy is better than that of the HMM based system. For SVMs trained using 3-frame patterns, the accuracy is poor indicating that the loss of information in extracting the patterns from CV segments is high. The 2-best and 5-best accuracies suggest that the correct class is present among the top two alternatives for about 75% of the test patterns and among the five alternatives for about 88% of the test data. This performance is significant considering that many CV classes are confusable and the speech data is noisy and degraded. It is observed that many errors in CV recognition are due to misclassification of short vowels as long vowels and vice versa. In most
Table 3. Performance of different approaches for multi-class pattern recognition. The complexity of recognition systems is indicated by the number of SVMs, the average number of support vectors per SVM (NSV), and the number of discriminant functions evaluated (NDF)

Approach                               No. of SVMs  NSV   NDF   Accuracy (in %)
One-against-the-rest                   86           2472  86    62.00
One-against-one with maxwins strategy  3655         372   3655  58.40
One-against-one with DDAG strategy     3655         372   85    58.47
of such cases, the correct CV class is present in the second position among the alternatives given by the system. We consider the SVM based systems trained using 10-frame patterns for the remaining studies presented in this paper. The performance of the one-against-the-rest approach based system is compared with that of the one-against-one approach based systems in Table 3. The performance of the one-against-one approach based systems is approximately the same for the maxwins strategy and the DDAG strategy. However, their accuracy is less by about 3.5% compared to the one-against-the-rest approach based system.

3.4 Close-Class-Set Discrimination Based Approaches
The close-class-sets based on the description of CV classes are symmetric. However, they are not of the same size for all the classes. The close-class-sets based on the similarity measure and the discriminant function are identified using 20 example patterns of each CV class. For these methods, the close-class-sets are not guaranteed to be symmetric though they are uniform in size. We study the effects of these properties of close-class-sets on recognition of CV segments. One-Against-The-Rest Approach We first consider the close-class-sets identified using the description of classes. The performance of 86-class CV recognition systems is compared for different extents of confusability indicated by the minimum degree of partial similarity D. The four descriptive features used for CV classes are: (1) Category of consonant, (2) Place of articulation of the consonant, (3) Manner of articulation of the consonant, and (4) Vowel. The size of the close-class-sets, Ncd, is in the range of 46 to 83 for D = 1, in the range of 15 to 59 for D = 2, and in the range of 3 to 22 for D = 3. The full-class-set discrimination based system corresponds to the case of D = 0 and Ncd = 85. Even for the same value of D, the extent of imbalance between the number of positive and negative examples varies for each class. The performance of recognition systems
Table 4. Performance for one-against-the-rest approach based on close-class-set discrimination. Close-class-sets are identified using the description of classes

Minimum degree of partial similarity D  Range of Ncd  Average Ncd  NSV   Accuracy (in %)
0                                       85            85           2472  62.00
1                                       46-83         69           2435  61.87
2                                       15-59         36           2246  59.28
3                                       3-22          10           1694  46.25
Table 5. Performance for one-against-the-rest approach based on close-class-set discrimination. Methods for identification of close-class-sets based on the similarity measure and the discriminant function are compared

      Similarity measure based        Discriminant function based
      close-class-sets                close-class-sets
Ncd   NSV    Accuracy (in %)          NSV    Accuracy (in %)
85    2472   62.00                    2472   62.00
36    2286   61.27                    2346   60.26
10    1603   52.34                    1879   54.50
for different values of D is given in Table 4. For D = 1, the average number of support vectors and the classification accuracy is almost the same as that of the full-class-set discrimination based system. For D = 2, the average number of support vectors reduces by 9.1% and the accuracy decreases by 2.7%. For D = 3, the poor accuracy indicates the inadequacy of the classes in the close-class-sets to provide the necessary discrimination. The performance for the similarity measure and discriminant function based methods of close-class-set identification is given in Table 5. The performance is given for Ncd values of 36 and 10 that correspond to the average Ncd in the description based method for D = 2 and D = 3 respectively. For these values of Ncd , the decrease in the average number of support vectors is significant for all the three methods. It is noted that the classification accuracy is better for the similarity measure and discriminant function based methods compared to the description based method. The better performance is mainly due to the uniform size of the close-class-sets that leads to lesser variations in the extents of imbalance for different CV classes. The results of these studies clearly demonstrate that it is possible to form a decision boundary for each class by discriminating it against only 36 confusable classes in its close-class-set, and achieve almost the same performance as that of the full-class-set discrimination based system.
One-Against-One Approach The performance for the one-against-one approach based on close-class-set discrimination using the description of classes for close-class-set identification is given in Table 6. Performance is given for different values of the minimum degree of partial similarity D. For D = 1, the accuracy is almost the same as that of the full-class-set discrimination based system, and the number of pairwise SVMs is reduced by 18%. For D = 2, the accuracy decreases by 2.3% for a reduction in the number of SVMs by about 57%. For D = 3, there is a significant decrease of 8.2% in the accuracy and the number of SVMs reduces by about 87% compared to the full-class-set discrimination based system. Because of the symmetry among close-class-sets, the reduction in the number of SVMs is significant even for D = 1 though the value of average Ncd is large. . For the close-class-sets identified using the similarity measure and the discriminant function based methods, the number of SVMs is dependent on the size Ncd as well as the extent of symmetry among the close-class-sets. The number of SVMs and the classification accuracy are given in Table 7 for different values of Ncd . It is seen that the decrease in the accuracy is less than 2.0% for the close-class-set sizes of 40 and above. For Ncd = 40, the number of SVMs decreases by about 41% for the similarity measure based method, and by about 35% for the discriminant function based method. For the values of Ncd above 20, both the methods give almost the same accuracy. However, the number of SVMs is larger for the discriminant function based method. It is also seen that the extent of symmetry is higher for the similarity measure based close-classsets. Among different methods for close-class-set identification, the symmetry of close-class-sets in the description based method leads to a smaller number of SVMs in the system for approximately the same classification accuracy. In this section, we presented our studies on recognition of CV units. The SVM based systems give a marginally higher classification accuracy than the HMM based system. The best performance is obtained when the pattern length is equal to the average segment duration of all the CV classes. For the full-classset discrimination method, the one-against-the-rest approach based system gives a better performance than the one-against-one approach based systems. For the close-class-set discrimination method, the performance is studied for different methods of close-class-set identification. In the one-against-the-rest approach,
Table 6. Performance for one-against-one approach based on close-class-set discrimination. Close-class-sets are identified using the description of classes

D  Range of Ncd  Average Ncd  No. of SVMs  NDF   Accuracy (in %)
0  85            85           3655         3655  58.40
1  46-83         69           2970         2970  58.33
2  15-59         36           1571         1571  56.11
3  3-22          10           467          467   50.22
Table 7. Performance of CV recognition systems built using one-against-one approach based on close-class-set discrimination. Methods for identification of close-class-sets based on the similarity measure and the discriminant function are compared

                 Similarity measure based        Discriminant function based
                 close-class-sets                close-class-sets
Ncd   M*Ncd/2    No. of SVMs  Accuracy (in %)    No. of SVMs  Accuracy (in %)
85    3655       3655         58.40              3655         58.40
70    3010       3360         58.15              3483         58.02
60    2580       3024         57.96              3207         57.79
50    2150       2616         57.61              2836         57.18
40    1720       2159         56.83              2395         56.42
30    1290       1707         55.36              1919         55.34
20    860        1198         52.32              1366         54.19
10    430        656          48.03              725          50.49
the similarity measure based method gives the best performance. In the one-against-one approach, the description based close-class-sets give a higher reduction in the number of pairwise SVMs. Results of the studies on recognition of 86-class CV segments demonstrate that by learning to discriminate a class against a set of about 36 confusable classes, it is possible to achieve a classification accuracy close to that of the full-class-set discrimination based approaches.
4 Summary and Conclusions
In this paper, we have addressed the issues in recognition of Consonant-Vowel (CV) type subword units of speech using support vector machines. Recognition of CV units involves multi-class pattern recognition for a large number of classes with high confusability among several classes. We have proposed the close-class-set discrimination method for decomposition of the multi-class learning problem. We have demonstrated the effectiveness of the proposed method through experimental studies on recognition of CV units. Theoretical validation of the proposed method will involve a comparison of the capacity estimate for the multi-class pattern recognition system with that of the full-class-set discrimination based system. The proposed method may also be considered for reducing the complexity of the optimization problem in support vector learning for multiple classes [15].
Acknowledgment

This research work is supported by the Japan Society for the Promotion of Science (JSPS) through a post-doctoral fellowship for C. Chandra Sekhar. The authors would like to thank Prof. B. Yegnanarayana, Indian Institute of Technology, Madras, India, for making the broadcast news data available for our studies.
References

1. E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113-141, December 2000.
2. P. Beyerlein, X. Aubert, R. Haeb-Umbach, M. Harris, D. Klakow, A. Wendemuth, S. Molau, H. Ney, M. Pitz, and A. Sixtus. Large vocabulary continuous speech recognition of broadcast news - The Philips/RWTH approach. Speech Communication, 2002.
3. P. Clarkson and P. J. Moreno. On the use of support vector machines for phonetic classification. In Proceedings of ICASSP, pages 585-588, March 1999.
4. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, Inc., 2001.
5. U. Kreßel. Pairwise classification and support vector machines. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 255-268. The MIT Press, 1999.
6. A. Ganapathiraju. Support Vector Machines for Speech Recognition. PhD thesis, Mississippi State University, Mississippi, 2002.
7. A. Ganapathiraju, J. Hamaker, J. Picone, M. Ordowski, and G. R. Doddington. Syllable-based large vocabulary continuous speech recognition. IEEE Transactions on Speech and Audio Processing, 9(4):358-366, May 2001.
8. S. Haykin. Neural Networks - A Comprehensive Foundation. Prentice Hall, 1999.
9. S. Katagiri, editor. Handbook of Neural Networks for Speech Processing. Artech House, 2000.
10. J. Platt, N. Cristianini, and J. Shawe-Taylor. Large margin DAGs for multiclass classification. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 547-553. The MIT Press, 2000.
11. L. R. Rabiner and B. H. Juang. Fundamentals of Speech Recognition. Prentice Hall, 1993.
12. C. Chandra Sekhar, Kazuya Takeda, and Fumitada Itakura. Close-class-set discrimination method for large-class-set pattern recognition using support vector machines. In International Joint Conference on Neural Networks, May 2002.
13. H. Shimodaira, K. Noma, M. Nakai, and S. Sagayama. Support vector machine with dynamic time-alignment kernel for speech recognition. In Proceedings of Eurospeech, pages 1841-1844, September 2001.
14. N. Smith and M. Gales. Speech recognition using SVMs. In Advances in Neural Information Processing Systems, 2001.
15. J. Weston and C. Watkins. Multi-class support vector machines. Technical Report CSD-TR-98-04, Royal Holloway, University of London, May 1998.
16. B. Yegnanarayana. Artificial Neural Networks. Prentice Hall of India, 1999.
Anomaly Detection Enhanced Classification in Computer Intrusion Detection

Mike Fugate and James R. Gattiker
Los Alamos National Laboratory
{fugate,gatt}@lanl.gov
Abstract. This paper describes experiences and results applying Support Vector Machine (SVM) to a Computer Intrusion Detection (CID) dataset. This is the second stage of work with this dataset, emphasizing incorporation of anomaly detection in the modeling and prediction of cyber–attacks. The SVM method for classification is used as a benchmark method (from previous study [1]), and the anomaly detection approaches compare so–called “one class” SVMs with a thresholded Mahalanobis distance to define support regions. Results compare the performance of the methods, and investigate joint performance of classification and anomaly detection. The dataset used is the DARPA/KDD-99 publicly available dataset of features from network packets classified into non–attack and four attack categories.
1 Introduction
This paper describes work with the goal of enhancing capabilities in computer intrusion detection. The work builds upon a study of classification performance that compared various methods of classifying information derived from computer network packets into attack versus normal categories, based on a labeled training dataset [1]. This previous work examined the well-studied dataset and the classification task, described various approaches to modeling the data, emphasizing the application of SVMs, which had not been applied previously to this data, and validated our classification methods against other studies with a detailed presentation of performance. The previous work clears the way, through exploratory data analysis and model validation, toward studying whether and how anomaly detection can be used to enhance performance. The DARPA project that initiated the dataset used here concluded that anomaly detection should be examined to boost the performance of machine learning in the computer intrusion detection task [2]. In this discussion, the term anomaly detection will mean making a model from unlabeled data, and using this model to make some inference about future (or hold-out) data. Our data is a feature vector derived from network packets, which we will call an “example” or “sample”. On the other hand, classification will mean building a model from labeled data, and using that model to classify future (or hold-out) examples.
One technique to meld these approaches is to stage the two techniques, using anomaly detection to segment data into two sets for classification. In our previous work, we observe that the data has substantial nonstationarity[1] between the training set and the temporally and procedurally distinct test set. With classification methods that can be thought of as learning a decision surface between two statistical distributions, performance is expected to degrade significantly when classifying examples that are from regions not well represented in the training set. Anomaly detection can be seen as a problem of learning the density (landscape) or the support (boundary) of a data distribution. Nonstationarity can then be thought of as data that departs from the support of the distribution. Since we can judge that these “anomalous” examples will be classified poorly because they are not representative of the classifier training set, we can treat them differently (or not at all). A second technique examined uses anomaly detection with an assumption that any examples that are different are suspicious, which is an assumption that may or may not be true depending on the application. As in our previous work, this paper does not attempt to address issues in dataset generation or feature selection. The details of the network and data collection process as well as the way in which this “raw data” is transformed into well-defined feature vectors is a very important problem, unfortunately beyond the scope of this study.
2 Dataset Description
The data is described in more detail in [1][2]. Briefly, we are using data derived from a DARPA project which set up a real network and logged normal and attack network traffic. This experiment yielded a training set, and a test set. The test set was recorded after the training set, and is known to reflect somewhat different activity, which is a significant feature of our analysis. The data from this experiment were transformed into a “clean” dataset for the 1999 KDDCup, a competition associated with the Knowledge Discovery and Datamining conference. This dataset has 41 features for every example, with a training and test set size of approximately 500,000 and 300,000 examples, respectively. The data are labeled as attack or normal, and furthermore are labeled with an attack type that, although too fine-grained to allow experimentation, can be grouped into four broad categories of attacks: denial of service (DoS), probe, user to root (u2r), and remote to local (r2l). This is of particular interest since performance was shown previously to be very different for these categories, plausibly because they exhibit distinct nonstationarity. We have found it useful to further segment the dataset. The training set from KDD was broken into three parts to investigate modeling on a stationary dataset: 10% was sampled for model training, 5% for model tuning (adjusting modeling parameters), and the remainder is used for validation (assessment of performance on the stationary data). Although this makes the model training set a small part of the available data it is sufficient. To reach this conclusion the performance was observed as the data size is increased, and through a large range
above this dataset size no significant improvement in generalization performance was seen. Since the object of this study was the investigation of methods and approaches, rather than exhaustive parametric optimization for a final product, the trade-off between marginal performance gains and convenience (mostly in terms of SVM training times) for exploratory investigation is appropriate. The test set remains intact as a separate dataset so that the impact of nonstationarity can be explored – the test and training data are not drawn from a uniformly mixed dataset. Our experience has been that data nonstationarity in on-line classification systems is a significant application issue. The methods chosen assume ordered numeric data. Therefore, ANOVA transformation is applied to all variables, both categorical (by individual values) and real (by intervals). Each discrete subset (value or interval) is modeled by the observed probability of attack in the training data given the category or range, for each variable independently. This results in a transformed dataset of the same size but with a monotonic and consistently scaled metric basis. This dataspace mapping will have a significant effect on the results of the automated learning, and in previous work has shown itself to be a valuable technique in managing learning methods on large datasets.

Table 1. Test data performance comparison of the SVM-RBF and Mahalanobis outlier detection methods

Attack type   % inlier by both   % outlier by SVM   % outlier by Mahal   % outlier by both
Normal        95.24              1.15               1.99                 1.62
DoS           98.17              0.45               0.67                 0.71
Probe         55.11              2.02               32.12                10.75
R2L           91.10              0.12               7.73                 1.04
U2R           40.35              0.44               37.28                21.93
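A sketch of how the per-variable transformation described above could be realized is given below; the quantile binning of real variables into ten intervals and the fallback to the overall attack rate for unseen values are assumptions, since the original preprocessing is not specified at this level of detail.

```python
# Sketch: ANOVA-style mapping of each variable to the observed attack probability
# per category (categorical variables) or per interval (real variables).
import numpy as np

def fit_anova_map(values, is_attack, n_bins=10):
    values = np.asarray(values)
    is_attack = np.asarray(is_attack, dtype=float)
    overall = is_attack.mean()
    if values.dtype.kind in "OUS":                       # categorical: one rate per value
        rates = {v: is_attack[values == v].mean() for v in np.unique(values)}
        return lambda col: np.array([rates.get(v, overall) for v in np.asarray(col)])
    edges = np.quantile(values.astype(float), np.linspace(0.0, 1.0, n_bins + 1))[1:-1]
    bins = np.digitize(values.astype(float), edges)       # real: one rate per interval
    rates = np.array([is_attack[bins == b].mean() if np.any(bins == b) else overall
                      for b in range(n_bins)])
    return lambda col: rates[np.digitize(np.asarray(col, dtype=float), edges)]
```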
2.1 Dataset Nonstationarity
Some data summaries will indicate the nonstationarity present. Figure 1’s plots of the first three principal components show the distinction between the training set and the test set by attack type. Another view of nonstationarity can be observed through prediction performance shown in Table 1. We will use methods to draw a boundary around a dataset, and then check whether new data falls within that boundary. From the presentation in Fig. 1 we expect there to be a distinct difference in the test examples compared to the boundary generated on the training data. We used the training set, including both normals and attack examples, to derive such a boundary (within which lies 98% of the training set). Then, we examine the test set attack data, as to whether it lies within this boundary or not. We use two methods, Mahalanobis distance (MHD) and a Support Vector Machine with radial basis kernel (RBF), to construct the boundary. Table 1 shows the percent
of each attack type in the test set that were called inliers¹ by both methods, that were called outliers by only one method, or that were called outliers by both methods. Table 2 is similar for the validation set (stationary with respect to the training data). We can make two broad observations from this table. First, some attack types have apparently changed significantly in the test set. Second, the methods do not perform identically, since in some cases there are significant portions of the attack that were classified as outliers by one method and inliers by the other. Table 3 shows that the proportion of normals is similar between the training and test sets, but the attacks are not. This is a constructed feature of the datasets, and they are not only nonstationary in frequency, but also in type. This is a representative performance of the method.

Fig. 1. Plots of the four attack types (normal, DoS, Probe, u2r, r2l). In each plot, black points are the training set attacks and green points are the test set attacks.
3 Description of Learning Methods
It is important to note that significant effort was spent investigating alternative parameter settings and learning settings; the results here show only the most successfully optimized results.

¹ We have adopted the term “inliers” to mean those points that are inside the boundary.
Table 2. Validation data performance comparison of the SVM-RBF and Mahalanobis outlier detection methods

Attack type   % inlier by both   % outlier by SVM   % outlier by Mahal   % outlier by both
Normal        90.29              3.78               3.00                 2.93
DoS           98.77              0.70               0.49                 0.04
Probe         67.95              0.25               30.34                1.46
R2L           57.30              0.00               39.61                3.09
U2R           30.00              0.00               48.00                22.00

Table 3. Distribution of categories in the train and test datasets

Attack    Train (%)   Test (%)
Normal    19.69       19.48
DoS       79.24       73.90
PROBE     0.83        1.34
R2L       0.23        5.20
U2R       0.01        0.07

3.1 Anomaly Detection Methods
Mahalanobis Distance Let y be a p × 1 random vector in the Euclidean space R^p. Assume that the mean vector of y is µ and the covariance matrix is Σ. The (squared) Mahalanobis distance from y to µ is defined to be D² = (y − µ)ᵀ Σ⁻¹ (y − µ). The Mahalanobis distance is often used to measure how far a random vector is from the center of its distribution, see [4] and [5]. µ and Σ are unknown and are estimated from data with the sample mean, ȳ, and the sample covariance matrix, Σ̂, respectively. If a future observation has a Mahalanobis distance greater than d(99), the distance that yields the 99th percentile on the training data, then this new observation is considered an outlier, otherwise it is considered an inlier. The equation d_N = (y − ȳ)ᵀ Σ̂⁻¹ (y − ȳ) defines an ellipsoid in R^p. Geometrically, the above procedure for identifying outliers amounts to calling any point outside this ellipsoid an outlier and any point inside the ellipsoid an inlier. This is a very constrained model compared to the flexibility of the SVM methods.

One-class Support Vector Machines Schölkopf et al. [6] adapted the support vector machine framework to estimate the support of a distribution. A fraction, ν, of the observed data to be treated as outliers is specified, and a “small” region, say S, in feature space is found that contains at least (1 − ν) × 100% of the observed data. Any point outside of S is an outlier. In general S need not be an ellipsoid. Assuming that the training data is a random sample from an unknown distribution P, Schölkopf et al. provide a bound on the probability that a new observation drawn from P will be outside of S. Technical details that we do not address can be found in [6].
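Both detectors can be sketched as follows; scikit-learn's OneClassSVM is used as a stand-in for the one-class SVM, and the parameter values (99th-percentile threshold, ν = 0.02 for a 98% support region) are placeholders that only loosely follow the settings described in the text.

```python
# Sketch: Mahalanobis-distance and one-class SVM outlier detectors.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_mahalanobis_detector(X_train, percentile=99.0):
    """Threshold the squared Mahalanobis distance at the chosen training percentile."""
    mean = X_train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))
    def distance(X):
        diff = np.atleast_2d(X) - mean
        return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared distances
    threshold = np.percentile(distance(X_train), percentile)
    return lambda X: distance(X) > threshold                   # True -> outlier

def fit_one_class_svm_detector(X_train, nu=0.02, gamma="scale"):
    """One-class SVM: nu bounds the fraction of training points treated as outliers."""
    model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X_train)
    return lambda X: model.predict(X) == -1                    # -1 -> outlier
```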
Table 4. The reference SVM radial basis function classifier performance

                       Overall                  DoS     Probe   R2L     U2R
                       error%   det%    fp%     det%    det%    det%    det%
% attacks, test data                            91.8    1.7     6.5     0.09
Validation             0.07     99.94   0.11    99.99   99.06   90.02   20.00
Test                   6.86     91.83   1.43    97.30   79.26   18.29   25.88
A considerable amount of effort was spent exploring the relative performance of different SVM kernel and parameter settings. Our explorations led us to a choice of the RBF kernel, considering also linear and polynomial kernels of degrees up to seven.

3.2 Method of Categorization
Since we previously explored in detail the optimized performance of alternative methods for classification [1], in this study we have chosen a single method in order to limit the number of options. The method chosen is Support Vector Machines using the radial basis function kernel [3]. An examination of the performance of this classifier is shown in Table 4. Note that this one performance point does not represent the entire spectrum of performance of the method across different detection rates. This provides indicative performance, and more detail is available in the report [1].

3.3 Anomaly Detection to Preprocess for Classification
Performance in a region of the dataspace well populated in the training data, i.e., the training set support, is expected to be better than overall performance, and therefore also better than the examples outside this support. However, how to treat the performance of the anomalous examples is an open issue. Should they be considered as “normals”, lowering the detection rate, or as “attacks”, raising the false positive rate, should they not be considered at all, or should they be classified using a different methodology or at least a different model? The performance results documented allow the impact of various system assumptions to be assessed.
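The staging of anomaly detection in front of the classifier, and the different ways of accounting for the outlier class that are compared later (Table 8), can be sketched as follows; the policy names and the binary label convention are illustrative assumptions.

```python
# Sketch: classification preceded by anomaly detection, with a configurable policy
# for the examples flagged as outliers.
import numpy as np

def staged_predict(is_outlier, classify, X, outlier_policy="classify"):
    """is_outlier(X) -> boolean array; classify(X) -> array of labels (1 = attack, 0 = normal)."""
    outlier = is_outlier(X)
    labels = classify(X)
    if outlier_policy == "as_attacks":
        labels = np.where(outlier, 1, labels)     # every outlier labeled an attack
    elif outlier_policy == "as_normals":
        labels = np.where(outlier, 0, labels)     # every outlier labeled normal
    # "classify": outliers are treated no differently from inliers
    return labels, outlier
```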
4 Results
In Table 5 we show the results from defining a region of feature space that contains 98% of the training data. Two methods were used to define a region: support vector machines with a radial basis kernel (SVM-RBF) and Mahalanobis distance (MHD). For each type of attack we present the number of observations that are considered inliers and outliers. In addition we also show the distribution of attacks conditional on being an outlier; these are the entries in the column
Table 5. Predicted outliers by known class for the validation and test sets. The %O column shows the percentage of outliers represented by each category

RBF
        validation               test
        in       out     %O      in       out     %O
Norm    79596    5726    68.4    58916    1677    32.7
DoS     341002   2543    30.4    227172   2681    52.3
Probe   3567     62      0.7     3634     532     10.4
R2L     942      30      0.4     16000    189     3.7
U2R     39       11      0.1     177      51      0.9
Total   425146   8372            305899   5130

MHD
        validation               test
        in       out     %O      in       out     %O
Norm    80257    5065    59.5    58408    2185    25.1
DoS     341708   1837    21.6    226680   3173    36.5
Probe   2475     1154    13.6    2380     1786    20.5
R2L     557      415     4.9     14768    1421    16.3
U2R     15       35      0.4     93       135     1.6
Total   425012   8506            302329   8700
labeled %O. Tables 1 and 2 highlight the degree of (dis)agreement between the two methods. The overwhelming number of examples for both the validation and test data correspond to DoS attacks: 79% for the validation data and 74% for the test data. There seems to be a bias on the part of both SVM-RBF and MHD to learn the region of feature space populated by DoS attacks. In the validation set, 68.4% of the outliers identified by RBF are normals and for MHD nearly 60% of the outliers are normals. The outlier selection rate of SVM-RBF on the test set is peculiar. Both SVM-RBF and MHD were trained so that approximately 2% of the observations would be beyond the support. In the test set, SVM-RBF identifies only 1.65% observations as outliers; MHD identifies 2.8% of the test data as outliers. Recall that the test data is nonstationary while the validation set is not. Not only was the distribution of attacks different from the training data, but the types of attacks were also different. Examining performance on the test set we find that for both SVM-RBF and MHD a lower percentage of the outliers are normals. MHD is identifying a much higher percentage of probe, R2L, and U2R attacks as outliers than is SVMRBF. In fact, these three categories is where the nonstationarity of the test data is concentrated. In table 6 for both the validation and test data we show the composition of the predicted classes by attack type. The classifier here is the supervised SVM discriminator described in Section 3.2. In the validation set, nearly all the DoS attacks are being classified as attacks and within the observations classified as attack, but in the test set, while the distribution of attack types is somewhat similar to the validation data, the composition of correctly predicted attacks is quite different. For example, in the validation set, nearly 99% of the probe attacks are identified as attacks but in the test data only 75% are identified as attacks. Given an observation is classified as an attack, there is a 1.03% chance that observation is a probe attack for the validation set and a 1.38% chance if we look at the test set. The results presented in table 7 contrast how the prediction method performs for data considered as inliers versus data identified as outliers. We compare
Table 6. Predicted class by known category for the validation and test sets, using the support vector machine supervised classifier. The %A column shows the percentage of attacks represented by each category

        validation                            test
        Normal   Attack   % Attack   %A       Normal   Attack   % Attack   %A
Norm    85222    100      0.12       0.03     60272    321      0.53       0.14
DoS     69       343476   99.98      98.70    6961     222892   96.97      98.43
Probe   50       3579     98.62      1.03     1043     3123     74.96      1.38
R2L     155      817      84.05      0.23     16143    46       0.28       0.02
U2R     50       0        0.00       0.00     172      56       24.56      0.02
Total   85546    347972              100.00   84591    226438              100.00
Table 7. Prediction: % classified as attack for outliers and inliers, by attack

         SVM-RBF                              MHD
         validation         test              validation         test
         inlier   outlier   inlier   outlier  inlier   outlier   inlier   outlier
Normal   0.12     0.07      0.50     1.55     0.08     0.77      0.21     9.11
DoS      99.98    99.57     97.48    53.64    99.99    97.44     97.86    33.47
Probe    98.74    91.94     82.69    22.18    98.59    98.70     95.63    47.42
R2L      84.93    56.67     0.28     0.53     86.54    80.72     0.19     1.27
U2R      0.00     0.00      31.64    0.00     0.00     0.00      40.86    13.33
performance on the validation and test set using both SVM-RBF and MHD to identify inliers and outliers. It is important to keep in mind that the prediction method was trained on the entire training set and not on just the observations that would be considered inliers. The SVM-RBF method is less likely to call an example from normal, DoS, probe, and R2L an attack if it is classified as an outlier than if it is classified as an inlier. For non–attack examples in the validation data, the prediction model is less likely to call an example identified as an outlier by SVM-RBF an attack than it is if SVM-RBF calls that example an inlier. In contrast, if MHD identifies the example as an outlier the prediction model is more likely to classify that example as an attack than if it is considered an inlier. On the test set, the prediction model is more likely to call a normal example an attack if it is identified as a outlier than if it is identified as an inlier for both SVM-RBF and MHD. A non-attack example in the test set that is called an outlier by MHD is much more likely to be classified as an attack than a normal example called an outlier by SVM-RBF. For DoS attacks in the validation set, the prediction model works about the same on inliers and outliers for both SVM-RBF and MHD; slightly fewer DoS attacks identified as outliers by MHD are classified as attacks than are DoS attacks identified as inliers. On the test set there is a dramatic difference in performance between inliers and outliers. If SVM-RBF or MHD call a DoS attack an inlier the the prediction model classifies nearly 98% of these as attacks.
Table 8. Performance on test set, with different selections of the data by anomaly detection

                      MHD               SVM-RBF
                      det%     fp%      det%     fp%
Overall               90.3     0.5      90.3     0.5
Inliers Only          91.9     0.2      90.9     0.5
Outliers as Normals   89.5     0.2      89.7     0.5
Outliers as Attacks   92.1     3.8      91.0     3.3
However, if SVM-RBF calls a DoS an outlier, only 54% of these are classified as an attack; if MHD identifies the example as an outlier, only 34% of these are predicted to be attacks. Because SVM-RBF identifies so few probe, R2L, and U2R examples as outliers, as shown in Table 5, we should be cautious about any inferences we might want to make with respect to these attack types. For probe attacks from the validation set, the prediction model classifies approximately the same percentage as attacks whether MHD calls the example an inlier or an outlier; if SVM-RBF calls the example an outlier, it is less likely to be classified as an attack than if it is called an inlier. For probe attacks in the test set, the prediction model is less likely to call an outlier an attack than an inlier, for both SVM-RBF and MHD. If MHD calls the example an outlier, the model is more likely to classify it as an attack than if SVM-RBF calls the example an outlier. For R2L attacks in the validation set, inlier classification is approximately 85% correct for both methods. For examples identified as outliers by SVM-RBF, only 57% are classified correctly by the model; in contrast, of the outliers identified by MHD, the model correctly classifies about 81%. This non-parity is an interesting effect, showing that the choice of anomaly detection method can lead to very different performance. The prediction model applied to the test data works poorly with respect to R2L attacks, regardless of whether the example is called an inlier or an outlier. In the validation set the prediction model incorrectly classifies all 50 of the U2R attacks as normal. In the test set, SVM-RBF identifies 51 out of 228 examples as outliers and MHD identifies 135. The prediction model correctly classifies 32% of the SVM-RBF identified inliers and none of the SVM-RBF identified outliers. For MHD, the model correctly classifies 41% of the inliers and 13% of the outliers. Table 8 summarizes the performance of the overall system including anomaly detection. In this evaluation, the simpler MHD method out-performs the SVM-RBF method. As expected, the inliers have better performance in both cases. In a real situation, the outliers must be accounted for, and the results show what happens if we label all of the outliers by default as either attacks or normals. Labeling them as normals lowers the detection rate from the baseline (overall), with some improvement to the false positive rate (even though this improvement is not significant for the SVM-RBF). Labeling outliers as attacks raises the detection rate, but also raises the false-positive rate significantly.
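As a rough illustration of how rows like those in Table 8 can be computed, the sketch below (hypothetical labels and array names, not the authors' code) applies the two default policies for outliers, relabeling every flagged example as an attack or as a normal, and then measures the detection and false-positive rates; restricting the arrays to the non-flagged examples gives the "Inliers Only" row.

```python
import numpy as np

def detection_and_fp_rates(y_true, y_pred, is_outlier, outlier_policy=None):
    """Detection and false-positive rates, optionally overriding the classifier
    on examples that the anomaly detector (SVM-RBF or MHD) flagged as outliers."""
    y = y_pred.copy()
    if outlier_policy == "attack":
        y[is_outlier] = 1          # "Outliers as Attacks" policy
    elif outlier_policy == "normal":
        y[is_outlier] = 0          # "Outliers as Normals" policy
    det = np.mean(y[y_true == 1] == 1)   # fraction of true attacks detected
    fp = np.mean(y[y_true == 0] == 1)    # fraction of normals flagged as attacks
    return det, fp

# Hypothetical example: 1 = attack, 0 = normal.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.95, y_true, 1 - y_true)   # imperfect classifier
is_outlier = rng.random(1000) < 0.10                              # anomaly-detector flags
for policy in (None, "normal", "attack"):
    print(policy, detection_and_fp_rates(y_true, y_pred, is_outlier, policy))
```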
5 Discussion
The practical import of this analysis is not in terms of a finished algorithm product, since this study was on static and historical data. The primary contribution is the significance of considering network-based attack detection as distinct attack types, and the impact of anomaly detection on nonstationarity. The information presented shows clearly that different types of attacks have very different signatures as well as very different types of change. Also, the sheer dominance in numbers of some categories will have a large effect on automated learners, as they try to minimize a criterion related to overall error minimization. The difference in performance between attack classes is important, as each class presumably has different associated misclassification costs. That these methods can treat the types differently, both as inliers and outliers, is an important consideration for the application engineer. If these ideas were to be incorporated into a working system, the question of what to do with the outlier class arises. Choosing an arbitrary performance level on the test set, the classifier alone on all the data performs with a detection rate of 90.3% and a false-positive rate of 0.53%. On only the inliers the performance increases, but the outliers still need to be accounted for. Table 8 summarizes these results. Further exploration of a staged approach where inliers and outliers have different detection thresholds, or even different models altogether, may be a route to improving overall performance. The anomaly detection segmentation increases the classification performance on inliers, as was hypothesized. The details of the performance, as discussed in Section 4, are sometimes puzzling and counterintuitive. For example, the percentage of outliers decreases from the validation to the test set for SVM-RBF overall, and for some categories with MHD, when the natural expectation is that it would increase for a nonstationary dataset. Also, why the individual attack categories behave as they do with respect to nonstationarity in particular is not understood. These algorithms are suitable for inclusion in a high-speed network analysis tool, such as the programmable FPGA-based NIW Sensor developed at Los Alamos [9]. This hardware package is capable of analyzing network traffic at gigabit speeds, and is the flip-side of this project in algorithm development. Finally, we will comment on our experience in using these methods. SVMs with nonlinear kernels are challenging to use as a stand-alone tool for exploratory data analysis. Our experience has been that changes in parameters (e.g., kernel, regularization) can have significant effects on the performance of the algorithm, yet these changes typically don't have clear causes. In a data analysis situation, it often isn't enough to simply tune for the best performance; in this case, how to tune for particular effects is not at all clear. One also wants to
gain a better understanding of the data and problem, and kernel SVMs (and other nonlinear learners) are often deficient in this respect. However, as these results show, in comparison to an intuitively understandable method such as the Mahalanobis distance, SVMs can be a valuable tool for gaining information regarding high-dimensional data, as well as good classification performance. If no comparative method is used, it would not be apparent whether the SVM is approximating Gaussian forms or whether, as is the case here, the SVM is fitting a more wandering boundary. The analysis here clearly shows two things: the data is not approximately Gaussian (as is also suggested by the graphs), and the degree of flexibility in the model of the support has a significant effect on the results, both overall and by category. We explored and used both the currently popular libSVM and SVMLight software for this work [7][8]. Currently, neither tool yields continuous values for outlier status; such values, although arguably unsound, can be used for exploring performance around the margin and would provide an ad-hoc method for rank-selecting outliers.
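For readers who want the kind of continuous outlier scores discussed here, the sketch below uses scikit-learn's one-class SVM, a later wrapper around libSVM rather than the software used in this study; the feature array, kernel parameters, and nu value are all hypothetical placeholders. The signed margin distances it returns can be thresholded or used to rank candidate outliers.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 10))   # stand-in for normalized connection features
X_new = rng.normal(size=(5, 10))

# nu bounds the fraction of training points treated as outliers, playing the
# role of the support-fraction parameter of the one-class SVM discussed above.
oc = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(X_train)

scores = oc.decision_function(X_new)   # continuous margin distances (negative = outlier side)
labels = oc.predict(X_new)             # hard +1 (inlier) / -1 (outlier) calls
ranking = np.argsort(scores)           # ad-hoc rank-selection of candidate outliers
```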
6 Conclusion
Computer network attack detection is potentially tractable using automated learners and classifiers, but challenges remain for this methodology. One challenge is to develop an understanding of whether core attack types have a long-term signature; if not, tedious filtering of data by hand to generate labeled datasets at short intervals is required. Anomaly detection methods have significant promise in this area, but they have not been demonstrated to achieve a sufficiently high probability of detection at acceptable false-alarm rates. Anomaly detection used as a method for filtering nonstationary examples, ensuring that classifiers operate in domains that were sufficiently populated in their training sets, has been demonstrated to increase performance in this problem domain, as expected. The question remains of how to treat the outlier data robustly so that performance can be increased overall. One solution would be to relax the degree of discrimination of inliers, so that the training set yields enough outliers to train an outlier-specific model. Another method could employ pure anomaly detection methods for the outliers. These are interesting directions for future work. In this case, the SVM method did not lead to the performance boost seen with the Mahalanobis distance method. There are several possible reasons for this. One is that there is perhaps not enough data to accurately assess the support of the distribution in all cases. The strong assumptions in the Mahalanobis distance measure, i.e., that the data can be represented by an estimated mean and covariance, may provide required regularization. On the other hand, the SVM can be tuned to produce a more rigid classification surface, and can probably provide similar performance in this way (although expanding the margin to include all data is costly). Another possible explanation is that the margin attention of the SVM naturally emphasizes different classes, and so can
provide a richer range of performance tradeoffs. The SVM method has provided insight into the data characteristics, and is an additional tool for data exploration and classification. Additional areas for research suggest themselves. On-line adaptive anomaly detection is an intuitively interesting area, but whether an adaptive method can be biased with sufficient accuracy to distinguish attacks from non-attacks is an open question. Classification models of each category, with corresponding methods for distinguishing what is an inlier vs. an outlier for each category, seem like a compelling direction for improving performance. Studying how these machine learning methods complement rule-based systems is important in this application domain. This leads to the general topic of model ensembles: how capable families of models can be constructed, and how much performance increase can be realized.
References
1. Mike Fugate, James R. Gattiker, "Detecting Attacks in Computer Networks", Los Alamos National Laboratory Technical Report, LA-UR-02-1149.
2. Richard P. Lippmann et al., "Evaluating Intrusion Detection Systems: The 1998 DARPA Off-line Intrusion Detection Evaluation", Proc. of the DARPA Information Survivability Conf., vol. 2, pp. 12-26, 1999.
3. Trevor Hastie, Robert Tibshirani, Jerome Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer-Verlag, 2001.
4. Ronald Christensen (1996), Plane Answers to Complex Questions: The Theory of Linear Models, Second Edition. New York: Springer-Verlag.
5. Ronald Christensen (2001), Advanced Linear Modeling, Second Edition. New York: Springer-Verlag.
6. Bernhard Schölkopf, et al. (2000), "Estimating the Support of a High-Dimensional Distribution", Technical report MSR-TR-99-87, Microsoft Research, Microsoft Corporation.
7. C. Chang, C. Lin, "LIBSVM: a library for support vector machines", http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.ps.gz
8. T. Joachims, "Making Large-Scale SVM Learning Practical", Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola (eds.), MIT Press, 1999.
9. M. Gokhale, D. Dubois, A. Dubois, M. Boorman, "Gigabit Rate Network Intrusion Detection Technology", Los Alamos National Laboratory Technical Report, LA-UR-01-6185.
Sparse Correlation Kernel Analysis and Evolutionary Algorithm-Based Modeling of the Sensory Activity within the Rat's Barrel Cortex

Mariofanna Milanova (1), Tomasz G. Smolinski (2), Grzegorz M. Boratyn (2), Jacek M. Zurada (2), and Andrzej Wrobel (3)

(1) Department of Computer Science, University of Arkansas at Little Rock, Little Rock, AR 72204, USA
[email protected]
http://pandora.compsci.ualr.edu/
(2) Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA
{tomasz.smolinski,grzegorz.boratyn,jacek.zurada}@louisville.edu
http://www.ci.louisville.edu/
(3) Laboratory of Visual System, Department of Neurophysiology, Nencki Institute of Experimental Biology, 3 Pasteur Street, 02-093 Warsaw, Poland
[email protected]
http://www.nencki.gov.pl/labs/vslab/vislab.html
Abstract. This paper presents a new paradigm for signal decomposition and reconstruction that is based on the selection of a sparse set of basis functions. Based on recently reported results, we note that this framework is equivalent to approximating the signal using Support Vector Machines. Two different algorithms for modeling the sensory activity within the barrel cortex of a rat are presented: first, a slightly modified approach to the Independent Component Analysis (ICA) algorithm and its application to the investigation of Evoked Potentials (EP), and second, an Evolutionary Algorithm (EA) for learning an overcomplete basis of the EP components by viewing it as a probabilistic model of the observed data. The results of the experiments conducted using these two approaches, as well as a discussion concerning a possible utilization of those results, are also provided.
1 Introduction
The standard methods for decomposition and analysis of evoked potentials are band pass filtering, identification of peak amplitudes and latencies, Principal Component Analysis (PCA) and wavelet-based analysis. A common way to represent real-valued signals is using a linear superposition of basis functions. One
might roughly characterize the second-order methods (PCA and Factor Analysis) by saying that their purpose is to find a faithful representation of the data in the sense of signal reconstruction (e.g. based on the mean-square error measure) [1,2]. In contrast, the higher-order methods (Projection Pursuit, Blind Deconvolution, Independent Component Analysis) are characterized by their attempt to find a meaningful representation of the signal. Of course, meaningfulness is a task-dependent property [3,4]. The standard wavelet analysis gives the coefficients for expressing the signal as a linear combination of "wavelet packets", which can include scaled and translated versions of the father and mother wavelets, as well as scaled and translated versions that contain additional oscillations. The wavelet coefficient is an inner product of the wavelet and the data. Bases such as the Fourier or wavelet bases can provide a useful representation of some signals, but they are limited, because they are not specialized for the signals under consideration [5]. An alternative and more general method of signal representation uses the so-called "overcomplete bases" (also called overcomplete dictionaries), which allow for a greater number of basis functions than samples in the input signals [6,7,8]. Overcomplete bases are typically constructed by merging a set of complete bases (e.g. Fourier, wavelet, Gabor) or by adding basis functions to a complete basis (e.g. adding frequencies to a Fourier basis). Relatively little research has been done in the area of decomposition of Evoked Potentials (EP) using non-orthogonal components. Olshausen and Field [9,10] found that the basis functions shared many properties with neurons in primary visual cortex, suggesting that overcomplete representations might be a useful model (see also [11]). Under an overcomplete basis, the decomposition of a signal is not unique, but this can be to our advantage: we have greater flexibility in capturing structure in the data. Subsequently, two different algorithms for modeling sensory activity within the barrel cortex of a rat are presented. First, we slightly modify the traditional approach to Independent Component Analysis (ICA) and apply it to the investigation of EPs, and second, we propose a new, evolutionary algorithm-based approach to the decomposition of evoked potentials. More specifically, we propose an algorithm for learning an overcomplete basis of the EP components by viewing it as a probabilistic model of the observed data. In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input data. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, are more sparse, and have greater flexibility in matching structure in the data [9]. From this model, we derive a simple, robust learning algorithm by maximizing the data likelihood over the modeled data based on the basis functions. After surveying published work in the literature within this field, it appears that our approach of decomposing EPs in terms of a set of overcomplete basis functions, and the process of learning them by an evolutionary algorithm, is unique.
The paper is organized into the following sections: in Sect. 2, we discuss the proposed data model, and then, in Sect. 3, the formal learning algorithm is defined, as well as our evolutionary algorithm-based methodology used to learn the set of basis functions. In Sect. 4, we present results of the decomposition and a comparison between the proposed methods.
2 Bayesian Motivated Model
The primary step in measuring the form and variance of evoked potentials (EP) or event-related potentials (ERP) is to decompose them into parts. Each part has three aspects. The first one is the elementary curve. The elementary curves are called basis functions. When a selected set of such curves is added together, the sum should closely conform to the shape of the ERP. Second, each part has a set of numbers or coefficients that denote its amplitude. Third, each part defines one or more coordinates, and the set of numbers denotes distances along the axes of a coordinate system. The number of coefficients in the set of basis functions specifies the dimensions of the measurement space. From this point of view the measurement of each ERP gives a vector in that space [12]. We assume that each data vector can be described with a set of basis functions plus some additive noise:

x = Ma + \varepsilon,   (1)
where x is the signal, M is the matrix of basis functions, a is the vector of coefficients (i.e. the representation of the signal), and ε represents Gaussian noise. Let n denote the number of time points in each recorded EP. Let x_i(t) denote the value of the i-th recorded EP at time t, where t = 1,2,...,n, and i = 1,2,...,m. The model specifies that x_i(t) is a weighted sum of unknown EP components, where the weights depend on the EP component and each individual measurement. Let r denote the number of EP components, which is assumed known, and let M_j(t) denote the unknown value of EP component (basis function) j at time t, where j = 1,2,...,r. Let a_ij be the unknown weight of EP component j for individual signal i. Then we assume a model of the form:

x_i(t) = \sum_{j=1}^{r} a_{ij} M_j(t) + \varepsilon_i(t),   (2)
where ε_i(t) represents Gaussian noise. The unknown parameters to be estimated are a_ij and M_j(t). Developing efficient algorithms to solve this equation is an active research area. A given data point can have many possible representations; nevertheless this redundancy can be removed by a proper choice for the prior probability of the basis coefficients:

P(a_{ij}) = \exp(-S(a_{ij})),   (3)
where

S(a_{ij}) = \beta \log\left(1 + \left(\frac{a_{ij}}{\sigma}\right)^2\right),   (4)

and β and σ are scaling factors. This specifies the probability of the alternative representations. Standard approaches to signal representation do not specify a prior for the coefficients. Assuming zero noise, the representation of a signal in this model is unique. If M is invertible, the decomposition of the signal x is given by a = M^{-1} x. Since M^{-1} is expensive to compute, the standard models use basis matrices that are easily inverted, by, for instance, restricting the basis functions to be orthogonal (in PCA) or by limiting the set of basis functions to those for which there exist fast computational algorithms, such as Fourier or wavelet analysis. Usually, to define a unique set of basis functions we have to impose unrealistic mathematical constraints. For example, PCA assumes that the data distribution has a Gaussian structure and fits an appropriate orthogonal basis. ICA, a generalization of PCA, assumes that the coefficients have non-Gaussian structure, and this allows the basis functions to be non-orthogonal. In all of these techniques, the number of basis vectors is equal to the number of inputs. A more general approach is to use information theory and a probabilistic formulation of the problem [13]. Rather than making prior assumptions about the shape or form of the basis functions, those functions are adapted to the data using an algorithm that maximizes the log-probability of the data under the model. The coefficients from (1) can be inferred from x by maximizing the conditional probability of a given x, P(a|x, M), which can be expressed via Bayes' rule as:

a = \arg\max_a P(a|x, M) \propto \arg\max_a P(x|a, M)\, P(a).   (5)

The first term on the right-hand side of the proportion specifies the likelihood of the signal under the model for a given state of the coefficients:

P(x|a, M) \propto \frac{1}{Z_{\sigma N}} \exp\left(-\frac{\lambda}{2}\, |x - Ma|^2\right),   (6)

where Z_{\sigma N} is a normalizing constant, λ = 1/σ², and σ is the standard deviation of the additive noise. The second term specifies the prior probability distribution over the basis coefficients, where:

P(a_i) = \prod_j P(a_{ij}).   (7)
Thus, the maximization of the log-probability in (5) becomes:

\hat{a}_{ij} = \arg\min_{a_{ij}} \left[ \frac{\lambda_N}{2} \sum_{t=1}^{n} \left( x_i(t) - \sum_{j=1}^{r} a_{ij} M_j(t) \right)^2 + \sum_{j=1}^{r} S(a_{ij}) \right].   (8)
This formulates the problem as one of density estimation and is equivalent to minimizing the Kullback-Leibler (KL) divergence between the model density and the distribution of the data. The functional (8) that is minimized consists of an error term and a sparseness term. Based on recently reported results [14], we note that this framework is equivalent to approximating the signal using Support Vector Machines (SVM). In SVM we approximate the signal x(t) as:

M_i(t) = K(t; t_i), \quad \forall i = 1, \ldots, l,   (9)
where K(t; y) is the reproducing kernel of a Reproducing Kernel Hilbert Space (RKHS) H and {(t_i, y_i)}_{i=1}^{l} is a data set which has been obtained by sampling, in the absence of noise, the target function x_i(t) [15]. While Olshausen et al., in their overcomplete models, measure the reconstruction error with an L2 criterion, the Support Vector Machine measures the true distance, in the H norm, between the target function and the approximating function. Depending on the value of the sparseness parameter, the number of coefficients a_ij that differ from zero will be smaller than r (the number of basis functions) (see (2)). The data points associated with the non-zero coefficients are called support vectors, and it is these support vectors that comprise our sparse approximation.
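As a concrete rendering of the objective in Eq. (8), the following numpy sketch (array shapes and default parameter values are assumptions; β and σ play the roles of the scaling factors in Eq. (4)) evaluates the reconstruction error plus the sparseness penalty for a candidate basis M and coefficient matrix A.

```python
import numpy as np

def sparse_cost(a, beta=1.0, sigma=0.3):
    """Sparseness penalty S(a) = beta * log(1 + (a / sigma)^2), cf. Eq. (4)."""
    return beta * np.log(1.0 + (a / sigma) ** 2)

def map_objective(X, M, A, lam=1.0, beta=1.0, sigma=0.3):
    """Negative log-posterior of Eq. (8): reconstruction error plus sparseness term.

    X : (m, n) recorded EPs (m signals, n time points)
    M : (n, r) basis functions, one column per component
    A : (m, r) coefficients a_ij
    """
    residual = X - A @ M.T                       # x_i(t) - sum_j a_ij M_j(t)
    return 0.5 * lam * np.sum(residual ** 2) + np.sum(sparse_cost(A, beta, sigma))
```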
3 Learning Objective
From the model presented in Sect. 2 we derive a simple and robust learning algorithm by maximizing the data likelihood over the basis functions. The learning objective is to find the most probable explanation for the input data. In other words, we wish to develop a generative model that encodes the probabilities of the input data. The algorithm must be able to:
- find a good matrix M for coding the input data,
- infer the proper state of the coefficients a for each input signal.
In the special case of zero noise in the data model presented in Sect. 2, and a complete representation (i.e. M is invertible), the problem leads to the well-known Independent Component Analysis algorithm [16]. ICA allows learning non-orthogonal bases for data with a non-Gaussian distribution. First we proposed to use ICA. Some data distributions, however, cannot be modeled adequately by either PCA or ICA. The second and more substantial method is to place sparse prior constraints on the base probabilities of coefficient activation. This sparse coding constraint encourages a model to use relatively few basis functions to represent any specific input signal. If the data has certain statistical properties (it is "sparse"), this kind of coding leads to approximate redundancy reduction [17]. Sparse encoding within neural networks has previously been shown to create more biologically plausible receptive fields (Olshausen & Field).
3.1 Evolutionary Algorithm for Proposed Sparse Coding
Some research has been done in applying genetic algorithms (GA) to blind source separation (BSS) and ICA, in which the Kullback-Leibler entropy is computed by analyzing the signals one by one and therefore needs exhaustive computation [18]. In our work, an evolutionary algorithm (EA) is used to solve the problem of finding the best representation of a given signal in terms of basis functions and coefficients. The EA searches for an optimum by iteratively changing a population of temporary solutions encoded into chromosomes. Each chromosome represents the matrix of basis functions M and the matrix of coefficients a. The fitness function minimized in our case is based on (8) and consists of two parts: 1) the reconstruction error of the signals and 2) the sparseness cost of the values of the coefficients:
f = \sum_{i=1}^{m} \left[ \sum_{t=1}^{n} \left| x_i(t) - \sum_{j=1}^{r} a_{ij} M_j(t) \right| + \sum_{j=1}^{r} S(a_{ij}) \right],   (10)

where x_i(t) is the i-th input signal. The genetic operators used in this algorithm are: crossover, mutation and macromutation. The crossover operator replaces, with the crossover probability, each gene (number) in a chromosome with the corresponding gene from another chromosome. The mutation operator changes, with the mutation probability, each number in a chromosome by adding to it or subtracting from it a random value from the mutation range. The macromutation operator makes the same changes to the chromosome as mutation, but with higher values of the mutation probability and mutation range parameters. This additional operator brings more diversity to the population and is used when there is no improvement of the best solution found so far. The main steps performed by the EA to find the optimal set of basis functions are as follows:

  Initialize the population of chromosomes with random values from the range [-1, 1]
  While the best fitness value found so far is not acceptable do:
    In each generation repeat:
      Calculate fitness for each individual
      Select best individuals according to the roulette selection rule
      Apply crossover and mutation
      Find the best chromosome (lowest fitness value) in the generation
      Save the best chromosome found so far
      If the difference between the best chromosome found so far and the best
        chromosome found 20 generations ago equals 0, then decrease the mutation
        range 2 times
      If the mutation range is less than 0.00001, then apply macromutation and
        set the mutation range to the initial value
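A compact Python rendering of the pieces the EA manipulates is sketched below; the chromosome layout, parameter names, and default values are assumptions for illustration, not the authors' implementation. It gives the fitness of Eq. (10) for a chromosome that concatenates M and a, plus the crossover and mutation operators described above.

```python
import numpy as np

def fitness(chrom, X, r, beta=1.0, sigma=0.3):
    """Fitness of Eq. (10) for one chromosome encoding M (n x r) and A (m x r)."""
    m, n = X.shape
    M = chrom[: n * r].reshape(n, r)
    A = chrom[n * r :].reshape(m, r)
    err = np.sum(np.abs(X - A @ M.T))                      # reconstruction error
    sparse = np.sum(beta * np.log(1 + (A / sigma) ** 2))   # sparseness cost S(a_ij)
    return err + sparse

def crossover(c1, c2, p, rng):
    """Replace each gene of c1 with the corresponding gene of c2 with probability p."""
    mask = rng.random(c1.size) < p
    child = c1.copy()
    child[mask] = c2[mask]
    return child

def mutate(chrom, p, mut_range, rng):
    """Add or subtract a random value within mut_range to each gene with probability p."""
    mask = rng.random(chrom.size) < p
    out = chrom.copy()
    out[mask] += rng.uniform(-mut_range, mut_range, mask.sum())
    return out
```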
The algorithm can create a representation of a given signal in terms of any number of basis functions. When the matrix of basis functions or coefficients is not square, it is impossible to find an inverse of it, which would allow computing the representation of a new signal in terms of the basis functions or coefficients already obtained. Such a transformation can be done with the same evolutionary algorithm, but this time the chromosome consists only of coefficients or basis functions, whichever are being computed, and the fitness function is still (10). The function in (10) is difficult to optimize due to the large number of variables, which creates a huge search space for the EA. Thus the problem can be divided into sub-problems in the following way. Let us present (10) as:

f = \sum_{i=1}^{m} \left[ \sum_{t=1}^{n} |E_j(t)| + \sum_{j=1}^{r} S(a_{ij}) \right],   (11)
where:

[E_j(t)] = E = Ma - x.   (12)

The signal x can be divided along time, such that:

x = [x_1\ x_2\ \ldots\ x_k]^T.   (13)

Then, (12) can be written as:

[E_1\ E_2\ \ldots\ E_k]^T = [m_1\ m_2\ \ldots\ m_k]^T a - [x_1\ x_2\ \ldots\ x_k]^T,   (14)

where:

[m_1\ m_2\ \ldots\ m_k] = M,   (15)

[E_1\ E_2\ \ldots\ E_k] = E,   (16)

and

E_i = m_i a - x_i, \quad i = 1, 2, \ldots, k.   (17)
Thus, having fixed the matrix of coefficients a, all the parts m_i of the basis function matrix can be computed independently. The pairs of m_i and a_i for each part x_i of the input signals can be computed using the evolutionary algorithm described above. Then the basis functions M_i corresponding to each coefficient matrix a_i for the whole input signals can be computed. The pair M_i, a_i which gives the lowest value of the fitness function (10) is the obtained representation of the input signals. Therefore the algorithm is as follows:
  Divide the signals into parts along time: x_1, x_2, ..., x_k
  For each signal part x_i:
    Compute with the EA: m_i, a_i
  For each a_i:
    Compute with the EA the basis functions for the whole signal: M_i
  Choose the pair (a_i, M_i) that gives the smallest value of the fitness function (10)
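The sketch below is a small numpy check (shapes and the random data are placeholders) of the time-splitting used in Eqs. (12)-(17): the residual blocks E_i = m_i a - x_i computed per segment tile the full residual E = Ma - x, which is what allows the sub-problems to be optimized independently for a fixed a.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r, k = 6, 100, 10, 4           # signals, time points, basis functions, segments
X = rng.normal(size=(n, m))          # columns are the signals x_i (n time points each)
M = rng.normal(size=(n, r))          # basis functions M_j(t)
A = rng.normal(size=(r, m))          # coefficients a_ij

E = M @ A - X                                      # Eq. (12): E = Ma - x
rows = np.array_split(np.arange(n), k)             # split the time axis into k parts
E_parts = [M[idx] @ A - X[idx] for idx in rows]    # Eq. (17): E_i = m_i a - x_i
assert np.allclose(np.vstack(E_parts), E)          # the blocks tile the full residual
```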
4 Results and Discussion

4.1 Data
In the experiments conducted at the Laboratory of Visual System, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland, a piezoelectric stimulator was attached to a vibrissa of a rat [19,20]. An electrical impulse of 5 V amplitude and 1 ms duration was applied to the stimulator, causing a deflection of the vibrissa. Evoked Potentials were then registered, each of them related to a single stimulus. Evoked potentials have been used for many years as a measurement of dynamic events occurring in nervous systems that accompany and are related to some defined sequences of behavior [12]. Based on some previous work, a hypothesis about a relation between two components of the registered evoked potentials and particular brain structures (i.e. supra- and infra-granular pyramidal cells) was stated. In order to verify the hypothesis, two additional types of stimuli were applied: 1) a cooling event applied to the surface of the cortex (allowing some structures of the brain to be temporarily "switched off"), and 2) an aversive stimulus, an electrical shock applied to the rat's ear (in order to cope with the phenomenon of habituation). The main goal of these experiments was to investigate those stimuli in terms of their impact on the brain activity represented by the evoked potentials. A single, four-level electrode positioned in the cortex of a rat collected the data. The electrode registered brain activity in the form of evoked potentials at four depths simultaneously, as described in [19]. The channels were defined as: channel 1 at 1.7 mm, channel 2 at 1.05 mm, channel 3 at 0.4 mm, channel 4 at the surface. Each evoked potential (lasting ≈ 50 ms and separated by a 3 second period) was sampled with a frequency of 2 kHz and thus is described in the database by 100 values. The data sets contain 882 evoked potentials for each depth registered in the experiment, so the complete database consists of:
- four data sets, one for each channel
- 882 records in each data set
- each record described by 100 attributes (values in time)
Based on the description of the neuro-physiological experiments it is known which records (evoked potentials) correspond to a cooling event and roughly what the strength of this particular cooling event was.
Fig. 1. Single (here averaged) evoked potential - one record in our database
A sample (averaged) evoked potential from the database, along with its division into three waves (two positive and one negative), is presented in Fig. 1. Because the third channel's electrode (0.4 mm) was located closest to the granular cells (lying midway between the supra- and infra-granular pyramidal cells, see [19,20,21]) and yielded the most representative perspective on the activity of the cortex, this level was quite often acknowledged as the most meaningful and interesting one and was given particular attention.

4.2 Analysis
The performance of the proposed model of sensory activity was analyzed via two methods. First, we applied Independent Component Analysis (using the EEG/ICA Toolbox for Matlab by Scott Makeig et al. [16]) to all four channels. Second, the decomposition of the EPs by sparse coding and the EA was analyzed.

Method 1: Averaged signals for all four levels of the rat's cortex were used as the input to the algorithm (Fig. 2). Those averaged potentials were treated as four separate channels, similarly to the traditional application of ICA to EEG data [16]. The full list of the ICA parameters is presented in Table 1.
Table 1. Independent Component Analysis parameters

Parameter                          Value
Channels (sources of activation)   4
Frames (points per one record)     100
Epochs                             1
Sampling rate (in Hz)              2000
Limits (in milliseconds)           [0 50]
As a result, we received a 4 x 4 mixing matrix, which allowed us to decompose the input signals into four components. The components obtained by using this technique correspond very closely to the previous results derived from PCA (i.e. the first two components create the N1 wave and their amplitudes change over time) [19,20,21]. Thus, we received a new representation of the signals in terms of:
- time courses of activation of the Independent Components (Fig. 3)
- Independent Components (Fig. 4)
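For readers who want to reproduce this kind of decomposition, a minimal stand-in using scikit-learn's FastICA is sketched below; the paper itself used the Matlab EEG/ICA toolbox [16], and the random array here merely stands in for the four averaged channel signals.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in for the four averaged channel signals (4 channels x 100 time points);
# in the experiment these would be the averaged EPs shown in Fig. 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 100))

ica = FastICA(n_components=4, random_state=0)
activations = ica.fit_transform(X.T)   # (100, 4): time courses of activation, cf. Fig. 3
mixing = ica.mixing_                   # (4, 4) mixing matrix mapping components to channels
unmixing = ica.components_             # (4, 4) unmixing matrix applied to the channels
```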
Fig. 2. Averaged signals for four separate channels
Fig. 3. Time courses of activation of all four components (ordered by latency)
Fig. 4. Four independent components in the averaged 3rd channel signal
This experiment has resulted in a new representation of the input signals in terms of their statistically independent components. These components, yielded by an alternative technique (i.e. ICA), coincided with the ones discovered in previous work (i.e. with PCA). This transformation (or coding) could simply be considered a data preprocessing methodology and, based on that, some further analysis could be performed. For instance, the values of the course of activation of the independent components (Fig. 3) could be used as new input data instead of the original signals, and the analysis of the properties of such a new model might be of interest. On the other hand, generating brand new data in the domain of the quantitative description of the ICA representation (e.g. the minimum (maximum) value of each component, the number of the minimum (maximum) component, etc.) would create another great possibility for investigating the variability of the model's properties depending on the changing environment (i.e. the cooling event).

Method 2: The experiment was performed in order to examine the effectiveness of the proposed algorithm in decomposing signals into components and to compare the results with the ICA method. Normalized averaged signals for all levels of the rat's cortex, treated as four separate channels, were used as the input to the algorithm (as in Method 1). The input signals are shown in Fig. 3. Initial values of the parameters of the algorithm are presented in Table 2. We have received 10 basis functions and 4 vectors of coefficients, which decompose the input signals. Three of the components of the averaged 3rd channel signal, presented in Fig. 5, are very similar to those obtained by ICA. Slight differences in shapes and amplitudes are due to the greater number of basis functions which decompose the signals. One of those similar components seems to correspond to the N1 wave. Then, the representations of both the averaged normal and cooled 3rd channel signals were computed. The representation of the averaged normal and cooled 3rd channel signals in the domain of the obtained basis functions (as values of
Table 2. Initial values of the parameters of the proposed algorithm in this experiment

Parameter                                   Value
Population size (number of chromosomes)     50
Crossover probability                       0.5
Mutation probability                        1 / no. of elements in M
Mutation range                              0.1
Macromutation mutation probability          0.5
Macromutation mutation range                0.01
Number of basis functions                   10
σ parameter in (4)                          0.3
β parameter in (4)                          1.0
Fig. 5. Three arbitrarily selected components in the averaged 3rd channel signal
coefficients) is presented in Fig. 6. Having more basis functions, more differences can be discovered between the studied input signals. This may be very useful in signal classification: waves can be divided into more groups, depending on dissimilarities not discovered in the domain of a smaller number of basis functions. The most important advantage of this method is the fact that the number of basis functions is independent of the number of input signals. This evolutionary algorithm-based sparse coding of evoked potentials could be very useful in terms of a more detailed investigation of the input signals. For instance, it can be seen in Fig. 6 that the normal and cooled potentials have obviously different 6th, 7th, 8th, and 10th components. Such differences can now be thoroughly examined with automatic data mining techniques such as classification rule discovery or pattern matching in order to find relations and dependencies between different kinds of evoked potentials.
5 Conclusions
On the basis of the experiments and the analysis described above we can conclude that ICA is quite a reasonable and effective tool for evoked potential decomposition and transformation. The results, coherent with previous work in terms of the signal's main components, along with a high insensitivity of this method to noise and other types of distortion, encourage its further application to this type of problem. Using this clear and intuitive decomposition of signals, one may try to perform an investigation of the behavior of the components within a changing environment of impulse stimuli, cooling and habituation that may be able to answer many questions about the mechanisms ruling brain function. Overcomplete representation, on the other hand, potentially allows a basis to better approximate the underlying statistical density of the data. It also creates the opportunity to discover more independent signal sources than the dimensionality of the input data, which opens new possibilities for data analysis. Along with the application of sparse coding constraints put on the learning algorithm, this is definitely a methodology worthy of further exploration. Both the ICA-based and the sparse coding-based modeling of evoked potentials seem to be very reasonable and useful techniques for data preprocessing. Based on those transformations, further investigation and analysis of the data via, for instance, classification or clustering algorithms is possible. Such algorithms, in turn, may allow researchers to discover new rules behind the mechanisms governing our brains' function.

Fig. 6. Coefficients for the averaged normal and cooled 3rd channel signals
References
1. Haykin, S.: Neural Networks: A Comprehensive Foundation. 3rd edn. Prentice Hall, Upper Saddle River, NJ (1999)
2. Oja, E.: A simplified neural model as a principal components analyzer. J. of Mathematical Biology 15 (1982) 267-273
3. Amari, S.-I. and Cichocki, A.: Adaptive blind signal processing - neural network approaches. Proc. of the IEEE 86 (1998) 2026-2048
4. Barlow, H. B.: Possible principles underlying the transformations of sensory messages. In: Rosenblith, W. A. (ed.): Sensory Communication. MIT Press, Cambridge, MA (1961) 217-234
5. Raz, J., Dickerson, L., and Turetsky, B.: A Wavelet Packet Model of Evoked Potentials. Brain and Language 66 (1999) 61-88
6. Chen, S., Donoho, D. L., and Saunders, M. A.: Atomic decomposition by basis pursuit. Technical report, Dept. Stat., Stanford University (1996)
7. Lewicki, M. and Sejnowski, T.: Learning overcomplete representations. Neural Computation 12 (2000) 337-365
8. Mallat, S. G. and Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE Trans. on Signal Processing 41(12) (1993) 3397-3415
9. Olshausen, B. and Field, D. J.: Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research 37(23) (1997) 3311-3325
10. Olshausen, B.: Sparse codes and spikes. In: Rao, R. P. N., Olshausen, B. A., Lewicki, M. S. (eds.): Probabilistic Models of Perception and Brain Function. MIT Press, Cambridge, MA (2001)
11. Milanova, M., Wachowiak, M., Rubin, S., and Elmaghraby, A.: A Perceptual Learning Model on Topological Representation. Proc. of the IEEE International Joint Conference on Neural Networks, Washington, DC, July 15-19 (2001) 406-411
12. Freeman, W. J.: Measurement of Cortical Evoked Potentials by Decomposition of their Wave Forms. J. of Cybernetics and Information Science, 2-4 (1979) 22-56
13. Lewicki, M. S. and Olshausen, B. A.: Probabilistic Framework for Adaptation and Comparison of Image Codes. J. Opt. Soc. of Am., 16 (1999) 1587-1600
14. Girosi, F.: An Equivalence Between Sparse Approximation and Support Vector Machines. Neural Computation 10 (1998) 1455-1480
15. Vapnik, V.: The Nature of Statistical Learning Theory. Springer-Verlag, Berlin Heidelberg New York (1995)
16. Makeig, S., et al.: ICA Toolbox Tutorial. Available: http://www.cnl.salk.edu/~scott/tutorial/
17. Field, D. J.: What is the goal of sensory coding? Neural Computation 6 (1994) 559-601
18. Yoshioka, M. and Omatu, S.: Independent Component Analysis using time delayed sampling. Presented at the IEEE International Joint Conference on Neural Networks, Como, Italy, July 24-27 (2000)
19. Kublik, E. and Musial, P.: Badanie ukladow czuciowych metoda potencjalow wywolanych (in Polish). Kosmos 46 (1997) 327-336
20. Wrobel, A., Kublik, E., and Musial, P.: Gating of the sensory activity within barrel cortex of the awake rat. Exp. Brain Res. 123 (1998) 117-123
21. Kublik, E., Musial, P., and Wrobel, A.: Identification of Principal Components in Cortical Evoked Potentials by Brief Surface Cooling. Clinical Neurophysiology 112 (2001) 1720-1725
Applications of Support Vector Machines for Pattern Recognition: A Survey

Hyeran Byun (1) and Seong-Whan Lee (2)

(1) Department of Computer Science, Yonsei University, Shinchon-dong, Seodaemun-gu, Seoul 120-749, Korea
[email protected]
(2) Department of Computer Science and Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-701, Korea
[email protected]

Abstract. In this paper, we present a comprehensive survey on applications of Support Vector Machines (SVMs) for pattern recognition. Since SVMs show good generalization performance on many real-life data sets and the approach is properly motivated theoretically, they have been applied to a wide range of applications. This paper gives a brief introduction to SVMs and summarizes their numerous applications.
1 Introduction
SVMs are a new type of pattern classifier based on a novel statistical learning technique that has recently been proposed by Vapnik and his co-workers [1-3]. Unlike traditional methods (e.g. Neural Networks), which minimize the empirical training error, SVMs aim at minimizing an upper bound of the generalization error through maximizing the margin between the separating hyperplane and the data [4]. Since SVMs are known to generalize well even in high dimensional spaces under small training sample conditions [5] and have been shown to be superior to the traditional empirical risk minimization principle employed by most neural networks [6], SVMs have been successfully applied to a number of applications ranging from face detection, verification, and recognition [5-11,26,50-56,76-80,83], object detection and recognition [12-15,24,47,57], handwritten character and digit recognition [16-18,45], text detection and categorization [19,58-61], speech and speaker verification and recognition [20-23], information and image retrieval [33-36,87], and prediction [37-41] to various other tasks [22,27-32,41,42,53,62,64,65,74]. In this paper, we aim to give a comprehensive survey on applications of SVMs for pattern recognition. This paper is organized as follows. We give a brief explanation of SVMs in Section 2 and a detailed review of SVM-related techniques in Section 3. Section 4 describes the limitations of SVMs. We conclude this paper in Section 5.
2 Support Vector Machines
Classical learning approaches are designed to minimize the error on the training dataset; this principle is called Empirical Risk Minimization (ERM), and neural networks are the most common example of learning methods that follow it. The SVMs, on the other hand, are based on the Structural Risk Minimization (SRM) principle rooted in statistical learning theory. It gives better generalization abilities (i.e. performance on unseen test data), and SRM is achieved through a minimization of the upper bound of the generalization error (i.e. the sum of the training error rate and a term that depends on the VC dimension) [1-3,43-45].

2.1 Linear Support Vector Machines for Linearly Separable Case
The basic idea of SVMs is to construct a hyperplane as the decision plane, which separates the positive (+1) and negative (-1) classes with the largest margin; this is related to minimizing the VC dimension of the SVM. In a binary classification problem where feature extraction is initially performed, let us label the training data x_i ∈ R^d with a label y_i ∈ {-1, +1}, for all the training data i = 1, ..., l, where l is the number of data points and d is the dimension of the problem. When the two classes are linearly separable in R^d, we wish to find a separating hyperplane which gives the smallest generalization error among the infinite number of possible hyperplanes. Such an optimal hyperplane is the one with the maximum margin of separation between the two classes, where the margin is the sum of the distances from the hyperplane to the closest data points of each of the two classes. These closest data points are called Support Vectors (SVs). The solid line in Fig. 1 represents the optimal separating hyperplane.
Fig. 1. Linear separating hyperplanes for the separable case. The support vectors are circled (taken from [44])
Let's suppose they are completely separated by a d-dimensional hyperplane described by

w \cdot x + b = 0.   (1)

The separation problem is to determine the hyperplane such that w · x_i + b ≥ +1 for positive examples and w · x_i + b ≤ -1 for negative examples. Since the SVM finds the hyperplane which has the largest margin, it can be found by minimizing
\min_{\mathbf{w},\,b} \Phi(\mathbf{w}) = \frac{1}{2}\,\|\mathbf{w}\|^2   (2)

The optimal separating hyperplane can thus be found by minimizing equation (2) under the constraint (3) to correctly separate the training data:

y_i(\mathbf{x}_i \cdot \mathbf{w} + b) - 1 \geq 0, \quad \forall i   (3)

This is a Quadratic Programming (QP) problem for which standard techniques (Lagrange multipliers, Wolfe dual) can be used [43,51,69,70]. The detailed explanation of QP problems and alternative research is given in Sub-section 2.4.

2.2 Linear Support Vector Machines for Non-separable Case
In practical applications for real-life data, the two classes are not completely separable, but a hyperplane that maximizes the margin while minimizing a quantity proportional to the misclassification errors can still be determined. This can be done by introducing positive slack variables ξ i in constraint (3), which then becomes
y_i(\mathbf{x}_i \cdot \mathbf{w} + b) \geq 1 - \xi_i, \quad \forall i   (4)

If an error occurs, the corresponding ξ_i must exceed unity, so Σ_i ξ_i is an upper bound for the number of misclassification errors. Hence the objective function (2) to be minimized can be changed into

\min \left\{ \frac{\|\mathbf{w}\|^2}{2} + C \sum_{i=1}^{l} \xi_i \right\}   (5)
where C is a parameter chosen by the user that controls the tradeoff between the margin and the misclassification errors. A larger C means that a higher penalty for misclassification errors is assigned. Minimizing equation (5) under constraint (4) gives the Generalized Separating Hyperplane. This still remains a QP problem. The non-separable case is illustrated in Fig. 2.
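To see the role of C concretely, the following scikit-learn sketch (synthetic, overlapping data; the particular C values are arbitrary choices, not recommendations from the survey) trains soft-margin linear SVMs with several penalties and reports the number of support vectors and the training accuracy.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)          # overlapping, non-separable classes

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # a small C tolerates more margin violations (typically more support vectors);
    # a large C penalizes misclassification heavily and narrows the margin
    print(C, clf.n_support_, clf.score(X, y))
```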
Fig. 2. Linear separating hyperplane for the non-separable case (taken from [44])
2.2.1 Nonlinear Support Vector Machines and Kernels

2.2.2 Nonlinear Support Vector Machines
An extension to nonlinear decision surfaces is necessary since real-life classification problems are hard to solve with a linear classifier [41]. When the decision function is not a linear function of the data, the data will be mapped from the input space into a high dimensional feature space by a nonlinear transformation. In this high dimensional feature space, the generalized optimal separating hyperplane shown in Fig. 3 is constructed [43]. Cover's theorem states that if the transformation is nonlinear and the dimensionality of the feature space is high enough, then the input space may be transformed into a new feature space where the patterns are linearly separable with high probability [68]. This nonlinear transformation is performed in an implicit way through so-called kernel functions.
(a) input space
(b) feature space
Fig. 3. Feature space is related to input space via a nonlinear map Φ , causing the decision surface to be nonlinear in the input space (taken from [33])
2.2.3 Inner-Product Kernels
In order to accomplish a nonlinear decision function, an initial mapping Φ of the data into a (usually significantly higher dimensional) Euclidean space H is performed as Φ : R^n → H, and the linear classification problem is formulated in the new space with dimension d. The training algorithm then only depends on the data through dot products in H of the form Φ(x_i) · Φ(x_j). Since the computation of the dot products is prohibitive if the number of training vectors Φ(x_i) is very large, and since Φ is not known a priori, Mercer's theorem [44] for positive definite functions allows us to replace Φ(x_i) · Φ(x_j) by a positive definite symmetric kernel function K(x_i, x_j), that is, K(x_i, x_j) = Φ(x_i) · Φ(x_j). In the training phase, we need only the kernel function K(x_i, x_j), and Φ(x_i) does not need to be known since it is implicitly defined by the choice of kernel K(x_i, x_j) = Φ(x_i) · Φ(x_j). The data can become linearly separable in the feature space although the original input is not linearly separable in the input space. Hence kernel substitution provides a route for obtaining nonlinear algorithms from algorithms previously restricted to handling linearly separable datasets [75]. The use of implicit kernels allows reducing the dimension of the problem and overcoming the so-called "dimension curse" [3]. Variant learning machines are constructed according to the different kernel functions K(x, x_j) and thus construct different hyperplanes in the feature space. Table 1 shows three typical kernel functions.

Table 1. Summary of inner-product kernels [68]
Kernel function                      Inner-product kernel K(x, x_i), i = 1, 2, ..., N
Polynomial kernel                    K(x, x_i) = (x^T x_i + 1)^d
Gaussian (radial-basis) kernel       K(x, x_i) = exp(-||x - x_i||^2 / 2σ^2)
Multi-layer perceptron (sigmoid)     K(x, x_i) = tanh(β_0 x^T x_i + β_1), where β_0 and β_1 are decided by the user
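The three kernels in Table 1 are simple to write down directly; the sketch below gives plain numpy versions, with default parameter values that are arbitrary illustrative choices rather than values recommended by the survey.

```python
import numpy as np

def polynomial_kernel(x, xi, d=3):
    """Polynomial kernel (x^T xi + 1)^d from Table 1."""
    return (x @ xi + 1.0) ** d

def gaussian_kernel(x, xi, sigma=1.0):
    """Gaussian (radial-basis) kernel exp(-||x - xi||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, xi, beta0=0.01, beta1=-1.0):
    """Multi-layer perceptron kernel tanh(beta0 x^T xi + beta1); only a valid
    Mercer kernel for certain (beta0, beta1) choices."""
    return np.tanh(beta0 * (x @ xi) + beta1)
```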
2.3 Quadratic Programming Problem of SVMs
2.3.1 Dual Problem
In equations (2) and (3), the optimization goal Φ(w) is quadratic and the constraints are linear, so this is a typical QP problem. Given such a constrained optimization problem, it is possible to construct another problem called the dual problem. We may now state the dual problem: given the training sample {(x_i, d_i)}_{i=1}^{N}, find the Lagrange multipliers {α_i}_{i=1}^{N} that maximize the objective function

Q(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j d_i d_j \mathbf{x}_i^T \mathbf{x}_j   (6)

subject to the constraints
(1) \sum_{i=1}^{N} \alpha_i d_i = 0
(2) \alpha_i \geq 0 for i = 1, 2, ..., N
We also may formulate the dual problem for non-separable patterns using the method of Lagrange multipliers. Given the training sample {(x_i, d_i)}_{i=1}^{N}, find the Lagrange multipliers {α_i}_{i=1}^{N} that maximize the objective function

Q(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j d_i d_j \mathbf{x}_i^T \mathbf{x}_j   (7)

subject to the constraints
(1) \sum_{i=1}^{N} \alpha_i d_i = 0
(2) 0 \leq \alpha_i \leq C for i = 1, 2, ..., N
where C is a user-chosen positive parameter. The objective function Q(α) to be maximized for the case of non-separable problems is the same as in the case of the separable problems, except for a minor but important difference: the constraint α_i ≥ 0 for the separable case is replaced with the more stringent constraint 0 ≤ α_i ≤ C for the non-separable case [68].

2.3.2 How to Solve the Quadratic Problem
A number of algorithms have been suggested for solving the dual problems. Traditional QP algorithms [71,72] are not suitable for large size problems because of the following reasons [70]:
- They require that the kernel matrix be computed and stored in memory, which requires extremely large memory.
- These methods involve expensive matrix operations such as the Cholesky decomposition of a large submatrix of the kernel matrix.
- For practitioners who would like to develop their own implementation of an SVM classifier, coding these algorithms is very difficult.
A few attempts have been made to develop methods that overcome some or all of these problems. Osuna et al. proved a theorem which suggests a whole new set of QP problems for SVMs. The theorem proves that the large QP problem can be broken down into a series of smaller QP sub-problems. As long as at least one example that violates the Karush-Kuhn-Tucker (KKT) conditions is added to the examples of the previous sub-problem, each step will reduce the cost of the overall objective function and maintain a feasible point that obeys all of the constraints. Therefore, a sequence of QP sub-problems that always add at least one violator is guaranteed to converge [51]. Platt proposed Sequential Minimal Optimization (SMO) to quickly solve the SVM QP problem without any extra matrix storage and without using numerical QP
optimization steps at all. Using Osuna's theorem to ensure convergence, SMO decomposes the overall QP problem into QP sub-problems. The difference from Osuna's method is that SMO chooses to solve the smallest possible optimization problem at every step. At each step, (1) SMO chooses two Lagrange multipliers to jointly optimize, (2) finds the optimal values for these multipliers, and (3) updates the SVM to reflect the new optimal values. The advantage of SMO is that numerical QP optimization is avoided entirely, since solving for two Lagrange multipliers can be done analytically. In addition, SMO requires no extra matrix storage at all. Thus, very large SVM training problems can fit inside the memory of a personal computer or workstation [69]. Keerthi et al. [73] pointed out an important source of confusion and inefficiency in Platt's SMO algorithm that is caused by the use of a single threshold value. Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO.
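To make the quantity these solvers optimize concrete, the sketch below (illustrative only; the cited packages implement far more efficient decomposition schemes) evaluates the dual objective of Eqs. (6)/(7) and its feasibility conditions for a given multiplier vector, using a precomputed kernel matrix K, with K_ij = x_i^T x_j in the linear case.

```python
import numpy as np

def dual_objective(alpha, d, K):
    """Q(alpha) = sum_i alpha_i - 0.5 * sum_ij alpha_i alpha_j d_i d_j K_ij."""
    v = alpha * d
    return alpha.sum() - 0.5 * v @ K @ v

def feasible(alpha, d, C):
    """Constraints of Eq. (7): 0 <= alpha_i <= C and sum_i alpha_i d_i = 0."""
    return bool(np.all(alpha >= 0) and np.all(alpha <= C) and np.isclose(alpha @ d, 0.0))
```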
SVMs Applied to Multi-Class Classification
The basic SVMs are for two-class problem. However it should be extended to multiclass to classify into more than two classes [45,46]. There are two basic strategies for solving q-class problems with SVMs. 2.4.1
Multi-class SVMs: One to Others [45]
Take the training samples with the same label as one class and the others as the other class, then it becomes a two-class problem. For the q-class problem (q >2), q SVM classifiers are formed and denoted by SVMi, i=1,2,…, q. As for the testing sample x, d i( x ) = w *i ⋅ x + bi* can be obtained by using SVMi. The testing sample x belongs to jth class where d j ( x ) = max d i ( x )
(8)
i =1~ q
2.4.2
Multi-class SVMs: Pairwise SVMs
In the pairwise approach, q 2 machines are trained for q-class problem [47]. The pairwise classifiers are arranged in trees, where each tree node represents an SVM. A bottom-up tree, which is similar to the elimination tree used in tennis tournaments was originally proposed in [47] for recognition of 3D objects and was applied to face recognition in [9,48]. A top-down tree structure has been recently published in [49]. There is no theoretical analysis of the two strategies with respect to classification performance [10]. Regarding the training effort, the one-to-others approach is preferable since only q SVMs have to be trained compared to q 2 SVMs in the pairwise approach. However, at runtime both strategies require the evaluation of q-1 SVMs [10]. Recent experiments on people recognition show similar classification performances for the two strategies [24].
220
Hyeran Byun and Seong-Whan Lee
1 2 3 4 2 3 4
3 4
not 1
3 vs 4
not 4 1 2 3
2 vs 4
not 2
not 4 2 3
4
1
1 vs 4
not 1
not 3 1 2
2 vs 3
3
1
1 vs 3
1
6
3
6
7
1 vs 2
2
1
1
(a) example of top-down tree structure
2
3
4
5
6
7
8
(b) example of bottom-up tree structure
Fig. 4. Tree structure for multi-class SVMs. (a) The decision Directed Acyclic Graph (DAG) for finding the best class out of four classes. The equivalent list state for each node is shown next to that node (taken from [49]), (b) The binary tree structure for 8 classes. For a coming test data, it is compared with each two pairs, and the winner will be tested in an upper level until the top of the tree is reached. The numbers 1-8 encode the classes (taken from [48,9])
3
Applications of SVMs for Pattern Recognition
In this Section, we survey applications of pattern recognition using SVMs. We classify existing applications into seven categories according to their aims. Some methods, which are not included in major categories, are classified into other methods and there can be more application areas which are not included in this section. Table 2 shows the summary of major SVMs-related applications Table 2. Summary of major SVMs-related applications
Categories
Major differences Frontal face detection
Face Detection
To speed up face detection on skin segmented region
Summary of applications -
applied SVM to face detection first suggested novel decomposition algorithm [51]
-
face detection/eye detection [52] ICA features as an input [83] orthogonal Fourier-Mellin Moments as an input [11] overcomplete wavelet decomposition as an input [76]
-
Applications of Support Vector Machines for Pattern Recognition -
To speed up face detection
Face Detection
Multi-view face detection
-
-
Combination of multiple methods
Object Recognition
reformulated Fishers’ linear discriminant ratio to quadratic problem to apply SVM [8]
-
showed that the performance of SVMs was relatively insensitive to the representation space(PCA, LDA) and preprocessing steps [5]
ORL database (Recognition Rate 97%)
-
bottom-up tree multi-class method input feature for SVM was extracted by PCA [9,48]
ORL database (Recognition Rate 98%)
-
suggested the modified kernel to explore spatial relationships of the facial features [56]
-
top-down tree multi-class method 3D range data for 3D shape features and 2D textures are projected onto PCA subspace and PCs’ are input to SVMs [50] compared component-based features with global feature as an input of SVM SVM gave better performance when component-based features were used [10]
M2VTS database (EER=1.0)
Own database (Recognition Rate 90%)
-
Own database
Object Recognition
eyes-in-whole and face templates as preprocessing [78] calculated reduced support vectors [79] constructed separate SVMs for face detection on different views [26, 54, 80] eigenface for a coarse face detection followed by an SVM for fine detection [55] Majority voting on outputs of 5 different kernels of SVMs [77]
-
M2VTS database (EER=3.7%) Face Vaerification
-
221
Own database (People Rec. Rate: 99.5% ; Pose Rec. Rate : 84.5%)
-
-
people recognition(4 people) pose recognition(4 poses) compared bottom-up and top-down multi-class SVM and the results showed similar performance of two methods [24]
222
Hyeran Byun and Seong-Whan Lee -
-
COIL database (7200 images 72 views per each objects 100 objects)
-
-
Object Recognition Own database
Own database
Own database
Own database, Character recognition Handwritten Character/ Digit Recognition
-
-
radar target recognition [14] pedestrian recognition [84]
-
used local view and global view for character recognition local view model for input normalization SVM, global view model for recognition [16]
-
-
NIST database, Handwritten digit recognition (Rec. Rate:98.06%)
showed that SVMs gave a good performance for 3D object recognition from single view tested on many synthesized images with noise, occlusion, and pixel shifting [47] illustrated the potential of SVMs in terms of the number of training views per object(from 36 views to 2 views) for 3D object recognition showed that the performance was decreased much when the number of training views were less than 18 views [15] people detection recognized trajectory of moving people [57] detected moving vehicle constructed the problem as 2-class problem by classifying moving vehicle from shadows [13]
-
-
combined structural and statistical features are input to single SVM classifier constructed different SVM classifier for each feature and then combined 2 different SVMs by rule-based reasoning single SVM gave better performance[17]
Applications of Support Vector Machines for Pattern Recognition
Handwritten Digit Recognition
-
compared the performance according to: effect of input dimension, effect of the kernel function(Linear, Polynomial, Gaussian), comparison of different classifier (ML, MLP, SOM+LVQ, RBF, SVM), comparison of 3 types of multi-class SVM(one-to-others, pairwise, decision tree)[45]
-
extracted biologically plausible features showed that their extracted features were linearly separable features by using linear SVM classifier [18]
NIST database, Handwritten digit recognition
-
Utterance verification for Speech recognition Speaker/ Speech Recognition
-
SVMs are used to accept keyword or reject non-keyword for speech recognition [22]
-
PolyVar telephone database is used [21] new method for normalizing polynomial kernel to use with SVMs, YOHO database, text independent, best EER=0.34% [23]
-
Speaker verification/ recognition
-
Image Retrieval
Prediction
Brodatz texture database Correl image database Financial time series prediction Bankruptcy prediction
223
combined Gaussian Mixture Model in SVM outputs text independent speaker verification best EER = 1.56% [20]
- boundaries between classes were obtained by SVM [33] -
SVMs were used to separate two classes of relevant and irrelevant images [34, 36, 87]
-
C-ascending SVMs were suggested based on the assumption that it was better to give more weights on recent data than distant data [41]
-
suggested to select suitable input variables that tends to discriminate within the SVM kernel used [40]
224
Hyeran Byun and Seong-Whan Lee -
FERET database : 3.4% error rate compared SVM-based method to: linear, quadratic, FLD, RBF, ensemble-RBF [27]
Goal detection
-
ghost goal detection [64]
Fingerprint classification
-
Types of fingerprints were classified into 5 classes [62]
-
extracted data points from huge databases and the accuracy of a classifier trained on this reduced sets were comparable to results from training with the entire data sets [42]
-
on FERET database [31,18]
-
bullet-hole classification for autoscoring [32] white blood cell classification [88] spam categorization [89] cloud classification [74] hyperspectral data classification [28] storm cell classification [29] image classification [30]
Gender classification
Other Classifications
Data condensation
Face pose classification
Other Classifications
3.1 Face Detection and Recognition
Face detection, verification, and recognition are among the most popular issues in biometrics, identity authentication, access control, video surveillance, and human-computer interfaces. There is much active research in this area, and the different applications use different methodologies. However, it is very difficult to achieve reliable performance. The reasons are the difficulty of distinguishing different persons who have approximately the same facial configuration, and the wide variations in the appearance of a particular face caused by changes in pose, illumination, facial makeup, and facial expression [50]. Glasses or a moustache also make it difficult to detect and recognize faces. Recently many researchers have applied SVMs to face detection, facial feature detection, face verification, face recognition, and facial expression recognition, and have compared their results with other methods. Each method used different input features, different databases, and different kernels for the SVM classifier. Face Detection: The application of SVMs to frontal face detection in images was first proposed by Osuna et al. [51]. The proposed algorithm scans input images with a 19x19 window, and an SVM with a 2nd-degree polynomial kernel function is trained with a novel decomposition algorithm that guarantees global optimality. To avoid exhaustive scanning for face detection, SVMs have also been applied to different features of
segmented skin regions. Kumar and Poggio [52] recently incorporated Osuna et al.'s SVM algorithm in a system for real-time tracking and analysis of faces, applying it on skin regions and also using it to detect eyes. In [83], SVMs classified ICA features after applying a skin color filter for face detection, and the authors showed that the ICA features gave better generalization than training the SVM directly on the image data. Terrillon et al. [11] applied an SVM to invariant orthogonal Fourier-Mellin moments as features for binary face/non-face classification on skin color-based segmented images, and compared the performance of the SVM face detector to a multi-layer perceptron in terms of correct face detection (CD) and correct face rejection (CR). To speed up face detection, two templates, eyes-in-whole and face, are used in [78] to filter out face candidates before SVMs classify face and non-face. As another way to improve the speed of the SVM algorithm, [79] found a set of reduced support vectors (RVs), calculated from the support vectors, which are used to speed up the evaluation sequentially. SVMs have also been used for multi-view face detection by constructing separate SVMs specific to different views based on pose estimation; for face recognition, a frontal-view SVM-based face recognizer is used if the detected face is in frontal view after head pose estimation [26,54,80]. Combined methods have also been tried to improve face detection. In [55], the authors tested three face detection algorithms, the eigenface method, the SVM method, and a combined method, in terms of both speed and accuracy for multi-view face detection; the combined method consists of a coarse detection phase by the eigenface method followed by a fine SVM phase, and achieved improved performance by speeding up the computation while keeping the accuracy. Buciu et al. [77] attempted to improve face detection by majority voting on the outputs of 5 different SVM kernels. Papageorgiou et al. [76] applied an SVM to an overcomplete wavelet representation to detect faces and people, and Richman et al. [82] applied an SVM to find the nose cross-section for face detection. Face Recognition and Authentication: The recognition of faces is a well-established field of research, and a large number of algorithms have been proposed in the literature. Machine recognition of faces involves problems that belong to the following categories, whose objectives are briefly outlined [8]:
- Face recognition: Given a test face and a set of reference faces in a database, find the N most similar reference faces to the test face.
- Face authentication: Given a test face and a reference face, decide whether the test face is identical to the reference face.
Guo et al. [9,48] proposed a multi-class SVM with a binary tree recognition strategy for face recognition. Normalized features extracted by PCA were the input of the SVM classifier. Other papers used different inputs to an SVM classifier for face recognition. Heisele et al. [10] developed a component-based method and a global method for face recognition. In the component-based system they extracted facial components and combined them into a single feature vector, which is classified by an SVM; the global system used an SVM to recognize faces by classifying a single feature vector consisting of the gray values of the whole face image. Their results showed that the
component-based method outperformed the global method. Kim et al. [56] modified the SVM kernel to explore spatial relationships among potential eye, nose, and mouth objects, and compared their kernel with existing kernels. Wang et al. [50] proposed a face recognition algorithm based on both 3D range and 2D gray-level facial images: 2D texture and 3D shape features are projected onto a PCA subspace, and the integrated 2D and 3D features are input to an SVM to recognize faces. For face authentication and recognition, Jonsson et al. [5] showed that SVMs extract the relevant discriminative information from the training data and that the performance of SVMs is relatively insensitive to the representation space and preprocessing steps. Tefas et al. [8] reformulated Fisher's discriminant ratio as a quadratic optimization problem subject to a set of inequality constraints, in order to enhance the performance of morphological elastic graph matching for frontal face authentication; SVMs, which find the optimal separating hyperplane, are constructed to solve the reformulated quadratic optimization problem for face authentication.
3.2 Object Detection and Recognition
Object detection and recognition aim to find and track moving people or traffic situations for surveillance or traffic control. Nakajima et al. [24] formulated people recognition and pose estimation as a multi-class classification problem; they used bottom-up and top-down multi-class SVMs, and the two types of SVM classifiers showed very similar performance. 3D object recognition was studied in [15] and [47]. Both used the COIL object database, which contains 7200 images of 100 objects with 72 different views per object. Roobaert et al. [15] studied 3D object recognition with SVMs to illustrate their potential in terms of the number of training views per object; their results showed that performance decreased sharply when the number of training views was less than 18. Pontil and Verri [47] used linear SVMs for aspect-based 3D object recognition from a single view, without feature extraction, data reduction, or pose estimation; they tested the SVM method on synthesized images of the COIL database with noise, occlusion, and pixel shifts, and obtained very good performance. Pittore et al. [57] proposed a system that detects the presence of moving people, represents the event by using an SVM for regression, and recognizes trajectories of visual dynamic events from an image sequence with an SVM classifier. Gao et al. [13] proposed a shadow and headlights elimination algorithm by treating the task as a 2-class problem; that is, an SVM classifier was used to separate real moving vehicles from shadows. Other object recognition applications include radar target recognition [14] and pedestrian recognition [84].
3.3 Handwritten Character/Digit Recognition
Among SVM-based applications, SVMs have been shown to largely outperform all other learning algorithms on the handwritten digit recognition problem, if one excludes the influence of domain knowledge [15]. A major problem in handwriting recognition is the huge variability and distortion of patterns. Elastic models based on local observations and dynamic programming, such as HMMs, are efficient at absorbing this
variability, but their view is local [16]. To combine the power of local and global characteristics, Choisy et al. [16] used an NSPH-HMM for the local view and normalization, and an SVM for the global view, applied to character recognition after normalization by the NSPH-HMM. For handwritten digit recognition, SVMs are used in [17], [18], and [45]. Gorgevik et al. [17] used two different feature families (structural features and statistical features) for handwritten digit recognition with an SVM classifier. They tested a single SVM classifier applied to both feature families as one set; alternatively, the two feature sets were forwarded to two different SVM classifiers and the results were combined by rule-based reasoning. The paper showed that the single SVM classifier was better than rule-based reasoning applied to the two individual classifiers. Teow et al. [18] developed a vision-based handwritten digit recognition system that extracts features which are biologically plausible, linearly separable, and semantically clear; using a linear SVM classifier, they showed that their features are linearly separable over a large training set in a highly non-linear domain. In [45], the performance of handwritten digit recognition is reported according to (1) the effect of the input dimension, (2) the effect of the kernel function, (3) a comparison of different classifiers (ML, MLP, SOM+LVQ, RBF, SVM), and (4) a comparison of three types of multi-class SVMs (one-to-others, pairwise, decision tree).
Speaker/Speech Recognition
In speaker and speech recognition, the two most popular techniques are discriminative classifiers and generative model classifiers. Methods using discriminative classifiers include decision trees, neural networks, SVMs, and others, while the well-known generative approaches include Hidden Markov Models (HMM) and Gaussian Mixture Models (GMM) [20]. Training and testing data may be text dependent or text independent. Bengio et al. [21] and Wan et al. [23] used SVMs for speaker verification on different data sets. In [21], experiments were run on text-dependent and text-independent data, replacing the classical thresholding rule with an SVM that decides whether to accept or reject a speaker; the text-independent tasks gave significant performance improvements. [23] proposed a new technique for normalizing the polynomial kernel for use with SVMs and tested it on the YOHO database. Dong et al. [20] combined a discriminative classifier and a generative model classifier in a natural way by embedding GMMs in the SVM outputs, creating a continuous density support vector machine (CDSVM) for text-independent speaker verification. For utterance verification, which is essential for accepting keywords and rejecting non-keywords in spontaneous speech recognition, Ma et al. [22] trained and tested an SVM classifier on the confidence measurement problem in speech recognition.
3.5 Information and Image Retrieval
Content-based image retrieval is emerging as an important research area with applications to digital libraries and multimedia databases [33]. Guo et al. [33] proposed a new metric, distance-from-boundary, to retrieve texture images; the boundaries between classes are obtained by an SVM. To retrieve more images relevant to a query image, an
SVM classifier was used to separate the two classes of relevant and irrelevant images in [36,34,87]. Drucker et al. [36], Tian et al. [34], and Zhang et al. [87] proposed that SVMs automatically generate preference weights for relevant images; the weights are determined by the distance from the hyperplane trained by the SVM using positive examples (+1) and negative examples (-1).
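As a small illustration of this weighting idea (an assumption about how it could be realized, not the exact procedure of [34,36,87]), the following sketch turns SVM decision values, i.e. signed distances from the relevance hyperplane, into a ranking and preference weights; the decision values here are stand-ins.

```python
# Illustrative sketch (not from the cited papers): turning SVM decision values
# (signed distances from the relevance hyperplane) into preference weights for
# ranking retrieved images.  In practice the values would come from an SVM
# trained on positive (+1, relevant) and negative (-1, irrelevant) feedback.
import numpy as np

decision_values = np.array([2.3, 0.7, -0.4, 1.1, -1.8])   # stand-in SVM outputs
ranking = np.argsort(-decision_values)                     # most relevant first
weights = np.clip(decision_values, 0.0, None)              # keep only relevant side
if weights.sum() > 0:
    weights = weights / weights.sum()                      # normalized preference weights
print(ranking, weights)
```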
3.6 Prediction
The aim of many nonlinear forecasting methods [37,39,40,41] is to predict the next points of a time series. Tay and Cao [41] proposed C-ascending SVMs, which increase the value of C, the relative importance of the empirical risk with respect to the regularization term, over time. This idea is based on the assumption that it is better to give more weight to recent data than to distant data. Their results showed that C-ascending SVMs gave better performance than the standard SVM in financial time series forecasting. Fan et al. [40] adopted an SVM approach to the problem of predicting corporate distress from financial statements. For this problem, the choice of input variables (financial indicators) affects the performance of the system; the paper suggested selecting input variables that maximize the distance between vectors of different classes and minimize the distance within the same class, and a Euclidean distance-based input selection provided a choice of variables that tends to discriminate well within the SVM kernel used.
3.7 Other Applications
There are many more applications of SVMs to pattern recognition problems. Yang et al. [27] investigated SVMs for visual gender classification with low-resolution "thumbnail" faces (21-by-12 pixels) processed from 1,755 images of the FERET face database. They trained and tested each classifier on the face images using five-fold cross validation. The performance of the SVM (3.4% error) was shown to be superior to traditional pattern classifiers (linear, quadratic, FLD, RBF, ensemble-RBF). Gutta et al. [31] applied SVMs to face pose classification on the FERET database and reported 100% accuracy, and Huang et al. [81] applied SVMs to classify three different face poses. Yao et al. [62] proposed classifying fingerprints into 5 classes; SVMs were trained on a combination of flat and structured representations and showed good performance, making this a promising approach for fingerprint classification. In addition, SVMs have been applied to many other problems such as data condensation [42], goal detection [64], and bullet-hole image classification [32]. Data condensation [42] selects a small subset from a huge database such that the accuracy of a classifier trained on the reduced set is comparable to training with the entire data set; the paper extracts data points lying close to the class boundaries, the support vectors, which form a much reduced but critical set for classification using SVMs. The problem of large memory requirements for training SVMs in batch mode is addressed by preserving only the support vectors at each incremental step and adding them to the training set for the next step, which is called incremental learning. Goal detection for a particular event, ghost goals, using SVMs was proposed by
Ancona et al. [64]. Xie et al. [32] focused on the application of SVMs to the classification of bullet-hole images in an auto-scoring system; an image is classified as containing one, two, or more bullet holes by multi-class SVMs. SVMs have also been applied to white blood cell classification [88], spam categorization [89], text detection and categorization [85,86], and other problems [63,65].
4 Limitations of SVM
The performance of SVMs largely depends on the choice of kernel. Once the kernel is fixed, SVMs have only one user-specified parameter, the error penalty C, but the choice of a kernel function well suited to the specific problem is very difficult [44]. Smola et al. [66] explained the relation between the SVM kernel method and standard regularization theory, but there is no theory concerning how to choose good kernel functions in a data-dependent way [4]. Amari and Wu [4] proposed a modified kernel to improve the performance of an SVM classifier, based on information-geometric considerations of the structure of the Riemannian geometry induced by the kernel; the idea is to enlarge the spatial resolution around the boundary by a conformal transformation so that the separability of the classes is increased. Speed and size are another problem of SVMs, both in training and testing. In terms of running time, SVMs are slower than other neural networks for a similar generalization performance [68]. Training for very large datasets with millions of support vectors is an unsolved problem [44]; even though Platt [69] and Keerthi et al. [70] proposed SMO (Sequential Minimal Optimization) and a modified SMO to address the training problem, it remains an open problem. The issue of how to control the selection of support vectors is another difficult problem, particularly when the patterns to be classified are nonseparable and the training data are noisy. In general, attempts to remove known errors from the data before training, or to remove them from the expansion after training, will not give the same optimal hyperplane, because the errors are needed for penalizing nonseparability [68]. Lastly, although some research has been done on training multi-class SVMs, multi-class SVM classifiers remain an area for further research [44].
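As a hedged illustration of how the kernel and the penalty C are usually chosen in practice when no theory is available, the following sketch runs a cross-validated grid search; the scikit-learn library and the synthetic data are assumptions made only for this example and are not part of the surveyed work.

```python
# Illustrative sketch: selecting the kernel and the error penalty C by
# cross-validated grid search (scikit-learn and synthetic data are assumed
# purely for illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

param_grid = [
    {"kernel": ["rbf"],  "C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
    {"kernel": ["poly"], "C": [0.1, 1, 10],      "degree": [2, 3]},
]
search = GridSearchCV(SVC(), param_grid, cv=5)   # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, search.best_score_)
```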
5 Conclusion
We have presented a brief introduction to SVMs and several applications of SVMs to pattern recognition problems. SVMs have been successfully applied to a number of applications, ranging from face detection and recognition, object detection and recognition, handwritten character and digit recognition, and speaker and speech recognition, to information and image retrieval and prediction, because they yield excellent generalization performance on many statistical problems without any prior knowledge, even when the dimension of the input space is very high. In this paper, we did not compare performance results across papers for the same application.
Some studies compared the performance of different kinds of SVM kernels on their problems, and most results showed that the RBF kernel was usually better than linear or polynomial kernels. The RBF kernel usually performs better for several reasons: (1) it has better boundary response, as it allows extrapolation, and (2) most high-dimensional data sets can be approximated by Gaussian-like distributions similar to those used by RBF networks [81]. Among the application areas, the most popular research fields for applying SVMs are face detection, verification, and recognition. SVMs are binary classifiers and were first applied to verification or 2-class classification problems, but they have since been used for multi-class classification with the one-to-others and the pairwise bottom-up and top-down multi-class classification methods. Most applications using SVMs report that SVM-based problem solving outperforms other methods. Although SVMs do not have a long history, they have been applied to a wide range of machine learning tasks and can generate many possible learning architectures through an appropriate choice of kernels. If the limitations related to the choice of kernels, training speed, and size are solved, SVMs can be applied to even more real-life classification problems.
Acknowledgements
The authors would like to thank Mr. Byungchul Ko for many useful suggestions that helped to improve the presentation of the paper. This research was supported by the Brain Neuroinformatics Research Program and the Creative Research Initiative Program of the Ministry of Science and Technology, Korea.
References
1. B. Boser, I. Guyon, and V. Vapnik, A training algorithm for optimal margin classifiers, In Proceedings of Fifth Annual Workshop on Computational Learning Theory, New York, (1992).
2. C. Cortes and V. Vapnik, Support vector networks, Machine Learning, vol. 20, pp. 273-297, (1995).
3. V. Vapnik, The nature of statistical learning theory, Springer, (1995).
4. S. Amari and S. Wu, Improving support vector machine classifiers by modifying kernel functions, Neural Networks, 12, pp. 783-789, (1999).
5. K. Jonsson, J. Kittler, and Y. P. Matas, Support vector machines for face authentication, Journal of Image and Vision Computing, vol. 20, pp. 369-375, (2002).
6. Juwei Lu, K. N. Plataniotis, and A. N. Ventesanopoulos, Face recognition using feature optimization and v-support vector machine, IEEE Neural Networks for Signal Processing XI, pp. 373-382, (2001).
7. F. Seraldi and J. Bigun, Retinal vision applied to facial features detection and face authentication, Pattern Recognition Letters, vol. 23, pp. 463-475, (2002).
8. A. Tefas, C. Kotropoulos, and I. Pitas, Using support vector machines to enhance the performance of elastic graph matching for frontal face authentication, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 7, pp. 735-746, (2001).
9. G. Guo, S. Z. Li, and K. L. Chan, Support vector machines for face recognition, Journal of Image and Vision Computing, vol. 19, pp. 631-638, (2001).
10. B. Heisele, P. Ho, and T. Poggio, Face recognition with support vector machines: global versus component-based approach, In Proceedings of Eighth IEEE Int. Conference on Computer Vision, vol. 2, pp. 688-694, (2001).
11. T. J. Terrillon, M. N. Shirazi, M. Sadek, H. Fukamachi, and S. Akamatsu, Invariant face detection with support vector machines, In Proceedings of 15th Int. Conference on Pattern Recognition, vol. 4, pp. 210-217, (2000).
12. E. M. Santos and H. M. Gomes, Appearance-based object recognition using support vector machines, In Proceedings of XIV Brazilian Symposium on Computer Graphics and Image Processing, pp. 399, (2001).
13. D. Gao, J. Zhou, and Leping Xin, SVM-based detection of moving vehicles for automatic traffic monitoring, IEEE Intelligent Transportation Systems, pp. 745-749, (2001).
14. Z. Li, Z. Weida, and J. Licheng, Radar target recognition based on support vector machine, In Proceedings of 5th Int. Conference on Signal Processing, vol. 3, pp. 1453-1456, (2000).
15. D. Roobaert and M. M. Van Hulle, View-based 3D object recognition with support vector machines, In Proceedings of IX IEEE Workshop on Neural Networks for Signal Processing, pp. 77-84, (1999).
16. C. Choisy and A. Belaid, Handwriting recognition using local methods for normalization and global methods for recognition, In Proceedings of Sixth Int. Conference on Document Analysis and Recognition, pp. 23-27, (2001).
17. D. Gorgevik, D. Cakmakov, and V. Radevski, Handwritten digit recognition by combining support vector machines using rule-based reasoning, In Proceedings of 23rd Int. Conference on Information Technology Interfaces, pp. 139-144, (2001).
18. L. N. Teow and K. F. Loe, Robust vision-based features and classification schemes for off-line handwritten digit recognition, Pattern Recognition, January, (2002).
19. C. S. Shin, K. I. Kim, M. H. Park, and H. J. Kim, Support vector machine-based text detection in digital video, In Proceedings of IEEE Workshop on Neural Networks for Signal Processing, vol. 2, pp. 634-641, (2000).
20. X. Dong and W. Zhaohui, Speaker recognition using continuous density support vector machines, Electronics Letters, vol. 37, pp. 1099-1101, (2001).
21. S. Bengio and J. Mariethoz, Learning the decision function for speaker verification, In Proceedings of IEEE Int. Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 425-428, (2001).
22. C. Ma, M. A. Randolph, and J. Drish, A support vector machines-based rejection technique for speech recognition, In Proceedings of IEEE Int. Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 381-384, (2001).
23. V. Wan and W.M. Campbell, Support vector machines for speaker verification and identification, In Proceedings of IEEE Workshop on Neural Networks for Signal Processing X, vol. 2, (2000). 24. C. Nakajima, M. Pontil, and T. Poggio, People recognition and pose estimation in image sequences, , In Proceedings of IEEE Int. Joint Conference on Neural Networks, vol. 4, pp. 189-194, (2000). 25. E. Ardizzone, A. Chella, and R. Pirrone, Pose classification using support vector machines, , In Proceedings of IEEE Int. Joint Conference on Neural Networks, vol. 6, pp. 317-322, (2000). 26. J. Ng and S. Gong, Composite support vector machines for detection of faces across views and pose estimation, Image and Vision Computing, vol. 20, Issue 56, pp. 359-368, (2002). 27. M. H. Yang and B. Moghaddam, Gender classification using support vector machines, , In Proceedings of IEEE Int. Conference on Image Processing, vol. 2, pp. 471-474, (2000). 28. J. Zhang, Y Zhang, and T. Zhou, Classification of hyperspectral data using support vector machine, In Proceedings of Int. Conference on Image Processing, vol. 1, pp. 882-885, (2001). 29. L. Ramirez, W. Pedrycz, and N. Pizzi, Severe storm cell classification using support vector machines and radial basis approaches, In Proceedings of Canadian Conference on Electrical and Computer Engineering, vol. 1, pp. 87-91, (2001). 30. Y. Zhang, R. Zhao, and Y. Leung, Image Classification by support vector machines, In Proceedings of Int. Conference on Intelligent Multimedia, Video and Speech Processing, pp. 360-363, (2001). 31. S. Gutta, J.R.J. Huang, P. Jonathon, and H. Wechsler, Mixture of experts for classification of gender, ethnic origin, and pose of human, IEEE Trans. on Neural Networks, vol. 11, Issue.4, pp. 948-960, (2000). 32. W.F. Xie, D.J. Hou, and Q. Song, Bullet-hole image classification with support vector machines, In Proceedings of IEEE Signal Processing Workshop on Neural Networks for Signal Processing, vol.1, pp. 318-327, (2000). 33. G. Guo, H.J. Zhang, and S.Z. Li, Distance-from-boundary as a metric for texture image retrieval, In Proceedings of IEEE Int. Conference on Acoustics, Speech, and Signal Processing, vol. 3, pp. 1629-1632, (2001). 34. Q. Tian, P. Hong, and T.S. Huang, Update relevant image weights for contentbased image retrieval using support vector machines, In Proceedings of IEEE Int. Conference on Multimedia and Expo, vol.2, pp. 1199-1202, (2000). 35. P. Hong, Q. Tian, and T,S. Huang, Incorporate support vector machines to content-based image retrieval with relevance feedback, In Proceedings of Int. Conference on Image Processing, vol. 3, pp. 750-753, (2000). 36. H. Druker, B. Shahrary, and D.C. Gibbon, Support vector machines: relevance feedback and information retrieval, Information Processing & Management, vol. 38, Issue 3, pp. 305-323, (2002). 37. T. Van Gestel, J.A.K. Suykens, D.E. Baestaens, A. Lambrechts, G. Lanckriet, B. Vandaele, B. De Moor, and J. Vandewalle, Financial time series prediction using least squares support vector machines within the evidence framework, IEEE Trans. On Neural Networks, vol. 12. Issue 4, pp. 809-821, (2001).
38. T. Frontzek, T. Navin Lal, and R. Eckmiller, Predicting the nonlinear dynamics of biological neurons using support vector machines with different kernels, In Proceedings of Int. Joint Conference on Neural Networks, vol. 2, pp. 1492-1497, (2001). 39. D. Mckay and C. Fyfe, Probability prediction using support vector machines, In Proceedings of Int. Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, vol. 1, pp. 189-192, (2000). 40. A. Fan and M. Palaniswami, Selecting bankruptcy predictors using a support vector machine approach, vol. 6, pp. 354-359, (2000). 41. F. Tay and L.J. Cao, Modified support vector machines in financial time series forecasting, Neurocomputing, Sept., (2001). 42. P. Mitra, C.A. Murthy, and S.K. Pal, Data condensation in large database by incremental learning with support vector machines, In Proceedings of 15th Int. Conference on Pattern Recognition, vol. 2, pp. 708-711, (2000). 43. B. Gutschoven and P. Verlinde, Multi-modal identity verification using support vector machines (SVM), In Proceedings of The third Int. Conference on Information Fusion, pp. 3-8, (2000). 44. C. C. Burges, A tutorial on support vector machines for pattern recognition, In Proceedings of Int. Conference on Data Mining and Knowledge Discovery, 2(2), pp. 121-167, (1998). 45. B. Zhao, Y. Liu, and S.W. Xia, Support vector machines and its application in handwritten numerical recognition, In Proceedings of 15th Int. Conference on Pattern Recognition, vol. 2, pp. 720-723, (2000). 46. S. Bernhard, C.C. Burges, and A.J. Smola, Pairwise classification and support vector machines, The MIT Press, C. Massachusetts, London England, pp. 255268, Jan. (1999). 47. M. Pontil and A. Verri. Support vector machines for 3-D object recognition, IEEE Trans. on Pattern Analysis and Machine Intelligence, pp. 637-646, (1998). 48. G. Guodong, S. Li, and C. Kapluk, Face recognition by support vector machines. In Proceedings of IEEE Int. Conference on Automatic Face and Gesture Recognition, pp. 196-201, (2000). 49. J. Platt, N. Christianini, and J. Shawe-Taylor, Large margin DAGs for multiclass classification, Advances in Neural Information Processing Systems, (2000). 50. Y. Wang, C.S. Chua, and Y.K, Ho. Facial feature detection and face recognition from 2D and 3D images, Pattern Recognition Letters, Feb., (2002). 51. E. Osuna, R. Freund, and F. Girosi, Training support machines: An application to face detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 130-136, (1997). 52. V. Kumar and T. Poggio, Learning-based approach to real time tracking and analysis of faces, In Proceedings of IEEE Int. Conference on Automatic Face and Gesture Recognition, (2000). 53. E. Hjelams and B. K. Low, Face Detection: A Survey, Computer Vision and Image Understanding, 83, pp. 236-274, (2001).
54. J. Ng and S. Gong, Performing multi-view face detection and pose estimation using a composite support vector machine across the view sphere, In Proceedings of IEEE Int. Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, (1999). 55. Y. Li, S. Gong, J. Sherrah, and H. Liddell, Multi-view Face Detection Using Support Vector Machines and Eigenspace Modelling, In Proceedings of Fourth Int. Conference on Knowledge-Based Intelligent Engineering Systems & Allied Technologies, pp. 241-244, (2000). 56. K.I. Kim, J. Kim, and K Jung, Recognition of facial images using support vector machines, In Proceedings of 11th IEEE Workshop on Statistical Signal Processing, pp. 468-471, (2001). 57. M. Pittore, C. Basso, and A. Verri, Representing and recognizing visual dynamic events with support vector machines, In Proceedings of Int. Conference on Image Analysis and Processing, pp. 18-23, (1999). 58. A. K. Jain and B. Yu, Automatic text location in images and video frames, Pattern Recognition, vol. 31, No. 12, pp. 2055-2976, (1998). 59. S. Antani, U. Gargi, D. Crandall, T. Gandhi, and R. Kasturi, Extraction of text in video, Dept. of Computer Science and Eng. Pennsylvania Stat Univ., Technical Report, CSE-99-016, (1999). 60. I. Jang, B.C. Ko, and H. Byun, Automatic text extraction in news images using morphology, In Proceedings of SPIE Visual Communication and Image Processing, San Jose, Jan., (2002). 61. T. Joachims, Text categorization with support vector machines: learning with many relevant features, In Proceedings of 10th European Conference on Machine learning, (1999). 62. Y. Yao, G. L. Marcialis, M. Pontil, P. Frasconi, and F. Roli, Combining flat and structured representations for fingerprint classification with recursive neural networks and support vector machines, Pattern Recognition, pp. 1-10, (2002). 63. A. Gretton, M. Davy, A. Doucet, and P. J.W. Rayner, Nonstationary signal classification using support vector machines, In Proceedings of 11th IEEE Workshop on Statistical Signal Processing, pp. 305-308, (2001). 64. N. Ancona, G. Cicirelli, A. Branca, and A. Distante, Goal detection in football by using support vector machines for classification, In Proceedings of Int. Joint Conference on Neural Networks, vol.1 pp. 611-616, (2001). 65. S. I. Hill, P. J. Wolfe, and P. J. W. Rayner, Nonlinear perceptual audio filtering using support vector machines, In Proceedings of 11th IEEE Int. Workshop on Statistical Signal Processing, pp. 305-308, (2001). 66. A. J. Smola, B. Scholkopf, and K. R. Müller, The connection between regularization operators and support vector kernels, Neural Networks, 11, pp. 637-649, (1998). 67. C. J. C. Burges, Simplified support vector decision rules, In Proceedings of 13th Int, Conference on Machine Learning, pp. 71-77, (1996). 68. S. Haykin, Neural Networks, Prentice Hall Inc. (1999). 69. J. Platt, Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines, Microsoft Research Technical Report MSR-TR-98-14, (1998).
70. S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, and K.R.K. Murthy, Improvements to platt's SMO algorithm for SVM classifier design, Technical report, Dept of CSA, IISc, Bangalore, India, (1999). 71. B. Schölkopf and C. Burges, Advances in Kernel Methods: Support Vector Machines, MIT Press, Cambridge, MA, Dec. (1998). 72. A.J. Smola and B. Schölkopf, A tutorial on support vector regression, NeuroCOLT Technical Report TR-1998-030, Royal Holloway College, London, UK, (1998). 73. J. Kivinen, M. Warmuth, and P. Auer, The perceptron algorithm vs. winnow: Linear vs. Logarithmic mistake bounds when few input variables are relevant, In Proceedings of Int. Conference on Computational Learning Theory, (1995). 74. M.R. Azimi-Sadjadi and S.A. Zekavat, Cloud classification using support vector machines, In Proceedings of IEEE Geoscience and Remote Sensing Symposium, vol. 2, pp. 669-671, (2000). 75. C. Campbell. An introduction to kernel methods. In R.J. Howlett and L.C. Jain, editors, Radial Basis Function Networks: Design and Applications, page 31. Springer Verlag, Berlin, (2000). 76. C. P. Papageorgiou, M. Oren, and T. Poggio, A General framework for object detection, In Proceedings of International Conference on Computer Vision, pp. 555-562, (1998). 77. N. Bassiou, C. Kotropoulos, T. Kosmidis, and I. Pitas, Frontal face detection using support vector machines and back-propagation neural networks, In Proceedings of Int. Conference on Image Processing, pp. 1026-1029, (2001). 78. H. Ai, L. Liang, and G. Xu, Face detection based on template matching and support vector machines, In Proceedings of Int. Conference on Image Processing, pp. 1006-1009, (2001). 79. S. Romdhani, B. Schokopf, and A. Blake, Computationally efficient face dectection, In Proceedings of Int. Conference Computer Vision, pp. 695-700, (2001). 80. Y. Li, S. Gong, and H. Liddell, Support vector regression and classification based multi-view face detection and recognition, In Proceedings of Face and Gesture Recogntion, pp. 300-305, (2000). 81. J. Huang, X. Shao, and H. Wechsler, Face pose discrimination using support vector machines(SVM), In Proceedings of Int. Conference on Image Processing, pp. 154-156, (1998). 82. M.S. Richman, T. W. Parks, and H. C. Lee, A novel support vector machinebased face etection method, In Proceedings of Record of Thirty-Third Asilomar on Signals, Systems, and Computers, pp. 740-744 ( 1999). 83. Y. Qi, D. Doermann, and D. DeMenthon, Hybrid independent component analysis and support vector machine, In Proceedings of Int. Conference on Acoustics, Speech and Signal Processing, vol. 3, pp. 1481-1484, (2001). 84. C. Wohler and U. Krebel, Pedestrian recognition by classification of image sequences-global approaches vs. local shape-temporal processing, In Proceedings of Int. Conference on Image Processing, pp. 540-544, (2000). 85. K. I. Kim, K. Jung, S. H. Park, and H. J. Kim, Supervised texture segmentation using support vector machines, Electronics Letters, Vol. 35, No. 22, pp. 19351937, (1999).
86. K. I. Kim, K. Jung, S. H. Park, and H.J Kim, Support vector machine-based text detection in digital video, Pattern Recognition, vol. 34, pp. 527-529 (2001). 87. L. Zhang, F. Lin, and B. Zhang, Support vector machine learning for image retrieval, In Proceedings of Int. Conference on Image Processing, pp. 721-724, (2001). 88. C. Ongun, U. Halici, K. Leblebicioglu, V. Atalay, M. Beksac, and S. Beksac, Feature extraction and classification of blood cells for an automated differential blood count system, In Proceedings of Int. Joint Conference on Neural Networks, pp. 2461-2466, (2001). 89. H. Drucker, D. Wu, and V. Vapnik, Support vector machines for spam categorization, IEEE Transactions on Neural Networks, vol. 10, No. 5, pp. 1048-1054 (1999).
Typhoon Analysis and Data Mining with Kernel Methods
Asanobu Kitamoto
National Institute of Informatics (NII)
2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
[email protected] http://research.nii.ac.jp/~kitamoto/
Abstract. The analysis of the typhoon is based on the manual pattern recognition of cloud patterns in meteorological satellite images by human experts, but this process may be unstable and unreliable. We think it could be improved by taking advantage of both the large collection of past observations and state-of-the-art machine learning methods, among which kernel methods, such as support vector machines (SVM) and kernel PCA, are the focus of this paper. To apply the "learning-from-data" paradigm to typhoon analysis, we built a collection of more than 34,000 well-framed typhoon images to be used for spatio-temporal data mining of typhoon cloud patterns, with the aim of discovering hidden and unknown regularities contained in large image databases. In this paper, we deal with the problem of visualizing and classifying typhoon cloud patterns using kernel methods. We compare preliminary results with baseline algorithms, such as principal component analysis and a k-NN classifier, and discuss the experimental results together with the future direction of the research.
1 Introduction
Most of us are now familiar with the appearance of typhoon cloud patterns thanks to TV weather forecast programs that broadcast meteorological satellite images. A typhoon seems to take an infinite variety of shapes, but we also know that it always takes some variant of a vortex shape, and meteorology experts have been trying to capture this variety of shapes and describe it in a way useful for typhoon analysis and prediction. The current typhoon analysis techniques for estimating intensity therefore rely on the manual interpretation of typhoon cloud patterns by human experts, which may be unstable and unreliable. Our aim is to develop a set of algorithms and tools for the pattern recognition of typhoons to be used in typhoon analysis. The ultimate goal is not to replace human experts; rather, we are more concerned with the data mining viewpoint. Namely, we want to discover unknown regularities and anomalies hidden in the large collection of past observations, and we hope that the obtained knowledge can directly or indirectly help human experts interpret cloud patterns from various viewpoints.
In this paper, we focus on the problem of pattern recognition of the typhoon and illustrate several research issues where kernel methods [1,2,3] should play an important role. The infrastructure of this research is the typhoon image collection [4,5], which consists of about 34,000 well-framed typhoon images for the northern and southern hemispheres, created from the geostationary meteorological satellite GMS-5 (Himawari). The next section describes the background and this image collection. Sections 3 and 4 then describe the application of kernel methods to a real-world application, the pattern recognition of the typhoon, and Section 5 concludes the paper with a discussion.
2 Background

2.1 Meteorological Background
Because of the huge impact of the typhoon on society, the development of typhoon analysis and prediction methods has been one of the primary concerns among meteorologists, and in the 1970s a standard procedure for typhoon analysis, the Dvorak method [6], was established. Dvorak established this empirical method based on his experience of observing numerous satellite images of many typhoons, and since its inception this method has been used in tropical storm analysis centers worldwide. The Dvorak method is essentially the heuristic pattern recognition of typhoon cloud patterns. Its main components are a set of empirical rules that relate various cloud features to a set of parameters representing the intensity of the typhoon, such as the central pressure and maximum wind. The usage of this seemingly unreliable method reflects the underlying situation: reliable ground-truth measurements are usually not available at the center of the typhoon, especially when the typhoon is in the middle of the ocean. Satellite images are therefore an important source of information, and the Dvorak method serves as a guide for human experts to interpret typhoon cloud patterns and subsequently to make decisions on the intensity and future evolution of the typhoon. Although the wide usage of the Dvorak method suggests its effectiveness for typhoon analysis, we point out several drawbacks. First, this analysis method is a collection of empirical rules and lacks theoretical background or statistical justification. Second, the potential improvement of the Dvorak method through learning from historical data has largely been unexplored; most of the valuable satellite data are left unused, mainly because of the volumetric challenges they pose to computing and human resources. Third, the Dvorak method relies heavily on the manual pattern recognition of human experts, and its performance is, for better or worse, dependent on the capability of human experts' pattern recognition, which is subjective in nature.
Table 1. The current status of the typhoon image collection. JMA and BOM stand for Japan Meteorological Agency and Australian Bureau of Meteorology, respectively

Basin                       Northern Hemisphere       Southern Hemisphere
Best Track
  Name of agency            JMA                       BOM
  Latitudinal domain        0°N ~                     ~ 0°S
  Longitudinal domain       100°E ~ 180°E             90°E ~ 170°E
Typhoon Image Collection
  Typhoon seasons           6 seasons (1995-2000)     5 seasons (1995-2000)
  Typhoon sequences         136                       62
  Total images              24,500                    9,400
  Images per sequence       53 ~ 433                  25 ~ 480
  Observation frequency     1 hour                    1 hour
2.2 Challenging Issues
The above arguments suggest that typhoon analysis can be approached with pattern recognition and machine learning algorithms. We further list some of the research issues for typhoon analysis as follows:
- Clustering: Find prototypes of cloud patterns that summarize the whole variety of cloud patterns.
- Classification: Classify an observed cloud pattern into an appropriate intensity category and development stage.
- Regression: Estimate central pressure, maximum wind, and other parameters directly from cloud patterns.
- Novelty detection: Detect characteristic cloud features or anomalies that indicate unusual rapid development in the near future.
On the other hand, there is another family of research issues related to typhoon prediction, whose solution would have greater societal impact due to its direct linkage to disaster management. Nevertheless, from a scientific point of view, typhoon prediction is more challenging due to the chaotic nature of the atmosphere [4,7]; that is, similar observations in the past do not lead to similar futures because of the nonlinearity of the atmosphere. We therefore concentrate on typhoon analysis issues under the "learning-from-data" paradigm. This paradigm requires a large number of training and testing samples to derive meaningful results from data. We therefore start by building a large collection of typhoon images, the Typhoon Image Collection, that provides a consistent and comprehensive archive of satellite typhoon images.
2.3 Typhoon Image Collection
The typhoon image collection consists of about 34,000 typhoon images, as summarized in Table 1.
Fig. 1. Image collections for (a) the northern hemisphere and (b) the southern hemisphere. The variety of cloud patterns is summarized here by K-means clustering with a weak topological ordering by Sammon mapping [8]
About 24,500 images, covering 136 typhoon sequences, are from the northern hemisphere, and about 9,400 images, covering 62 typhoon sequences, are from the southern hemisphere; these numbers will increase every year. Respecting the importance of the typhoon, meteorological organizations publish an official record called the "Best Track" that summarizes the life cycle of every typhoon based on the most reliable retrospective study of geographical location, central pressure, maximum wind, and other parameters. We use this information to create the collection of centered, or well-framed, images, with the necessary preprocessing steps such as map projection and data cleaning. Figure 1 visualizes the variety of typhoon images.
2.4 Preprocessing
Basically we refer the reader to [4] for the details associated with the creation of this image collection, but we briefly comment here on the final product that we use in the subsequent experiments. The most important step for the final product is the classification of images, since the pixel value of remote sensing images (in our case infrared images) is only an indirect indication of cloudiness. Hence we first apply a cloud classification algorithm introduced in [9] to a well-framed typhoon image of 512 × 512 pixels. Then we derive what we call the cloud fraction image, in which each pixel represents the fraction of clouds f ∈ [0, 1] inside the pixel.¹ One pixel of the cloud fraction image corresponds to a block of 16 × 16 pixels of the well-framed image, so this process results in a 32 × 32 = 1024 dimensional cloud fraction vector.

¹ At this moment, we actually assign a different cloudiness value to each pixel based on the cloud type; for example, cumulus cloud (such as cumulonimbus) is more emphasized than cirrus cloud where the intensity of the typhoon is concerned. However, this procedure is still no more than a heuristic, and we omit the details.
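A minimal sketch of the block-averaging step is given below, assuming that the cloud classification stage already provides a per-pixel cloudiness map with values in [0, 1]; the random input image is only a stand-in for such a map.

```python
# Sketch of turning a 512x512 per-pixel cloudiness map (values in [0,1],
# assumed to come from the cloud classification step) into the
# 32x32 = 1024-dimensional cloud fraction vector by averaging 16x16 blocks.
import numpy as np

def cloud_fraction_vector(cloudiness, block=16):
    h, w = cloudiness.shape               # expected 512 x 512
    gh, gw = h // block, w // block       # 32 x 32 grid of blocks
    blocks = cloudiness[:gh * block, :gw * block].reshape(gh, block, gw, block)
    fractions = blocks.mean(axis=(1, 3))  # fraction of cloud per block
    return fractions.ravel()              # 1024-dimensional vector

if __name__ == "__main__":
    img = np.random.default_rng(1).random((512, 512))  # stand-in cloudiness map
    v = cloud_fraction_vector(img)
    print(v.shape)  # (1024,)
```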
2.5 Previous Work
To our knowledge, typhoon research in the pattern recognition and machine learning community has been scarce. Among the few research papers, Lee et al. [10] used an active contour model and elastic graph matching for identifying the cloud patterns of the typhoon. Their goal, however, was the automation of the existing Dvorak method, and their dataset of 145 images was too small to derive any statistical justification of the results. Another paper, by Zhou et al. [11], analyzed satellite images of hurricanes mainly for extracting cloud structure from cloud motion. In contrast, we start our project by building a comprehensive collection of typhoon images with consistent quality, and then apply various data mining algorithms to discover hidden regularities in the large dataset. In this sense, our research has a close connection with other benchmarking studies based on large image collections, such as face recognition and handwritten character recognition.
3 Feature Extraction by Kernel PCA

3.1 Principal Component Analysis
In spite of the recent advances in weather forecasting in general, we do not yet have a realistic model of the typhoon. The simulation of a realistic typhoon is not yet possible, which indicates that we do not have a good mathematical model for describing the observed cloud patterns of the typhoon. Hence we need to build an empirical quantitative model for that purpose, and we consider two choices here. The first choice, a shape-based approach, represents cloud patterns explicitly using mathematical shape models. To deal with temporal dynamics, this shape model should be active, so that it keeps track of cloud deformation [9]; in addition, it should be capable of zeroing in on an appropriate spatial and temporal scale. The second choice, a holistic approach, focuses on lower-level features such as the spatial distribution of pixel values, and derives features from mathematical operations such as multivariate analysis. An example of a holistic approach is principal component analysis (PCA), which is often called EOF (empirical orthogonal function) analysis [12] in the context of meteorology. With PCA we can derive the basic components of typhoon cloud patterns in terms of the maximum variance present in the dataset, and the advantage of using PCA is the optimality property that the mean-squared approximation error in representing the observations by the first q principal components is minimal over all possible q directions. As is well known, however, this
Fig. 2. The eigenvectors of typhoon cloud patterns, or eigen-typhoons, for (a) the northern and (b) the southern hemisphere. In (a) and (b), from the upper-left corner: the mean image, the variance image, and the eigen-pictures from the 1st to the 22nd. (c) The reconstruction of the original cloud pattern from eigen-pictures using the 60 leading components
approximation efficiency does not always lead to a good selection of features for classification. Figure 2 illustrates the gray-scale representation of the principal components, namely eigen-typhoons, for the northern and the southern hemisphere. The first principal eigen-typhoon in the northern hemisphere, which shows the maximum variance in the data collection, corresponds to a component describing the latitudinal structure of the typhoon, whereas subsequent principal eigen-typhoons tend to describe spiral components that look like rainbands. Moreover, the comparison between the northern and the southern hemisphere images suggests that they show nearly the same tendency, except for the sign and the mirror reflection of the latitudinal structure. Hence this structure might suggest that basic components are shared by severe tropical storms in different areas on the Earth. Finally, as Figure 2(c) illustrates, a weighted combination of these components can reconstruct the original image.
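The following sketch illustrates, with stand-in data, how such eigen-typhoons and the reconstruction from leading components can be computed from a matrix of cloud fraction vectors; it is an illustration of standard PCA, not the exact implementation used in this work.

```python
# Sketch: linear PCA on cloud fraction vectors (one 1024-dim vector per image)
# to obtain "eigen-typhoons" and reconstruct an image from leading components.
# The data matrix here is random; in the experiments it would hold the real
# cloud fraction vectors of the image collection.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 1024))               # rows: cloud fraction vectors (stand-in)

mean = X.mean(axis=0)
Xc = X - mean
# PCA via SVD of the centered data (equivalent to eigen-decomposition of the
# covariance matrix).
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
eigen_typhoons = Vt                       # rows: principal directions (32x32 images)
explained = (S ** 2) / np.sum(S ** 2)     # fraction of variance per component

q = 60                                    # number of leading components to keep
coeffs = Xc[0] @ eigen_typhoons[:q].T     # project the first image
reconstruction = mean + coeffs @ eigen_typhoons[:q]
print(explained[:5], np.linalg.norm(X[0] - reconstruction))
```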
3.2 Kernel PCA
Fig. 3. The distribution of typhoon cloud patterns on a low-dimensional space spanned by the leading principal components (components 1-3). Panels above and below the diagonal correspond to results from standard PCA and kernel PCA (RBF kernel with γ = 0.1), respectively

Kernel PCA [3], an extension of linear PCA, extracts principal components in the feature space, which is usually a much higher-dimensional space than the input space. In this paper, the input space is X = [0,1]^N = [0,1]^1024, where the domain [0,1] represents the fraction of cloud in a pixel, and 1024 = 32 × 32 is the size of the cloud fraction image. We use popular kernels as follows:

Polynomial kernel:  K(x, y) = ( (x · y)/N + 1 )^d          (1)
RBF kernel:         K(x, y) = exp( -(γ/N) ||x - y||² )     (2)
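A NumPy sketch of the kernels in Eqs. (1)-(2) and of kernel PCA via the centered Gram matrix is shown below; the experiments reported here use a dedicated LAPACK-based implementation, so this code and its stand-in data are only illustrative.

```python
# Sketch of the kernels in Eqs. (1)-(2) and of kernel PCA on a sampled subset.
# Stand-in random data are used; the real experiments use a LAPACK-based
# implementation of the same eigenvalue problem.
import numpy as np

N = 1024  # dimension of the cloud fraction vectors

def poly_kernel(X, Y, d=3):
    return (X @ Y.T / N + 1.0) ** d                               # Eq. (1)

def rbf_kernel(X, Y, gamma=0.1):
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-(gamma / N) * sq)                              # Eq. (2)

def kernel_pca(X, kernel, n_components=10):
    K = kernel(X, X)
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one                    # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.clip(vals, 1e-12, None))           # normalized expansion coefficients
    return Kc @ alphas                                            # projections of the training data

if __name__ == "__main__":
    X = np.random.default_rng(2).random((200, N))                 # stand-in cloud fraction vectors
    Z_rbf = kernel_pca(X, lambda A, B: rbf_kernel(A, B, gamma=0.1), n_components=3)
    Z_poly = kernel_pca(X, poly_kernel, n_components=3)
    print(Z_rbf.shape, Z_poly.shape)  # (200, 3) (200, 3)
```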
To solve the kernel eigenvalue problem, we use our own implementation of kernel PCA built on top of the linear algebra package LAPACK [13]. Here we compare features extracted by linear PCA and kernel PCA for the northern hemisphere image collection. We apply standard PCA to the covariance matrix computed from all 24,500 images, but we compute the Gram matrix of kernel PCA only for sampled images, due to memory constraints.
Table 2. Number of observations for developing and weakening typhoons

Validation group (typhoon season)   1995   1996   1997   1998   1999   2000   Total
Developing cases                     539    774    963    344    328    581    3529
Weakening cases                      470    618   1064    287    309    559    3307
Total                               1009   1392   2027    631    637   1140    6836
For sampling, we first sort all the images by typhoon sequence and then by observation time-stamp, and sample every 6th image from the list, creating a dataset of about 4,000 images. Figure 3 illustrates the distribution of typhoon cloud patterns on a low-dimensional space spanned by the leading principal components.² In linear PCA, we can refer to the eigen-typhoons in Figure 2 to interpret the distribution of cloud patterns. For example, along "component 1" and "component 2" the images change from cloud patterns in which the northern part is filled with clouds (bottom) to ones in which the southern part is filled with clouds (top), showing the latitudinal structure. The result of kernel PCA shows a similar tendency, a latitudinal and rotational transition from one edge to the other. In our case, as the nonlinearity of the kernel increases, more components are required to achieve the same approximation error: linear PCA represents 85% of the total variance with only 71 components, while kernel PCA requires 1803 components. This large number of components is not advantageous for dimensionality reduction, but it can be exploited for effective feature extraction, which is left for future work.
4
Classification by Support Vector Machines
4.1
Classification of Developing and Weakening Typhoons
The classification task we address in this paper is the binary classification of developing and weakening typhoons. The motivation behind this task is the detection of precursors of rapid deepening, or rapid intensification. To define rapid deepening, we first introduce the notation ∆p(δt) to denote the difference of central pressure p(t+δt) − p(t) during a time interval δt (hours). Rapid deepening can then be defined as a rapid decrease of central pressure such that ∆p(24) ≤ −42 hPa, ∆p(24) ≤ −30 hPa, or ∆p(12) ≤ −30 hPa (see the note on central pressure below).
Note on Figure 3: grid points are chosen with uniform intervals along each axis, and those points are then projected back to the original high-dimensional space using a Moore-Penrose generalized inverse matrix. The nearest image to each grid point is then visualized on the two dimensional array of grid points as in Figure 3.
Note on central pressure: a small central pressure indicates a strong typhoon. The strongest typhoon since 1951 showed a central pressure of 870 hPa, much lower than the average pressure of 1013 hPa on the Earth's surface.
Fig. 4. Binary classification of developing and weakening typhoons using the k-nearest-neighbors algorithm, showing misclassification errors as a function of neighborhood size. We show misclassification errors for each group that corresponds to Table 2, and we show the result of 6-fold cross-validation with a bold line

The unexpected occurrence of rapid deepening leads to, for example, shipwrecks in the middle of the ocean, because necessary evacuation procedures may not be taken in advance due to an optimistic forecast. Hence the analysis of rapid deepening is a relevant issue, but the detection of special image features from typhoon cloud patterns is difficult because of the relatively rare occurrence of the phenomenon (less than 5% of total observations). Since true rapid-deepening cases are too rare to begin with, we relax this condition by defining developing cases as ∆p(24) ≤ −10 hPa and, similarly, weakening cases as ∆p(24) ≥ +10 hPa, and perform the classification of typhoon images into either developing or weakening cases. The percentage of relevant cases then increases to about 29% of the total observations, as summarized in Table 2. This is the simplest classification task, without rejection. As a baseline classifier, we first apply a k-nearest-neighbor (k-NN) classifier [8] (Euclidean distance in the input space). This naive classifier is employed here just to show the baseline classification performance, given the lack of established algorithms for this problem. Furthermore, to apply a cross-validation (CV) scheme for measuring the generalization performance of the classifier, we create six groups of images that correspond to the six typhoon seasons from 1995 to 2000, so as to eliminate possible correlation between typhoon sequences in the same typhoon season. From Figure 4, the estimate of misclassification errors
for k-NN by cross-validation is approximately 0.27. A relatively large number of neighbors (∼ 100) is necessary to attain a low misclassification error, probably because of the abundance of correlated images from the same typhoon sequence.
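For illustration only, the season-wise cross-validation of the k-NN baseline described above could be set up as follows (a sketch assuming scikit-learn, which the paper does not specify; the array names X, y, and groups are hypothetical):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneGroupOut

def season_cv_error(X, y, groups, k=100):
    """6-fold CV with one fold per typhoon season (groups = season labels)."""
    errors = []
    for tr, te in LeaveOneGroupOut().split(X, y, groups):
        knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
        knn.fit(X[tr], y[tr])
        errors.append(1.0 - knn.score(X[te], y[te]))
    return float(np.mean(errors))
```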
4.2 Classification by Support Vector Machines
We then apply kernel methods, namely support vector machines, to this classification task. We compare the performance of two popular kernels, the polynomial kernel and the RBF kernel, or, more generally, dot product kernels and translation-invariant kernels. For this experiment, we use an implementation of kernel methods called BSVM (version 2.0), proposed and implemented by Hsu and Lin [14]. The result shows that RBF kernels perform consistently better than polynomial kernels. In particular, polynomial kernels with higher order d perform much worse than the baseline classifier, the k-NN method. Although this performance could be improved by fine tuning of the parameters, the result may indicate that for this problem, or at least for this encoding of the typhoon, the dot product kernel is an inadequate choice. The reason is still not clear, but we suspect that the fixed origin of the dot product kernel does not capture the true similarity of input vectors encoded as cloud fraction vectors. The RBF kernel (e) that showed the best performance is the same kernel used in Figure 3. However, the overall performance of about 25% misclassification is not satisfactory considering the simplicity of the classification task, and the performance of the RBF kernel was nearly comparable to that of the simple k-NN algorithm. One explanation for this result is that, in our image collection, many of the data are highly correlated because they are actually samples from continuous time series data, which reduces the "effective" number of independent data points in a high dimensional space. Another explanation is that this poor classification performance is the result of the poor encoding of the typhoon.
Table 3. Misclassification errors (%) in the classification of developing and weakening typhoons. POLY and RBF represent the polynomial kernel and the RBF kernel, respectively. Misclassification errors are shown for each year, and we compute the 6-fold cross validation from these results

Sub-group            1995   1996   1997   1998   1999   2000   6-fold CV
(a) POLY (d = 2)     29.5   25.0   24.9   27.7   30.7   26.8     26.7
(b) POLY (d = 3)     34.5   38.9   25.0   38.8   33.9   29.3     32.0
(c) POLY (d = 4)     42.5   46.0   46.2   45.2   42.5   31.4     42.9
(d) RBF (γ = 1)      32.8   19.3   23.8   28.5   37.7   28.5     26.7
(e) RBF (γ = 0.1)    25.9   23.5   24.2   29.5   31.2   26.7     25.9
(f) RBF (γ = 10.0)   32.5   19.5   24.4   23.3   37.2   29.3     26.2
(g) k-NN (k = 175)   31.0   22.4   25.5   22.2   33.3   30.2     26.8
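The same season-wise cross-validation can be repeated with SVMs in place of k-NN. The sketch below uses a generic SVM implementation as a stand-in for BSVM (which is what was actually used here); the kernel parameters follow Equations (1) and (2) with N = 1024, and X, y, groups are the hypothetical arrays from the previous sketch:

```python
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def compare_kernels(X, y, groups):
    kernels = {
        # (x . y / N + 1)^d and exp(-(gamma/N) ||x - y||^2), N = 1024 pixels
        "POLY (d = 2)": SVC(kernel="poly", degree=2, gamma=1.0 / 1024, coef0=1.0),
        "RBF (gamma = 0.1)": SVC(kernel="rbf", gamma=0.1 / 1024),
    }
    for name, clf in kernels.items():
        acc = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
        print(name, "6-fold CV misclassification:", 1.0 - acc.mean())
```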
It seems that an effective feature selection scheme, combined with the design of the kernels, is necessary to improve the performance. In fact, we have performed several preliminary experiments toward developing a new kernel that incorporates locality of the information, an idea inspired by the notion of locality-improvement kernels [3]. The results (not shown here) suggest that we need a systematic approach to design a new kernel that is well suited to the application domain.
5
Discussion
The slogan 'No Free Kernel' claims that the simple application of kernel methods to real-world applications does not in general lead to good performance without the incorporation of domain knowledge or prior information relevant to the problem domain. We do have a large amount of meteorological knowledge about the typhoon to be incorporated into kernels. Thus, in this sense, feature selection and kernel design may be the most important next steps toward more effective kernel machines. Nevertheless, whether we should include feature selection before, after, or inside the kernel is an open problem. Another important future direction is kernel methods with temporal information. Our treatment of typhoon images in this paper is to regard them as independent data, even though they are generated from image sequences with a one-hour observation frequency. To utilize temporal information such as cloud motion and shape change, it needs to be encoded into the input vector so that temporal regularities of typhoon cloud patterns can be discovered. On the other hand, spatial information should be encoded in a multiresolution representation, because the typhoon cloud pattern involves objects over multiple scales (the eye vs. the cloud system). Although the wavelet transform has already been used with kernel methods, its integration with kernel design seems to require more work. In this paper we used kernel PCA and SVM, but considering the wealth of algorithms in the family of kernel methods, we should test other useful algorithms for our application. Of particular interest are single-class SVM methods for density estimation and novelty detection. These methods can be used for characterizing and detecting cloud patterns of special cases, such as cloud patterns just before the onset of rapid deepening. Also, the clustering of spatiotemporal cloud patterns, leading to the selection of prototypical cloud patterns, seems to have practical value for summarizing and understanding the variation of cloud patterns, as meteorology experts have long been doing by hand. Finally, in terms of the goal of this research, our purpose is not to replace human experts in typhoon analysis with black-box machines that automate decision-making processes. Instead, our goal is to provide a data mining tool that augments the ability of human experts with the help of powerful pattern recognition methods that can digest, learn, and analyze an enormous amount of historical data. This course of research may not give a concise physical theory that can be encoded in simple mathematical equations, but it may discover a set of rules and knowledge with statistical and probabilistic justification.
We conclude that, although the results with kernel methods were not impressive, there is still a possibility to improve the performance through careful study of the behavior and the design of kernels. In particular, the incorporation of temporal and spatial domain knowledge into the kernels is the most important challenge for future work.
Acknowledgments

Typhoon images used in this paper were created from GMS-5 satellite images originally received at the Institute of Industrial Science, University of Tokyo. We express our deep appreciation to Prof. Y. Yasuoka and Prof. M. Kitsuregawa at the University of Tokyo for providing and maintaining a valuable collection of data. We also thank the developers of BSVM at National Taiwan University for providing useful tools.
References
1. V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., 1998. 238
2. B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors. Advances in Kernel Methods. The MIT Press, 1999. 238
3. B. Schölkopf and A. J. Smola. Learning with Kernels. The MIT Press, 2002. 238, 242, 247
4. A. Kitamoto. Spatio-temporal data mining for typhoon image collection. Journal of Intelligent Information Systems, 19(1), 2002. (in press). 238, 239, 240
5. A. Kitamoto. IMET: Image mining environment for typhoon analysis and prediction. In C. Djeraba, editor, Multimedia Data Mining. Kluwer Academic Publishers, 2002. (in press). 238
6. V. F. Dvorak. Tropical cyclone intensity analysis using satellite data. NOAA Technical Report NESDIS, 11:1–47, 1984. 238
7. E. N. Lorenz. The Essence of Chaos. University of Washington Press, 1993. 239
8. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001. 240, 245
9. A. Kitamoto. The development of typhoon image database with content-based search. In Proceedings of the 1st International Symposium on Advanced Informatics, pages 163–170, 2000. 240, 241
10. R. S. T. Lee and J. N. K. Liu. An automatic satellite interpretation of tropical cyclone patterns using elastic graph dynamic link model. Journal of Pattern Recognition and Artificial Intelligence, 13(8):1251–1270, 1999. 241
11. L. Zhou, C. Kambhamettu, and D. B. Goldgof. Extracting nonrigid motion and 3D structure of hurricanes from satellite image sequences without correspondences. In Proc. of Conference on Computer Vision and Pattern Recognition. IEEE, 1999. 241
12. D. S. Wilks. Statistical Methods in the Atmospheric Sciences. Academic Press, 1995. 241
13. E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users' Guide. Society for Industrial and Applied Mathematics, Philadelphia, PA, third edition, 1999. 243
14. C. W. Hsu and C. J. Lin. A simple decomposition method for support vector machines. Machine Learning, 46:291–314, 2002. 246
Support Vector Features and the Role of Dimensionality in Face Authentication

Fabrizio Smeraldi¹, Josef Bigun¹, and Wulfram Gerstner²
¹ Halmstad University, Box 823, S-30118 Halmstad, Sweden
² Swiss Federal Institute of Technology (EPFL-DI), 1015 Lausanne, Switzerland
Abstract. A study of the dimensionality of the Face Authentication problem using Principal Component Analysis (PCA) and a novel dimensionality reduction algorithm that we call Support Vector Features (SVFs) is presented. Starting from a Gabor feature space, we show that PCA and SVFs identify distinct subspaces with comparable authentication and generalisation performance. Experiments using KNN classifiers and Support Vector Machines (SVMs) on these reduced feature spaces show that the dimensionality at which saturation of the authentication performance is achieved heavily depends on the choice of the classifier. In particular, SVMs involve directions in feature space that carry little variance and therefore appear to be vulnerable to excessive PCA-based compression.
1
Introduction
Face authentication/recognition can be described as a classification problem in a high-dimensional vector space. Much research effort is devoted to determining a subspace (feature space) that would carry all the relevant information from a facial image in a form that is convenient for classification. To date, there is no general agreement in the literature concerning the optimal number of features required for authentication. Principal Component Analysis (PCA) based approaches call for a reduction of the dimensionality of the input space by projection onto a set of basis vectors (Eigenfaces) [13,18]. Part of the literature indicates a number of Principal Components of the order of 30 to 50 to be sufficient [18,5], while other studies such as [9] and [20], where PCA is used in conjunction with Linear Discriminant Analysis, raise this number to a few hundred components. On the other hand, approaches based on multi-resolution decompositions involve mapping (regions of) the image to a higher dimensional space such as the Gabor decomposition space [11], which corresponds to an over-complete representation [1] of the image. Discretization of the local spectrum typically leads to a number of features of the order of one thousand or more [2,4,16].
Part of this research was carried out while F. Smeraldi was employed at (2 )
Because of the demonstrated effectiveness of the Gabor decomposition both in face authentication and in a variety of other computer vision problems, we have chosen to base our analysis on the Gabor space rather than on the original grey level space of the images. Support Vector Machine (SVM) classifiers also implicitly involve mapping the image onto a high (possibly infinite) dimensional Hilbert space [19]. The joint use of a sparse subset of Gabor features corresponding to the region around the eyes and the mouth of a subject and linear SVMs has been shown to yield very low error rates in authentication experiments [16,15]. Gabor filter responses extracted from images of different subjects turn out to be linearly separable, so that the hyperplane decision functions of linear SVMs are sufficient to discriminate between clients and impostors. Each such hyperplane determines, through its normal vector, a direction in the Gabor space. We show that the directions computed on a set of training subjects can effectively be used to reduce the dimensionality of the feature space used for authenticating a completely different set of subjects, thus achieving a data–independent compression of the biometric features. This SVM–based compression technique, to which we refer as Support Vector Features (SVFs), yields a performance comparable to that obtainable by performing PCA in the Gabor space. Concerning the relation between the dimensionality of the feature space, the type of classifier employed and the resulting performance, we find that the error rate of simple threshold-based classifiers applied after either PCA or SVFs saturates to its minimum value already in a 30 dimensional subspace. In contrast, more complex and reliable classifiers such as SVMs appear to profit from a higher dimensionality. Our results indicate that the classical criterion of discarding the Principal Components associated with the smallest Eigenvalues must be used with utmost care in conjunction with complex classifiers.
2
Gabor Feature Space and Mixture of Experts
We consider a face authentication algorithm based on a Gabor feature space G constructed following the procedure detailed in [16]. A sparse Log-polar retinotopic sampling grid is centred on the eyes and the mouth of the subjects in three successive and automatic fixations. A set of Gabor filter responses is then extracted at each point of the grid. Each of these sets of responses can be seen as a vector in a separate copy of G. Three machine experts, implemented by classifiers, are then employed to independently authenticate a client based on the Gabor responses obtained from the three fixations. The final decision about the identity claim being true or false is obtained by fusing the opinions of the experts using a simple rule. This procedure is meant to increase robustness; however, each machine expert independently authenticates the client based on a single copy of G. The set of Gabor filters employed in the experiments reported below consists of 30 filters arranged in 5 resolution (frequency) and 6 orientation channels. Given that the retinotopic grid consists of 50 points, the dimensionality of G
turns out to be 30 × 50 = 1500. The mapping between the original grey-level space and G is effective in separating the response vectors obtained from different subjects. This is confirmed by the very low error rates achieved using linear SVM classifiers on G (see Table 1). Because of this fact, we have decided to base this study on G rather than on the grey level image space. In the rest of this paper we employ PCA and the SVFs algorithm to restrict the feature space to subspaces of G.

Fig. 1. Two dimensional “toy” examples of a) a Linear SVM (the dash-dot line represents the direction of the normal vector w; support vectors are boxed) and b) Principal Component Analysis (the dash-dot line represents the direction of the first Principal Component)
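The construction of the Gabor feature space described above can be sketched roughly as follows. This is an illustrative outline only: the filter parameters, border handling, and function names are our own assumptions rather than the configuration used by the authors, and grid points are assumed to lie far enough from the image border.

```python
import numpy as np

def gabor_kernel(frequency, theta, sigma=2.0, size=15):
    """Complex Gabor kernel (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return gauss * np.exp(2j * np.pi * frequency * x_rot)

def gabor_feature_vector(image, grid_points, frequencies, thetas):
    """Magnitude of the Gabor responses at each retinotopic grid point:
    5 frequencies x 6 orientations x 50 points -> a 1500-dimensional vector."""
    feats = []
    for f in frequencies:
        for t in thetas:
            k = gabor_kernel(f, t)
            h = k.shape[0] // 2
            for (r, c) in grid_points:
                patch = image[r - h:r + h + 1, c - h:c + h + 1]
                feats.append(np.abs(np.sum(patch * np.conj(k))))
    return np.asarray(feats)
```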
3
Support Vector Features
Support Vector Machine classifiers are characterised by a decision function of the form

Σ_j α_j y_j K(s_j, v) + b ≥ 0,    (1)
where v ∈ G is the input to be classified [19]. The Support Vectors s_j constitute a subset of the training examples that is determined through an optimisation process, together with the coefficients α_j. The set of labels y_j identifies the s_j as positive examples (y_j = +1) or negative examples (y_j = −1). The kernel function K(u, v) is a (generally nonlinear) symmetric scalar function on G × G that satisfies the conditions of Mercer's theorem of Functional Analysis [19]. In the case of linear SVMs, i.e. K(u, v) = u · v, the decision
surface is a hyperplane in G:

Σ_j α_j y_j s_j · v + b = w · v + b ≥ 0.    (2)
The normal vector to the hyperplane is w = Σ_j α_j y_j s_j (Figure 1a). Given a database of training images for an ordered set Γ = {1, 2, . . . , #Γ} of subjects, we consider the set {S_γ}_{γ∈Γ} of the linear Support Vector Machines S_γ, each trained using the response vectors belonging to person γ as positive examples and the vectors belonging to the remaining #Γ − 1 identities in Γ \ {γ} as negative examples (that is, S_γ is trained to distinguish subject γ from all the others). For each classifier S_γ, let w_γ ∈ G denote the normal vector to the separating hyperplane. Note that, whatever the number of support vectors, there is exactly one w_γ for each S_γ, and therefore one w_γ for each identity γ ∈ Γ. We now consider the set {w_γ}_{γ∈Γ} of all the normal vectors. If v is an arbitrary vector in G, we define the first n Support Vector Features v|nΓ associated to Γ and v as the ordered list of scalar products

v|nΓ = (w_γ · v)_{1≤γ≤n}.    (3)
The v|nΓ can be seen as n-dimensional vectors (SVF vectors) in a compressed feature space G|nΓ . Classification can therefore be performed in this space much in the same way as one would do after projecting the vectors in G over the first n Principal Components from a given dataset. Indeed, Support Vector Features are obtained simply by substituting the normal vectors {wγ } for the Principal Components. SVFs therefore compare directly with linear algorithms such as PCA and LDA rather than with their kernel-based nonlinear extensions, Kernel PCA [12] and Kernel LDA [8]. As the toy example reported in Figure 1 suggests, however, normal vectors may determine entirely different directions from the Principal Components, a fact that is analysed in depth in the following Sections.
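A rough sketch of this construction is given below, using scikit-learn's LinearSVC as a stand-in for the SVM engine actually used (which was developed following [3] and [10]); the function and array names are ours.

```python
import numpy as np
from sklearn.svm import LinearSVC

def support_vector_features(X_train, identities, X_new, n=None):
    """Train one linear SVM per identity (one vs. the rest), stack the normal
    vectors w_gamma, and return the projections w_gamma . v of Equation (3)."""
    subjects = np.unique(identities)[:n]
    W = []
    for gamma in subjects:
        clf = LinearSVC(C=1.0).fit(X_train, (identities == gamma).astype(int))
        W.append(clf.coef_.ravel())     # normal vector of the separating hyperplane
    return X_new @ np.vstack(W).T       # SVF vectors, one row per input vector
```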
4
SVFs versus PCA
In this section, we address the question of whether the vectors {wγ } obtained from a training set Γ consisting of a (small) number of subjects carry some general information about the structure of the face authentication problem itself, independently of the specific identities in Γ . To this end, a set of 37 normal vectors w γ has been computed starting from the set Γ of 37 identities from the M2VTS database (this procedure has been repeated for left-eye, right-eye and mouth based authentication separately as specified in Section 2; however, such redundancy is exclusively meant for robustness and can be safely ignored for the purposes of this paper). We have then applied the authentication algorithm described in Section 2 to the entirely different set of 295 subjects from the XM2VTS database (see Appendix A) after replacing the uncompressed Gabor space G with the space G|nΓ
Fig. 2. EER of XM2VTS authentication tests as a function of the number of SVFs (solid line), Principal Components obtained from the M2VTS DB (dashed line) and PCs obtained from the XM2VTS DB (dash-dot line). The dotted line indicates the performance of the threshold-based classifiers in the uncompressed 1500-dimensional Gabor space G

of the first n Support Vector Features associated to the set Γ of the identities in the M2VTS DB. More precisely, the feature space G has been compressed by computing the dot products between each response vector v ∈ G extracted from an XM2VTS image and the first n normal vectors w_γ to obtain the first n Support Vector Features, according to Equation (3). The machine experts that authenticate the clients have then been restricted to work on the resulting n-dimensional SVF vectors v|nΓ, with 2 ≤ n ≤ 37. For these experiments, the machine experts have been implemented as simple threshold based classifiers of the type commonly used in conjunction with PCA [18]. An identity claim is accepted if the distance between the corresponding SVF vector v|nΓ and the closest training example (SVF vector) x_{ν,i}|nΓ for the claimed identity ν is lower than a fixed threshold τ, otherwise it is rejected. The resulting decision function is

min_i || x_{ν,i}|nΓ − v|nΓ || ≤ τ,    (4)
where i varies across all the training examples for the claimed identity ν. Figure 2 (solid line) shows the Equal Error Rate (EER) obtained in XM2VTS authentication tests as a function of the number n of Support Vector Features employed for classification. The EER obtained using the full-length, 1500-dimensional Gabor feature vectors is 3.5%. As can be seen, the SVF technique allows achieving the same result already in a 29-dimensional subspace, with a
compression ratio in excess of 50. Even in a 15-dimensional subspace the EER is as low as 4.0%. Analogous results are obtained by projecting the Gabor vectors onto a variable number of Principal Components determined by performing PCA over the M2VTS DB, that is over the same set Γ of identities that was used in the computation of the SVFs (Figure 2, dashed line; note that no statistical knowledge about the test database XM2VTS is employed). The dash-dotted line in Figure 2 refers to the EER obtained by extracting the Principal Components from the XM2VTS training data directly. As can be seen, the introduction of statistical knowledge about the very same identities the system is required to authenticate only results in a relatively small improvement in performance, at the expense of making the feature compression scheme data-dependent. These experiments suggest that a 30-dimensional subspace might be sufficient to perform face authentication, a result that would be in agreement with other studies on PCA using similar classification techniques [18]. Furthermore, such a subspace appears to be characterised by a certain stability, in that the directions wγ or the Principal Components computed on the M2VTS DB can be used to compress the Gabor response vectors from the completely different set of subjects constituting the XM2VTS DB with only a small performance degradation. However, the optimal subspace for authentication need not be unique. The subspace spanned by the Principal Components only accounts, on the average, for 36% of the norm of the w γ ; therefore, PCA and SVFs essentially map the Gabor vectors to two different subspaces. It follows that a large contribution to the SVFs comes from directions that only carry a small fraction of the variance of M2VTS data, a result that will be confirmed by the experiments reported in Section 5. This is made intuitive in Figure 3, where we show that the angle formed by each normal vector w γ with the closest direction among the first 37 Principal Components of M2VTS data is on the average quite large (median value: 73◦ ).
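As an aside, the threshold rule of Equation (4) that underlies these experiments can be written compactly as follows (a sketch with hypothetical names; the threshold τ would be chosen on training data, e.g. at the equal error rate):

```python
import numpy as np

def accept_claim(svf_vector, claimed_training_svfs, tau):
    """Accept the identity claim if the distance to the closest training SVF
    vector of the claimed identity is below the threshold tau (Equation (4))."""
    dists = np.linalg.norm(claimed_training_svfs - svf_vector, axis=1)
    return bool(dists.min() <= tau)
```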
5
Dimensionality Reduction and Advanced Classifiers
We might ask whether the apparent convergence of SVF and PCA based authentication to the performance achieved on the unaltered feature space G indicates that all the available information has been captured, or whether it is rather a consequence of the (simple) classification technique employed. Substituting linear SVM classifiers for the simple threshold-based rule (Equation 4) on the entire Gabor space G causes the EER achieved on the XM2VTS DB to drop from 3.5% to below the resolution limit of 0.25%, which corresponds to a single falsely rejected identity claim (see Table 1 and Appendix A). In Table 1 we also show the EER obtained by using SVM experts in conjunction with three choices of subspaces in G, corresponding to 34 Support Vector Features (G|34_Γ), 34 Principal Components obtained from the M2VTS DB, and 34 PCs obtained from the XM2VTS DB. The first two choices both lead to the minimal EER of 3.5% when the simple threshold-based classifiers are used. As can be seen, performance on the compressed Gabor space is considerably improved when SVMs are used for classification.
Fig. 3. Distribution of the angle between each normal vector wγ and the closest direction among the 37 most significant Principal Components (median: 73◦ , M2VTS DB, left eye expert)
In particular, the dimensionality boost provided by nonlinear kernel functions appears to somewhat compensate for the compression, yielding lower error rates. Performance does not differ significantly on the subspaces determined by SVFs and PCA, with PCA on the XM2VTS database being (as expected) slightly better. However, the best results still fall short of the results achieved using the entire Gabor space G in conjunction with SVMs. This suggests that SVM classifiers exploit information that is lost in the compression procedure and that the dimensionality of the three subspaces actually is too low. In order to get a feeling for what the correct dimensionality might be, we revert to the 1500-dimensional Gabor space G and consider the set Ξ of the 200 clients from the XM2VTS DB. From this we construct the set {S_ξ} of the 200 linear SVMs that distinguish each of these 200 clients from all the others, as was done in Section 3 for the subjects in Γ. Let {w_ξ} denote the corresponding set of normal vectors. On the average, only 28% of the norm of each specific w_ξ lies in the subspace spanned by the remaining 199 vectors {w_ξ'}_{ξ'∈Ξ\{ξ}}. This indicates that all of the w_ξ identify independent directions, as is confirmed by displaying the norm of the residues obtained by applying the Gram-Schmidt orthogonalisation procedure to the list of (normalised) vectors w_ξ (Figure 4). We conclude that (SVM-based) authentication effectively involves a 200 dimensional subspace of G. By looking at the distribution of the magnitude of the first 200 PCA Eigenvalues on the same set Ξ of clients, one can see that SVM-based classification involves many directions that carry only a small fraction of the global variance [14].
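The residue norms plotted in Figure 4 can be computed with a straightforward (modified) Gram-Schmidt pass; the sketch below uses our own function name and assumes the normal vectors are stacked as rows of a matrix W:

```python
import numpy as np

def gram_schmidt_residues(W):
    """Norm of the residue of each normalised vector w after removing its
    projection onto the span of the vectors that precede it in the list."""
    residues, basis = [], []
    for w in W / np.linalg.norm(W, axis=1, keepdims=True):
        r = w.copy()
        for b in basis:
            r -= (r @ b) * b
        norm = np.linalg.norm(r)
        residues.append(norm)
        if norm > 1e-10:
            basis.append(r / norm)
    return np.array(residues)
```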
Table 1. Equal Error Rates for XM2VTS authentication experiments. SVM classifiers are trained and tested in four different feature spaces: the first 34 SVFs (G|34_Γ), the first 34 PCs extracted from the M2VTS DB, the first 34 PCs from the XM2VTS DB and the uncompressed 1500-dimensional Gabor space G. Performance of the linear kernel and the optimal polynomial kernel is shown. The degree of the polynomial is reported in the last line

EER (SVM)   34 SVFs (G|34_Γ)   34 PCs (M2VTS)   34 PCs (XM2VTS)   G (1500-dim)
Linear            2.3%               2.1%              1.5%           0.25%
Best              1.6%               1.5%              1.0%           0.25%
Degree            3                  2                 2              1−4
6
Conclusions
A comparative study of the dimensionality of the face authentication problem is presented. A feature space is constructed using a sparse Log-polar retinotopic grid to sample the Gabor decomposition of facial images around the location of the eyes and the mouth. Two types of compression algorithms are employed to reduce the dimensionality of the feature space, namely PCA and a novel SVM– based technique that we call Support Vector Features (SVFs). We report experimental results using two classification techniques, thresholded distance classifiers and Support Vector Machines. PCA and SVFs are shown to identify two different subspaces in the Gabor space that allow comparably good authentication performance regardless of the classifier employed. Experiments with a simple threshold-based classifier and SVMs suggest that the saturation of recognition/authentication performance with a low number of Principal Components or Support Vector Features heavily depends on the choice of the classifier. In particular, authentication of the 200 clients from the XM2VTS DB using linear SVMs is shown to require a 200 dimensional subspace. SVM classifiers appear to rely on directions that are not necessarily associated to a large variance of the training data. Therefore, the classical criterion of discarding Principal Components associated with small Eigenvalues must be used with care in conjunction with SVM classifiers.
Acknowledgements

Thanks are due to B. Schölkopf for an encouraging discussion on the topics of this paper.
Fig. 4. Solid line: norm of the residues obtained by applying the Gram-Schmidt orthogonalisation procedure to the list of (normalised) vectors w ξ . Dash-dot line: average fraction of the norm of each wξ that contributes an independent direction. Comparison with the relative magnitude of the first 200 eigenvalues from PCA (dashed line) shows that most of these directions only carry a small fraction of the variance of the data. All data refer to the set of clients from the XM2VTS Database, left eye expert
A
Details on the Experimental Protocol
The experiments described are performed using the disjoint subsets of frontal images from the M2VTS (37 subjects, 551 images) and XM2VTS (295 subjects, 2360 images) databases. All the authentication results refer to the XM2VTS DB and are obtained using the training and test sets defined by the Lausanne Protocol, Configuration II [7] (the images from the 3rd session have been included in the training set). The protocol specifies a total of 400 client tests and 112000 impostor tests. This allows estimating the EER with a resolution of 0.25%, corresponding to the false rejection of a single image of a client. The position of the eyes and the mouth has been manually detected on the images of the M2VTS DB. Automatic detection has been used on the XM2VTS DB [17]; erroneous detections have been corrected manually. For other published results on the XM2VTS database we refer the reader to [6]. The SVM engine was developed following the ideas in [3] and [10].
References
1. I. Daubechies. Ten lectures in wavelets. Society for Industrial and Applied Mathematics, Philadelphia, USA, 1992. 249
2. B. Duc, S. Fischer, and J. Bigun. Face authentication with Gabor information on deformable graphs. IEEE Transactions on Image Processing, 8(4):504–516, April 1999. 249
3. T. Joachims. Making Large-scale SVM Learning Practical, chapter 11 of Advances in Kernel Methods - Support Vector Learning. MIT Press, 1998. Eds. B. Schölkopf, C. J. C. Burges, A. J. Smola. 257
4. M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Hurtz, and W. Konen. Distortion invariant object recognition in the dynamic link architectures. IEEE Trans. on Computers, 42(3):300–311, March 1993. 249
5. C. Liu and H. Wechsler. A unified Bayesian framework for face recognition. In International Conference on Image Processing, ICIP-98, Chicago, October 4-7, volume 1, pages 151–155, 1998. 249
6. J. Matas, M. Hamouz, K. Jonsson, J. Kittler, Y. Li, C. Kotropoulos, A. Tefas, I. Pitas, T. Tan, H. Yan, F. Smeraldi, J. Bigun, N. Capdevielle, W. Gerstner, S. Ben-Yacoub, Y. Abdeljaoued, and E. Mayoraz. Comparison of face verification results on the XM2VTS database. In Proceedings of the 15th International Conference on Pattern Recognition, Barcelona (Spain), September 2000, volume 4, pages 858–863. IEEE Comp. Soc. Order No. PR00750, September 2000. 257
7. K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre. XM2VTSDB: the Extended M2VTS database. In Proceedings of the 2nd international conference on Audio- and Video-based Biometric Person Authentication (AVBPA'99), Washington DC, U.S.A., pages 72–77, 1999. 257
8. S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41–48. IEEE, 1999. 252
9. B. Moghaddam and A. Pentland. Beyond linear eigenspaces: Bayesian matching for face recognition. In H. Wechsler et al., editor, Nato-Asi advanced study on face recognition, volume F 163, pages 230–243. Springer, 1998. 249
10. E. Osuna, R. Freund, and F. Girosi. Improved training algorithm for Support Vector Machines. In Proceedings of IEEE NNSP'97, pages 276–285, September 1997. 257
11. M. Porat and Y. Y. Zeevi. The generalized Gabor scheme of image representation in biological and machine vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(4):452–468, July 1988. 249
12. B. Schölkopf, A. J. Smola, and K.-R. Müller. Kernel Principal Component Analysis, chapter 20 of Advances in Kernel Methods - Support Vector Learning. MIT Press, 1998. Eds. B. Schölkopf, C. J. C. Burges, A. J. Smola. 252
13. L. Sirovich and M. Kirby. Application of the Karhunen-Loève procedure for the characterization of human faces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1):103–108, 1990. 249
14. F. Smeraldi. Attention-driven pattern recognition. PhD thesis, Swiss Federal Institute of Technology – Lausanne, Switzerland, No. 2153 (2000). 255
15. F. Smeraldi and J. Bigun. Retinal vision applied to facial features detection and face authentication. Pattern Recognition Letters, 23:463–475, 2002. 250
16. F. Smeraldi, N. Capdevielle, and J. Bigun. Face authentication by retinotopic sampling of the Gabor decomposition and Support Vector Machines. In Audio and Video based Biometric Person Authentication – AVBPA99, pages 125–129, 1999. 249, 250
17. F. Smeraldi, N. Capdevielle, and J. Bigun. Facial features detection by saccadic exploration of the Gabor decomposition and support vector machines. In Proceedings of the 11th Scandinavian Conference on Image Analysis – SCIA 99, Kangerlussuaq, Greenland, volume I, pages 39–44, June 1999. 257
18. M. Turk and A. Pentland. Eigenfaces for recognition. J. Cognitive Neuroscience (Winter), 3(1):71–86, 1991. 249, 253, 254
19. V. N. Vapnik. The nature of statistical learning theory. Springer-Verlag, 1995. 250, 251
20. W. Zhao, A. Krishnaswami, R. Chellappa, D. L. Swets, and J. Weng. Discriminant analysis of principal components for face recognition. In H. Wechsler et al., editor, Nato-Asi advanced study on face recognition, volume F 163, pages 73–85. Springer, 1998. 249
Face Detection Based on Cost-Sensitive Support Vector Machines

Yong Ma and Xiaoqing Ding
State Key Laboratory of Intelligent Technology and Systems, Dept. of Electronic Engineering, Tsinghua University, Beijing 100084, P. R. China
[email protected] Abstract. This paper presents a method of detecting faces based on cost-sensitive Support Vector Machines (SVM). In our method, different costs are given to the misclassification of having a face missed and having a false alarm to train the SVM classifiers. The method achieves significant speed-ups over conventional SVM-based methods without reducing detection rate too much and the hierarchical architecture of the detector also reduces the complexity of training of the nonlinear SVM classifier. Experimental results have demonstrated the effectiveness of the method.
1
Introduction
In recent years, the problem of face detection has attracted much attention due to its wide applications in automatic face recognition, content-based image/video retrieval, and human-computer interaction. Under constrained environments, methods using simple rules [1] or simple invariant features such as texture [8], shape [9], and color [10] can locate faces successfully. But in images with cluttered backgrounds, illumination and noise often seriously affect these methods. In contrast, appearance-based methods such as neural networks [4], view-based learning and clustering [2], and probabilistic estimation [3], which rely on techniques of statistical analysis and machine learning to find the relevant characteristics of face and non-face patterns, commonly achieve better results in these situations. Support Vector Machines (SVM) have recently been proposed as an effective statistical learning method for pattern recognition. Osuna et al. [5] first applied it to face detection by training a nonlinear SVM directly from face and non-face examples collected using a bootstrapping method, and the result is promising. However, their method is too slow. The reason for this is that the run-time complexity of a nonlinear SVM is proportional to the number of support vectors, i.e. to the number of training examples that the SVM algorithm utilizes in the expansion of the decision function. The other problem with the method is that, even with the bootstrapping method, the training of the SVM
still remains a great challenge due to the great diversity of the non-face class compared with the face class. In this paper we present a method of detecting faces based on cost-sensitive SVM classifiers. The cost-sensitive SVM allows one to assign different costs to different types of misclassification. In our case, the cost of having a face undetected is different from the cost of having a false alarm. Thus, the cost-sensitive SVM seeks to minimize the number of high-cost errors and the total misclassification cost. In our method, we give different costs to the two types of misclassification to train the cost-sensitive SVM classifiers in the different stages of the face detector. In detection, 5 cost-sensitive linear SVM classifiers are first applied to quickly exclude most non-faces in images without missing possible faces, and a nonlinear SVM is then used to further verify face candidates. Experimental results have demonstrated the effectiveness of the method.
Fig. 1. Processing steps of every sub-window
2
Overview of the Method
Similar to other methods [4][5], to detect faces, each input image is scanned with a 20 × 20 rectangular sub-window in which the decision of there being a face pattern is made. A face is detected if the sub-window passes all the classification steps, as Fig. 1 illustrates. First, the sub-window is preprocessed by histogram equalization, lighting rectification [4], and gray-scale normalization. These preprocessing steps lessen the influence of uneven illumination on faces and reduce the diversity of face patterns. Then, the cost-sensitive SVMs are applied to classify the sub-window as a face or non-face pattern. In order to detect faces of different scales, from 20 × 20 to 256 × 256 pixels, each input image is repeatedly subsampled by a factor of 1.2 and scanned through for 14 iterations.
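The scanning procedure can be sketched as below. This is an illustrative outline only (the subsampling, step size, and function names are our assumptions), with the whole cascade of cost-sensitive SVMs abstracted into a single classify callback applied to each preprocessed sub-window:

```python
import numpy as np

def downsample(img, scale=1.2):
    """Nearest-neighbour subsampling by `scale` (stand-in for a proper resize)."""
    h, w = img.shape
    rows = (np.arange(int(h / scale)) * scale).astype(int)
    cols = (np.arange(int(w / scale)) * scale).astype(int)
    return img[np.ix_(rows, cols)]

def scan_pyramid(image, classify, window=20, scale=1.2, levels=14, step=2):
    """Multi-scale sliding-window scan; returns (row, col, size) detections
    in the coordinates of the original image."""
    detections, img, factor = [], image.astype(np.float32), 1.0
    for _ in range(levels):
        h, w = img.shape
        if h < window or w < window:
            break
        for r in range(0, h - window + 1, step):
            for c in range(0, w - window + 1, step):
                if classify(img[r:r + window, c:c + window]):
                    detections.append((int(r * factor), int(c * factor),
                                       int(window * factor)))
        img, factor = downsample(img, scale), factor * scale
    return detections
```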
3
Face Detector Based on Cost-Sensitive SVM
In the following sections, we first give a brief introduction to the theory of SVMs and then give a detailed description of the face detector.
3.1 Support Vector Machines
The SVM is a pattern recognition tool mainly developed by Vapnik [6]. Different from previous nonparametric techniques such as nearest neighbors and neural networks, which are based on the minimization of the empirical risk, the SVM operates on another induction principle, called structural risk minimization, which minimizes an upper bound on the generalization error. Considering a two-class pattern classification problem, let the training set of size N be (y_i, x_i), i = 1, ..., N, where x_i ∈ R^n is the input pattern for the i-th example and y_i ∈ {−1, +1} is the corresponding desired label. Support Vector classifiers implicitly map x_i from the input space to a higher-dimensional feature space via a (usually nonlinear) function φ(x_i). An optimum separating hyperplane is constructed which separates the training set in this feature space with a maximized margin. The SVM is then obtained by solving the following quadratic programming problem:

arg min_a  (1/2) Σ_{i,j=1}^{N} a_i a_j y_i y_j K(x_i, x_j) − Σ_{i=1}^{N} a_i    (1)

subject to

Σ_{i=1}^{N} a_i y_i = 0,  0 ≤ a_i ≤ C,    (2)
where a_i (1 ≤ i ≤ N) are Lagrange multipliers; C is a positive constant determining the trade-off between margin maximization and training error minimization, and it can be viewed as the cost of a misclassification error; K(x_i, x_j) is the inner-product kernel function defined by K(x_i, x_j) = φ(x_i) · φ(x_j). Those training examples x_i with a_i > 0 are called Support Vectors (SV). Then the
SVM classifier is
u = sgn( Σ_{x_i ∈ SV} y_i a_i K(x_i, x) + b ).    (3)

3.2 Cost-Sensitive SVM and Its Application to Face Detection
In the general SVM, the same cost C is given to the two types of misclassification error, while in the cost-sensitive SVM different costs can be given to different types of misclassification, seeking to minimize the number of high-cost errors and the total misclassification cost. Then equation (2) becomes

Σ_{i=1}^{N} a_i y_i = 0,  0 ≤ a_i ≤ C_i,    (4)

where C_i is the cost if the i-th sample is misclassified.
In face detection, in order to enhance the detection speed, the classifiers are often arranged in a cascade structure. A positive result from the first classifier triggers the evaluation of a second classifier. A positive result from the second classifier triggers a third classifier, and so on. A negative outcome at any point leads to the immediate rejection of the sub-window. Most non-face sub-windows should be rejected quickly in the early stages of the face detector by simple classifiers, while missing as few faces as possible. So in these simple classifiers, the cost of having a face undetected should be much higher than the cost of having a false alarm, in order to exclude more non-faces without missing too many faces. In the last stage of the face detector, because the false positives from the previous classifiers are very similar to face patterns, the cost of having a face undetected should be close to the cost of having a false alarm. As we know, the linear SVM is much faster in classification speed than the nonlinear SVM, mainly because the linear SVM does not need the nonlinear kernel mapping function. Therefore, in our face detector, the first S classifiers are cost-sensitive linear SVM classifiers, as Fig. 1 illustrates. In our experiments, we select S = 5 according to the size of the non-face training set and the false alarm rate of the classifiers. If a pattern passes all the judgements of the linear SVMs, it probably belongs to the face class and is further verified by a nonlinear SVM; otherwise the pattern is assigned to the non-face class. The last part of the face detector is a nonlinear SVM classifier. Here the kernel is a 2nd-degree polynomial,
K(x_i, x) = (x_i · x + 1)^2, and the classifier is

u = Σ_{x_i ∈ SV} y_i a_i K(x_i, x) + b.    (5)
In order to fuse information from near positions and multiple scales, a mapping is used to transfer the non-linear SVM output u to a posterior probability of the face:

P(face | u) = 1 / (1 + e^{Au + B}),  (A < 0)    (6)

The parameters A and B are estimated using the training data and simple prior estimates [7]. If P(face | u) is greater than a predefined threshold, the sub-window contains a face.
3.3 Training of Face Detector
Face training samples are collected from different face databases such as the Olivetti, Yale, and Harvard databases to cover wide variations in facial expression and lighting conditions. Each face sample is manually cropped and normalized to the size of 20 × 20 pixels. In order to make the classifier less sensitive to rotation and size variation, several face samples are generated from each original sample by rotating 10 degrees left and right, and scaling between 0.8 and 1.2. Approximately 5000 face samples are collected to train all the classifiers. All these samples are preprocessed by histogram equalization, lighting rectification and gray-scale normalization. The non-face samples to train the 5 cost-sensitive linear SVMs are collected randomly from 63 images without faces. Every linear SVM is trained with the false positives passing through the previous classifiers. Among them, the first cost-sensitive linear SVM classifier is trained with C_face = 100 C_nonface (C_face is the cost of having a face undetected and C_nonface is the cost of having a false alarm) and can reject 86% of all non-face samples with a face detection rate of 100%. The other cost-sensitive linear SVM classifiers are trained with C_face being 50, 10, 5, and 5 times C_nonface. The five linear SVMs together can reject about 99% of the non-face patterns in the training set with a face detection rate of 99.3%. The non-face samples to train the nonlinear SVM are collected from the false positives of the previous 5 cost-sensitive linear SVMs via the bootstrapping method. The nonlinear SVM classifier is trained with C_face = C_nonface = 200.
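For illustration, an asymmetric-cost linear stage like the first one above could be trained as follows. This is a sketch assuming scikit-learn, which is not what was used here; class_weight rescales C per class, playing the role of the per-sample cost C_i in Equation (4), and X, y are hypothetical arrays of flattened, preprocessed 20 × 20 sub-windows and their labels.

```python
from sklearn.svm import LinearSVC

# 1 = face, 0 = non-face; a missed face is penalized 100 times more than a
# false alarm, i.e. C_face = 100 * C_nonface as in the first cascade stage.
stage1 = LinearSVC(C=1.0, class_weight={1: 100.0, 0: 1.0})
stage1.fit(X, y)
is_face_candidate = stage1.decision_function(X) > 0
```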
4
Experimental Results
Performance of the face detector is evaluated on the CMU data set [4], which consists of 130 images with 507 faces, independent of the training set. As a comparison with Osuna's SVM method [5] on CMU test set B (originally from MIT [2]), which contains 23 images with 155 faces, the performance of our method is shown in Figure 2. Our method gets 19 false alarms at a detection rate of 80.6%,
while Osuna’s method gets 20 false alarms at the detection rate of 71.9%. On the other hand, the speed of our method is much faster than that of Osuna’s method due to the
Fig. 2. Detection rate versus false alarm rate of our method on CMU test set B
effective coarse filtering of the cost-sensitive linear SVM classifiers. Taking the image in Fig. 3a, with a size of 392 × 272 pixels, as an example, it takes about 15 seconds to locate the five faces using our method (PIII 500 MHz, 128 MB memory), about 30 times faster than using only the nonlinear SVM classifier. Table 1 gives the comparative results on the CMU test set against H. Rowley's method [4] (in System 5 only one neural network is used, and in System 11 the arbitration of two neural networks is used). Experimental results show that our method has detection performance comparable with theirs.

Table 1. Experimental results on the CMU test set
Methods                 Face Detected   Detection Rate   False Alarm
Cost-Sensitive SVM            451            88.9%            145
Rowley [4] System 5           467            90.5%            570
Rowley [4] System 11          437            86.2%             23

5
Conclusion
A face detection method based on cost-sensitive SVMs is presented in this paper. In our method, we give different costs to the misclassification of having a face missed and having a false alarm to train the cost-sensitive SVM classifiers, achieving significant speed-ups over conventional SVM-based methods without reducing the detection rate too much. The hierarchical architecture of the classifier also reduces the complexity of training the nonlinear SVM classifier. Comparative results on test sets demonstrate the effectiveness of the algorithm.
Some test images and detected faces are shown in Fig. 3.
Fig. 3. Some test images and results
References
1. G. Z. Yang, T. S. Huang, "Human face detection in a complex background", Pattern Recognition, 1993, 27:53-63
2. K. Sung, T. Poggio, "Example-Based Learning for View-Based Human Face Detection", IEEE Trans. PAMI, 1998, 20(1):39-51
3. B. Moghaddam, A. Pentland, "Probabilistic Visual Learning for Object Representation", IEEE Trans. PAMI, 1997, 19(7):696-710
4. H. A. Rowley, S. Baluja, T. Kanade, "Neural Network-Based Face Detection", IEEE Trans. PAMI, 1998, 20(1):23-38
5. E. Osuna, R. Freund, "Training Support Vector Machines: an Application to Face Detection", In Proc. of CVPR, Puerto Rico, pp. 130-136, 1997
6. V. N. Vapnik, The Nature of Statistical Learning Theory, New York: Springer-Verlag, 1995
7. J. Platt, "Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods", In Advances in Large Margin Classifiers, MIT Press, Cambridge, MA, USA, 1999
8. Y. Dai, Y. Nakano, "Extraction for Facial Images from Complex Background Using Color Information and SGLD matrices", In Proc. of the First Int. Workshop on Automatic Face and Gesture Recognition, pp. 238-242, 1995
9. S. A. Sirohey, "Human Face Segmentation and Identification", Technical Report CS-TR-3176, University of Maryland, 1993
10. C. Garcia, G. Tziritas, "Face Detection Using Quantized Skin Color Regions Merging and Wavelet Packet Analysis", IEEE Trans. Multimedia, 1999, Vol. 1, No. 3, pp. 264-277
Real-Time Pedestrian Detection Using Support Vector Machines

Seonghoon Kang¹, Hyeran Byun², and Seong-Whan Lee¹
¹ Department of Computer Science and Engineering, Korea University, Anam-dong, Seongbuk-ku, Seoul 136-701, Korea, {shkang,swlee}@image.korea.ac.kr
² Department of Computer Science, Yonsei University, Shinchon-dong, Seodaemun-gu, Seoul 120-749, Korea
[email protected]

Abstract. In this paper, we present a real-time pedestrian detection system for outdoor environments. Pedestrian detection is necessary for implementing obstacle detection and face detection, which are major parts of a walking guidance system: it can discriminate pedestrians from other obstacles and extract candidate regions for face detection and recognition. For pedestrian detection, we have used stereo-based segmentation and SVMs (Support Vector Machines), which have superior classification performance in the binary classification case (e.g. object detection). We have used vertical edges, which can be extracted from the arms, legs, and body of pedestrians, as features for training and detection. Experiments on a large number of street scenes demonstrate the effectiveness of the proposed pedestrian detection system.
1
Introduction
Object detection is essential for driver assistance systems, walking guidance for the visually impaired, and similar applications. It is very difficult to detect pedestrians in varying outdoor scenes. In particular, pedestrian detection is much more important than the detection of other objects. In the case of a driver assistance system, the pedestrian is an obstacle which the driver should avoid. But in the case of a walking guidance system for the visually impaired, the pedestrian is a meaningful object to interact with. So, pedestrian detection should be performed in real time before the operator encounters other pedestrians. After a pedestrian is detected, we can extract and recognize a face effectively by reducing the candidate region in which to search for faces. There are two steps for detecting pedestrians. The first is to separate foreground objects from the background. The second is to distinguish pedestrians from other objects. The first procedure is object detection and the second one
This research was supported by Creative Research Initiatives of the Ministry of Science and Technology, Korea.
is pedestrian recognition. In this paper, we have used stereo-based segmentation for the object detection procedure, and SVM classification for pedestrian recognition. The proposed method is the main part of an outdoor walking guidance system for the visually impaired, OpenEyes-II, which has been developed in the Center for Artificial Vision Research at Korea University [1]. OpenEyes-II enables the visually impaired to respond naturally to various situations that can happen in unrestricted natural outdoor environments while walking and finally reaching their destination. To achieve this goal, foreground objects (pedestrians, obstacles, etc.) are detected in real time by using foreground-background segmentation based on stereo vision. Then, each object is classified as pedestrian or obstacle by an SVM classifier. These two main elements make the pedestrian detection system robust and real-time.
2
Related Works
Most human detection and tracking systems employ a simple segmentation procedure such as background subtraction or temporal differencing to detect pedestrians. A people tracking system is an integrated system for detecting and tracking humans in image sequences. These systems differ widely in their properties, from the type of input camera to the detailed methods for detecting body parts. Haritaoglu et al. introduced the W4 system [7] and the Hydra system [8] for detecting and tracking multiple people or parts of their bodies. While the W4 system is an integrated tracking system that uses a monocular, monochrome, static camera as an input device, Hydra is a sub-module which allows W4 to analyze people moving in a group. Hydra segments a group of multiple people into individual persons by head detection and distance transformation. The Pfinder system [9] used a multi-class statistical model of a person and the background for person tracking and gesture recognition. This model utilizes stochastic, region-based features, such as blobs and 2D contours. Although it performs novel model-based tracking, it is unable to track multiple people simultaneously. Darrell et al. [10] used disparity and color information for individual person tracking and segmentation. Their system uses stereo cameras and computes the range from the camera to each person from the disparity. The depth estimation allows the elimination of background noise, and disparity is fairly insensitive to illumination effects. Mohan, Papageorgiou and Poggio [11] present a more robust pedestrian detection system based on the SVM technique. However, the system has to search the whole image at multiple scales to detect the many components of a human for pedestrian detection. This is an extremely computationally expensive procedure, and it may cause multiple responses for a single pedestrian. To increase reliability, some systems integrate multiple cues such as stereo, skin color, face, and shape to detect pedestrians [10,7]. These systems show that stereo and shape are more reliable and helpful cues than color and face detection in general situations.
3 Properties of Support Vector Machines
Basically, an SVM is a linear machine with some very attractive properties. Its main idea is to construct a separating hyperplane between two classes, the positive and negative examples, in such a way that the distance from each of the two classes to the hyperplane is maximized. In other words, an SVM is an approximate implementation of the structural risk minimization principle, which rests on the fact that the generalization error of a learning machine is bounded by the sum of the training error and a term depending on the capacity of the machine. Owing to this property, an SVM can provide good generalization performance on pattern classification problems without domain knowledge. Figure 1 shows a constructed optimal hyperplane between two classes in a linear space [2]. If we assume for simplicity that the input space can be separated linearly, the constructed hyperplane in the (possibly high-dimensional) space is represented by equation (1):

w \cdot x + b = 0 , \qquad (1)
where x is a training pattern vector and w is the normal vector of the hyperplane. The constraints in equation (2) must be satisfied to ensure that all training patterns are correctly classified:

y_i (w \cdot x_i + b) - 1 \geq 0 , \ \forall i \qquad (2)
Among the hyperplanes constructed under the above constraints, the one whose distance to its closest point is maximal is called the optimal separating hyperplane (OSH). From the given training data we would like to find the parameters (w_o, b_o) of the OSH, where w_o and b_o are the optimal values of w and b, respectively, in equation (1). Since the margin, i.e. twice the distance from the hyperplane to its closest point, equals 2/||w||, w_o can be obtained by minimizing ||w||. This is an
Fig. 1. A constructed hyperplane between two classes in a linear space
optimization problem, and w_o is expressed by equation (3) in terms of non-negative Lagrange multipliers α_i:

w_o = \sum_{i}^{N} \alpha_i y_i x_i \qquad (3)
Using equations (1) and (3), b_o can be obtained. In classification tasks, the class of an unknown input pattern x is determined by the sign of the signed distance of the pattern from the constructed hyperplane. The signed distance is calculated by equation (4):

\mathrm{sgn}\!\left( \frac{\sum_{i}^{N} \alpha_i y_i\, x_i \cdot x + b_o}{\|w\|} \right) \qquad (4)
This method can also be applied to a nonlinear input space through a kernel function. Kernel functions map a nonlinear input space to a linear feature space, so the inner product in equation (4) can be replaced by a kernel function. Consequently, a distance measure can be derived whether or not the input space is linear. However, we are still left with the open question of which kernel function is best suited to a given problem.
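As an illustration of the decision rule in equations (3)-(4), the following sketch (not taken from the paper; the data, kernel degree, and library calls are our own illustrative choices) trains a kernel SVM and classifies a new pattern by the sign of its decision value:

import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: rows are feature vectors, labels are +1 / -1.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))
y_train = np.where(X_train[:, 0] + 0.5 * X_train[:, 1] > 0, 1, -1)

clf = SVC(kernel='poly', degree=3, C=1.0)   # the kernel replaces the inner product in (4)
clf.fit(X_train, y_train)

x_new = rng.normal(size=(1, 10))
score = clf.decision_function(x_new)        # proportional to the signed distance
label = np.sign(score)                      # classification by sign, as in equation (4)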
4 Pedestrian Detection System
The pedestrian detection system consists of a training module and a detection module, as shown in figure 2. First, the pedestrian detector model, which consists of an SVM, is trained by the training module. After the detector model is constructed, pedestrians can be detected in input natural scenes using this model. Because most well-known pattern matching and model-based methods cannot cope with varying natural scenes, we use the SVM algorithm for the main part of the classification; SVMs can exploit the statistical properties of various images through training and can classify effectively in a huge data space using a small number of training examples. Because pedestrians appear with many different colors and textures, color or texture features are of little use for classifying them. We therefore use vertical edges, which can be extracted from the arms, legs, and body of a pedestrian, as features for training and detection.

4.1 Detector Model Training
The pedestrian detector model determines whether a person is present 4-5 m ahead. For model training we used 32x64 training images1 (manually collected pedestrian images for the positive set and randomly collected background and other object images for the negative set), as shown in figure 3.

1 The images have been calibrated to half the scale of the input image (160x120) for fast and accurate detection.
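The paper does not give code for the vertical-edge features; the sketch below is our own reading of that step, assuming the vertical edges are obtained with a horizontal-derivative (Sobel-x) filter applied to 32x64 gray windows:

import numpy as np
from scipy.ndimage import sobel

def vertical_edge_feature(window_gray):
    """window_gray: 2-D array of shape (64, 32) holding gray values."""
    edges = np.abs(sobel(window_gray.astype(float), axis=1))  # derivative along x -> vertical edges
    edges /= (edges.max() + 1e-6)                              # normalize to [0, 1]
    return edges.ravel()                                       # feature vector fed to the SVM

# Example: one positive (pedestrian) and one negative (background) window.
pos = vertical_edge_feature(np.random.rand(64, 32))
neg = vertical_edge_feature(np.random.rand(64, 32))
X, y = np.stack([pos, neg]), np.array([+1, -1])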
Fig. 2. Block diagram of pedestrian detection system
To construct a good detector model, we used 'bootstrapping' training [3]. The combined set of positive and negative examples forms the initial training database for the detector model. In the early stage, the detector model was trained on an initial set of 100 examples. After this initial training, we ran the system over many images with various backgrounds; any detections clearly identified as false positives were added to the training database of negative examples. These iterations of the bootstrapping procedure let the classifier incrementally refine the non-pedestrian class until satisfactory performance is achieved.
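A schematic sketch of this bootstrapping loop is given below (function and variable names are illustrative, not the authors'); windows from pedestrian-free images that the current model accepts are appended to the negative set before retraining:

from sklearn.svm import SVC

def bootstrap_train(positives, negatives, background_windows, rounds=5):
    """positives, negatives, background_windows: lists of feature vectors."""
    clf = SVC(kernel='poly', degree=3)
    for _ in range(rounds):
        X = positives + negatives
        y = [1] * len(positives) + [0] * len(negatives)
        clf.fit(X, y)
        # Any background window classified as "pedestrian" is a false positive.
        false_pos = [w for w in background_windows if clf.predict([w])[0] == 1]
        if not false_pos:
            break                        # satisfactory performance reached
        negatives = negatives + false_pos
    return clf, negatives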
4.2 Pedestrian Detection
In the pedestrian detection stage of a walking guidance system for the visually impaired, the real-time requirement is the most important one: if the system cannot operate in real time, it is useless for real-world applications. However, the SVM algorithm, which is
Fig. 3. Examples of the training data set: (a) positive data set, (b) negative data set
Fig. 4. Examples of bootstrapping training
the main part of this system, is too complex to run in real time, so it is very difficult to satisfy both detection accuracy and speed. To overcome this problem, we use a stereo vision technique that is frequently employed in robot vision. Stereo vision provides range information for object segmentation, and using it to guide pedestrian detection carries some distinct advantages over conventional techniques. First, it allows explicit occlusion analysis and is robust to illumination changes. Second, the real size of an object, derived from the disparity map, provides a more accurate classification metric than the image size of the object. Third, stereo cameras can detect both stationary and moving objects. Fourth, computation time is significantly reduced by performing classification only where objects are detected; it is also less likely that background areas will be detected as pedestrians, since detection is biased toward areas where objects are present [4]. We employ a video-rate stereo system [5] to provide range information for object detection. This system uses an area-correlation method to generate a disparity image, as shown in figure 5; the Sobel edge transform was chosen as the correlation feature because it gives good-quality results. Figure 5(c) shows a typical disparity image, in which higher disparities (closer objects) are indicated by brighter white.
Fig. 5. Examples of disparity image based on area correlation
Fig. 6. Candidate regions of detected objects by distance
The disparity image contains range information, so we can separate object regions from the background with simple filtering. In this paper the disparity image is processed at three levels (near, middle, and far distance). We are interested only in the middle distance, because most objects to be detected lie there; areas in the far distance are regarded as background. The disparity image is therefore binarized with threshold values corresponding to the middle distance in order to extract candidate object regions, as shown in figure 6(b). By extracting candidate regions we greatly reduce the time needed to search the image for pedestrians. For example, without reducing the candidate regions using stereo vision, SVM classification must be performed 165 times, whereas only 2 SVM classifications are needed when the candidate regions are reduced, as shown in figure 7. Assuming that the computation time for the disparity images is negligible, the proposed detection method with stereo vision is about 80 times faster than detection without it.
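The following sketch shows one way the middle-distance filtering could be implemented (the disparity thresholds and the minimum blob area are hypothetical values, not from the paper):

import numpy as np
from scipy.ndimage import label, find_objects

def candidate_regions(disparity, lo=40, hi=90, min_area=100):
    mask = (disparity >= lo) & (disparity <= hi)   # keep the middle-distance band
    labeled, _ = label(mask)                       # connected components
    boxes = []
    for sl in find_objects(labeled):
        h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
        if h * w >= min_area:                      # discard tiny blobs
            boxes.append(sl)                       # each slice pair is a candidate region
    return boxes

boxes = candidate_regions(np.random.randint(0, 128, (120, 160)))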
5 Experimental Results and Analysis
The pedestrian detector model is finally trained as follows. The training set contains 528 examples (140 positive and 378 negative examples are used), and a 3rd-order polynomial is used as the kernel function; training yields 252 support vectors. The system has been implemented on a Pentium III 800 MHz machine under Windows XP with a MEGA-D Megapixel Digital Stereo Head [6]. It has been tested extensively on a large number of outdoor natural scene images containing pedestrians; over 900 instances of pedestrians and other objects have been presented. The system can detect and classify objects in a 320x240 pixel stereo image at a frame rate ranging from 5 to 10 frames/second, depending on the number of objects present in the image. The performance of
Fig. 7. Detection with and without candidate region reduction
any detection system involves a tradeoff between the positive detection rate and the false detection rate. To capture this tradeoff, we vary the sensitivity of the system by thresholding its output and evaluate the ROC (Receiver Operating Characteristic) curve [12]. Figure 8 shows ROC curves comparing different representations (gray images and vertical edge images are used as features) for pedestrian detection; the detection rate is plotted against the false detection rate, measured on a logarithmic scale. The trained detection system was run over test images containing 834 images of pedestrians to obtain the positive detection rate, and the false detection rate was obtained by running the system over 2000 images that do not contain pedestrians. Experiments were performed with four kinds of features: gray and vertical edge images of size 64x128, and gray and vertical edge images of size 32x64. As the ROC curves show, the 32x64 vertical edge images are superior to the other features; they are both faster and more accurate. Figure 9 shows the results of our pedestrian detection system on some typical urban street scenes. The figure shows that our system can detect pedestrians of different sizes, poses, gaits, clothing, and occlusion status. There are, however, some cases of failure. Most failures occur when a pedestrian is almost identical in color to the background or when two pedestrians are too close to be separated.
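An ROC curve of this kind can be traced by sweeping a threshold over the classifier scores, as in the sketch below (the scores and labels here are synthetic, for illustration only):

import numpy as np

def roc_points(scores, labels, thresholds):
    pts = []
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        tpr = tp / max(np.sum(labels == 1), 1)   # positive detection rate
        fpr = fp / max(np.sum(labels == 0), 1)   # false detection rate
        pts.append((fpr, tpr))
    return pts

scores = np.random.randn(1000)
labels = (scores + 0.5 * np.random.randn(1000) > 0).astype(int)
curve = roc_points(scores, labels, np.linspace(-3, 3, 25))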
6 Conclusion and Future Work

This system is part of OpenEyes-II, an outdoor walking guidance system for the visually impaired that aims to enable the visually impaired to respond naturally to the various situations that can occur in unrestricted outdoor environments while walking and finally reaching their destination. To achieve this goal, foreground objects (pedestrians, obstacles, etc.) are detected in real time using foreground-background segmentation based on stereo vision. Then, each
Fig. 8. ROC curves of the pedestrian detector using SVMs: positive detection rate (%) plotted against false detection rate for gray-image and vertical-edge features at 64x128 and 32x64 window sizes
object is classified as a pedestrian or an obstacle by the SVM classifier. These two main elements make the pedestrian detection system robust and real-time. However, the system becomes slower when there are many objects to classify in the field of view, because of the complexity of the SVM algorithm. As future work, it is therefore necessary to make the SVM algorithm faster through research on feature vector reduction, hardware implementation, and related techniques. Multi-object discrimination and detection capabilities should also be added for practical application in real life.
Fig. 9. Detection results
References
1. Kang, S., Lee, S.-W.: Hand-held Computer Vision System for the Visually Impaired. Proc. of the 3rd International Workshop on Human-friendly Welfare Robotics, Daejeon, Korea, January (2002) 43-48
2. Haykin, S.: Neural Networks, Prentice Hall, NJ (1998)
3. Sung, K.-K., Poggio, T.: Example-Based Learning for View-Based Human Face Detection. A. I. Memo 1521, AI Laboratory, MIT (1994)
4. Zhao, L., Thorpe, C.: Stereo- and Neural-Based Pedestrian Detection. Proc. of International IEEE Conference on Intelligent Transportation Systems, Tokyo, Japan, October (1999) 5-10
5. Konolige, K.: Small Vision Systems: Hardware and Implementation. Proc. of 8th International Symposium on Robotics Research, Hayama, October (1997)
6. http://www.videredesign.com
7. Haritaoglu, I., Harwood, D., Davis, L. S.: W4: Who? When? Where? What? A Real Time System for Detecting and Tracking People. Proc. of International Conference on Face and Gesture Recognition, Nara, Japan, April (1998) 222-227
8. Haritaoglu, I., Harwood, D., Davis, L. S.: Hydra: Multiple People Detection and Tracking Using Silhouettes. Proc. of 2nd IEEE Workshop on Visual Surveillance, Fort Collins, Colorado, June (1999) 6-13
9. Wren, C., Azarbayejani, A., Darrell, T., Pentland, A.: Pfinder: Real-Time Tracking of the Human Body. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7) (1997) 780-785
10. Darrell, T., Gordon, G., Harville, M., Woodfill, J.: Integrated Person Tracking Using Stereo, Color, and Pattern Detection. Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, California (1998) 601-608
11. Mohan, A., Papageorgiou, C., Poggio, T.: Example-based Object Detection in Images by Components. IEEE Trans. on Pattern Analysis and Machine Intelligence 23(4) (2001) 349-361
12. Oren, M., Papageorgiou, C., Sinha, P., Osuna, E., Poggio, T.: Pedestrian Detection Using Wavelet Templates. Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico (1997) 193-199
Forward Decoding Kernel Machines: A Hybrid HMM/SVM Approach to Sequence Recognition Shantanu Chakrabartty and Gert Cauwenberghs Center for Language and Speech Processing Johns Hopkins University, Baltimore, MD 21218, USA {shantanu,gert}@jhu.edu
Abstract. Forward Decoding Kernel Machines (FDKM) combine largemargin classifiers with Hidden Markov Models (HMM) for Maximum a Posteriori (MAP) adaptive sequence estimation. State transitions in the sequence are conditioned on observed data using a kernel-based probability model, and forward decoding of the state transition probabilities with the sum-product algorithm directly produces the MAP sequence. The parameters in the probabilistic model are trained using a recursive scheme that maximizes a lower bound on the regularized cross-entropy. The recursion performs an expectation step on the outgoing state of the transition probability model, using the posterior probabilities produced by the previous maximization step. Similar to Expectation-Maximization (EM), the FDKM recursion deals effectively with noisy and partially labeled data. We also introduce a multi-class support vector machine for sparse conditional probability regression, GiniSVM based on a quadratic formulation of entropy. Experiments with benchmark classification data show that GiniSVM generalizes better than other multi-class SVM techniques. In conjunction with FDKM, GiniSVM produces a sparse kernel expansion of state transition probabilities, with drastically fewer non-zero coefficients than kernel logistic regression. Preliminary evaluation of FDKM with GiniSVM on a subset of the TIMIT speech database reveals significant improvements in phoneme recognition accuracy over other SVM and HMM techniques.
1 Introduction
Sequence estimation is at the core of many problems in pattern recognition, most notably speech and language processing. Recognizing dynamic patterns in sequential data requires a set of tools very different from classifiers trained to recognize static patterns in data assumed i.i.d. distributed over time. The speech recognition community has predominantly relied on Hidden Markov Models (HMMs) [1] to produce state-of-the-art results. HMMs are generative models that function by estimating probability densities and therefore require a large amount of data to estimate parameters reliably. If the aim is discrimination between classes, then it might be sufficient to model discrimination boundaries between classes which (in most affine cases) afford fewer parameters.
Recurrent neural networks have been used to extend the dynamic modeling power of HMMs with the discriminant nature of neural networks [2], but learning long term dependencies remains a challenging problem [3]. Typically, neural network training algorithms are prone to local optima, and while they work well in many situations, the quality and consistency of the converged solution cannot be warranted. Large margin classifiers, like support vector machines, have been the subject of intensive research in the neural network and artificial intelligence communities [4,5]. They are attractive because they generalize well even with relatively few data points in the training set, and bounds on the generalization error can be directly obtained from the training data. Under general conditions, the training procedure finds a unique solution (decision or regression surface) that provides an out-of-sample performance superior to many techniques. Recently, support vector machines have been used for phoneme (or phone) recognition [7] and have shown very encouraging results. However, use of a standard SVM classifier implicitly assumes i.i.d. data, unlike the sequential nature of phones. Figure 1(b) depicts a typical vowel chart as used extensively in speech and linguistics, showing the location of different vowels with respect to the first two formants (resonant frequencies of the speech articulators). Due to inertia in articulation, speech production results in a smooth transition between vowels, and phones in general [8]. FDKM, introduced in [9], augments the ability of large margin classifiers to perform sequence decoding and to infer the sequential properties of the data. It performs a large margin discrimination based on the trajectory of the data rather than solely on individual data points and hence relaxes the constraint of i.i.d. data. FDKMs have shown superior performance for channel equalization in digital communication where the received symbol sequence is contaminated by inter symbol interference [9]. This paper applies FDKM to the recognition of phoneme sequences in speech, and introduces GiniSVM, a sparse kernel machine for regression of conditional probabilities as needed for training FDKM over large data. Finally, results of applying FDKM and GiniSVM on standard phoneme benchmark data such as TIMIT are included.
2 FDKM Formulation
The problem of FDKM recognition is formulated in the framework of MAP (maximum a posteriori) estimation, combining Markovian dynamics with kernel machines. A Markovian model is assumed with symbols belonging to S classes, as illustrated in Figure 1(a) for S = 3. Transitions between the classes are modulated in probability by observation (data) vectors x over time.

2.1 Decoding Formulation
The MAP forward decoder receives the sequence X[n] = {x[n], x[n−1], . . . , x[1]} and produces an estimate of the probability of the state variable q[n] over all
Fig. 1. (a) Three-state Markov model, where the transition probabilities P(i|j, x) between states are modulated by the observation vector x. (b) Vowel chart showing the location of typical English vowels with respect to the first two formant frequencies. Because of inertial restrictions on the articulators, the transition between phones is smooth and the trajectory induces a conditional distribution of a phone with respect to the others

classes i, α_i[n] = P(q[n] = i | X[n], w), where w denotes the set of parameters for the learning machine. Unlike hidden Markov models, the states directly encode the symbols, and the observations x modulate transition probabilities between states [10]. Estimates of the posterior probability α_i[n] are obtained from estimates of local transition probabilities using the forward-decoding procedure [11,10]

\alpha_i[n] = \sum_{j=0}^{S-1} P_{ij}[n]\, \alpha_j[n-1] \qquad (1)
where P_{ij}[n] = P(q[n] = i | q[n-1] = j, x[n], w) denotes the probability of making a transition from class j at time n-1 to class i at time n, given the current observation vector x[n]. The forward decoding (1) embeds the sequential dependence of the data, wherein the probability estimate at time instant n depends on all the previous data. An on-line estimate of the symbol q[n] is thus obtained:

q^{est}[n] = \arg\max_i \alpha_i[n] \qquad (2)
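A minimal sketch of the forward decoding of equations (1)-(2) is given below, assuming the data-dependent transition matrices P_ij[n] have already been estimated (here they are random placeholders):

import numpy as np

def forward_decode(P_seq, alpha0):
    """P_seq: array of shape (N, S, S) with P_seq[n, i, j] = P_ij[n]; alpha0: initial state probabilities."""
    alpha, decoded = alpha0.copy(), []
    for P in P_seq:
        alpha = P @ alpha                       # equation (1): alpha_i[n] = sum_j P_ij[n] alpha_j[n-1]
        alpha /= alpha.sum()                    # renormalize for numerical stability
        decoded.append(int(np.argmax(alpha)))   # equation (2): on-line MAP symbol estimate
    return decoded

S, N = 3, 10
P_seq = np.random.rand(N, S, S)
P_seq /= P_seq.sum(axis=1, keepdims=True)       # each column sums to 1: sum_i P_ij[n] = 1
print(forward_decode(P_seq, np.ones(S) / S))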
The BCJR forward-backward algorithm [11] produces in principle a better estimate that accounts for future context, but requires a backward pass through the data, which is impractical in many applications requiring real-time decoding. Accurate estimation of the transition probabilities P_{ij}[n] in (1) is crucial for the decoding (2) to provide good performance. In [9] we used kernel logistic regression [12], with regularized maximum cross-entropy, to model conditional probabilities. A different probabilistic model that offers a sparser representation is introduced below.

2.2 Training Formulation
For training the MAP forward decoder, we assume access to a training sequence with labels (class memberships). For instance, the TIMIT speech database comes labeled with phonemes. Continuous (soft) labels could be assigned rather than binary indicator labels, to signify uncertainty in the training data over the classes. Like probabilities, label assignments are normalized: \sum_{i=0}^{S-1} y_i[n] = 1, y_i[n] \geq 0. The objective of training is to maximize the cross-entropy of the estimated probabilities α_i[n] given by (1) with respect to the labels y_i[n] over all classes i and training data n:

H = \sum_{n=0}^{N-1} \sum_{i=0}^{S-1} y_i[n] \log \alpha_i[n] \qquad (3)
To provide capacity control we introduce a regularizer Ω(w) in the objective function [6]. The parameter space w can be partitioned into disjoint parameter vectors w_{ij} and b_{ij} for each pair of classes i, j = 0, ..., S-1 such that P_{ij}[n] depends only on w_{ij} and b_{ij}. (The parameter b_{ij} corresponds to the bias term in the standard SVM formulation.) The regularizer can then be chosen as the L2 norm of each disjoint parameter vector, and the objective function becomes

H = C \sum_{n=0}^{N-1} \sum_{i=0}^{S-1} y_i[n] \log \alpha_i[n] - \frac{1}{2} \sum_{j=0}^{S-1} \sum_{i=0}^{S-1} |w_{ij}|^2 \qquad (4)

where the regularization parameter C controls complexity versus generalization as a bias-variance trade-off [6]. The objective function (4) is similar to the primal formulation of a large margin classifier [5]. Unlike the convex (quadratic) cost function of SVMs, the formulation (4) does not have a unique solution and direct optimization could lead to poor local optima. However, a lower bound of the objective function can be formulated so that maximizing this lower bound reduces to a set of convex optimization sub-problems with an elegant dual formulation in terms of support vectors and kernels. Applying the convexity of the -log(.) function to the convex sum in the forward estimation (1), we obtain directly

H \geq \sum_{j=0}^{S-1} H_j \qquad (5)

where

H_j = \sum_{n=0}^{N-1} C_j[n] \sum_{i=0}^{S-1} y_i[n] \log P_{ij}[n] - \frac{1}{2} \sum_{i=0}^{S-1} |w_{ij}|^2 \qquad (6)

with effective regularization sequence

C_j[n] = C\, \alpha_j[n-1] . \qquad (7)
Disregarding the intricate dependence of (7) on the results of (6), which we defer to the following section, the formulation (6) is equivalent to regression of conditional probabilities P_{ij}[n] from labeled data x[n] and y_i[n], for a given outgoing state j.

2.3 Kernel Logistic Probability Regression
Estimation of conditional probabilities Pr(i|x) from training data x[n] and labels y_i[n] can be obtained using a regularized form of kernel logistic regression [12]. For each outgoing state j, one such probabilistic model can be constructed for the incoming state i conditional on x[n]:

P_{ij}[n] = \exp(f_{ij}(x[n])) \Big/ \sum_{s=0}^{S-1} \exp(f_{sj}(x[n])) \qquad (8)

As with SVMs, dot products in the expression for f_{ij}(x) in (8) convert into kernel expansions over the training data x[m] by transforming the data to feature space [13]:

f_{ij}(x) = w_{ij} \cdot x + b_{ij} = \sum_m \lambda_{ij}^m\, x[m] \cdot x + b_{ij} \;\xrightarrow{\ \Phi(\cdot)\ }\; \sum_m \lambda_{ij}^m K(x[m], x) + b_{ij} \qquad (9)

where K(·,·) denotes any symmetric positive-definite kernel1 that satisfies the Mercer condition, such as a Gaussian radial basis function or a polynomial spline [6,14]. Optimization of the lower bound in (5) requires solving M disjoint but similar sub-optimization problems (6). The subscript j is omitted in the remainder of this section for clarity. The (primal) objective function of kernel logistic regression expresses the regularized cross-entropy (6) of the logistic model (8) in the form [14,15]

H = -\sum_i \frac{1}{2} |w_i|^2 + C \sum_m^N \Big[ \sum_i^M y_i[m] f_i(x[m]) - \log\big(e^{f_1(x[m])} + \dots + e^{f_M(x[m])}\big) \Big] . \qquad (10)

The parameters \lambda_{ij}^m in (9) are determined by minimizing a dual formulation of the objective function (10) obtained through the Legendre transformation, which for logistic regression takes the form of an entropy-based potential function in the parameters [12]:

H_e = \sum_i^M \Big[ \frac{1}{2} \sum_l^N \sum_m^N \lambda_i^l Q_{lm} \lambda_i^m + C \sum_m^N (y_i[m] - \lambda_i^m/C) \log(y_i[m] - \lambda_i^m/C) \Big] \qquad (11)

1 K(x, y) = Φ(x)·Φ(y). The map Φ(·) need not be computed explicitly, as it only appears in inner-product form.
subject to the constraints

\sum_m \lambda_i^m = 0 \qquad (12)

\sum_i \lambda_i^m = 0 \qquad (13)

\lambda_i^m \leq C\, y_i[m] \qquad (14)
Derivations to arrive at the dual formulation are provided in the Appendix. There are two disadvantages of using the logistic regression dual directly:
1. The solution is non-sparse and all the training points contribute to the final solution. For tasks involving large data sets, like phone recognition, this turns out to be prohibitive due to memory and run-time constraints.
2. Even though the dual optimization problem is convex, it is not quadratic, which precludes the use of standard quadratic programming (QP) techniques. One has to resort to Newton-Raphson or other nonlinear optimization techniques, which complicate convergence and require the tuning of additional system parameters.
In the next section a new multi-class probabilistic regression technique is introduced which closely approximates the logistic regression solution and yet produces sparse estimates. Like support vector machines, the resulting optimization is quadratic with linear constraints, for which several efficient techniques exist.
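Before turning to GiniSVM, the following sketch makes the probability model of equations (8)-(9) concrete: each discriminant f_ij is a kernel expansion over the training points and the transition probabilities are their soft-max (the RBF kernel and all parameter values are illustrative assumptions, not taken from the paper):

import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def transition_probs(x, X_train, lam, b, j):
    """lam has shape (S, S, N) holding lambda_ij^m; returns P_ij for all i, given outgoing state j."""
    S = lam.shape[0]
    k = np.array([rbf(xm, x) for xm in X_train])                # K(x[m], x)
    f = np.array([lam[i, j] @ k + b[i, j] for i in range(S)])   # f_ij(x), equation (9)
    e = np.exp(f - f.max())                                     # stabilized soft-max, equation (8)
    return e / e.sum()

S, N, d = 3, 20, 5
X_train = np.random.randn(N, d)
lam, b = 0.1 * np.random.randn(S, S, N), np.zeros((S, S))
print(transition_probs(np.random.randn(d), X_train, lam, b, j=0))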
3 GiniSVM Formulation

SVM classification produces sparse solutions, but the probability estimates it generates are biased. There are techniques that extend SVMs to generate probabilities by cross-validation using held-out data [16], but it is hard to extend these to the generation of multi-class probabilities. GiniSVM produces a sparse solution by optimizing a dual functional that lower-bounds the dual logistic functional: a quadratic ('Gini' [17]) index is used to replace the entropy. The tightness of the bound provides an elegant trade-off between approximation and sparsity.
3.1 Huber Loss and Gini Impurity
For binary-class GiniSVM, 'margin' is defined as the extent over which data points are asymptotically normally distributed. Using the notation for asymptotic normality in [18], the distribution of the distance z from one side of the margin for data of one class2 is modeled by

F(z) = (1 - \epsilon)\, N(z, \sigma) + \epsilon\, H(z) \qquad (15)

2 The distributions for the two classes are assumed symmetrical, with margins on opposite sides and distance z in opposite directions.
where N(., σ) represents a normal distribution with zero mean and standard deviation σ, H(.) is the unknown symmetrical contaminating distribution, and 0 ≤ ε ≤ 1. H(.) could, for instance, represent impulsive noise contributing outliers to the distribution. Huber [18] showed that for this general form of F(z) the most robust estimator achieving minimum asymptotic variance minimizes the following loss function:

g(z) = \begin{cases} \frac{1}{2}\, z^2/\sigma^2 & ; \ |z| \leq k\sigma \\ k\,|z|/\sigma - \frac{1}{2} k^2 & ; \ |z| > k\sigma \end{cases} \qquad (16)

where in general the parameter k depends on ε. For GiniSVM, the distribution F(z) for each class is assumed one-sided (z ≤ 0). As with soft-margin SVM, points that lie beyond the margin (z > 0) are assumed correctly classified and do not enter the loss function (g(z) ≡ 0). The loss function asymptotically approaches the logistic loss function for a choice of parameters satisfying k/σ = C. The correspondence between the Huber and logistic loss functions is illustrated in Figure 2(a). In the dual formulation, the Huber loss function (16) transforms to a potential function of the form

G(\lambda) = k\sigma\, \frac{\lambda}{C}\left(1 - \frac{\lambda}{C}\right) \qquad (17)

where C = k/σ. The functional form of the potential (17), for kσ = 2, corresponds to the Gini entropy (or impurity function), used extensively in growing decision trees [17].

Fig. 2. Comparison of logistic regression, GiniSVM, and soft-margin SVM classification. (a): Loss function in the primal formulation. (b): Potential function in the dual formulation. The GiniSVM loss function and potential function closely approximate those for logistic regression, while offering the sparseness of soft-margin SVM classification
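A small sketch of the two functions just defined follows; it implements the symmetric form of the Huber loss (16) and the Gini potential (17) with arbitrary parameter values (the one-sided treatment of points beyond the margin is not shown):

import numpy as np

def huber_loss(z, k=1.0, sigma=1.0):
    z = np.abs(z)
    quad = 0.5 * (z / sigma) ** 2          # |z| <= k*sigma branch of equation (16)
    lin = k * z / sigma - 0.5 * k ** 2     # |z| >  k*sigma branch
    return np.where(z <= k * sigma, quad, lin)

def gini_potential(lam, k=1.0, sigma=1.0):
    C = k / sigma
    return k * sigma * (lam / C) * (1.0 - lam / C)   # equation (17)

print(huber_loss(np.array([-2.0, 0.5, 3.0])))
print(gini_potential(np.linspace(0.0, 1.0, 5)))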
The Gini impurity potential (17) for GiniSVM offers a lower bound to the binary entropy potential for logistic regression. A tight bound is obtained for kσ = 4 log 2, which scales the Gini potential to match the extrema of the logistic potential, as shown in Figure 2(b). Since the Gini potential has a finite derivative at the origin, it allows zero values for the parameters λ_i^m, and thus supports sparsity in the kernel representation, as for soft-margin SVM classification. The principle of lower-bounding the entropy potential function with the Gini impurity index, for a sparse kernel representation, can be directly extended to the multi-class case.
3.2 Multi-class GiniSVM
Jensen's inequality (log p ≤ p - 1) formulates a lower bound for the entropy term in (11) in the form of the multivariate Gini impurity:

1 - \sum_i^M p_i^2 \;\leq\; -\sum_i^M p_i \log p_i \qquad (18)

where 0 ≤ p_i ≤ 1, ∀i, and \sum_i^M p_i = 1. Both forms of entropy, -\sum_i^M p_i \log p_i and 1 - \sum_i^M p_i^2, reach their maxima at the same values p_i ≡ 1/M, corresponding to a uniform distribution. As in the binary case, the bound can be tightened by scaling the Gini index with a multiplicative factor γ ≥ 1, of which the particular value depends on M.3 The GiniSVM dual cost function H_g is then given by

H_g = \sum_i^M \Big[ \frac{1}{2} \sum_l^N \sum_m^N \lambda_i^l Q_{lm} \lambda_i^m + \gamma C \Big( \sum_m^N (y_i[m] - \lambda_i^m/C)^2 - 1 \Big) \Big] \qquad (19)
The convex quadratic cost function (19) with the constraints (12)-(14) can now be minimized directly using SVM quadratic programming decomposition methods like SMO [19] or incremental SVM techniques [20], the details of which are discussed in [21]. The primary advantage of the technique is that it yields sparse solutions and yet approximates the logistic regression solution very well. Figure 3 compares the results of training a kernel probability model on three-class data with the exact solution using logistic regression and the approximate solution obtained by GiniSVM. Deviation between the models is observed most clearly in decision regions far removed from the training data. Figure 4 demonstrates that by minimizing the GiniSVM cost function (using a form of SMO [19,21]) the true logistic regression cost function tends to decrease with it, although the effect of the approximation is apparent in the fluctuations at convergence.
4 Recursive FDKM Training
The weights (7) in (6) are recursively estimated using an iterative procedure reminiscent of (but different from) expectation maximization. The procedure

3 Unlike the binary case (M = 2), the factor γ for general M cannot be chosen to match the two maxima at p_i = 1/M.
Fig. 3. Equal (0.5) probability contours obtained from kernel conditional probability regression on a 3-class problem. (a) Exact logistic regression, and (b) approximate GiniSVM solution

involves computing new estimates of the sequence α_j[n-1] to train (6) based on estimates of P_ij using the previous values of the parameters λ_{ij}^m. The training proceeds in a series of epochs, each refining the estimate of the sequence α_j[n-1] by increasing the size of the time window (decoding depth, k) over which it is obtained by the forward algorithm (1). The training steps are illustrated in Figure 5 and summarized as follows (a schematic code sketch is given after this list):
1. To bootstrap the iteration for the first training epoch (k = 1), obtain initial values for α_j[n-1] from the labels of the outgoing state, α_j[n-1] = y_j[n-1]. This corresponds to taking the labels y_i[n-1] as true state probabilities, which corresponds to the standard procedure of using fragmented data to estimate transition probabilities.
2. Train logistic kernel machines, one for each outgoing class j, to estimate the parameters in P_ij[n], i, j = 1, ..., S, from the training data x[n] and labels y_i[n], weighted by the sequence α_j[n-1].
3. Re-estimate α_j[n-1] using the forward algorithm (1) over increasing decoding depth k, by initializing α_j[n-k] to y[n-k].
4. Re-train, increment the decoding depth k, and re-estimate α_j[n-1], until the final decoding depth is reached (k = K).
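The sketch below outlines steps 1-4 schematically; the per-outgoing-state model fitting (train_transition_models) and the transition-probability evaluation (predict_P) are assumed callables standing in for the GiniSVM / kernel-logistic machinery and are not implemented here:

import numpy as np

def fdkm_train(X, Y, K, train_transition_models, predict_P):
    """X: observations, Y: (N, S) label matrix, K: final decoding depth."""
    N = Y.shape[0]
    weights = Y.astype(float)                # step 1: bootstrap alpha_j[n-1] with the labels
    models = None
    for k in range(1, K + 1):
        models = train_transition_models(X, Y, weights)   # step 2: weighted per-state fits
        new_weights = Y.astype(float)        # step 3: re-estimate alpha_j[n-1] at depth k
        for n in range(N):
            start = max(n - k, 0)
            a = Y[start].astype(float)       # initialize alpha[n-k] with the labels
            for m in range(start + 1, n):    # forward recursion (1) up to time n-1
                P = predict_P(models, X[m])  # S x S matrix of P_ij[m]
                a = P @ a
                a = a / a.sum()
            new_weights[n] = a
        weights = new_weights                # step 4: weights for the next training epoch
    return models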
4.1 Efficient Implementation
The training procedure of Figure 5 entails solving a full optimization problem for each iteration incrementing the decoding depth k = 1, ..., K. This seems a poor use of computational resources given that the parameter estimates do not vary considerably. The overall training time can be reduced considerably by bootstrapping the optimization problem at each iteration k using the previous parameter values.
Fig. 4. Minimization sequence of the GiniSVM dual cost function during SMO training. The logistic dual cost function is concurrently evaluated to illustrate the similarity between the two models

Updates to the regularization sequence C_j[n] = Cα_j[n-1] at each iteration k affect the feasible region (14) for the optimization, so that the previous solution may be infeasible as a starting point for the next iteration. The solution to this problem lies in the invariance of the probability model (8) to a common additive offset λ_*^m in the parameters λ_j^m. To bootstrap a feasible solution, the additive constants λ_*^m are chosen to project the previous solution onto the current feasible space as defined by the new regularization sequence C_j[n]. The performance of FDKM training depends on the final decoding depth K, although the observed variations in generalization performance for large values of K are relatively small. A suitable value can be chosen a priori to match the extent of temporal dependency in the data. For phoneme classification in speech, for instance, the decoding depth can be chosen according to the length of a typical syllable.
5 Experiments and Results

5.1 GiniSVM Experiments
To test how well GiniSVM performs as a multi-class classifier, we evaluated its performance on two standard speech datasets, the Paterson and Barney formant dataset and the Deterding dataset. Results are summarized in Table 1. In both cases, GiniSVMs yielded better or comparable out-of-sample classification performance compared to standard “one vs. one,” “one vs. all” and other [22] multi-class SVM approaches.
Fig. 5. Iterations involved in training FDKM on a trellis based on the Markov model of Figure 1. During the initial epoch, parameters of the probabilistic model of the state at time n, conditioned on the observed label for the outgoing state at time n-1, are trained from observed labels at time n. During subsequent epochs, probability estimates of the outgoing state at time n-1 over increasing forward decoding depth k = 1, ..., K determine the weights assigned to data n for training each of the probabilistic models conditioned on the outgoing state

Table 1. Vowel classification results

Machine            Dataset          Accuracy
SVM (one vs. all)  Paterson-Barney  79.1%
SVM (one vs. one)  Paterson-Barney  83.6%
GiniSVM            Paterson-Barney  86.5%
SVM (one vs. all)  Deterding        68.2%
SVM (one vs. one)  Deterding        68.4%
GiniSVM            Deterding        70.6%
5.2 Experiments with TIMIT Data
FDKM performance was evaluated on the larger TIMIT dataset, which consists of continuous spoken utterances and preserves the sequential structure present between labels (phones). The TIMIT speech dataset [23] contains approximately 60 phone classes, which were first collapsed onto 39 phone classes according to standard folding techniques [24]. Training was performed on a subset of the "sx" training sentences in the TIMIT corpus; the choice of "sx" sentences was motivated by their phonetic balance, i.e., each of the phone classes occurs with approximately the same frequency. For training we randomly selected 960 "sx" sentences spoken by 120 speakers, and for the test set we randomly chose 160 "sx" sentences spoken by 20 speakers. The relatively small size of this subset of the database was primarily governed by current limitations of our SVM training software. The speech signal was first processed by a pre-emphasis filter with transfer function 1 - 0.97z^{-1}, and then a 25 ms Hamming window was applied over 10 ms shifts to extract a sequence of phonetic segments. Cepstral coefficients were extracted from the sequence and combined with their first- and second-order time differences into a 39-dimensional vector. Cepstral mean subtraction and
speaker normalization were subsequently applied. Right and left context were added by concatenating the previous and next segments to obtain a 117-dimensional feature vector [25]. The feature vectors were averaged over the duration of each phone to obtain a single 117-dimensional feature vector per phone utterance, resulting in approximately 32,000 data points. Evaluation on the test set was performed using MAP forward decoding (2) and thresholding [26]. The decoded phone sequence was then compared with the transcribed sequence using the Levenshtein distance to evaluate the different sources of error shown in Table 2. Multiple runs of identical phones in the decoded and transcribed sequences were collapsed to single phone instances to reflect true insertion errors. Table 2 summarizes the results of the experiments performed with TIMIT. Note that training was performed over a small subset of the available data, and the numbers are listed for relative comparison purposes only. To calibrate the comparison, benchmark results from a standard Gaussian mixture triphone HMM model are included.
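A sketch of the front end described above follows; it covers only the pre-emphasis (1 - 0.97 z^-1) and the 25 ms / 10 ms Hamming-window framing, with the sampling rate assumed to be 16 kHz (the cepstral, delta, and context-concatenation steps are omitted):

import numpy as np

def frames(signal, fs=16000, win_ms=25, hop_ms=10, pre=0.97):
    emphasized = np.append(signal[0], signal[1:] - pre * signal[:-1])  # pre-emphasis 1 - 0.97 z^-1
    win, hop = int(fs * win_ms / 1000), int(fs * hop_ms / 1000)
    window = np.hamming(win)
    out = []
    for start in range(0, len(emphasized) - win + 1, hop):
        out.append(emphasized[start:start + win] * window)             # 25 ms frame every 10 ms
    return np.array(out)

segments = frames(np.random.randn(16000))   # one second of synthetic "speech"
print(segments.shape)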
Table 2. TIMIT evaluation

Machine                       Accuracy  Insertion  Substitution  Deletion  Errors
HMM (6 mixture triphone)      60.6%     8.4%       30.1%         9.3%      47.8%
SVM (one vs all)              68.4%     4.9%       22.4%         9.2%      36.5%
SVM (one vs one)              69.1%     4.9%       21.6%         9.3%      35.9%
GiniSVM                       70.3%     4.4%       21.6%         8.1%      34.1%
FDKM k=1, threshold = 0.25    71.8%     4.3%       21.3%         6.9%      32.5%
FDKM k=1, threshold = 0.1     71.4%     5.1%       21.5%         7.1%      33.7%
FDKM k=10, threshold = 0.25   73.2%     4.9%       19.6%         7.2%      31.7%
FDKM k=10, threshold = 0.1    72.9%     5.4%       20.1%         7.0%      32.5%
6 Conclusion
This paper addressed two problems in machine learning, and offered two solutions in conjunction: GiniSVM, and FDKM. GiniSVM was introduced as a sparse technique to estimate multi-class conditional probabilities with a largemargin kernel machine. Preliminary evaluations of GiniSVM show that it outperforms other multi-class SVM techniques for classification tasks. FDKM merges large-margin classification techniques into an HMM framework for robust forward decoding MAP sequence estimation. FDKM improves decoding and generalization performance for data with embedded sequential structure, providing an elegant tradeoff between learning temporal versus spatial dependencies. The recursive estimation procedure reduces or masks the effect of noisy or missing
labels yj [n]. Other advantages include a feed-forward decoding architecture that is very amenable to real-time implementation in parallel hardware [27].
References 1. L. Rabiner and B-H Juang, Fundamentals of Speech Recognition, Englewood Cliffs, NJ: Prentice-Hall, 1993. 278 2. Robinson, A. J., “An application of recurrent nets to phone probability estimation,” IEEE Transactions on Neural Networks, vol. 5,No.2,March 1994. 279 3. Bengio, Y., “Learning long-term dependencies with gradient descent is difficult,” IEEE T. Neural Networks, vol. 5, pp. 157-166, 1994. 279 4. Boser, B., Guyon, I. and Vapnik, V., “A training algorithm for optimal margin classifier,” in Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, pp 144-52, 1992. 279 5. Vapnik, V. The Nature of Statistical Learning Theory, New York: Springer-Verlag, 1995. 279, 281 6. Girosi, F., Jones, M. and Poggio, T. “Regularization Theory and Neural Networks Architectures,” Neural Computation, vol. 7, pp 219-269, 1995. 281, 282 7. Clark, P. and Moreno, M. J. “On the use of Support Vector Machines for Phonetic Classification,” IEEE Conf. Proc., 1999. 279 8. Laderfoged, P. A Course in Phonetics, New York, Harcourt Brace Jovanovich, 2nd ed., 1982. 279 9. Chakrabartty, S. and Cauwenberghs, G. “Sequence Estimation and Channel Equalization using Forward Decoding Kernel Machines,” IEEE Int. Conf. Acoustics and Signal Proc. (ICASSP’2002), Orlando FL, 2002. 279, 280 10. Bourlard, H. and Morgan, N., Connectionist Speech Recognition: A Hybrid Approach, Kluwer Academic, 1994. 280 11. Bahl, L. R., Cocke J., Jelinek F. and Raviv J. “Optimal decoding of linear codes for minimizing symbol error rate,” IEEE Transactions on Inform. Theory, vol. IT-20, pp. 284-287, 1974. 280 12. Jaakkola, T. and Haussler, D. “Probabilistic kernel regression models,” Proceedings of Seventh International Workshop on Artificial Intelligence and Statistics , 1999. 280, 282, 292 13. Sch¨ olkopf, B., Burges, C. and Smola, A., Eds., Advances in Kernel Methods-Support Vector Learning, MIT Press, Cambridge, 1998. 282 14. Wahba, G. Support Vector Machine, Reproducing Kernel Hilbert Spaces and Randomized GACV, Technical Report 984, Department of Statistics, University of Wisconsin, Madison WI. 282, 291 15. Zhu, J and Hastie, T., “Kernel Logistic Regression and Import Vector Machine,” Adv. IEEE Neural Information Processing Systems (NIPS’2001), Cambridge, MA: MIT Press, 2002. 282, 291 16. Platt, J., “Probabilities for SV Machines,” Adv. Large Margin Classifiers, Smola, Bartlett et al., Eds., Cambridge MA: MIT Press, 1999. 283 17. Breiman, L. Friedman, J. H. et al. Classification and Regression Trees, Wadsworth and Brooks, Pacific Grove, CA, 1984. 283, 284 18. Huber, P. J., “Robust Estimation of Location Parameter,” Annals of Mathematical Statistics, vol. 35, March 1964. 283, 284 19. Platt, J. “Fast Training of Support Vector Machine using Sequential Minimal Optimization,” Adv. Kernel Methods, Scholkopf, Burges et al., Eds., Cambridge MA: MIT Press, 1999. 285
20. Cauwenberghs, G. and Poggio, T., “Incremental and Decremental Support Vector Machine Learning,” Adv. IEEE Neural Information Processing Systems (NIPS’2000), Cambridge MA: MIT Press, 2001. 285 21. Chakrabartty, S. and Cauwenberghs, G. “Sequential Minimal Optimization for Kernel Probabilistic Regression,” Research Note, Center for Language and Speech Processing, The Johns Hopkins University, MD, 2002. 285 22. Weston, J. and Watkins, C., “Multi-Class Support Vector Machines,” Technical Report CSD-TR-9800-04, Department of Computer Science, Royal Holloway, University of London, May 1998. 287 23. Fisher, W., Doddington G. et al The DARPA Speech Recognition Research Database: Specifications and Status. Proceedings DARPA speech recognition workshop, pp. 93-99, 1986. 288 24. Lee, K. F. and Hon, H.W, “Speaker-Independent phone recognition using hidden markov models,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, pp. 1641-1648, 1989. 288 25. Fosler-Lussier, E. Greenberg, S. Morgan, N., “Incorporating contextual phonetics into automatic speech recognition,” Proc. XIVth Int. Cong. Phon. Sci., 1999. 289 26. Wald, A. Sequential Analysis, Wiley, New York, 1947. 289 27. Chakrabartty, S., Singh, G. and Cauwenberghs, G. “Hybrid Support vector Machine/Hidden Markov Model Approach for Continuous Speech recognition,” Proc. IEEE Midwest Symp. Circuits and Systems (MWSCAS’2000), Lansing, MI, Aug. 2000. 290
Appendix: Dual Formulation of Kernel Logistic Regression

The regularized log-likelihood/cross-entropy for kernel logistic regression is given by [14,15]

H = \sum_k \frac{1}{2} |w_k|^2 - C \sum_n^N \Big[ \sum_k^M y_k[n] f_k(x[n]) - \log\big(e^{f_1(x[n])} + \dots + e^{f_M(x[n])}\big) \Big] . \qquad (20)

First-order conditions with respect to the parameters w_k and b_k in f_k(x) = w_k · x + b_k yield

w_k = C \sum_n^N \Big[ y_k[n] - \frac{e^{f_k(x[n])}}{\sum_p e^{f_p(x[n])}} \Big] x[n], \qquad 0 = C \sum_n^N \Big[ y_k[n] - \frac{e^{f_k(x[n])}}{\sum_p e^{f_p(x[n])}} \Big] . \qquad (21)

Denote

\lambda_k^n = C \Big[ y_k[n] - \frac{e^{f_k(x[n])}}{\sum_p e^{f_p(x[n])}} \Big] \qquad (22)

in the first-order conditions (21) to arrive at the kernel expansion (9) with linear constraint

f_k(x) = \sum_n \lambda_k^n K(x[n], x) + b_k \qquad (23)

0 = \sum_n \lambda_k^n . \qquad (24)

Note also that \sum_k \lambda_k^n = 0. Legendre transformation of the primal objective function (20) in w_k and b_k leads to a dual formulation directly in terms of the coefficients \lambda_k^n [12]. Define z_n = \log(\sum_p^M e^{f_p(x[n])}) and Q_{ij} = K(x_i, x_j). Then (22) and (23) transform to

\sum_l Q_{nl} \lambda_k^l - \log[y_k[n] - \lambda_k^n/C] + b_k - z_n = 0 \qquad (25)

which corresponds to the first-order conditions of the convex dual functional

H_d = \sum_k^M \Big[ \frac{1}{2} \sum_n^N \sum_l^N \lambda_k^n Q_{nl} \lambda_k^l + C \sum_n^N (y_k[n] - \lambda_k^n/C) \log(y_k[n] - \lambda_k^n/C) \Big] \qquad (26)

under the constraints

\sum_n \lambda_k^n = 0 \qquad (27)

\sum_k \lambda_k^n = 0 \qquad (28)

\lambda_k^n \leq C\, y_k[n] \qquad (29)

where b_k and z_n serve as Lagrange parameters for the equality constraints (27) and (28).
Color Texture-Based Object Detection: An Application to License Plate Localization Kwang In Kim1, Keechul Jung2, and Jin Hyung Kim1 1
Artificial Intelligence Lab CS Department, Korea Advanced Institute of Science and Technology Taejon, 305-701, Korea {kimki,jkim}@ai.kaist.ac.kr 2 Pattern Recognition and Image Processing Lab CS and Engineering Department, Michigan State University East Lansing, MI 48824-1226, USA
[email protected] Abstract. This paper presents a novel color texture-based method for object detection in images. To demonstrate our technique, a vehicle license plate (LP) localization system is developed. A support vector machine (SVM) is used to analyze the color textural properties of LPs. No external feature extraction module is used, rather the color values of the raw pixels that make up the color textural pattern are fed directly to the SVM, which works well even in high-dimensional spaces. Next, LP regions are identified by applying a continuously adaptive meanshift algorithm (CAMShift) to the results of the color texture analysis. The combination of CAMShift and SVMs produces not only robust and but also efficient LP detection as time-consuming color texture analyses for less relevant pixels are restricted, leaving only a small part of the input image to be analyzed.
1 Introduction

The detection of objects in a complex background is a well-studied but still unresolved problem. This paper presents a generic framework for object detection based on color texture. To demonstrate our technique, we have developed a Korean vehicle license plate (LP) localization system that locates the bounding boxes of LPs with arbitrary size and perspective under moderate changes in illumination. We stress that the underlying technique is fairly general and can accordingly also be used for detecting objects in other problem domains, where the object of interest may not be perfectly planar or rigid. The following presents the challenges of the LP detection problem and a brief overview of previous related work, along with the objective of the present study.
1.1 License Plate Detection Problem
Detecting the LP of a vehicle is of great interest because it is usually the first step in an automatic LP recognition system, whose possible applications include traffic surveillance systems, automated parking lots, etc. A Korean LP is a rectangular plate with two rows of white characters embossed on a green background.1 The problem of automatic LP detection is challenging, as LPs can have significantly variable appearances in an image:
- Variations in shape due to the distortion of plates and differences in the characters embossed on the plates (Fig. 1a)
- Variations in color due to similar but definitely different surface reflectance properties, changes in illumination conditions, haze, dust on the plate, blurring during image acquisition, etc. (Fig. 1a)
- Size variations and perspective deformation caused by a change of sensor (camera) placement with respect to vehicles (Fig. 1b)
Fig. 1. Examples of LP images with different zoom, perspective, and other imaging conditions
1.2 Previous Work
A number of approaches have been proposed for developing LP detection systems. For example, color (gray-level)-based methods utilize the fact that LPs (when disregarding the embossed characters) often show a unique and homogeneous color (gray level) in images. These methods segment an input image according to color (gray-level) homogeneity and analyze the color or shape of each segment. Kim, et al. [1] adopted

1 In fact, there are two more color configurations for Korean LPs (deep blue characters on a yellow background: business-purpose vehicles; white characters on an orange background: industrial vehicles). However, we are interested in that of the most popular kind of vehicle (white characters on a green background: personal vehicles).
genetic algorithms for color segmentation and searched for green rectangular regions as LPs. To make the system insensitive to noise and variations in illumination conditions, they encouraged consistency of the labeling between neighboring pixels during color segmentation. Lee, et al. [2] utilized a neural network (NN) to estimate the surface color of LPs from given samples of LP images. All the pixels in the image are filtered by the NN and a greenness value is calculated for each pixel; the LP region is then identified by verifying green rectangular regions using structural features. Crucial to the success of color (or gray-level)-based methods is the color (gray-level) segmentation stage; however, the solutions currently available do not provide a high degree of accuracy in natural scenes. Edge-based methods, on the other hand, are based on the observation that the characters embossed on LPs contrast with their background (the plate region) in gray level, so LPs are found by searching for regions with such high contrast. Draghici [3] searched for regions with high edge magnitude and verified them by examining the presence of rectangular boundaries. In [4], Gao and Zhou computed the gradient magnitude and its local variance in an image; regions with high edge magnitude and high edge variance are then identified as LP regions. Although efficient and effective for simple images, edge-based methods can hardly be applied to complex images, since in this case background regions can also show high edge magnitude or variance. With the assumption that an LP region consists of dark characters on a light background, Cui and Huang [21] performed spatial thresholding on the input image based on a Markov random field (MRF) and detected characters (LPs) according to the spatial edge variances. Similarly, Naito, et al. [6] performed adaptive thresholding on the image, segmented the resulting binary image using prior knowledge of the character size in LPs, and detected strings of characters based on the geometrical properties (character arrangement) of LPs. While the reported results on clean images were promising, these methods may degrade when LPs are partially shaded or stained, as shown in Fig. 1a, since in these cases binarization may not correctly separate the characters from the background. Another type of approach stems from the well-known methods of (color) texture analysis. In [5], Park, et al. presented an LP detection method based on color texture. They adopted an NN to analyze the color textural properties of horizontal and vertical cross-sections of LPs in an image and performed projection profile analysis on the classification result to generate LP bounding boxes. Brugge, et al. [11] utilized discrete-time cellular NNs (DT-CNNs) for analyzing the textural properties of LPs; they also attempted to combine a texture-based method with an edge-based method. Texture-based methods are known to perform well even with noisy or degraded LPs and are rather insensitive to variations in illumination conditions; however, they are most often time consuming, as texture classification is inherently computationally expensive.
1.3 Overview of Present Work
Crucial to the success of color texture-based LP detection are the following: the construction of (1) a classifier that can discriminate between the color textures associated with different classes (LP and non-LP), and (2) an LP bounding box generation module that
operates on the classification results obtained from (1). Accordingly, a color texture-based LP detection problem can thus be divided into two sub-problems: classification and bounding box generation. Recently, a large number of techniques for analyzing color texture have been proposed [7, 8]. In this paper, we focus our attention on a support vector machine (SVM)-based approach that was introduced in [9] for texture classification. SVMs are a natural choice because of their robustness even when training examples are scarce. The previous success of SVMs in texture classification [9] and other related problems [10, 13] also provided further motivation to use SVMs as a classifier for identifying LP regions. In addition to robustness, SVMs have another important advantage: since they work well even in high-dimensional spaces, no external feature extractor is required to reduce the dimensionality of the pattern, thereby eliminating the need for a time-consuming feature extraction stage. In fact, SVMs can efficiently extract features within their own architecture using kernel functions [9]. After the classification, an LP score image is generated in which each pixel represents the possibility of the corresponding pixel in the input image being part of a plate region. The LP bounding boxes within the LP score image are then identified by applying the continuously adaptive meanshift algorithm (CAMShift), which has already demonstrated its effectiveness in a face detection application [12]. The combination of CAMShift and SVMs produces robust and efficient LP detection by restricting the color texture classification of less relevant pixels. In addition to exhibiting better detection performance than existing techniques, the proposed method has a short processing time (an average of 1.4 seconds for 320x240-sized images).
Fig. 2. Top-level process of the LP detection system: an input image pyramid undergoes color texture classification to form a classification pyramid, the classifications are fused to the original scale, and bounding box generation on the fused classification result yields the final detection result
2 System

The proposed method poses LP detection as a color texture classification problem where problem-specific knowledge is available prior to classification. Since this knowledge (the number and type of color textures) is often available in the form of example patterns, the classification can be supervised. An SVM as a trainable classifier
is adopted for this task. Specifically, the system uses a small window to scan the input image and classifies the pixel located at the center of the window as plate or non-plate (background) by analyzing its color and texture properties with an SVM.2 To facilitate the detection of plates at different scales, a pyramid of images is generated from the original image by gradually changing the resolution at each level. The classification results are hypothesized at each level and then fused to the original scale. To reduce the processing time, CAMShift is adopted as a mechanism for automatically selecting regions of interest (ROIs); the system then locates the bounding boxes of LPs by analyzing only these ROIs. Fig. 2 summarizes the LP detection process. The rest of this section is organized as follows: Section 2.1 describes the use of SVMs for color texture classification. Next, Section 2.2 discusses the fusion of color texture analyses at different scales, while Section 2.3 outlines the LP detection process based on CAMShift.
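The sketch below illustrates this multi-scale scan (the scale factor, window size, step, and the dummy classifier are our own illustrative choices, not values from the paper):

import numpy as np

def pyramid(image, factor=0.8, min_side=32):
    levels = [image]
    while min(levels[-1].shape[:2]) * factor >= min_side:
        h, w = levels[-1].shape[:2]
        nh, nw = int(h * factor), int(w * factor)
        ys = (np.arange(nh) / factor).astype(int)   # nearest-neighbour resampling
        xs = (np.arange(nw) / factor).astype(int)
        levels.append(levels[-1][ys][:, xs])
    return levels

def scan(level, classify, M=11, step=4):
    half, hits = M // 2, []
    for y in range(half, level.shape[0] - half, step):
        for x in range(half, level.shape[1] - half, step):
            window = level[y - half:y + half + 1, x - half:x + half + 1]
            if classify(window):                    # SVM decision for the center pixel
                hits.append((y, x))
    return hits

levels = pyramid(np.random.rand(240, 320, 3))
hits = scan(levels[0], classify=lambda w: w.mean() > 0.6)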
Data Representation and Classification
One of the simplest ways to characterize the variability in a color texture pattern is by noting the color values (color coordinates in a color space, e.g., RGB or HSI) of the raw pixels. This set of color values then becomes the feature set on which the classification is based. An important advantage of this approach is the speed with which images can be processed, since no features need to be calculated. Furthermore, this approach releases the system developer from the laborious feature design task. Yet, the main disadvantage is the large size of the feature vector. Accordingly, a classifier may be required to generalize the patterns in a high-dimensional space. In this case, it is important to keep the capacity of the classifier as small as possible, since classifiers with a larger capacity are inclined to store the specific details of a training set and can show poor performance on a test set that differs in these details. SVMs provide a suitable means of constructing such classifiers, since they simultaneously minimize a bound on the capacity of the classifier and the training error [13, 14]. Furthermore, SVMs incorporate feature extractors and can use their nonlinearly mapped input patterns as features for classification [9]. Therefore, it may be more advantageous to allow an SVM to extract features directly from the pixels rather than forcing it to base the features on a user-defined feature set. The architecture of an SVM color texture classifier involves three layers with entirely different roles. The input layer is made up of source nodes that connect the SVM to its environment. Its activation comes from the M×M (typically 11×11) window in the input image. However, instead of using all the pixels in the window, a configuration for autoregressive features (shaded pixels in Fig. 2) is used. This reduces the size of the feature vector (from 3×M² to 3×(4M−3)) and results in improved generalization performance and classification speed. The hidden layer applies a nonlinear mapping Φ from the input space to the feature space F and
2 It is assumed that the number of texture classes is fixed at 2 for the purpose of object detection. While it is sometimes convenient to further divide the class of objects of interest into more than two classes when the intra-class variation is large, we assume that this two-class classification can be generalized to a multi-class classification.
computes the dot product between its input and the support vectors (SVs) [13]. In practice, these two operations are performed in a single step by introducing the kernel function k, which is defined as the dot product of two mapped patterns: k(x, y) = Φ(x)·Φ(y). Accordingly, various mappings (feature extractions) Φ can be indirectly induced by selecting a proper kernel k. One feature extraction is achieved by taking the p-th order correlations among the entries x_i of an input vector x [14]. If x represents a pattern of just pixel values, this amounts to mapping the input space into the space of p-th order products (monomials) of input pixels. It should be noted that the direct computation of this feature is not easy even for moderate-sized problems because of the extensive computation required. However, the introduction of a polynomial kernel facilitates work in this feature space, as the polynomial kernel with degree p, k(x, y) = (x·y)^p, corresponds to the dot product of the feature vectors extracted by the monomial feature extractor C_p [14]:

(C_p(x)·C_p(y)) = Σ_{i1,…,ip=1..N} x_{i1}·…·x_{ip} · y_{i1}·…·y_{ip} = (Σ_{i=1..N} x_i·y_i)^p = (x·y)^p.

The degree p is empirically determined to be 3. The size of the hidden layer m is determined as the number of SVs identified during the training phase. The sign of the output y, obtained by weighting the activation of the hidden layer, then represents the class of the central pixel in the input window. For training, +1 was assigned to the LP class and −1 to the non-LP class. Accordingly, if the SVM output for a pixel is positive, it is classified as LP.
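To make this data representation and classifier concrete, the following Python sketch builds the reduced feature vector from an M×M color window and evaluates a degree-3 polynomial-kernel SVM output. The exact shaded-pixel configuration of Fig. 2 is not spelled out in the text, so the center row, center column, and the two diagonals are assumed here (for odd M this yields exactly the 4M−3 pixels quoted above); the support vectors, weights, and bias are likewise placeholders.

import numpy as np

def window_features(window):
    # window: M x M x 3 array of color values (M odd, e.g. an 11x11 RGB window).
    # Assumed configuration: center row, center column and both diagonals.
    M = window.shape[0]
    c = M // 2
    picks = sorted({(c, j) for j in range(M)} |          # center row
                   {(i, c) for i in range(M)} |          # center column
                   {(i, i) for i in range(M)} |          # main diagonal
                   {(i, M - 1 - i) for i in range(M)})   # anti-diagonal
    assert len(picks) == 4 * M - 3
    return np.concatenate([window[i, j] for (i, j) in picks])   # length 3*(4M-3)

def svm_output(x, support_vectors, alpha_y, b, p=3):
    # f(x) = sum_i y_i*alpha_i*(s_i . x)^p + b with a degree-3 polynomial kernel
    return float(np.dot(alpha_y, np.dot(support_vectors, x) ** p) + b)

A pixel is then labeled as plate whenever svm_output(...) is positive, as described above.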
When using a learning-from-examples approach, it is desirable to make the training set as large as possible in order to attain a comprehensive sampling of the input space. However, considering real-world limitations, the size has to be kept moderate. Accordingly, the problem is how to build a comprehensive yet tractable database. For plate patterns, a collection can be made from all the plate-containing images available. However, collecting non-plate patterns is more difficult, as practically any image can serve as a valid training example. From among these patterns a 'representative' set of non-plate patterns should be selected. A bootstrap method recommended by Sung and Poggio [15] was adopted for this purpose. The idea is that some of the non-plate training patterns are collected during training rather than before training: a partially trained SVM is applied to images that do not contain LPs, and the patterns with a positive output are added to the training set as non-plate patterns. This process iterates until no more patterns are added to the training set. However, a training set constructed by bootstrapping, which is moderate (about 100,000 patterns in preliminary experiments) for training conventional learning systems such as NNs, is often too large for training SVMs: when the number of training patterns is l, training an SVM requires O(l²) memory, which grows prohibitively with l (when l > 10,000 the memory requirement grows to more than a hundred gigabytes). Although several methods have been developed to reduce the time and memory complexity of training a large-scale SVM [16], they barely provide practical facilities for training sets larger than 100,000.
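The bootstrap loop itself can be outlined as below; this is only an illustrative sketch, with scan_patterns and the classifier interface (fit and decision) being hypothetical stand-ins for the window scanning and the SVM described above.

import numpy as np

def bootstrap_training(svm, lp_patterns, initial_nonlp, nonlp_images, scan_patterns, max_rounds=20):
    # lp_patterns, initial_nonlp: arrays of feature vectors; nonlp_images contain no plates.
    negatives = initial_nonlp
    for _ in range(max_rounds):
        X = np.vstack([lp_patterns, negatives])
        y = np.hstack([np.ones(len(lp_patterns)), -np.ones(len(negatives))])
        svm.fit(X, y)                                    # partially trained SVM
        false_pos = [p for img in nonlp_images
                       for p in scan_patterns(img)
                       if svm.decision(p) > 0]           # wrongly labeled as plate
        if not false_pos:                                # nothing new to add: stop
            break
        negatives = np.vstack([negatives, np.array(false_pos)])
    return svm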
One straightforward method for training classifiers on such a large training set is boosting by filtering. As a type of boosting technique, it constructs a strong classifier based on a set of weak classifiers [17].3 The method involves training several weak classifiers, which differ from each other in that they are trained on examples with different characteristics, and arbitrating the answers of the trained weak classifiers to make a strong decision. The training set for each classifier is iteratively obtained by utilizing the existing classifiers to filter a new training set. For a detailed description of boosting by filtering, readers are referred to [17]. A slight simplification is utilized in the current work: given a large training set, the first weak classifier is trained on a moderately sized subset of the training set. This classifier is then used to filter another set of examples to train the second classifier: those patterns for which the answers of the first classifier are wrong are collected. Next, the third training set is generated using the previous two filters (weak classifiers): those patterns for which the answers of both previous classifiers are wrong are collected. The procedure is iterated until no more patterns are collected. After the procedure terminates, the answer for an unknown input pattern is obtained by voting among the classifiers. Fig. 3 summarizes the training process, while Fig. 4 shows examples of training images.

1. Create the initial training set N1 that includes the complete set of LP class patterns NL and a partial set of non-plate patterns NNL1.
   Train SVM E1 on N1.
   i = 2
2. Do {
   • Create training set Ni that includes NL and a random selection of 50% of the patterns from NNLi-1.
     Create an empty set NNLi.
   • Do {
       Exhaustively scan an image which contains no LPs with E1, ..., Ei-1 and add patterns whose E1, ..., Ei-1 decisions are all LP class to NNLi.
     } Until (|NNLi| ≥ 10,000 or no more images are available)
   • Add NNLi to Ni.
     Train SVM Ei on Ni.
     i = i + 1
   } Until (no more images are available)

Fig. 3. Classifier training process
3 Degrees of 'strong' and 'weak' are measured with respect to training errors.
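A runnable approximation of the Fig. 3 procedure and of the final voting is sketched below with scikit-learn's SVC as the weak learner. It is a simplified stand-in rather than the actual implementation: nonlp_pattern_stream is a hypothetical generator over patterns scanned from plate-free images, and the two nested loops of Fig. 3 are collapsed into one stopping rule.

import numpy as np
from sklearn.svm import SVC

def train_filtered_ensemble(X_lp, X_nonlp, nonlp_pattern_stream, limit=10000):
    experts = []
    while True:
        X = np.vstack([X_lp, X_nonlp])
        y = np.hstack([np.ones(len(X_lp)), -np.ones(len(X_nonlp))])
        experts.append(SVC(kernel='poly', degree=3).fit(X, y))
        # keep only patterns that every expert trained so far still calls "plate"
        hard = [p for p in nonlp_pattern_stream()
                if all(e.predict(p.reshape(1, -1))[0] > 0 for e in experts)]
        if not hard:
            break
        keep = np.random.rand(len(X_nonlp)) < 0.5        # 50% of the previous non-plate set
        X_nonlp = np.vstack([X_nonlp[keep], np.array(hard[:limit])])
    return experts

def vote(experts, x):
    # final decision by voting among the trained weak classifiers
    return 1 if sum(e.predict(x.reshape(1, -1))[0] for e in experts) > 0 else -1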
Fig. 4. Example images used in training: (a) LP images, (b) background images, (c) background images used in the bootstrap, (d) classification results for the images in (c) produced by E1 (white: plate, black: background); pixels showing green color or high contrast were classified as LPs, and the corresponding non-plate training patterns were tagged
2.2 Fusion of Classification at Different Scales

As outlined in Section 2.1, we do not use any external feature extractor for color texture analysis. Instead, the SVM receives the color values of a small input window as the pattern for classification. While proven to be general and effective [9], this approach has an important shortcoming: it lacks any explicit mechanism for handling variations in texture scale. This could be dealt with by constructing a sufficiently large set of training patterns sampled at various scales. However, it is often more efficient to cast the problem into a form where the object of interest is represented in a size-invariant manner. A pyramidal approach is adopted for this purpose. In this method, the classifier is constructed at a single scale. Then, in the classification stage, a pyramid of images is constructed by gradually changing the resolution of the input image, and the classification is performed at each pyramid level. A mechanism is then required to fuse the resulting pyramid of classification results (classification hypotheses at each level) into a single-scale classification. We use a method similar to the NN-based arbitration which has already shown its effectiveness in a face detection application [18]. The basic idea is to train a new classifier as an arbitrator that collects the outputs of the classifiers at each level. For each location of interest (i, j) in the original image scale, the arbitrator examines the corresponding locations in each level of the output pyramid, together with their spatial neighborhood (i', j') (|i'−i| ≤ N, |j'−j| ≤ N, where N defines the neighborhood relation and is empirically determined to be 3). A linear SVM is used as the arbitrator. It receives the normalized outputs of the color texture classifiers in a 3×3 window around the location of interest at each scale. Normalization is simply done by applying a sigmoidal function to the output of the SVM.
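A minimal sketch of this fusion step follows. It assumes, beyond what the text states, that each level's raw SVM outputs have already been resampled to the original image size, that the neighborhood is the 3×3 window mentioned above, and that the arbitrator exposes an sklearn-style decision_function.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fuse_pyramid(level_outputs, arbitrator, n=1):
    # level_outputs: list of 2-D arrays of raw SVM outputs, one per pyramid level,
    # all resampled to the original image size; n=1 gives a 3x3 neighborhood.
    levels = [sigmoid(o) for o in level_outputs]          # normalize to [0, 1]
    H, W = levels[0].shape
    fused = np.zeros((H, W))
    for i in range(n, H - n):
        for j in range(n, W - n):
            feats = np.concatenate([lvl[i - n:i + n + 1, j - n:j + n + 1].ravel()
                                    for lvl in levels])
            fused[i, j] = arbitrator.decision_function(feats.reshape(1, -1))[0]
    return fused                                          # single-scale LP score image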
2.3 Plate Bounding Box Generation Using CAMShift
In most object detection applications, the detection process needs to be fast and efficient so that objects can be detected in real time while consuming as few system resources as possible. However, many (color) texture-based object detection methods suffer from the considerable computation involved. The majority of this computation lies in texture classification; therefore, if the number of calls to the texture classifier can be reduced, computation time is saved.4 The proposed speed-up approach achieves this based on the following two observations: (1) LPs (and often other classes of objects) form smooth boundaries, and (2) the LP size often does not dominate the image. The first observation indicates that LP regions appear as aggregates of pixels showing LP-specific characteristics (color texture) in the image. This validates the use of a coarse-to-fine approach for local feature-based object detection: first, the ROI related to a possible object region is selected based on a coarse level of classification (sub-sampled classification of image pixels); then only the pixels in the ROI are classified at the finer level. This can significantly reduce the processing time when the object size does not dominate the image size (as supported by the second observation). It should be noted that the prerequisite for this approach is that the object of interest must be characterized by local features (e.g., color, texture, color texture). Accordingly, features representing global characteristics of an object, such as the contour or geometric moments, may not be directly applicable. The implementation is borrowed from well-developed face detection methodologies. CAMShift was originally developed by Bradski [12] to detect and track faces in a video stream. As a modification of the mean shift algorithm, which climbs the gradient of a probability distribution to find its dominant mode, CAMShift locates faces by seeking the modes of a flesh probability distribution.5 The distribution is defined as a two-dimensional image {y_{i,j}}, i = 1,…,IW, j = 1,…,IH (IW: image width, IH: image height), whose entry y_{i,j} represents the probability of the pixel x_{i,j} in the original image {x_{i,j}} being part of a face, and is obtained by matching x_{i,j} with a facial color model. Then, starting from an initial search window, CAMShift iteratively changes the location and size of the window to fit its contents (the flesh probability distribution within the window) during the search process. More specifically, the center of the window is moved to the mean of the local probability distribution, and the size of the window varies with respect to the sum of the flesh probabilities within the window, until no further considerable movement is observed. For the purpose of locating the facial bounding box, the shape of the search window is set to be a rectangle [12]. Then, after the iteration finishes, the finalized search window itself represents the
4 One simple method is shifting, which classifies pixels at a fixed interval and interpolates the pixels located between them [20]. While significantly reducing the processing time, this technique trades precision for speed and accordingly is not considered in this paper.
5 Strictly speaking, it is not a probability distribution, because its entries do not sum to 1. However, this is generally not a problem for the purpose of peak (mode) detection.
bounding box of the face in the image. For a more detailed description of CAMShift, readers are referred to [12]. The proposed method simply replaces the flesh probability y_{i,j} with an LP score z_{i,j} obtained by performing color texture analysis on the input x_{i,j}, and operates CAMShift on {z_{i,j}}, i = 1,…,IW, j = 1,…,IH. Although the output of the classifier (the scale arbitrator: a linear SVM) is not a probability (it is not even bounded), when it is scaled into the interval [0, 1] using a sigmoidal activation function, the detection results with CAMShift are acceptable. Henceforth, for convenience, these scaled classification results for the pixels within a selected window W will be referred to as the 'probability distribution within W.' As a gradient ascent algorithm, CAMShift can become stuck in local optima. To resolve this, it is run in parallel from different initial window positions. This also facilitates the detection of multiple objects in an image. One important advantage of using CAMShift for color texture-based object detection is that CAMShift does not require all the pixels in the input image to be classified. Since CAMShift utilizes the local gradient, only the probability distribution (or classification result) within the window is needed for each iteration. Furthermore, since the window size varies in proportion to the probabilities within the window, search windows initially located outside the LP region tend to diminish, while windows located within the LP region grow. This is effectively a mechanism for automatic selection of the ROI. The parameters controlled in CAMShift at iteration t are the position x(t), y(t), width w(t), height h(t), and orientation θ(t) of the search window. x(t) and y(t) can be simply computed using moments: x = M10/M00 and y = M01/M00,
(1)
where Mab is the (a+b)-th order moment, defined by Mab(W) = Σ_{(i,j)∈W} i^a j^b z_{i,j}.
w(t) and h(t) are estimated by considering the first two eigenvectors (the two major axes) and their corresponding eigenvalues of the probability distribution within the window. These variables can be calculated using moments up to the second order [12]:

w = 2·sqrt(((a + c) + sqrt(b² + (a − c)²)) / 2), h = 2·sqrt(((a + c) − sqrt(b² + (a − c)²)) / 2),

(2)

where the intermediate variables a, b, and c are a = M20/M00 − x², b = 2(M11/M00 − x·y), and c = M02/M00 − y².
Similarly, the orientation θ can be estimated by considering the first eigenvector and the corresponding eigenvalue of the probability distribution, and calculated using moments up to the second order as follows:

θ = arctan(b / (a − c)) / 2.

(3)
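The following Python sketch transcribes eqns. (1)-(3) directly; z is assumed to hold the scaled classifier outputs within the current window, and arctan2 is used only to avoid a division by zero when a = c.

import numpy as np

def window_parameters(z):
    # z: 2-D array of LP scores (the 'probability distribution') inside the window
    ii, jj = np.indices(z.shape)
    M00 = z.sum()
    x = (ii * z).sum() / M00                      # eqn. (1)
    y = (jj * z).sum() / M00
    a = (ii ** 2 * z).sum() / M00 - x ** 2
    b = 2 * ((ii * jj * z).sum() / M00 - x * y)
    c = (jj ** 2 * z).sum() / M00 - y ** 2
    root = np.sqrt(b ** 2 + (a - c) ** 2)
    w = 2 * np.sqrt(((a + c) + root) / 2)         # eqn. (2)
    h = 2 * np.sqrt(((a + c) - root) / 2)
    theta = 0.5 * np.arctan2(b, a - c)            # eqn. (3)
    return x, y, w, h, theta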
Note that the moments involve sums over all the pixels, and so are robust against small changes in individual elements. As such, a robust estimation of the parameters is possible even in the presence of noise (misclassifications) in the window. Nonetheless, using equation (2) to estimate the window size is not directly suitable for the purpose of LP detection. When the whole LP region is completely contained within the search window, equ. (2) produces exactly what is needed. However, when the actual LP region is larger than the current (iteration t) window, the window size needs to be increased beyond the estimate of equ. (2) so as to explore a potentially larger object area in the next iteration (t+1). For this, w and h are set to be slightly larger than the values estimated by equ. (2):

w = αw · 2·sqrt(((a + c) + sqrt(b² + (a − c)²)) / 2), h = αh · 2·sqrt(((a + c) − sqrt(b² + (a − c)²)) / 2),

(4)
where αw (≥1) and αh (≥1) are constants, both determined to be 1.5. The new estimate enables the window to grow as long as the major content of the window is LP pixels. When the iteration terminates, the final window size is re-estimated using equ. (2). The termination condition is that for each parameter, the difference between two consecutive iterations (t+1) and (t), i.e., x(t+1)−x(t), y(t+1)−y(t), w(t+1)−w(t), h(t+1)−h(t), and θ(t+1)−θ(t), is less than the predefined thresholds Tx, Ty, Tw, Th, and Tθ, respectively. During the CAMShift iteration, search windows can overlap each other. In this case, they are examined to determine whether they represent a single object or multiple objects. This is performed by checking the degree of overlap between two windows, measured as the size of the overlap divided by the size of each window. Supposing that Dα and Dβ are the areas covered by two windows α and β, the degree of overlap between α and β is defined as

Λ(α, β) = max( size(Dα ∩ Dβ) / size(Dα), size(Dα ∩ Dβ) / size(Dβ) ),
where size(λ) counts the number of pixels within λ. Then, α and β are determined to be a single object if To ≤ Λ(α, β), and multiple objects otherwise, where To is a threshold set at 0.5. During the CAMShift iteration, every pair of overlapping windows is checked, and pairs identified as a single object are merged to form a single large window encompassing them both. After the CAMShift iteration finishes, any small windows are eliminated, as they are usually false detections.
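As an illustration, the overlap test can be written as below, representing each window simply as the set of pixel coordinates it covers; an equivalent rectangle-based computation could be used instead.

def overlap_degree(Da, Db):
    # Da, Db: sets of (i, j) pixel coordinates covered by two search windows
    inter = len(Da & Db)
    return max(inter / len(Da), inter / len(Db))

def merge_if_single(Da, Db, To=0.5):
    # windows judged to contain a single object are merged into one window
    # encompassing both; otherwise they are kept as separate objects
    if overlap_degree(Da, Db) >= To:
        return [Da | Db]
    return [Da, Db]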
1. Set up the initial locations and sizes of the search windows Ws in the image.
   For each W, repeat Steps 2 to 4 until the termination condition is satisfied.
2. Generate the LP probability distribution within W using the SVM.
3. Estimate the parameters (location, size, and orientation) of W using eqns. (1), (3), and (4).
4. Modify W according to the estimated parameters.
5. Re-estimate the sizes of the Ws using equ. (2).
6. Output the remaining Ws as LP bounding boxes.

Fig. 5. CAMShift for LP detection
Fig. 5 summarizes the operation of CAMShift for LP detection. It should be noted that, in the case of overlapping windows, the classification results are cached so that the classification of a particular pixel is performed only once for an entire image. Fig. 6 shows an example of LP detection. A considerable number of pixels (91.3% of all the pixels in the image) are excluded from the color texture analysis in the given image.
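The overall loop of Fig. 5, including the caching just mentioned, can be outlined as follows. This is a structural sketch only: classify_pixel, scores_in_window, update_window, converged, final_size, and merge_and_filter are hypothetical helpers standing in for the SVM call, eqns. (1)-(4), the threshold test, and the merging rules described above.

def detect_plates(image, classify_pixel, initial_windows, max_iter=50):
    cache = {}
    def score(i, j):
        # each pixel is classified at most once, even when windows overlap
        if (i, j) not in cache:
            cache[(i, j)] = classify_pixel(image, i, j)
        return cache[(i, j)]

    boxes = []
    for win in initial_windows:                  # CAMShift run from every seed window
        z = scores_in_window(win, score)         # probability distribution within W
        for _ in range(max_iter):
            new_win = update_window(win, z)      # eqns. (1), (3) and (4)
            if converged(win, new_win):          # thresholds Tx, Ty, Tw, Th, T_theta
                break
            win = new_win
            z = scores_in_window(win, score)
        boxes.append(final_size(win, z))         # re-estimate the size with eqn. (2)
    return merge_and_filter(boxes)               # merge overlaps, drop tiny windows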
Fig. 6. Example of LP detection using CAMShift: (a) input image (640×480), (b) initial window configuration for the CAMShift iteration (5×5-sized windows located at a regular interval of (25, 25)), (c) color texture classification result (white: LP region, gray: background region), and (d) LP detection result
3 Experimental Results

The proposed method was tested using an LP image database of 450 images, of which 200 were images of stationary vehicles taken in parking lots and 150 were taken of vehicles traveling on a road. The images included examples of LPs varying in size, orientation, perspective, illumination conditions, etc. The resolution of the images ranged from 240×320 to 1024×1024, while the sizes of the LPs in the images ranged from about 79×38 to 390×185. One hundred randomly selected images from the database were used for collecting the initial samples for training the SVMs, and the remaining 350 images were used for testing. Non-LP training examples for bootstrapping the SVMs were also collected from 75 images that contained no LPs and were distinct from the LP image database.
For each detected LP region, the system drew a rectangle encompassing that region in the input image. The initial locations and sizes of the search windows depend on the application. A good selection of initial search windows should be dense and large enough not to miss LPs located between the windows and to be tolerant of noise (classification errors), yet also moderately sparse and small enough to ensure fast processing. The current study found that 5×5-sized windows located at a regular interval of (25, 25) were sufficient to detect LPs. Variations in the parameter values of CAMShift (for the termination condition) did not significantly affect the detection results, except when they were so large that the search process converged prematurely. Therefore, based on various experiments with the training images, the threshold values were set to Tx = Ty = 3 pixels, Tw = Th = 2 pixels, and Tθ = 1°. The slant angle θ of a finalized search window was set to 0° if its absolute value was less than 3°, and to 90° if it was greater than 87° and less than 93°. This meant that small errors occurring in the orientation estimation process would not significantly affect the detection of horizontally and vertically oriented LPs. Although these parameters were not carefully tuned, the results were acceptable on both the training and test images, as shown later. The systems were coded in C++ and run on a Pentium III CPU with an 800 MHz clock speed. The time spent processing an image depended on the image size and the number and size of LPs in the image. Most of the time was spent in the classification stage. For the 320×240-sized images, an average of 12.4 seconds was needed to classify all the pixels in the image. However, when the classification was restricted to just the pixels located within the search windows involved in CAMShift, the entire detection process took only an average of 1.4 seconds. To quantitatively evaluate the performance, an evaluation criterion needs to be established for the detections produced automatically by the system. A set of ground truths GTs is created by manually constructing bounding boxes around each LP in the images. Then, the outputs As produced by the system are compared with the GTs. This comparison is made by checking the similarity, defined as the size of the overlap divided by the size of the union. Formally, the similarity between two detections α and β is defined as
(
)
(
)
Γ(α , β ) = size Dα ∩ D β size Dα ∪ D β . Actually, this is the Tanimoto similarity [19] between two detections when they are represented as IW×IH vectors whose elements are 1 if the corresponding pixels are contained within the detection areas and 0, otherwise. As such, the output of the comparison is a binary value of {correct, incorrect} , which is determined as correct, if T< Γ(GT , A) , incorrect, otherwise, where T is the threshold, which is set at 0.8.
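For axis-aligned rectangles (rotated detections would require a polygon intersection instead), the criterion can be computed as in this sketch:

def tanimoto(box_a, box_b):
    # boxes given as (x0, y0, x1, y1) with x0 < x1 and y0 < y1
    iw = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)     # |A ∩ B| / |A ∪ B|

def is_correct(gt_box, detected_box, T=0.8):
    return tanimoto(gt_box, detected_box) > T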
The incorrect detections are further classified into {false detection, miss}, defined by: false detection, if size(D_A) − size(D_GT) > 0; miss, otherwise. Two metrics are used to summarize the detection results:

miss rate (%) = (# misses / # LPs) × 100,
false detection rate (%) = (# false detections / # LPs) × 100.
The proposed system achieved a miss rate of 2.6% with a false detection rate of 9.4%. Almost all of the plates that the system missed were blurred during the imaging process, stained with dust, or strongly reflecting the sunshine. In addition, many of the false detections were image patches with green and white textures that looked like parts of LPs. Figs. 7 and 8 show examples of LP detection without and with mistakes, respectively.
Fig. 7. LP detection examples: the system exhibited a certain degree of tolerance to pose variations ((c), (h), and (l)), variations in illumination conditions ((a) compared with (d) and (e)), and blurring ((j) and (k)). Although (d) shows a fairly strong reflection of sunshine, and accordingly color properties (or irradiance) quite different from the surface reflectance, the LP detector correctly located the plate. While green leaves were originally classified as LPs before the bootstrap (Fig. 4(c)), they were correctly classified as background and were not detected as LPs ((a) and (l))
To gain a better understanding of the relevance of the results obtained with the proposed method, benchmark comparisons with other methods were carried out. A set of experiments was performed using a color-based method [2] and a color texture-based method [5], which are described in Section 1.2. Since both methods were developed to detect horizontally aligned LPs without perspective, the comparisons were made on 82 images containing upright frontal LPs. Table 1 summarizes the performances of the different systems: A (SVM+CAMShift) is the proposed method, B (SVM+profile analysis) used an SVM for color texture classification and profile analysis [5] for bounding box generation, C (NN+CAMShift) used an NN adopted from [5] for classification and CAMShift for bounding box generation, and D (NN+profile analysis) and E (color-based) are the methods described in [5] and [2], respectively. A and B exhibited similar performances and were far better than the other three methods, while A was much faster than B. C and D also showed similar performances, with C being faster than D; both were more sensitive to changes in illumination conditions than A and B. While showing the highest processing speed, E produced the highest miss rate, which mainly stemmed from poor detection of LPs reflecting sunlight or under strong illumination shadows, which often occurred in natural scenes.
Fig. 8. Examples of LP detection with mistakes: LPs were missed due to bad illumination (a) and excessive blurring (b). False detections are present in (c) and (d), where white characters were written on a green background and complex color patterns occurred in glass, respectively
Table 1. Performances of various systems

System   Miss rate   False detection rate   Avg. proc. time per image
A        3.7%        7.3%                   1.28 sec.
B        3.7%        9.8%                   9.03 sec.
C        8.5%        13.4%                  0.69 sec.
D        7.3%        17.1%                  3.92 sec.
E        20.7%       26.8%                  0.35 sec.
4 Conclusions

A color texture-based method for detecting LPs in images was presented. The system analyzes the color and textural properties of LPs in images using an SVM and locates their bounding boxes using CAMShift. The proposed method does not assume a particular size or perspective of the LPs, is relatively insensitive to variations in illumination, facilitates fast LP detection, and was found to produce better performance than several existing techniques. However, the proposed method encountered problems when the image was extremely blurred or quite complex in color. While many objects can be located effectively with a bounding box, there are some objects whose location cannot be fully described by a bounding box alone. When the precise boundaries of such objects are required, a more delicate boundary localization method can be utilized; possible candidates include deformable template models [7]. The object detection problem can then be handled by a fast and effective ROI selection process (SVM+CAMShift) followed by a delicate boundary localization process (such as a deformable template model).
5 References

1. Kim, H. J., Kim, D. W., Kim, S. K., Lee, J. K.: Automatic Recognition of a Car License Plate Using Color Image Processing. Engineering Design and Automation Journal 3 (1997)
2. Lee, E. R., Kim, P. K., Kim, H. J.: Automatic Recognition of a License Plate Using Color. In Proc. International Conference on Image Processing (ICIP) (1994) 301-305
3. Draghici, S.: A Neural Network Based Artificial Vision System for License Plate Recognition. International Journal of Neural Systems 8 (1997) 113-126
4. Gao, D.-S., Zhou, J.: Car License Plates Detection from Complex Scene. In Proc. International Conference on Signal Processing (2000) 1409-1414
5. Park, S. H., Kim, K. I., Jung, K., Kim, H. J.: Locating Car License Plates Using Neural Networks. IEE Electronics Letters 35 (1999) 1475-1477
6. Naito, T., Tsukada, T., Yamada, K., Kozuka, K., Yamamoto, S.: Robust License-Plate Recognition Method for Passing Vehicles under Outside Environment. IEEE Trans. Vehicular Technology 49 (2000) 2309-2319
7. Zhong, Y., Jain, A. K.: Object Localization Using Color, Texture and Shape. Pattern Recognition 33 (2000) 671-684
8. Mirmehdi, M., Petrou, M.: Segmentation of Color Textures. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI) (2000) 142-159
9. Kim, K. I., Jung, K., Park, S. H., Kim, H. J.: Support Vector Machines for Texture Classification. IEEE Trans. PAMI, to appear
10. Osuna, E., Freund, R., Girosi, F.: Training Support Vector Machines: an Application to Face Detection. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (1997) 130-136
11. Brugge, M. H. ter, Stevens, J. H., Nijhuis, J. A. G., Spaanenburg, L.: License Plate Recognition Using DTCNNs. In Proc. IEEE International Workshop on Cellular Neural Networks and their Applications (1998) 212-217
12. Bradski, G. R.: Real Time Face and Object Tracking as a Component of a Perceptual User Interface. In Proc. IEEE Workshop on Applications of Computer Vision (1998) 214-219
13. Vapnik, V.: The Nature of Statistical Learning Theory. Springer-Verlag, NY (1995)
14. Schölkopf, B., Sung, K., Burges, C. J. C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V.: Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers. IEEE Trans. Signal Processing 45 (1997) 2758-2765
15. Sung, K. K., Poggio, T.: Example-based Learning for View-based Human Face Detection. IEEE Trans. PAMI 20 (1998) 39-51
16. Joachims, T.: Making Large-Scale SVM Learning Practical. In Schölkopf, B., Burges, C., Smola, A. (Eds.): Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge (1999) 169-184
17. Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall, NJ (1999)
18. Rowley, H. A., Baluja, S., Kanade, T.: Neural Network-based Face Detection. IEEE Trans. PAMI 20 (1999) 23-37
19. Duda, R. O., Hart, P. E.: Pattern Classification and Scene Analysis. Wiley-Interscience, NY (1973)
20. Li, H., Doermann, D., Kia, O.: Automatic Text Detection and Tracking in Digital Video. IEEE Trans. Image Processing 9 (2000) 147-156
21. Cui, Y., Huang, Q.: Extracting Characters of License Plates from Video Sequences. Machine Vision and Applications 10 (1998) 308-320
Support Vector Machines in Relational Databases

Stefan Rüping

University of Dortmund, 44221 Dortmund, Germany
[email protected] http://www-ai.cs.uni-dortmund.de
Abstract. Today, most of the data in business applications is stored in relational databases. Relational database systems are so popular because they offer solutions to many problems around data storage, such as efficiency, effectiveness, usability, security and multi-user support. To benefit from these advantages in Support Vector Machine (SVM) learning, we develop an SVM implementation that can be run inside a relational database system. Even though this kind of implementation obviously cannot be as efficient as a standalone implementation, it is favorable in situations where requirements other than learning efficiency play an important role.
1 Introduction
There exist many efficient implementations of Vapnik's Support Vector Machine [8] (see for example http://www.kernel-machines.org/ for a list of available Support Vector software). So why would another SVM implementation be of interest? In this paper we aim for an implementation that is better adapted to the needs of the user in real-world applications of knowledge discovery. Today, most of the data in business applications is stored in relational databases or in data warehouses built on top of relational databases. Relational databases are built upon a well-defined theoretical model of how data can be stored and retrieved and can deal with most questions that revolve around data in real-world settings, such as efficiency and effectiveness of storage and queries, security of the data, usability and handling of meta data. On the other hand, available Support Vector software is either implemented as a standalone tool in a programming language like C, or as part of a numerical software package such as Matlab. Of course, it would be easy to export the relevant data from the database, run the SVM software and load the results back into the database, but this approach suffers from various drawbacks:

Usability: Learning algorithms in general cannot be applied independently. Depending on the problem, preprocessing steps have to be taken to clean and transform the data, steps that can be as complex as the final learning task itself [6],[2].
The same preprocessing steps have to be taken in order to apply the result to new examples. In [4], Kietz et al. describe that 50-80% of the effort in real-world applications of knowledge discovery is spent on finding an appropriate pre-processing of the data. They present a meta-data based framework for the re-use of KDD applications that is centered on keeping as much data and as many data operations in the database as possible.

Efficiency for learning: While a standalone SVM application can be expected to be much more efficient than an SVM as a database application, the time that is necessary to transfer the data from the database to the application cannot be neglected. Slow network connections can have a serious impact on the overall runtime.

Efficiency for prediction: The evaluation of the final decision function is relatively easy. Calling an external application to evaluate every new example would be extremely inefficient.

Security: Commercial database management systems offer fine-grained possibilities to control which users can access or modify which data. If the data is exported from the database, expensive additional measures have to be taken to guarantee this level of security.

In this paper, we approach this problem by implementing an SVM that can be run entirely inside a database server. We do this by making use of Java Stored Procedures as the core of the program and of simple SQL statements to compute intermediate results whenever possible.
2 Support Vector Machines

The principles of Support Vector Machines and of statistical learning theory [8] are well known, so we give only a short introduction to the parts that are important in the context of this paper. In particular, we will only discuss Support Vector Machines for classification. See [8] and [1] for a more detailed introduction to SVMs and [7] for an introduction to SVMs for regression. Support Vector Machines try to find a function f(x) = w·x + b that minimizes the expected risk with respect to a loss function L:

R[f] = ∫ L(y, f(x)) dP(y|x) dP(x)    (1)

by minimizing the regularized risk Rreg[f], which is the weighted sum of the empirical risk Remp[f] with respect to the data (xi, yi), i = 1...n, and a complexity term ||w||²:

Rreg[f] = (1/2)||w||² + C·Remp[f].
This optimization problem can be efficiently solved in its dual formulation

W(α) = (1/2) Σ_{i=1..n} Σ_{j=1..n} αi αj yi yj (xi · xj) − Σ_{i=1..n} αi → min    (2)

w.r.t.

Σ_{i=1..n} αi yi = 0    (3)

0 ≤ αi ≤ C.    (4)

The resulting decision function is given by f(x) = Σ_{i=1..n} yi αi (xi · x) + b. It can be shown that the SVM solution depends only on its support vectors {xi | αi ≠ 0}. Support Vector Machines also allow the use of non-linear decision functions via kernel functions, which replace the inner product xi · xj by an inner product in some high-dimensional feature space, K(xi, xj) = Φ(xi) · Φ(xj). Then the decision function becomes f(x) = Σ_{i=1..n} yi αi K(xi, x) + b.
2.1 SVM Implementations
In practical implementations of Support Vector Machines it turns out that solving the quadratic optimization problem (2)-(4) with standard algorithms is not efficient enough, because these algorithms often require that the quadratic matrix K = (K(xi, xj)), 1 ≤ i, j ≤ n, be computed beforehand and stored in main memory. Three tricks can speed up the calculation of the SVM solution significantly.

Working set decomposition: To improve the efficiency of the SVM calculation, Osuna et al. [5] suggest splitting the problem into a sequence of simpler problems by fixing most variables and optimizing only over the rest, the so-called working set. This procedure is iterated until all variables satisfy the optimality conditions of the global problem. These optimality conditions, the Kuhn-Tucker conditions of the quadratic optimization problem (2)-(4), are essentially conditions on the gradient of the target function W(α) and on its Lagrangian multipliers. Joachims [3] proposes an efficient and effective method for selecting this working set.

Shrinking: Joachims also proposes two other improvements to the optimization problem. Usually most variables α lie at their boundaries 0 or C and tend to stay there from very early on in the optimization process. This is the case because the rough location of the decision boundary is usually found very early, while most of the time is spent finding its exact location. Therefore, examples that lie far away from the decision boundary can be spotted easily. This is exploited by the idea of shrinking the optimization problem: variables that are optimal at 0 or C for a certain number of iterations are fixed at that position and not re-examined in further iterations.
Kernel caching: The third trick to improve SVM efficiency involves the caching of kernel values. Both the selection of the working set and the check of the optimality conditions require the computation of the gradient ∇ of W(α). In fact, the computation of the gradient is the most expensive part of the SVM. The i-th component of the gradient is given by ∇i = yi Σ_{j=1..n} yj αj K(xj, xi) − 1. The values si = Σ_{j=1..n} yj αj K(xj, xi) can be computed once and then updated by si = si + yj(α'j − αj)K(xi, xj) whenever a variable changes from αj to α'j. Therefore, whenever a variable is updated, the kernel row Ki· is needed to incrementally update the gradient. As usually only a certain subset of all variables gets into the working set at all, caching these kernel rows can significantly improve performance.
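A sketch of this update (illustrative only, not the Java Stored Procedure implementation; kernel_row is assumed to return one row of the kernel matrix as a NumPy array):

import numpy as np

def set_alpha(j, alpha_new, alpha, y, s, cache, kernel_row):
    # update alpha_j and the cached sums s_i = sum_j y_j alpha_j K(x_j, x_i),
    # fetching the kernel row K(x_j, .) from the cache when it is available
    if j not in cache:
        cache[j] = kernel_row(j)          # expensive: computed at most once per row
    s += y[j] * (alpha_new - alpha[j]) * cache[j]
    alpha[j] = alpha_new

def gradient(y, s):
    # i-th component of the gradient of W(alpha): y_i * s_i - 1
    return y * s - 1.0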
3 An SVM Implementation for Relational Databases

We will now show how an SVM can be implemented on top of a relational database (an Oracle 8.1.6 database was used in this work) such that it meets the following conditions:

1. It runs entirely inside the database, so that guarantees about the consistency as well as the security of the data can be given.
2. It uses no more main memory than is needed for an efficient implementation. In particular, it does not duplicate the complete example set in its memory space.
3. It uses standard interfaces to access the database, so that it is database independent.
4. The evaluation of the decision function on new examples is as easy as possible.
5. It is as efficient as possible.

The first goal is achieved by the use of Java Stored Procedures. Java Stored Procedures allow Java programs to be run inside the database server, where they can directly access the database tables. To achieve the third goal, the JDBC standard is used to send pure SQL queries to the database. In this section, we make the following assumption on the database schema: the examples (xi, yi), i = 1...n, are stored in a single table or view, where the d components of the vectors xi are stored in the columns att_1 ... att_d. The index i is stored in a column index and the classifications are stored in a column y. Note that in principle, all the user needs to do to specify the learning data set is to specify some SQL statement that returns the columns att_1, ..., att_d, index, y.

3.1 A Simple Approach
From the discussion in section 2 it is clear that the only access to the examples' x-values is via the kernel function K. So, as the simplest approach, one could
use any given SVM implementation and replace the call of the function K(xi, xj) by a call of a function K(read_from_database(i), read_from_database(j)). Unfortunately, this approach does not work very well. The reason is that any access to the database is far more expensive than a simple memory access. To make the code more efficient, we need to reduce the number and size of database queries as much as possible.

3.2 Database Kernel Calculation
There is a more efficient way to access the examples: as we only need the value of K(xi, xj), there is no need to read both example vectors from the database if we can read K(xi, xj) directly. Then, instead of 2d numbers, only one number has to be read from the database. This can easily be accomplished in SQL. For example, for the linear kernel K(xi, xj) = xi · xj, the following SQL statement gives the value of K:

select X.att_1 * Y.att_1 + ... + X.att_d * Y.att_d
from EXAMPLES X, EXAMPLES Y
where X.index = <i> and Y.index = <j>

Similarly, a radial basis kernel function K(xi, xj) = exp(−γ||xi − xj||²) can be expressed as the following SQL query:

select exp(-<gamma>*(pow(X.att_1-Y.att_1,2) + ... + pow(X.att_d-Y.att_d,2)))
from EXAMPLES X, EXAMPLES Y
where X.index = <i> and Y.index = <j>

Here <gamma>, <i> and <j> stand for the actual values of γ, i and j. Note that for this query to be efficient, there needs to be a database index on the column index. Most popular kernel functions, e.g. polynomial, neural network or anova kernels, depend on either the inner product or the euclidian distance of the example vectors, so it is possible to give corresponding SQL queries for these kernel functions as well. To demonstrate the effect of this optimization, we give the runtime of this version on two data sets, one linear classification task Pat and one linear regression task Reg. Detailed information on these datasets and how the runtime was measured follows in section (4).

Dataset   Old Version   New Version
Pat       23.81s        13.94s
Reg       1156.26s      676.64s

Comparing these results, we can see that the new version is about 40% faster.
3.3 Kernel Rows
The experiment in the last section shows that there is still need for improvement. The reason for the inefficiency of the last approach is that a lot of time in the database is spent analyzing the query and looking up the data tables. Once the tables are found, calculating the result is relatively easy. This means that a very limiting factor for performance is the number of calls to the database and not so much the size of the data itself. In section (2) we have seen that the kernel values are not accessed randomly, but in terms of kernel rows. So we can optimize the database access if we select the whole kernel row in one query:

select <kernel>, Y.index
from EXAMPLES X, EXAMPLES Y
where X.index = <i>

Here the term <kernel> stands for the SQL term that constructs the kernel value from the attributes, e.g. X.att_1*Y.att_1 + ... + X.att_d*Y.att_d. We also need to get the index of Y to make a kernel row of the result set, as the order in which the results are returned is not defined. From the following table we can see that this optimization reduces the runtime by about 15% to 35%.

Dataset   Old Version   New Version
Pat       13.94s        11.96s
Reg       676.64s       426.66s
3.4 Shrinking
Shrinking has a big effect on runtime, because information on shrinked examples does not need to be updated in further iterations. This means that the only kernel information needed in later iterations is that of the sub-matrix of non-shrinked examples. To get only these kernel entries, the query to select a kernel row can be adapted. What we need to do is to adjust the 'from EXAMPLES Y' part of the kernel SQL statement such that only non-shrinked examples are considered. There are two ways to do this: we could either add a column shrinked to the examples table and do the query

select <kernel>, Y.index
from EXAMPLES X, EXAMPLES Y
where X.index = <i> and Y.shrinked = 'false'
(once again, <kernel> stands for one of the select statements defined in section (3.2)) or we can create a table free_examples that contains only the indices of the non-shrinked examples. Then the kernel query becomes:

select <kernel>, Y.index
from EXAMPLES X, EXAMPLES Y
where X.index = <i> and Y.index in (select index from free_examples)

The advantage of the first alternative is that the query can be done by a simple scan over the examples table with little extra cost and without adding new tables. On the other hand, this alternative requires that the user has the privileges to modify the examples table. This is a serious drawback in any situation where data security is an issue. The second alternative does not suffer from this drawback.

3.5 The Decision Function
To be useful for application in real-world databases, we also need an efficient way to evaluate the SVM decision function f(x) = Σ_{i∈SV} yi αi K(xi, x) + b on new examples. In this section we show that this can be done with pure SQL statements.

Linear Kernel: With the linear kernel we can make use of linearity and write f(x) = Σ_{i∈SV} (yi αi xi · x) + b = (Σ_{i∈SV} yi αi xi) · x + b =: w · x + b. So we only need to calculate the vector w and the constant b after learning and can write

select <w_1> * X.att_1 + ... + <w_d> * X.att_d + <b> as f
from TOPREDICT X

to get the f-values of the examples in table TOPREDICT.

General Kernels: For general kernels, we need the support vectors and their α-values to predict a new example. We assume that we still have the training examples in table EXAMPLES and that we have a table MODEL that contains the values yi αi and the index of the corresponding vector (to simplify the calculation, yi αi is stored instead of αi alone). Then we can calculate Σ_{i∈SV} yi αi K(xi, x) using the kernel select statement and the SQL sum keyword as:

select sum(z.alpha * <kernel>)
from MODEL Z, EXAMPLES X, EXAMPLES_TO_PREDICT Y
where X.index = Z.index and Y.index = <i>
The value of b can be stored in the same table as the α's by using the index value null. Then the whole decision function is calculated by:

select alpha + (select sum(z.alpha * <kernel>)
                from MODEL Z, EXAMPLES X, EXAMPLES_TO_PREDICT Y
                where X.index = Z.index and Y.index = <i>) as f
from test_model
where alpha in (select alpha from test_model where key is null);
4 Experiments
We used two implementations of the SVM to compare the efficiency of the database version of the SVM with a C++ standalone version. Both SVMs used the same algorithm and parameters. The database experiments were made on a Sun Enterprise 250 equipped with two UltraSparc II 400 MHz processors and 1664 MB of main memory, running an Oracle 8.1.6 database. The C++ experiments were made on a Sun Ultra with an UltraSparc IIi 440 MHz processor and 256 MB of main memory. As the kernel cache was kept at a size of 40 MB in all experiments, the different memory equipment should not influence the results. Three datasets were used in the comparison. The first data set, Pat, consisted of a simple artificial classification task with 100 examples and a linear decision function. The second data set, Reg, is an artificial regression problem with 2000 examples and a linear target function. The third data set, Cyc, is a real-world dataset of 157 examples; the task is to classify the state of the German business cycle (upswing or downswing) from several economic variables. A radial basis kernel with parameter γ = 1 was used in the experiments. The data sets are summarized in the following table:

Name   Size   Dimension   #SVs
Pat    100    27          47
Reg    2000   27          56
Cyc    157    13          157

To get clearer results, 5-fold cross-validation was done on each of the data sets and the CPU time of each learning run was recorded. In each learning run, the resulting decision functions of both implementations were equal up to sensibly small numerical errors. In the case of the standalone version, the time needed to create the input files from the database tables was also recorded. The following table shows the time needed to access the data from the database for the standalone C++ version, the CPU time of the standalone version and the total time for the standalone version. This is compared to the CPU time of the database version:
Name   Db Access   C++ SVM   C++ Total   Db SVM    Factor
Pat    0.29s       0.16s     0.45s       8.73s     19.40
Reg    6.06s       3.48s     9.54s       364.72s   38.23
Cyc    0.24s       0.13s     0.37s       16.46s    44.48

The experiments show that the database version is slower than the standalone version by a factor of 20 to 45. Whether this difference is acceptable has to be evaluated with respect to the individual application's requirements.
5 Discussion and Further Research
In this paper we made the assumption that the data is given in a database table in attribute-value form. While this may be the most prominent way of representing examples, there are other representations that have interesting properties.

5.1 Sparse Data Format
For data that is sparse, i.e. where many attributes are zero (e.g. text data), the attribute-value format is not optimal because too much space is wasted storing zeros. Also, in the SVM kernel calculation much time is spent on unnecessary numerical operations, because attributes where both values (in the case of kernels based on the euclidian distance) or even one value (for kernels based on the inner product) are zero do not contribute to the value of the kernel function on the respective examples. Therefore SVM software, e.g. SVMlight [3], often stores examples in a format where only the non-zero values of the example vector are stored together with their attribute numbers. In relational databases, this format could be used in the form of a table that consists of the columns example_id, attribute_id and attribute_value. Following the earlier discussion, to show that SVMs with the most commonly used kernel functions can be efficiently trained on data in this representation, it suffices to show that the inner product and the euclidian distance of two examples can be calculated in this representation. For the inner product it suffices to sum up all the products of attribute values where the examples have the given ids and the attribute indices are equal. In SQL:

select sum(x.attribute_value * y.attribute_value)
from EXAMPLES x, EXAMPLES y
where x.example_id = <i> and y.example_id = <j>
  and x.attribute_id = y.attribute_id;

To calculate the squared euclidian distance, first the squared distance over all attributes that exist in both vectors xi and xj can be calculated in a similar way to the inner product; in fact, only the select part of the statement has to be adapted. Then the squared distance of all attributes that exist in vector xi but not in vector xj to the vector 0 can be calculated by
select sum(x.attribute_value * x.attribute_value)
from sparse x
where x.example_id = <i>
  and not exists (select attribute_id from sparse
                  where attribute_id = x.attribute_id and example_id = <j>)

Then the results of the three queries (over the attributes present in both xi and xj, those in xi but not xj, and those in xj but not xi) can be added up to give the final result.

5.2 Joins
In relational databases, data is typically stored not in one but in multiple relations. For example, a clinical information system may store minutely recorded vital signs of its intensive care patients together with demographic data like age, sex or height that do not change during a patient's stay, or even information about drug ingredients that is invariant over all patients. In attribute-value representation, the patient's age, for example, would have to be stored over and over again for every time point at which a vital sign was recorded. In a relational database, this information would typically be stored in three tables vital_signs, demographic and drug_ingredients. As the SVM cannot deal with multi-relational data, the different tables would have to be joined together for the SVM to access them, e.g. like

select *
from vital_signs, demographic
where vital_signs.patient_id = demographic.patient_id

In the worst case, the join of two tables of size m and n can have size m · n, when every row of the first table can be joined with every row of the second table. Of course, one would like to avoid having to store this data as an intermediate step. Fortunately there is a trick in the case of Support Vector Machines. The important observation is that the inner product of two (n+m)-dimensional points (xM, xN) and (yM, yN) can be calculated as the sum of an n- and an m-dimensional inner product: (xM, xN) · (yM, yN) = xM · yM + xN · yN. A similar observation holds for the euclidian distance: ||(xM, xN) − (yM, yN)||² = ||xM − yM||² + ||xN − yN||². This means that instead of a kernel matrix of size (n · m)², it suffices to compute two matrices of size n² and m² of the inner products or the euclidian distances, respectively, and calculate the kernel values from them. In the case of kernel caching, this trick allows for a far more efficient organization of the cache as two independent caches.
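A sketch of how this decomposition can be exploited (illustrative only; dotM and dotN are assumed to be precomputed inner-product matrices, or squared-distance matrices when an RBF kernel is used, over the two base relations):

import numpy as np

def joined_kernel(pairs, dotM, dotN, gamma=None):
    # pairs: list of (m_index, n_index) identifying each joined example by its
    # row in the two base relations; dotM[i, j] and dotN[i, j] hold inner
    # products (or squared euclidian distances when gamma is given)
    k = len(pairs)
    K = np.empty((k, k))
    for a, (ma, na) in enumerate(pairs):
        for b, (mb, nb) in enumerate(pairs):
            v = dotM[ma, mb] + dotN[na, nb]           # blockwise decomposition
            K[a, b] = np.exp(-gamma * v) if gamma is not None else v
    return K

Only the two small matrices dotM and dotN ever need to be cached; the kernel matrix over the (possibly much larger) join is assembled from them on demand.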
5.3 Discussion
This paper proposed an implementation of a Support Vector Machine on top of a relational database. Even though this implementation obviously cannot be as efficient as a standalone implementation with direct access to the data, considerations such as data security, platform independence and usability in a database-centered environment suggest that it is a significant improvement for SVM applications in real-world domains. Careful analysis and optimization has shown that the optimal usage of database structures can significantly improve performance.
References

[1] Burges, C.: A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery 2 (1998) 121-167
[2] Chapman, P., Clinton, J., Khabaza, T., Reinartz, T., Wirth, R.: The CRISP-DM Process Model. The CRISP-DM Consortium (1999)
[3] Joachims, T.: Making Large-Scale SVM Learning Practical. In: Schölkopf, B., Burges, C., Smola, A. (eds): Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA (1999)
[4] Kietz, J., Zücker, R., Vaduva, A.: Mining Mart: Combining Case-Based-Reasoning and Multi-Strategy Learning into a Framework to Reuse KDD-Applications. In: Michalski, R. S., Brazdil, P. (eds): Proceedings of the Fifth International Workshop on Multistrategy Learning (MSL2000). Guimaraes, Portugal (2000)
[5] Osuna, E., Freund, R., Girosi, F.: An Improved Training Algorithm for Support Vector Machines. In: Principe, J., Giles, L., Morgan, N., Wilson, E. (eds.): Neural Networks for Signal Processing VII - Proceedings of the 1997 IEEE Workshop. IEEE, New York (1997) 276-285
[6] Pyle, D.: Data Preparation for Data Mining. Morgan Kaufmann (1999)
[7] Smola, A., Schölkopf, B.: A Tutorial on Support Vector Regression. Technical Report, NeuroCOLT2 Technical Report Series (1998)
[8] Vapnik, V.: Statistical Learning Theory. Wiley, Chichester, UK (1998)
Multi-Class SVM Classifier Based on Pairwise Coupling

Zeyu Li 1, Shiwei Tang 1, and Shuicheng Yan 2

1 National Laboratory on Machine Perception and Center for Information Science, Peking University, Beijing, 100871, P.R. China
{zeyul,tsw}@cis.pku.edu.cn
2 Dept. of Info. Science, School of Math. Sciences, Peking University, Beijing, 100871
[email protected]
Abstract. In this paper, a novel structure is proposed to extend the standard support vector classifier to multi-class cases. For a K-class classification task, an array of K optimal pairwise coupling classifiers (O-PWC) is constructed, each of which is the most reliable and optimal for the corresponding class in the sense of cross entropy or square error. The final decision is produced by combining the results of these K O-PWCs. The accuracy rate is improved while the computational cost does not increase too much. Our approach is applied to two applications: handwritten digit recognition on the MNIST database and face recognition on the Cambridge ORL face database. Experimental results reveal that our method is effective and efficient.
1 Introduction
In many real-world applications, such as handwritten digit recognition, face recognition, text categorization and so on, a multi-class classification problem has to be solved. The SVM [1,2,3,7,16,17] was originally designed for binary classification. Various approaches have been developed in order to extend the standard SVM to multi-class cases. Weston proposed a natural extension of the binary SVM to solve the K-class classification problem directly in [19]. But a more popular and applicable method is to reduce a multi-class problem to a set of binary classification problems. There are different strategies to decompose a multi-class problem into a number of binary classification problems. For a K-class classification task, a common method is to use the so-called one-against-rest [5] principle to construct K binary SVMs. Each SVM distinguishes one class from all other classes. The final output is the class that corresponds to the SVM with the highest output value. The other is called one-against-one [15]. This method constructs all possible K(K − 1)/2 binary SVMs, each of which is used to discriminate two of the K classes only. When a testing pattern is to be classified, it is presented to all the SVMs, each providing a partial answer that concerns the two involved classes. In this paper, we only consider the latter method.
Different schemes are used to combine the results of the binary SVMs. The Max Voting strategy considers the output of each SVM as a binary decision and selects the class that wins the maximum number of votes. DAGSVM [12] constructs a Directed Acyclic Graph. Reference [6] proposed the "Hamming decoding" method: for a given pattern x, it will be assigned to the class with the minimal Hamming distance. None of these methods considers the case in which the binary classifiers output a score whose magnitude is a measure of confidence in the prediction. Pairwise Coupling (in short, PWC) [14] trains K(K-1)/2 binary classifiers, each of which produces a pairwise probability. PWC couples these pairwise probabilities into a common set of posterior probabilities. This method is used widely in many fields [9][18]. Error Correcting Output Codes (in short, ECOC), proposed in [13], allow a correct classification even if a subset of the binary classifiers gives wrong classification results. However, the PWC method has some drawbacks [10]. When a sample x is classified by one of the K(K-1)/2 classifiers and, at the same time, x does not belong to either of the two classes involved in this classifier, the probabilistic measures of x for the two classes are meaningless and may damage the coupling output of PWC. To tackle this problem, the PWC-CC method was proposed in [10]: for each pairwise classifier separating class ci from class cj, an additional classifier separating these two classes from all other classes is trained. Of course, this increases the computational cost. In this paper, the optimal PWC (in short, O-PWC) is introduced to overcome the problem encountered by the original PWC. For a K-class classification problem, an array of K O-PWCs is constructed, each of which is optimal for the corresponding class in the sense of cross entropy or square error. Classifying a pattern amounts to finding the class label which corresponds to the minimal cross entropy or square error. Improved performance can be achieved while the computational cost does not increase too much. The rest of the paper is organized as follows: in Section 2, we briefly introduce the binary SVM and its probabilistic output; in Section 3, the original PWC method is briefly introduced; in Section 4, our algorithm is described in detail; experimental results and conclusions are given in Sections 5 and 6, respectively.
2 Binary SVM and Its Probabilistic Output
Due to its good generalization ability, the SVM [1,2,3,7,16,17] has been widely used in recent years. Given a two-class classification problem, from the point of view of statistical learning theory [16][17], the optimal hyperplane is the one that maximizes the margin, and the resulting decision function is

f(x) = \mathrm{sign}\left( \sum_{i=1}^{l} \alpha_i y_i K(x, x_i) + b \right),
where \alpha_i are the Lagrange multipliers and K is a kernel function.
Different methods [8,11] were proposed to map the binary decision value of a standard SVM into probabilistic outputs. J. Platt in [11] argues that the class-conditional densities between the margins are apparently exponential and can be represented using a parametric form of a sigmoid:

P(y = 1 \mid f) = \frac{1}{1 + \exp(Af + B)}.

The parameters A and B can be found by minimizing the cross entropy of the training data:

\min_{A,B} \; - \sum_i \left[ t_i \log P_i + (1 - t_i) \log(1 - P_i) \right],

where

P_i = \frac{1}{1 + \exp(A f_i + B)}, \qquad t_i = \frac{y_i + 1}{2}.
This "SVM+Sigmoid" model of [11] leaves the SVM unchanged and is easy to implement. In this paper, the model is adopted to map SVM outputs to posterior probabilities.
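As an illustration of this fitting step, the sketch below estimates A and B by plain gradient descent on the cross entropy. Platt's original algorithm uses a more elaborate model-trust optimizer, so this is only a minimal, assumption-laden stand-in; the learning rate, iteration count and the initial value of B are arbitrary illustrative choices.

```python
import numpy as np

def fit_sigmoid(f, y, lr=1e-3, iters=20000):
    """Fit P(y=1|f) = 1 / (1 + exp(A*f + B)) by minimizing the cross entropy
    with gradient descent; f are SVM decision values, y are labels in {-1, +1}."""
    t = (y + 1) / 2.0                                   # targets t_i = (y_i + 1) / 2
    A = 0.0
    B = np.log((np.sum(t == 0) + 1.0) / (np.sum(t == 1) + 1.0))  # common prior init
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * f + B))
        grad_A = np.sum((t - p) * f)                    # d(cross entropy)/dA
        grad_B = np.sum(t - p)                          # d(cross entropy)/dB
        A -= lr * grad_A
        B -= lr * grad_B
    return A, B

def predict_prob(f, A, B):
    # Posterior probability for new decision values.
    return 1.0 / (1.0 + np.exp(A * f + B))
```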
3 Pairwise Coupling Method
Hastie and Tibshirani in [14] proposed the pairwise coupling method (in short, PWC) to tackle multi-class classification problems. Given a set of K classes {c_i}, PWC trains K(K-1)/2 binary SVM classifiers. The posterior probability that x belongs to class c_i, given that x is in either class c_i or c_j, can be written as a pairwise probability:

r_{ij} = P(c_i \mid x, x \in c_i \cup c_j), \quad j \ne i.

Going through all K(K-1)/2 binary classifiers, a pairwise probabilities matrix (in short, PPM) can be constructed. To couple the PPM into a common set of posterior probabilities P_i, Ref. [14] introduced a new set of auxiliary variables

\mu_{ij} = \frac{P_i}{P_i + P_j}

and found P_i's such that the corresponding \mu_{ij}'s are in some sense "close" to the observed r_{ij}'s. The Kullback-Leibler divergence between \mu_{ij} and r_{ij},

\ell(P) = \sum_{i<j} n_{ij} \left[ r_{ij} \log \frac{r_{ij}}{\mu_{ij}} + (1 - r_{ij}) \log \frac{1 - r_{ij}}{1 - \mu_{ij}} \right],    (1)

is selected as the closeness measure in [14]. P_i can be found by minimizing this function. The iterative procedure is as follows:

Algorithm
Input: {n_{ij}}, {r_{ij}}
Step 1. Initialize \hat{P}, and compute the corresponding \hat{\mu}_{ij};
Step 2. Repeat until convergence:
    \hat{P}_i \leftarrow \hat{P}_i \cdot \frac{\sum_{j \ne i} n_{ij} r_{ij}}{\sum_{j \ne i} n_{ij} \hat{\mu}_{ij}},
    renormalize the \hat{P}_i, and recompute the \hat{\mu}_{ij};
Step 3. \hat{P} \leftarrow \hat{P} / \sum_i \hat{P}_i
Output: \hat{P}(x) = (\hat{P}_1, \hat{P}_2, \cdots, \hat{P}_K)

For simplicity, the weights are assumed to be equal, that is, n_{ij} = 1 for all i, j. A simple non-iterative estimate of P can be obtained as

\tilde{P}_i = \frac{2}{K(K-1)} \sum_{j \ne i} r_{ij}, \quad i = 1, 2, \cdots, K.    (2)

In fact, the \tilde{P}_i's are in the same order as the \hat{P}_i's computed using the above iterative procedure, and they are sufficient if only the classification is required [14]. Because we care more about accurate probabilistic estimation in this paper, however, the iterative procedure will always be used to couple the pairwise probabilities, and the estimates of Eqn. (2) are used as starting values in the iterative procedure. Let the posterior probabilities of a sample x be \hat{P}(x) = (\hat{P}_1, \hat{P}_2, \cdots, \hat{P}_K); then the resulting classification rule is d(x) = \arg\max_i [\hat{P}_i(x)].
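A minimal sketch of the coupling procedure, assuming uniform weights n_ij = 1 and a dense K x K matrix r with r[i, j] = r_ij and r[j, i] = 1 - r_ij, might look as follows; the tolerance, iteration cap and the small constant guarding the division are illustrative.

```python
import numpy as np

def pairwise_coupling(r, iters=100, tol=1e-6):
    """Couple a K x K pairwise probability matrix into class posteriors."""
    K = r.shape[0]
    off = ~np.eye(K, dtype=bool)                       # mask of off-diagonal pairs
    # Non-iterative estimate of Eqn. (2), used as the starting point.
    P = (2.0 / (K * (K - 1))) * np.sum(np.where(off, r, 0.0), axis=1)
    for _ in range(iters):
        mu = P[:, None] / (P[:, None] + P[None, :] + 1e-12)
        num = np.sum(np.where(off, r, 0.0), axis=1)    # sum_{j != i} r_ij
        den = np.sum(np.where(off, mu, 0.0), axis=1)   # sum_{j != i} mu_ij
        P_new = P * num / den                          # multiplicative update
        P_new /= P_new.sum()                           # renormalize
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    return P
```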
4 Methodology
In PWC, the weight matrix {n_{ij}} in Eqn. (1) can be rewritten as follows:

W = \begin{pmatrix}
-       & W_{1,2} & W_{1,3} & \cdots & W_{1,K} \\
W_{2,1} & -       & W_{2,3} & \cdots & W_{2,K} \\
W_{3,1} & W_{3,2} & -       & \cdots & W_{3,K} \\
\vdots  & \vdots  & \vdots  & \ddots & \vdots  \\
W_{K,1} & W_{K,2} & W_{K,3} & \cdots & -
\end{pmatrix}

which reflects the influence of each pairwise classifier on the final decision for a given pattern x. In the original PWC, the weights are usually all assumed to be 1, which means that all binary classifiers contribute equally to the final decision. In fact, given a testing pattern x \notin c_i \cup c_j, the pairwise probability r_{ij} is irrelevant to x, because the corresponding binary classifier, which is used to discriminate classes c_i and c_j, has not been trained with data from the true class. Consequently, using it to find P is very likely to damage the result of the calculation [10]. Strictly speaking, not all binary classifiers are useful and relevant to the final decision for a given pattern; some of them are meaningless or even harmful. However, we have no information about which ones, because this is exactly what we aim to determine.
4.1 Optimal PWC Classifier for Each Class
To overcome the problem mentioned above, we introduce the Optimal PWC. Before that, we state the following fact: suppose the pairwise probabilities matrix (PPM) is known; then, given a weight matrix W, we can use the original coupling algorithm (see Section 3) to construct a unique PWC classifier.

Definition: The optimal weight matrix for class c_i is W^i = \{W_{m,n}\} which satisfies

W_{m,n} = 1 \text{ if } m = i \text{ or } n = i, \qquad W_{m,n} = 0 \text{ otherwise}, \qquad m, n = 1, 2, \cdots, K, \; m \ne n.
The corresponding PWC classifier is called the Optimal PWC (in short, O-PWC). For example, for a 5-class classification problem, the optimal weight matrices for classes c_2 and c_4 can be represented, respectively, as

W^2 = \begin{pmatrix}
- & 1 & 0 & 0 & 0 \\
1 & - & 1 & 1 & 1 \\
0 & 1 & - & 0 & 0 \\
0 & 1 & 0 & - & 0 \\
0 & 1 & 0 & 0 & -
\end{pmatrix}, \qquad
W^4 = \begin{pmatrix}
- & 0 & 0 & 1 & 0 \\
0 & - & 0 & 1 & 0 \\
0 & 0 & - & 1 & 0 \\
1 & 1 & 1 & - & 1 \\
0 & 0 & 0 & 1 & -
\end{pmatrix}

It means that, given a pattern x \in c_i, only the binary classifiers for which one of the two involved classes is c_i are considered, while all other binary classifiers are ignored. Hence, all pairwise probabilities in O-PWC_i are relevant to class c_i. Given a pattern x, the output of O-PWC_i can be represented as a probability vector P^i = (P^i_1, P^i_2, \cdots, P^i_K), i \in \{1, 2, \cdots, K\}. Each of its components represents the probability of x belonging to the respective class.
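For illustration, the optimal weight matrix can be built as in the following sketch; the 0-based class index and the convention of setting the unused diagonal to zero are implementation assumptions.

```python
import numpy as np

def optimal_weight_matrix(K, i):
    """Optimal weight matrix W^i for class index i (0-based here):
    W[m, n] = 1 if m == i or n == i, and 0 otherwise; the diagonal is unused."""
    W = np.zeros((K, K))
    W[i, :] = 1.0
    W[:, i] = 1.0
    np.fill_diagonal(W, 0.0)
    return W

# For a 5-class problem, the matrix for class c_2 (index 1):
W2 = optimal_weight_matrix(5, 1)
```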
4.2 Properties of O-PWC
In this subsection, we investigate the properties of O-PWC in two scenarios: 1) when a given sample is presented to different O-PWCs; 2) when samples from different classes are presented to the same O-PWC. Fig. 1 (left) and Fig. 1 (right) show the two scenarios, respectively. Given a pattern x \in c_i, the output of O-PWC_i, P^i, will be a "regular" probability vector in which its i-th component P^i_i is the largest and very prominent, while the other components will be small and similar. When x is presented to O-PWC_j, j \ne i, the output P^j does not have such properties and all of its components are "irregular". An approximate and intuitive explanation can be given as follows. Given an optimal weight matrix W^i, a new PPM can be constructed: PPM^i = W^i \cdot PPM. If x \in c_i, then the values in the i-th row of PPM^i are all large probabilities (close to 1), while each of the other rows contains only one value, each of which
is very small.

Fig. 1. Properties of O-PWC. The left panel shows the outputs of O-PWC_1, O-PWC_2 and O-PWC_4 for a sample of digit "3" (class c4). The right panel shows the output of O-PWC_1 for three samples from classes c1, c2 and c3, respectively.

After using Eqn. (2), the i-th component of P^i will be dominant, and at the same time, the other components of P^i will be small and similar compared with the i-th component. In this paper, the following two metrics, cross entropy and square error, are used to assess the extent of "regularity":

CE(P(x), T^n) = - \sum_{m=1}^{K} T^n_m \log_2 \frac{P_m(x)}{T^n_m}, \quad n \in \{1, 2, \cdots, K\}    (3)

SE(P(x), T^n) = \|P(x) - T^n\|^2 = \sum_{m=1}^{K} (P_m(x) - T^n_m)^2, \quad n \in \{1, 2, \cdots, K\}    (4)
where T^n(x) is the "true" probability vector of pattern x belonging to the K classes. Its n-th component is defined to be 1 if the label of x is n and 0 otherwise. For example, for a 5-class classification problem, the "true" probability vector of class c2 can be represented as T^2(x) = (0, 1, 0, 0, 0). Similarly, if x \in c_4, then the "true" probabilities can be represented as T^4(x) = (0, 0, 0, 1, 0). Since for most real-world datasets the true labels are known, but not the probabilities, this "true" probability vector is sufficient and reasonable. Based on the above analysis and observation, O-PWC_i will produce a "regular" probability vector only for the samples from class c_i. Similarly, given a pattern x, the output of O-PWC_i is "regular" and is the closest to T^i(x) if and only if x \in c_i. Then the cross entropy or square error between T^i and P^i(x) will be minimal. So O-PWC_i can be regarded as the optimal classifier for class c_i in the sense of cross entropy or square error.
4.3 Classification Using O-PWCs
For a K-class classification problem, we propose to use an array of K O-PWCs to perform the classification task. The system diagram is shown in Fig. 2.
Fig. 2. The system diagram of the combination of K O-PWCs.

There are K channels in Fig. 2, each of which uses the respective O-PWC to produce a probability vector for a given pattern x. For a K-class problem, there will be K probability vector outputs {P^m(x)}, m = 1, 2, \cdots, K. If x \in c_i, then the output of O-PWC_i, P^i, will be "regular" and the closest to its "true" probability vector T^i, while the outputs of the other O-PWCs will be "irregular". Classifying x amounts to finding the class label corresponding to the "regular" output. As shown in Fig. 2, the i-th channel corresponds to class c_i and has a "true" probability vector T^i(x). For a given sample x, an array of cross entropy or square error values is computed, {Evl_i = Evl(P^i, T^i)}, i = 1, 2, \cdots, K, where the metric function Evl(P^i, T^i) is the cross entropy (Eqn. (3)) or the square error (Eqn. (4)). Based on Section 4.2, when x \in c_i the cross entropy or square error of the i-th channel will be minimal. The final decision can be represented as

d(x) = \arg\min_i \{Evl_i(x)\}.

A minimal sketch of this decision step follows.
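The sketch below assumes the K channel outputs P^1(x), ..., P^K(x) have already been produced by the coupling procedure; the function names and the small clipping constant are illustrative.

```python
import numpy as np

def cross_entropy(P, T, eps=1e-12):
    # Eqn. (3); with the 0/1 "true" vector only the component with T_m = 1 contributes.
    P = np.clip(P, eps, 1.0)
    T_safe = np.clip(T, eps, 1.0)
    return float(-np.sum(T * np.log2(P / T_safe)))

def squared_error(P, T):
    # Eqn. (4)
    return float(np.sum((P - T) ** 2))

def o_pwc_decision(channel_outputs, metric=squared_error):
    """channel_outputs[i] is the probability vector produced by O-PWC_i for the
    test pattern; the decision is the channel with minimal cross entropy / error."""
    K = len(channel_outputs)
    scores = []
    for i, P in enumerate(channel_outputs):
        T = np.zeros(K)
        T[i] = 1.0                        # "true" probability vector T^i
        scores.append(metric(np.asarray(P), T))
    return int(np.argmin(scores))         # d(x) = argmin_i Evl_i(x)
```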
4.4 Construction of the Pairwise Probabilities Matrix (PPM) Using SVM
All the discussions above suppose the existence of a pairwise probability matrix (PPM ). As shown in Section 2, probabilistic outputs of a binary SVM can be
computed using J. Platt's "SVM+Sigmoid" model [11]. For a K-class problem, K(K-1)/2 sigmoid models, each of which corresponds to a binary support vector classifier, will be trained. Going through all the pairwise binary SVMs, we can construct a pairwise probability matrix (PPM) for a given sample.
4.5 Evaluation of Computational Cost
Like MaxVoting and the original PWC, the approach proposed in this paper also needs to construct the pairwise probability matrix (PPM) for a given sample (that is, to evaluate K(K-1)/2 binary SVM classifiers for a K-class problem; see Section 4.4). The overhead of this process is determined by the number of support vectors (SVs) of the SVM classifiers and is usually time-consuming because of the large number of SVs. After the construction of the PPM, our approach still needs K coupling processes for the K channels. It can be observed experimentally that the coupling procedure usually converges after a few iterations (see Section 3). Hence the whole computational cost is dominated by the construction of the pairwise probability matrix. Therefore, compared with MaxVoting and the original PWC, the computational cost of our approach does not increase too much.
5 Experimental Results
To illustrate the feasibility of our approach, we perform two tasks in this section: (1) recognition of the 10 classes of handwritten digits on the MNIST database [20]; (2) face recognition on the Cambridge ORL face database.
5.1 Handwritten Digit Recognition on MNIST
The MNIST database [20] contains 60000 training images and 10000 testing images of size 28 × 28 (see Table 1). Some samples from MNIST are depicted in Fig. 3 (left). The feature extraction procedure can be summarized as follows. First, we use a wavelet transform to obtain the low-frequency components of the image. Second, we eliminate the outermost layer of pixels to reduce the dimension, because these regions seldom contain useful information. We then represent each image as a vector of 144 dimensions and whiten it to have zero mean and unit variance. The images after the wavelet transform are shown in Fig. 3 (right).
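A rough sketch of this preprocessing, using PyWavelets, is shown below; the paper does not state which wavelet was used, so the Haar wavelet and the exact border-trimming step are assumptions made only for illustration.

```python
import numpy as np
import pywt   # PyWavelets

def extract_features(images):
    """images: array of shape (N, 28, 28). Returns (N, 144) feature vectors:
    low-frequency wavelet band, outermost border removed, then standardized."""
    feats = []
    for img in images:
        cA, _ = pywt.dwt2(img, 'haar')   # 14 x 14 low-frequency component
        cA = cA[1:-1, 1:-1]              # drop the outermost layer -> 12 x 12
        feats.append(cA.ravel())         # 144-dimensional vector
    X = np.array(feats)
    # Whiten each dimension to zero mean and unit variance.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return X
```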
Table 1. Samples in the MNIST database

              "0"   "1"   "2"   "3"   "4"   "5"   "6"   "7"   "8"   "9"
Training Set  5923  6742  5958  6131  5842  5421  5918  6265  5851  5949
Testing Set    980  1135  1032  1010   982   892   958  1028   974  1009
Fig. 3. Left: examples from MNIST (28×28); Right: after preprocessing (12×12)
The training phase includes two parts in this setting: one is to train 45 binary SVMs, the other is to fit 45 sigmoid models to estimate the posterior probability of each binary SVM. In this paper, LIBSVM [4] is used to train the binary SVMs. A sigmoid kernel is adopted and we use 3-fold cross-validation to obtain the optimal values of the parameters C and σ. We adopt the same C and σ to train all binary SVM classifiers. Table 2 shows the comparison of recognition accuracy of different methods on the MNIST database. It can be observed that the original PWC method is superior to MaxVoting, and O-PWC is superior to both MaxVoting and the original PWC in terms of accuracy. What is more, the strategy using square error is better than that using cross
Table 2. Comparison of error rates with other methods on MNIST. The results marked with an asterisk are from [20]

Method                                  Error Rate (%)
Linear classifier (1-layer NN)*         12.0
Pairwise linear classifier*             7.6
K-NN, Euclidean, deskewed*              2.4
SVM deg 4 polynomial*                   1.1
2-layer NN, 300 HU, [deskewing]*        1.6
Reduced set SVM deg 5 polynomial*       1.0
LeNet-5, [distortions]*                 0.8
Boosted LeNet-4, [distortions]*         0.7
Max Voting                              1.68
Pairwise Coupling SVM                   1.16
O-PWC (Cross Entropy)   Training set    0.015
                        Testing set     0.91
O-PWC (Square Error)    Training set    0.002
                        Testing set     0.76
Fig. 4. The top row shows the 3 images, which are from c1, c5 and c10, respectively; the bottom-left shows the cross entropy of the 10 channels for the 3 images; the bottom-right shows the square error of the 10 channels for the 3 images
entropy in our approach. At the same time, we can see that our method is close to the best one. Fig. 4 shows the cross entropy and square error of the 10 channels when digit samples from different classes are presented to the framework shown in Fig. 2. For example, for a sample from class c1 (digit "0"), the cross entropy or square error of the first channel is minimal. Hence, we can easily classify this sample into c1.
5.2 Face Recognition on the Cambridge ORL Face Database
Cambridge ORL face database consists of 400 images of 40 individuals, containing quite a high degree of variability in expression, pose and facial details. Some samples from this database are depicted in Fig.5.
Fig. 5. Four individuals (each in one row) in the ORL. There are 10 images for each person
Table 3. Recognition accuracy comparison

Method   MaxVoting   PWC      O-PWC (Cross Entropy)   O-PWC (Square Error)
Rate     94%         95.13%   96.79%                  98.11%
Fig. 6. Outputs of O-PWC on the ORL face database. The left panel shows the outputs of O-PWC_2, O-PWC_4 and O-PWC_6 for a face image from person #4 (c4). The right panel shows the output of O-PWC_2 for two faces from classes c2 and c4, respectively.

In our setting, we apply the wavelet transform twice and use the whitened low-frequency components as our features (168 dimensions). We randomly select 200 samples (5 for each individual) as the training set. The remaining 200 samples are used as the testing set. LIBSVM [4] is used to train the binary SVMs. We adopt σ = 0.3 and C = 10 (sigmoid kernel) to train all binary SVM classifiers. Table 3 shows the comparison of different recognition methods on the ORL database. It can also be observed that the original PWC method is superior to MaxVoting, and O-PWC is superior to both MaxVoting and PWC in terms of accuracy. At the same time, we can see that the square error metric is better than the cross entropy metric. From Fig. 6, our conclusion about the properties of O-PWC can be verified. For example, given a sample x \in c_4, only the output of O-PWC_4 is "regular" and is the closest to the "true" probability vector T^4 of class c4 (see Fig. 6 (left)). What is more, O-PWC_2 produces a "regular" probability vector only for samples from class c2 (see Fig. 6 (right)). Fig. 7 shows some results when a pattern x is presented to the framework shown in Fig. 2: 1) the output of O-PWC_13 for c13, P^13, is the closest to the "true" probabilities T^13; 2) the cross entropy or square error between P^13 and T^13, that is, the cross entropy or square error of channel 13, is minimal; 3)
the square error metric is superior to the cross entropy metric. Hence, we can classify these five faces into class c13 correctly.

Fig. 7. The top-left shows the five face images from person #13; the top-right shows the outputs of O-PWC_13 for these five images; the bottom-left and bottom-right show the cross entropy and square error of the 40 channels for these five images, respectively.
6 Conclusion
In this paper, we propose an array of O-PWCs to extend the standard SVM to multi-class cases. For a K-class classification problem, an array of K O-PWCs is constructed, each of which is optimal for the corresponding class in the sense of cross entropy or square error. In fact, the optimal weight matrix provides a method to select the binary classifiers which are relevant to the class of interest, for example class c_i; O-PWC_i takes only these meaningful binary classifiers into account. The classification accuracy can be improved while the computational cost does not increase too much. Of course, there are other ways to improve the overall performance, such as feature extraction, virtual samples and so on. All of these are future directions of our study.
Acknowledgement This work was supported by the Special Funds for Major State Basic Research of China, No.G1999032705.
References
1. B. Scholkopf, C. Burges, and V. Vapnik, "Extracting support data for a given task", In Proceedings, First International Conference on Knowledge Discovery and Data Mining. AAAI Press, Menlo Park, CA, 1995.
2. B. Scholkopf, K. Sung, C. Burges, et al., "Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers", A.I. Memo No. 1599, C.B.C.L. Paper No. 142.
3. Christopher J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition", Data Mining and Knowledge Discovery, 2, 121-167 (1998).
4. Chih-Jen Lin, web site, http://csie.ntu.edu.tw/ cjlin/
5. C.-W. Hsu and C.-J. Lin, "A comparison of methods for multi-class support vector machines", April 2001. To appear in IEEE Transactions on Neural Networks.
6. Erin L. Allwein, Y. Singer, "Reducing multiclass to binary: A unifying approach to margin classifiers", Journal of Machine Learning Research, 1:113-141.
7. K. Fukunaga, "Introduction to Statistical Pattern Recognition", Academic Press, Inc., 2nd ed., 1990.
8. James Tin-Yau Kwok, "The Evidence Framework Applied to Support Vector Machines", IEEE Trans. on Neural Networks, Vol.?, No.?, 2000.
9. K. Goh, E. Chang and T. Cheng, "Support Vector Machine Pairwise Classifiers with Error Reduction for Image Classification", Proceedings of the ACM International Conference on Multimedia (MIR Workshop), Ottawa, October 2001.
10. M. Moreira and E. Mayoraz, "Improving pairwise coupling classification with error correcting classifiers", Proceedings of the Tenth European Conference on Machine Learning, April 1998.
11. J. Platt, "Probabilistic outputs for SVMs and comparisons to regularized likelihood methods", In Advances in Large Margin Classifiers. MIT Press, 1999.
12. J. Platt, N. Cristianini, J. Shawe-Taylor, "Large margin DAGs for multiclass classification", In: Advances in Neural Information Processing Systems, Vol. 12, pp. 547-553. MIT Press, 2000.
13. T. Dietterich and G. Bakiri, "Solving multiclass learning problems via error-correcting output codes", Journal of Artificial Intelligence Research, 2, 1995.
14. Trevor Hastie, Robert Tibshirani, "Classification by pairwise coupling", Technical report, Stanford Univ. and Univ. of Toronto, 1996; in Proceedings of NIPS*97.
15. U. Krebel, "Pairwise classification and support vector machines", In "Advances in Kernel Methods - Support Vector Learning", pages 255-268, Cambridge, MA, 1999. MIT Press.
16. V. Vapnik, "The Nature of Statistical Learning Theory", New York: Springer-Verlag, 1995.
17. V. Vapnik, "Statistical Learning Theory", Wiley, 1998.
18. Volker Roth, Koji Tsuda, "Pairwise Coupling for Machine Recognition of Hand-Printed Japanese Characters", Accepted for CVPR01.
19. J. Weston and C. Watkins, "Multi-class support vector machines", Technical Report CSD-TR-98-04, Dept. of Computer Science, Royal Holloway, University of London, Egham, Surrey TW20 0EX, England, 1998.
20. Yann LeCun's website, http://www.research.att.com/yann/exdb/mnist/index.html
Face Recognition Using Component-Based SVM Classification and Morphable Models Jennifer Huang1 , Volker Blanz2 , and Bernd Heisele3 1
Center for Biological and Computational Learning, M.I.T., Cambridge, MA, USA j
[email protected] 2 Computer Graphics Research Group, University of Freiburg, Freiburg, Germany
[email protected] 3 Honda R&D Americas, Inc., Boston, MA, USA
[email protected] Abstract. We present a novel approach to pose and illumination invariant face recognition that combines two recent advances in the computer vision field: component-based recognition and 3D morphable models. In a first step a 3D morphable model is used to generate 3D face models from only two input images from each person in the training database. By rendering the 3D models under varying pose and illumination conditions we then create a vast number of synthetic face images which are used to train a component-based face recognition system. In preliminary experiments we show the potential of our approach regarding pose and illumination invariance.
1 Introduction
As real-world applications for face recognition systems continue to increase, the need for an accurate, easily trainable recognition system becomes more pressing. Current systems (for a survey see e.g. [3]) have advanced to be fairly accurate in recognition under constrained scenarios, but extrinsic imaging parameters such as pose, illumination, and facial expression still cause much difficulty in correct recognition. Recently, component-based approaches have shown promising results in various object detection and recognition tasks such as face detection [8,5], person detection [6], and face recognition [2,9,7,4]. In [4] we proposed an SVM-based recognition system which decomposes the face into a set of components that are interconnected by a flexible geometrical model. Changes in the head pose mainly lead to changes in the position of the facial components which could be accounted for by the flexibility of the geometrical model. In our experiments, we have shown that the component-based system consistently outperformed whole face recognition systems in which classification was based on the whole face pattern. A major drawback of the system was the need of a large number of training images taken from different viewpoints and under different lighting conditions. In this paper we further develop this system by adding a 3D morphable face model to the training stage of the classifier. Based on only two images of
a person’s face the morphable model allows us to compute a 3D face model using an analysis by synthesis method [1]. Once the 3D face models of all the subjects in the training database are computed we generate arbitrary synthetic face images under varying pose and illumination to train the component-based recognition system. The outline of the paper is as follows: Section 2 explains the generation of 3D head models and synthetic images from two input images. Section 3 describes the training of the component-based face detector from the synthetic images. Section 4 incorporates the synthetic images and face detector from the previous sections into a face recognition system. Section 5 briefly presents preliminary experimental results. Finally, Section 6 summarizes results and outlines future work.
2 Generation of 3D Face Models
The first step in the process of training a component-based recognizer is the generation of 3D face models based on two images of each person in the training database. Examples of the pairs of training images are shown in the top row of Figure 1. The bottom row shows the corresponding synthetic images created by rendering the 3D face models. In the following we give a brief overview of the morphable model approach, a detailed description can be found in [1]. The main idea behind the morphable model approach is that given a sufficiently large database of 3D face models any arbitrary face can be generated by morphing the ones in the database. An initial database of 3D models was built by recording the faces of 200 subjects with a 3D laser scanner. Then 3D correspondences between the head models were established in a semi-automatic way using techniques derived from optical flow computation. Using these correspondences, a new 3D face model can be generated by morphing the existing models in the database. To create a 3D face model from a set of 2D face images, an analysis by synthesis loop is used to find the morphing parameters such that the rendered images of the 3D model are as close as possible to the input images. Using the 3D models, synthetic images such as the ones in Figure 2 can easily be created by rendering the models. The 3D morphable model also provides the full 3D correspondence information which allows for automatic extraction of facial components. Prior to using 3D face models, countless images under different pose and illumination conditions had to be recorded for each subject and the components had be extracted in time consuming computations [4].
3 Component-Based Face Detection
The component-based detector performs two tasks: the detection of the face in a given input image and the extraction of the facial components which are later needed to recognize the face. In the following we describe how we trained the component-based face detection system using synthetic face images.
Fig. 1. The upper row consists of the two pictures per person used to generate a 3D model. The lower row consists of the synthetic pictures generated from the model. Notice the similarity between the original and synthetic images
Fig. 2. Synthetic face images generated from the 3D head models under different illuminations (top row) and different poses (bottom row). Synthetic images are used for training the face detection and recognition system
3.1 Training Set
Approximately 7700 synthetic faces were generated at a resolution of 58 × 58 for the 6 subjects by rendering the 3D face models under varying pose and illumination. Specifically, the faces were rotated in depth from 0◦ to 34◦ in 2◦ increments. We rendered the faces for two illumination models. One model consisted of ambient light alone, while the other model was composed of directed light in addition to ambient light, both at equal intensities. The directed light was pointed at the center of the face and positioned between −90◦ and +90◦ in azimuth and 0◦ and 75◦ in elevation. The angular position of directed light
was incremented by 15◦ in both directions. To build the negative training set we randomly extracted 13655 patterns of size 58 × 58 from a database of non-face images.
3.2 Extraction of Components
Fourteen components were extracted from every face image based on the correspondence information given by the morphable model. The shape of the components was learned by an algorithm described in [5] to achieve optimal detection results. Figure 3 shows examples of the fourteen components. The components included the left eyebrow, right eyebrow, left eye, right eye, area between eyebrows, bridge of nose, right lip, left lip, right cheek, left cheek, center of mouth, entire mouth, right side of nose, and left side of the nose. Negative training images for the component classifiers were extracted from the set of 58 × 58 non-face images.
Fig. 3. Examples of the fourteen components extracted from a frontal view and half profile view of a face
3.3 Architecture of the Face Detector
We used the two level component-based face detection system described in [4]. The architecture of the system is schematically shown in Figure 4. The first level consists of fourteen independent component classifiers (linear SVMs). Each component classifier was trained on the previously described set of extracted facial components and on a set of randomly selected non-face patterns. On the second level, the maximum continuous outputs of the component classifiers within rectangular search regions around the expected positions of the components were used as inputs to a geometrical classifier (linear SVM) which performed the final
detection of the face. The rectangular search regions were determined from statistical information about the location of the components in the training images.

Fig. 4. System overview of the component-based face detector. On the first level, windows (lined boxes) of component size are shifted over the face image and classified by the component classifiers. On the second level, the maximum outputs of the component classifiers within the predefined search regions (dotted boxes) and the positions of the detected components are fed to the geometrical classifier.
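A minimal sketch of this two-level scheme is given below; the classifier objects (assumed to expose a decision_function, e.g. scikit-learn linear SVMs), the window size and the search-region encoding are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def detect_face(image, component_svms, search_regions, geometric_svm, win=12):
    """First level: slide each component classifier over its search region and
    keep the maximum output and its position. Second level: feed these values
    to the geometrical classifier, whose output scores the face hypothesis."""
    features = []
    for svm, (r0, r1, c0, c1) in zip(component_svms, search_regions):
        best_score, best_pos = -np.inf, (r0, c0)
        for r in range(r0, r1 - win + 1):
            for c in range(c0, c1 - win + 1):
                patch = image[r:r + win, c:c + win].ravel()[None, :]
                score = float(svm.decision_function(patch)[0])
                if score > best_score:
                    best_score, best_pos = score, (r, c)
        # The second level sees the maximum output and the detected position.
        features.extend([best_score, best_pos[0], best_pos[1]])
    return float(geometric_svm.decision_function(np.array(features)[None, :])[0])
```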
4 Component-Based Face Recognition
From the fourteen components extracted by the face detector we used only ten components for face recognition. The four components that were eliminated either strongly overlapped with other components or contained few grey value structures (e.g. cheeks). The face detection system was applied to each synthetic face image in the training set to detect the facial region and extract the components. Figure 5 shows the composite of the ten extracted components for some example images. For each face, the pixel values of the extracted components were combined into a single feature vector. A face recognition system consisting of SVM classifiers was trained on these feature vectors in a one vs. all approach. In other words, an SVM was trained for each subject in the database to separate her/him from all the other subjects. To determine the identity of a person at runtime, we compared the normalized outputs of the SVM classifiers, i.e. the distances to the hyperplanes in the feature space. The identity associated with the face classifier with the highest normalized output was taken to be the identity of the face. If the highest normalized output was below a preset threshold, the face in the input image was rejected by the classifier.
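The one-vs.-all training and the identification rule can be sketched as follows; the use of scikit-learn's SVC, the explicit normalization by the weight norm and the zero rejection threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_recognizer(X, identities):
    """One linear SVM per subject (one vs. all); X rows are the concatenated
    component feature vectors, identities the corresponding subject labels."""
    classifiers = {}
    for person in np.unique(identities):
        y = (identities == person).astype(int)
        classifiers[person] = SVC(kernel='linear').fit(X, y)
    return classifiers

def identify(x, classifiers, threshold=0.0):
    """Pick the subject whose classifier gives the largest normalized output
    (distance to the hyperplane); reject if the best value is below threshold."""
    scores = {}
    for person, clf in classifiers.items():
        margin = float(clf.decision_function(x[None, :])[0])
        scores[person] = margin / np.linalg.norm(clf.coef_)
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```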
Fig. 5. Composite of the ten face components used for face recognition
5 Experimental Results
The component-based face recognition system was compared to a whole face recognition system; both systems were trained and tested on the same images. In contrast to the component-based classifiers, the input vector to the whole face detector and recognizer consisted of the pixel values from the entire 58 × 58 facial region. For a more detailed description of the whole face system see [4]. The whole face and the component-based face detectors were trained with linear SVMs. For the recognition systems, we trained both linear and 2nd degree polynomial SVMs. This resulted in four different face classification systems which were compared on two test sets. In the following, the classifiers will be referred to as whole linear, whole polynomial, component linear, and component polynomial, respectively. The two test sets of novel images were rendered from the 3D models described in Section 2. The first test set consisted of synthetic faces of the six subjects in the database rotated between −36◦ and +36◦ in 6◦ steps. The point light source was positioned in elevation between −22.5◦ and 67.5◦ in 30◦ steps and in azimuth from −112.5◦ to 97.5◦ in 30◦ steps. The second test set had the same parameters as the first test set except that the faces were rotated in the image plane by 4◦. The former test set will be referred to as the regular test set and the latter as the rotated test set. The resulting ROC curves for the four classifiers can be seen in Figure 6. For the whole face system the accuracy of the polynomial classifier exceeds that of its linear counterpart while for the component-based system the polynomial and linear classifiers are roughly equal. On both test sets the component-based system clearly outperforms the whole face system. The large discrepancy on the rotated test set is explained by the sensitivity of the whole face recognition system to rotations.
6 Conclusion and Future Work
This paper presented a new development in component-based face recognition by incorporating a 3D morphable model into the training process. Based on two face images of a person and a 3D morphable model we computed the 3D face model of
Fig. 6. The top diagram shows the ROC curve for the polynomial whole face and polynomial component-based recognition systems on two different test sets. The bottom diagram shows the ROC curve for the linear whole face and linear component-based recognition systems on two different test sets
each person in the database. By rendering the 3D models under varying poses and lighting conditions we automatically generated a large number of synthetic face images to train the component-based recognition system. Preliminary results on synthetic test images show that the component-based recognition system clearly outperforms a comparable whole face recognition system. We achieved component-based recognition rates around 98% for faces rotated up to ±36◦ in depth. Future work includes the application of the system to a test set of real face images and extending the pose invariance by training on synthetic images over a larger range of views.
References
1. V. Blanz and T. Vetter. A morphable model for synthesis of 3D faces. In Computer Graphics Proceedings SIGGRAPH, pages 187-194, Los Angeles, 1999.
2. R. Brunelli and T. Poggio. Face recognition: Features versus templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(10):1042-1052, 1993.
3. R. Chellappa, C. Wilson, and S. Sirohey. Human and machine recognition of faces: a survey. Proceedings of the IEEE, 83(5):705-741, 1995.
4. B. Heisele, P. Ho, and T. Poggio. Face recognition with support vector machines: global versus component-based approach. In Proc. 8th International Conference on Computer Vision, volume 2, pages 688-694, Vancouver, 2001.
5. B. Heisele, T. Serre, M. Pontil, and T. Poggio. Component-based face detection. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 657-662, Hawaii, 2001.
6. A. Mohan, C. Papageorgiou, and T. Poggio. Example-based object detection in images by components. IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 23, pages 349-361, April 2001.
7. A. V. Nefian and M. H. Hayes. An embedded HMM-based approach for face detection and recognition. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 6, pages 3553-3556, 1999.
8. H. Schneiderman and T. Kanade. A statistical method for 3D object detection applied to faces and cars. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 746-751, 2000.
9. L. Wiskott. Labeled Graphs and Dynamic Link Matching for Face Recognition and Scene Analysis. PhD thesis, Ruhr-Universität Bochum, Bochum, Germany, 1995.
A New Cache Replacement Algorithm in SMO Jianmin Li, Bo Zhang, and Fuzong Lin Department of Computer Science and Technology State Key Laboratory of Intelligent Technology and Systems Tsinghua University, Beijing 100084, China
[email protected], {linfz,dcszb}@mail.tsinghua.edu.cn
Abstract. In the methods for training Support Vector Machines (SVM), precomputed elements of the Hessian matrix are usually cached in order to avoid recomputation. However, the least-recently-used replacement algorithm that is widely used is not suitable, since the elements are requested in an irregular way. A new cache replacement algorithm applied in Sequential Minimal Optimization (SMO) is presented in this paper. The item corresponding to the component with minimal violation of the Karush-Kuhn-Tucker (KKT) condition is deleted to make room for a new one when the cache is full. It is shown in the experiments that the hit ratio of the cache is improved compared with the LRU cache, while the training time can be reduced in tasks where the computation of elements of the Hessian matrix is very time-consuming.
1 Introduction
In recent years, Support Vector Machines (SVMs) [1], [2], [3] have received more and more attention. Based on statistical learning theory, the SVM not only has a concise mathematical form, an intuitive geometric interpretation and excellent generalization ability, but also avoids local optimal solutions and the curse of dimensionality. It has been applied successfully in many fields including classification and regression tasks, such as handwritten character recognition, object detection, text classification, etc. Given a training data set of size l, training an SVM is equivalent to solving a quadratic programming problem (QP) with a Hessian matrix whose order is l for classification problems and 2*l for regression problems. Although the QP prevents the SVM from getting stuck in local optima, it also poses difficulties, especially in large-scale problems, because of the density of the Hessian matrix. For example, for a binary classification task with 100 thousand samples, the whole Hessian matrix takes about 80 GB if each element is stored in 8 bytes. Consequently, the storage of the Hessian matrix becomes the main bottleneck in training large-scale SVMs, particularly on common computers. Due to the requirement of the whole Hessian matrix, many traditional QP algorithms, such as the gradient projection method, the reduced gradient method, the interior point method, etc., are not suitable for training large-scale SVMs. The main practical
methods at present belong to the family of decomposition algorithms [4], in which a much smaller QP on a subset of the variables of the original QP, named the working set, is solved in each iteration. Various decomposition algorithms, e.g. SVMlight [5], Sequential Minimal Optimization (SMO) [6], Generalized SMO [7], the Generalized Decomposition Algorithm [8], SVCradit [9], etc., differ in the size of the working set, the strategy for selecting the working set and the algorithm for solving the sub-QP. As the most important issue in decomposition algorithms, working set selection is critical to convergence and the convergence rate. In a decomposition algorithm, only a small part instead of the whole matrix is required in each iteration. Although the elements of the Hessian matrix can be computed as needed, a small chunk of memory called a cache is usually allocated to store them. If a requested element is already in the cache, it is retrieved from the cache without being computed again. Only when the requested element is not in the cache is it calculated and saved into the cache according to some rule, e.g. the least-recently-used (LRU) rule. The computation of matrix elements, which takes most of the time in the training procedure of many practical tasks, is reduced, since some elements always appear in different iterations. However, caching all the computed elements is impossible due to the limited memory. Therefore, it is important to improve the hit ratio of a cache whose size is quite small compared with the size of the Hessian matrix. Many methods have been developed; the main idea is to design a strategy for working set selection in order to reuse the elements already in the cache. A novel cache replacement algorithm based on minimal violation of the Karush-Kuhn-Tucker (KKT) condition, applied in SMO, is presented in this paper. The item in the cache corresponding to the sample with minimal violation, instead of the least-recently-used one, is deleted to make room for a new item when the cache is full. It is shown in the experiments that the hit ratio of the minimal-violation-based (MV-based) cache is improved compared with the LRU-based cache. This paper is organized as follows. In Section 2, SMO and methods used to maximize cache reuse are introduced briefly. In Section 3, the minimal-violation-based cache is described in detail. Experimental results and discussion are then presented in Section 4.
2 SMO, Working Set Selection and Cache
Given some training samples
(x_i, y_i), x_i \in X \subseteq R^n, y_i \in \{-1, +1\}, i = 1, \ldots, l, in an input space, a kernel function K(x_i, x_j) specifying the inner product in a feature space, and a regularization parameter C, designing a binary classifier based on SVMs is essentially equivalent to solving the following quadratic programming problem:

\min_a W(a) = \frac{1}{2} a^T Q a - e^T a
\text{s.t.} \quad y^T a = 0, \quad 0 \le a_i \le C, \; i = 1, \ldots, l.    (1)
where e is a vector of all ones, y is the vector of labels of the given samples, and Q is an l × l positive semidefinite matrix with Q_{ij} = y_i y_j K(x_i, x_j). The final decision function is:

u(x) = \sum_{j=1}^{l} a_j K(x_j, x) - b, \qquad y(x) = \mathrm{sign}(u(x)).    (2)

As an extreme case of the decomposition algorithm, Platt's SMO method is a quite simple and effective one whose modified versions have been implemented in some popular software for training SVMs, e.g. LIBSVM [10] and SVMTorch [11]. In each iteration, after a working set of size two is selected, a sub-QP on these two variables is solved while the other variables are fixed. Thus, each sub-problem can be solved analytically without optimization software. In Platt's SMO, a working set is selected with a set of well-designed heuristics; two variables are selected separately in the outer loop and the inner loop. Keerthi et al. [12] proposed an optimality criterion with two threshold parameters to avoid the problem resulting from the single threshold in the original method. Two modified versions of the selection strategy were then suggested. They are two special cases of GSMO, where any violating pair in the current iteration can be selected as a working set. It is proved that, given a particular stopping criterion, GSMO terminates in finitely many iterations under any stopping tolerance. The 2nd modification in [12] is actually a simple case of the steepest feasible direction strategy first introduced in SVMlight by Joachims [5]: the two components with maximal violation of the KKT condition are chosen as the working set. In fact, this strategy is also equivalent to the maximal inconsistency algorithm suggested by Laskov [13] when the size of the working set is restricted to two. It is shown in experiments that the steepest feasible direction strategy is quite effective compared with other methods used in SMO. Because the heavy numerical QP overhead is avoided, the training time for SMO is dominated by the computation of elements of the Hessian matrix. Many methods have been proposed to reduce the time spent on kernel evaluation. Platt [6] suggested some techniques to optimize kernel evaluation by exploiting the sparseness of the input. To reduce the total number of kernel evaluations, some caches are used. For example, an error cache is kept for every unbounded support vector in the original SMO [6]. More generally, a cache is used to store elements of the Hessian matrix, e.g. an element cache [14] or a row cache [5], [10]. The cache is usually LRU-based, i.e. the least-recently-used item in the cache is deleted to receive a new one if the cache is full. This is based on the assumption that the least-recently-used item will only be needed again far in the future. This rule, however, may conflict with the above working set selection strategies, since the variable associated with the LRU item may be selected into the working set soon, even in the next iteration step. When the size of the cache is small compared with that of the problem, the hit ratio may be quite low. Therefore, SMO may not benefit from the kernel cache; on the contrary, extra time may be spent on keeping such a cache. The efficiency of the cache is then taken into consideration. In SVMlight [5], when the size of the working set is not smaller than four and the kernel is non-linear, at least for half of the selected variables the kernel values should be cached. In SMORCH (a part of NODElib) [14], all the variables in the inner loop whose kernel output with the first
variable selected in the outer loop is cached are scanned. The variable that gives the greatest reduction of the objective function is selected as the second one. They also suggested that the kernel cache should be turned off in some special situations. Although this strategy enables the cache to be used effectively with SMO, it also inherits the problem of the original SMO. In [15], the computational cost and the reduction of the objective function related to the selected working set are kept balanced in each iteration. Two candidate working sets are selected from all the components and from those in the cache, respectively. One of them is then used as the final working set according to the ratio of their reductions. However, an ad-hoc parameter has to be specified for each task. Decoste et al. [16] pointed out the inefficiency caused by candidate support vectors: once non-support vectors that become support vectors temporarily during the solving procedure are inserted into the cache, they are retained for a long time. They suggested a set of 'digestion' heuristics for the original SMO to jump out of the full SMO iteration so that an intermediate support vector bulge is avoided.
3 Minimal-Violation-Based Cache

3.1 Basic Idea
The main idea in the above algorithms is to design a strategy for selecting the working set so as to reuse the elements in the cache. However, it is found that the steepest feasible direction strategy is quite competitive in the total number of iterations compared with heuristics in many practical problems. In addition, it is shown in some tasks that the intermediate support vector bulge behavior is not critically problematic when this strategy is used. For example, in the tests on Adult-1a to be presented in Section 4, if C is equal to 1, there are 731 candidate support vectors while 691 of them are support vectors. Since an LRU-based cache is not suitable for the steepest feasible direction strategy, the cache itself needs to be modified. The common replacement algorithms for caches are the first-in-first-out (FIFO) rule, the LRU rule, and the optimal rule. Among them, the best one is the optimal rule, in which the deleted item is the one that will be visited later than all others. Nevertheless, this rule cannot be implemented in practice, since the sequence of requested items cannot be known in advance. The goal of this paper is to find a criterion related to the next occurrence of each item. In the steepest feasible direction strategy, the two components with maximal violation of the Karush-Kuhn-Tucker (KKT) condition are chosen as the working set. On the other hand, the components with minimal violation of the KKT condition are likely to be selected later than those with larger violation; they may even already be in an optimal status. So it is natural to delete the item related to the component with minimal violation. Unlike the LRU rule, the information of the QP is used explicitly. Therefore, the minimal violation rule is a rough approximation to the optimal rule.
3.2 Calculation of Violation
After the kth iteration, ak defines a decision function:
u^k(x) = \sum_{j=1}^{l} a^k_j K(x_j, x) - b^k, \qquad y^k(x) = \mathrm{sign}(u^k(x)),    (3)

where b^k will be defined later. This decision function is not optimal if a^k is not an optimal solution of (1), i.e. a^k does not satisfy the KKT condition: there is at least a number b such that

\nabla W(a)_i - b y_i \ge 0 \quad \text{if } a_i = 0,
\nabla W(a)_i - b y_i \le 0 \quad \text{if } a_i = C,    (4)
\nabla W(a)_i - b y_i = 0 \quad \text{if } 0 < a_i < C.

Define the classification error for a^k:

E_i = u^k(x_i) - y_i
    = \sum_{j=1}^{l} a^k_j y_j K(x_j, x_i) - b^k - y_i
    = y_i \left( \sum_{j=1}^{l} a^k_j y_i y_j K(x_j, x_i) - 1 \right) - b^k    (5)
    = y_i \nabla W(a^k)_i - b^k.

Then define the violation of the KKT condition for a^k:

V_i = -y_i E_i \quad \text{if } a_i = 0,
V_i = +y_i E_i \quad \text{if } a_i = C,    (6)
V_i = y_i E_i \quad \text{if } 0 < a_i < C.

According to (4), when a^k is an optimal solution of (1), every V_i should be no larger than zero; otherwise there is at least one V_i that is greater than zero. In the steepest feasible direction strategy, the two components with maximal violation are chosen as the working set. In our cache replacement rule, the item in the cache related to the component with minimal violation is deleted first when the cache is full.
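A minimal sketch of computing the violations from Eqns. (5) and (6) follows; it assumes the gradient of W(a) at a^k is maintained by the SMO loop, and the array names are illustrative.

```python
import numpy as np

def kkt_violation(alpha, grad_W, y, b_k, C):
    """Violation V_i of Eqn. (6); grad_W[i] is the i-th component of the
    gradient of W(a) at a^k, alpha the multipliers, y the labels, b_k the bias."""
    E = y * grad_W - b_k                   # classification error, Eqn. (5)
    V = np.empty_like(E, dtype=float)
    at_lower = alpha <= 0.0                # a_i = 0
    at_upper = alpha >= C                  # a_i = C
    free = ~(at_lower | at_upper)          # 0 < a_i < C
    V[at_lower] = -(y * E)[at_lower]
    V[at_upper] = (y * E)[at_upper]
    V[free] = (y * E)[free]
    return V

# The cache later evicts the cached index with the smallest V.
```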
3.3 Calculation of Bias
It can be seen from (5) and (6) that b^k is needed to compute V. It is unnecessary in the steepest feasible direction strategy when the size of the working set is even, since b^k can be eliminated from the objective function. Unfortunately, in the minimal-violation-based cache replacement algorithm, b^k has to be calculated explicitly. In fact, in the original SMO, b^k is also used to check the optimality of a^k, although it has been pointed out that this single-threshold criterion may degrade the performance of SMO and even result in difficulties in convergence. However, the single-threshold criterion itself is not wrong; the method used to calculate b^k in the original SMO is unsuitable for some special cases.
The following method for calculating b^k is developed from the optimality criterion with two thresholds [12]. Define

b_{up} = \min\{F_t : t \in I_{up}\}, \qquad b_{low} = \max\{F_t : t \in I_{low}\},
i = \arg\min\{F_t : t \in I_{up}\}, \qquad j = \arg\max\{F_t : t \in I_{low}\},    (7)

where

F_t = y_t \nabla W(a^k)_t, \qquad I_{low} = I_0 \cup I_1 \cup I_2, \qquad I_{up} = I_0 \cup I_3 \cup I_4,
I_0 = \{t : 0 < a_t^k < C\},    (8)
I_1 = \{t : y_t = 1, a_t^k = 0\}, \quad I_2 = \{t : y_t = -1, a_t^k = C\}, \quad I_3 = \{t : y_t = 1, a_t^k = C\}, \quad I_4 = \{t : y_t = -1, a_t^k = 0\}.

In the kth iteration, a^k satisfies the KKT conditions of (1) if and only if b_{up} \ge b_{low}; otherwise (i, j) is selected as the working set in the steepest feasible direction strategy. Define

b^k = (b_{up} + b_{low}) / 2 .    (9)

It can be seen that when a^k satisfies the KKT conditions of (1), b^k is equal to b in (2). In fact, b^k can be used in the single-threshold criterion to avoid the problem in the original SMO. Furthermore, b^k can be calculated at the end of working set selection without any extra procedure.
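The two-threshold quantities of Eqs. (7)-(9) can be computed directly from the gradient. The sketch below (our own helper, not from the paper) assumes that F_t = y_t ∇W(a^k)_t is already available as an array.

```python
import numpy as np

def bias_from_two_thresholds(F, y, alpha, C):
    """b_up, b_low, the working set (i, j) and b^k of Eqs. (7)-(9)."""
    I0 = (alpha > 0) & (alpha < C)
    I_up  = I0 | ((y ==  1) & (alpha == C)) | ((y == -1) & (alpha == 0))
    I_low = I0 | ((y ==  1) & (alpha == 0)) | ((y == -1) & (alpha == C))
    up_idx, low_idx = np.where(I_up)[0], np.where(I_low)[0]
    i = up_idx[np.argmin(F[up_idx])]      # arg min over I_up
    j = low_idx[np.argmax(F[low_idx])]    # arg max over I_low
    b_up, b_low = F[i], F[j]
    b_k = 0.5 * (b_up + b_low)            # Eq. (9)
    return b_up, b_low, i, j, b_k
```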
3.4 Implementation
The minimal-violation-based cache can be integrated into any SMO implementation that uses the steepest feasible direction strategy for selecting the working set, such as LIBSVM or SVMTorch. Since implementing the minimal-violation-based cache replacement rule and integrating it into SMO is quite simple, the pseudo-code is omitted.
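Since the pseudo-code is omitted in the paper, the following is one possible, purely hypothetical realization of such a cache: kernel rows are stored per sample index, and when the cache is full the row whose current violation V_i is smallest is evicted. The callables `compute_row` and `violation` are placeholders for the solver's own routines.

```python
class MinimalViolationCache:
    """Row cache for kernel evaluations with minimal-violation eviction.

    A sketch, not the authors' implementation: `violation(i)` is assumed to
    return the current V_i of sample i (Eq. (6)), and `compute_row(i)` the
    kernel row K(x_i, .).
    """

    def __init__(self, max_rows, compute_row, violation):
        self.max_rows = max_rows
        self.compute_row = compute_row
        self.violation = violation
        self.rows = {}                        # sample index -> cached kernel row

    def get_row(self, i):
        if i in self.rows:                    # cache hit
            return self.rows[i]
        if len(self.rows) >= self.max_rows:   # cache full: evict minimal violation
            victim = min(self.rows, key=self.violation)
            del self.rows[victim]
        self.rows[i] = self.compute_row(i)    # cache miss: recompute and store
        return self.rows[i]
```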
4 Experimental Results
Two cache replacement algorithms, the minimal violation rule and the LRU rule, are integrated into an SMO implementation without shrinking, written in C++ and compiled with C++ Builder 5.0 without any code optimization. Since the sequence of
working sets can be known in advance in our experiments, the optimal rule is also implemented in MATLAB 5.3 to obtain the best achievable hit ratio. These algorithms are tested on the well-known UCI Adult benchmark data set and its subsets [17]. The experiments are performed on a notebook with a PIII 600 MHz CPU and 128 MB RAM running Windows 98 SE.
4.1 Hit Ratio
To compare the hit ratio of these methods, tests are performed on the first subset of the Adult data set, which contains 1605 samples. A Gaussian RBF kernel with a variance of 10 is used. The cache size varies from 0 rows to 800 rows. C is set to 1, 10 and 100 to simulate cases with different numbers of unbounded (free) support vectors. The stopping tolerance, with the same stopping condition as LIBSVM, is set to 0.001. The results are shown in Figures 1 to 3.

Table 1. Basic information in the tests on Adult-1a
C                                            1       10      100
number of iterations                         925     5817    23045
number of requests                           1850    11634   46090
number of components updated at least once   731     752     764
number of support vectors                    691     691     661
number of bounded support vectors            585     324     55
Table 2. Training time of LRU-based cache and MV-based cache

data set   input   C     cache size (rows)   cache mode   number of samples   number of SVs   training time (s)
Adult-1a   Sparse  1     28                  LRU          1605                691             7.715
Adult-1a   Sparse  1     28                  MV           1605                691             7.551
Adult-1a   Sparse  10    77                  LRU          1605                691             40.555
Adult-1a   Sparse  10    77                  MV           1605                691             33.955
Adult-1a   Sparse  100   158                 LRU          1605                661             119.921
Adult-1a   Sparse  100   158                 MV           1605                661             103.334
Adult-4a   Sparse  1     132                 LRU          4781                1883            44.569
Adult-4a   Sparse  1     132                 MV           4781                1883            40.939
Adult-4a   Sparse  10    269                 LRU          4781                1890            272.905
Adult-4a   Sparse  10    269                 MV           4781                1890            216.579
Adult-4a   Sparse  100   543                 LRU          4781                1904            1091.045
Adult-4a   Sparse  100   543                 MV           4781                1904            935.112
Adult-4a   Full    1     132                 LRU          4781                1883            201.094
Adult-4a   Full    1     132                 MV           4781                1883            169.043
Adult-a    Sparse  1     277                 LRU          32562               11575           2627.035
Adult-a    Sparse  1     277                 MV           32562               11575           2473.280
4.2 Training Time
To compare the training time of the LRU cache and the minimal-violation cache, some additional tests are performed. A Gaussian RBF kernel with a variance of 10 is used, and the stopping tolerance is set to 0.001. The results are given in Table 2.
4.3 Discussion
Performance. From the experimental results, it is very clear that the hit ratio of the minimal-violation-based cache is better than that of the LRU-based cache. It is, of course, worse than that of the optimal cache, but the latter cannot be implemented in practice. The gain in training time of the minimal-violation-based cache is not as large as the gain in hit ratio, especially on the simple problems, since extra time is spent updating the violations and searching for the minimum in the cache. The kernel evaluation is optimized by exploiting the sparseness of the input data, so the extra time spent on the cache is significant compared with the time spent on kernel evaluation: although the kernel evaluation time is greatly reduced, the total time is reduced only a little. However, in tasks with a complex kernel function, a high-dimensional input space and dense input data, kernel evaluation is very time-consuming, and the MV-based cache performs better on such tasks, e.g. Adult-4a with non-sparse input in the experiments.

Relation to Shrinking. The minimal-violation-based replacement algorithm is somewhat similar to the shrinking method proposed by Joachims [5]. In the shrinking method, components sticking to the lower or upper bound for a number of iterations are eliminated from the QP, since they tend to end up as non-support vectors or bounded support vectors. In the minimal-violation-based cache, if a_i stays at a bound for many iterations, its violation is likely to be smaller than that of the unbounded ones, so its item will be deleted from the cache if it is present. Shrinking is quite efficient because kernel evaluations on the eliminated samples are saved and the size of the QP is reduced dynamically. However, the whole QP must be solved again from the current a if the elimination proves to be wrong after the reduced QP has been solved. The cost of a wrong deletion under the minimal violation replacement method is much smaller, because items deleted from the cache can be recomputed at any time. Another important difference is that shrinking only targets the current non-support vectors and bounded support vectors, whereas the minimal-violation-based replacement algorithm considers all the samples. As shown in Figure 4, besides the middle part of the solving procedure, the hit ratio is also improved in the end phase, where only the unbounded support vectors are involved. The minimal-violation-based replacement method also differs from shrinking in the calculation of b^k.
Fig. 1. Hit ratio of optimal rule (solid), MV rule (dot) and LRU rule (dash) when C is equal to 1, as a function of cache size
Fig. 2. Hit ratio of optimal rule (solid), MV rule (dot) and LRU rule (dash) when C is equal to 10, as a function of cache size
Fig. 3. Hit ratio of optimal rule (solid), MV rule (dot) and LRU rule (dash) when C is equal to 100, as a function of cache size
Fig. 4. Hit number of optimal rule (top), MV rule (middle) and LRU rule (bottom) in every 100 iterations during training adult-1a with C being 1 and cache size being 40 rows
5 Conclusion
A new cache replacement algorithm based on minimal violation has been presented in this paper. The hit ratio of the cache is improved compared with that of the LRU cache generally used in SVM training software, and consequently the training time is also reduced to some extent. The improvement will be greater in tasks with a complex kernel function, a high-dimensional input space and dense input data. This cache replacement method can be applied not only to SMO but to any decomposition algorithm that uses the steepest feasible direction strategy for working set selection; its performance in that setting will be tested in the future. Caching and shrinking are both methods for reducing kernel evaluations, and it is interesting that shrinking and the cache replacement algorithm presented here are somewhat similar. The performance of an implementation combining shrinking with the minimal-violation-based cache will also be tested.
References
1. Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 2000
2. Christopher J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 1998, 2(2): 1-43
3. Kristin P. Bennett, Colin Campbell. Support Vector Machines: Hype or Hallelujah? SIGKDD Explorations, 2000, 2(2): 1-13
4. Osuna E., Freund R., Girosi F. An Improved Training Algorithm for Support Vector Machines. In Principe J., Gile L., Morgan N., et al. (eds.), Proceedings of the 1997 IEEE Workshop on Neural Networks for Signal Processing. IEEE, 1997. 276-285
5. Thorsten Joachims. Making Large-Scale Support Vector Machine Learning Practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola (eds.), Advances in Kernel Methods - Support Vector Learning, pages 169-184, Cambridge, MA, 1999, MIT Press
6. John C. Platt. Using Analytic QP and Sparseness to Speed Training of Support Vector Machines. In Advances in Neural Information Processing Systems 11, M. S. Kearns, S. A. Solla, D. A. Cohn (eds.), MIT Press, 1999
7. S. S. Keerthi, E. G. Gilbert. Convergence of a Generalized SMO Algorithm for SVM Classifier Design. Machine Learning, vol. 46, nos. 1-3, Jan. 2002, pp. 351-360
8. Hush D., Scovel C. Polynomial-Time Decomposition Algorithms for Support Vector Machines. LANL Technical Report LA-UR-00-3800, Los Alamos National Laboratory, 2000
9. Pérez-Cruz F., Alarcón-Diana P., Navia-Vázquez A., et al. Fast Training of Support Vector Classifiers. In Leen T., Dietterich T., Tresp V. (eds.), Advances in Neural Information Processing Systems 13. Cambridge, MA: MIT Press, 2001. 734-740
10. Chih-Chung Chang, Chih-Jen Lin. LIBSVM: A Library for Support Vector Machines (Version 2.3), 2001. http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf
11. R. Collobert, S. Bengio. SVMTorch: A Support Vector Machine for Large-Scale Regression and Classification Problems. Journal of Machine Learning Research, 1:143-160, 2001
12. S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, K. R. K. Murthy. Improvements to Platt's SMO Algorithm for SVM Classifier Design. Neural Computation, Vol. 13, March 2001, pp. 637-649
13. Pavel Laskov. Feasible Direction Decomposition Algorithms for Training Support Vector Machines. Machine Learning, vol. 46, nos. 1-3, Jan. 2002, pp. 315-349
14. Gary William Flake, Steve Lawrence. Efficient SVM Regression Training with SMO. Machine Learning, vol. 46, nos. 1-3, Jan. 2002, pp. 271-290
15. Jianmin Li, Bo Zhang, Fuzong Lin. A New Strategy for Selecting Working Sets Applied in SMO. To appear in ICPR 2002
16. Dennis DeCoste, Bernhard Schölkopf. Training Invariant Support Vector Machines. Machine Learning, vol. 46, nos. 1-3, Jan. 2002
17. http://www.research.microsoft.com/~jplatt/adult.zip
Optimization of the SVM Kernels Using an Empirical Error Minimization Scheme

Nedjem-Eddine Ayat^{1,2}, Mohamed Cheriet^1, and Ching Y. Suen^2

1 LIVIA, ÉTS, 1100, rue Notre Dame Ouest, Montreal, H3C 1K3, Canada
2 CENPARMI, Concordia University, 1455 de Maisonneuve Blvd West, Montreal, H3G 1M8, Canada
{ayat}@livia.etsmtl.ca, {cheriet}@gpa.etsmtl.ca, {suen}@cenparmi.concordia.ca
Abstract. We address the problem of optimizing kernel parameters in Support Vector Machine modelling, especially when the number of parameters is greater than one, as in polynomial kernels and in KMOD, our newly introduced kernel. The present work is an extended experimental study of the framework proposed by Chapelle et al. for optimizing SVM kernels using an analytic upper bound of the error. Our optimization scheme, however, minimizes an empirical error estimate using a Quasi-Newton technique. The method has been shown to reduce the number of support vectors along the optimization process. In order to assess our contribution, the approach is further used to adapt the KMOD, RBF and polynomial kernels on synthetic data and on the NIST digit image database. The method has shown satisfactory results with much faster convergence in comparison with the simple gradient descent method. Furthermore, we also experimented with two more optimization schemes, based respectively on the maximization of the margin and on the minimization of an approximated VC dimension estimate. While both of these objective functions are minimized, the error is not. The corresponding experimental results we carried out show this shortcoming.
1 Introduction
Any learning machine embeds hyper-parameters that may deteriorate its performance if not well chosen. These parameters have a regularization effect on the optimization of the objective function. For any classification task, picking the best values for these parameters is a non-trivial model selection problem that requires either an exhaustive search over the space of hyper-parameters or optimized procedures that explore only a finite subset of the possible values. The same consideration applies to the Support Vector Machine, in which the kernel parameters and the trade-off parameter (C) are hyper-parameters whose optimal values need to be found. Until now, many practitioners have selected these parameters empirically, by trying a finite number of values and keeping those that provide the lowest test error. Our primary interest in this work is to optimize the SVM classifier for any kernel function used. More specifically, we seek an automatic
method for model selection that optimizes the kernel profile depending on the data. This allows better performance for the classifier, as well as rigorous comparisons among different kernel results. In the literature, many published works consider the problem of hyper-parameter adaptation in Neural Networks; the concern there is to automatically fix those parameters involved in the training process which are not, however, explicitly linked to the resulting decision function. Much fewer works have dealt with SVM classifiers. Recently, Chapelle et al. [13] proposed a gradient descent framework for optimizing kernels by minimizing an analytic upper bound on the generalization error. This bound equals the ratio of the radius of the sphere enclosing the data to the separation margin. Whereas the computation of the margin is fairly straightforward given the trained model, estimating the radius of the enclosing sphere requires a quadratic optimization procedure that may be difficult to carry out, especially for large-scale problems handling tens of thousands of data entries and hundreds of support vectors. This is a serious handicap to applying the aforementioned scheme to relatively large databases such as the NIST digit image database. One more drawback in [13] is that the gradient descent method requires several iterations, namely hundreds, before convergence is ensured. This is due to the fact that the gradient direction is not necessarily the best search direction for the algorithm to reach a minimum. Clearly, a gradient descent method will waste a lot of time before multiple SVMs are optimized for a 10-class problem. Instead, the alternative method that we investigate consists of optimizing the SVM hyper-parameters by minimizing an empirical estimate of the error on a validation set. The present work is an experimental study of this method. In addition, we make use of a Quasi-Newton variant to adapt the hyper-parameters because of its convergence efficiency. This optimization algorithm proceeds in two steps: computing the search direction and computing the step along this direction. Furthermore, in order to compute the error estimate, we consider probabilistic outputs for the SVM by fitting a two-parameter logistic function to the classifier output [10]. This provides a posterior probability measure that allows a good estimate of the probability of error over the validation data points. Through the use of such a function, the estimated error on the validation set is also smoothed, so one can compute its gradient for the optimization. The experimental results we obtained on a toy classification problem show a faster convergence for our method. We also find that the presented scheme minimizes the number of support vectors, which enters a pessimistic upper bound on the generalization error. We have compared the method to two minimization procedures based on the maximization of the margin and on the minimization of an estimate of the VC dimension [16, 20]. The results show that these methods are ineffective at reducing the generalization error. In Section 2, we present the standard Support Vector Machine formulation, introduce some of the state-of-the-art kernels usually used for classification, and briefly recall the properties of our previously introduced kernel, KMOD. In Section 3, a detailed description of the optimization
method is given, and the posterior probability mapping algorithm is described. Section 3.3 presents an experimental study of the method on synthetic two-class data. In Section 4, a real-life ten-class problem is considered, and the strategy of learning, optimization and classification on NIST digit images is detailed. Finally, a summary of the work is given in Section 5.
2 SVMs for Classification: Background
The SVM is a structural risk minimization based classifier which finds an optimum decision region. Let us have a data set \{x_i, y_i\}, i = 1, \dots, l, where y_i \in \{-1, 1\} represents the label of an arbitrary example x_i \in \mathbb{R}^d, d being the dimension of the input space. Also, let us define a linear decision surface by the equation f(x) = w \cdot x + b = 0. The original formulation of the support vector machine algorithm seeks a linear decision surface that maximizes the margin between positive and negative examples. This can be achieved through the minimization of \|w\|^2, whose solution is

w = \sum_{i=1}^{l} \alpha_i y_i x_i    (1)

with the constraint \sum_i \alpha_i y_i = 0. The parameters \alpha_i are found by maximizing the dual objective [19, 17]

L_D = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i \cdot x_j .    (2)
However, in real-life classification problems, the algorithm as stated above is unable to achieve perfect separation between the two classes, especially in the case of noisy data. Cortes et al. [6] slightly modified the model by adding a heuristic that accounts for accepting misclassified examples while penalizing them. Mathematically, this does not imply any major modification except that \alpha_i must be upper bounded as 0 \le \alpha_i \le C, where C is a penalization parameter, also called the trade-off parameter. An infinite value of C yields a classifier that seeks perfectly separated data. In another work, Boser et al. [5] added an important feature that allows a different insight into support vector theory: they enabled these models to produce complex nonlinear boundaries in the original space. The technique consists of projecting the data into a higher order space, of possibly infinite dimension, through a mapping function \phi. This function, however, must keep the inner product formulation of Eq. 2 usable. Furthermore, the explicit analytical form of the function \phi need not be known: only the expression of the pairwise inner product k(x, y) = \phi(x) \cdot \phi(y) in the augmented space has to be defined. This dot product defines a particular subset of kernels called Mercer kernels [7] (see Table 1). The dual objective of Eq. 2 then becomes

L_D = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, k(x_i, x_j) .
Table 1. Common kernels

Kernel             Formula
linear             k(x, y) = x \cdot y
sigmoid            k(x, y) = \tanh(a \, x \cdot y + b)
polynomial         k(x, y) = (1 + x \cdot y)^d
RBF                k(x, y) = \exp(-a \|x - y\|^2)
exponential RBF    k(x, y) = \exp(-a \|x - y\|)
Recently, we introduced a new two-parameter kernel, namely a Kernel with Moderate Decreasing (KMOD) [2, 3], whose analytic expression is

\mathrm{kmod}(x, y) = a \left[ \exp\!\left( \frac{\gamma}{\|x - y\|^2 + \sigma^2} \right) - 1 \right] ,    (3)

where a is a normalization constant equal to \frac{1}{\exp(\gamma / \sigma^2) - 1}. The parameters \gamma and \sigma are two positive reals that control the decreasing behavior of the kernel. In particular, \sigma is a scale space parameter that defines a gate surface around zero, whereas \gamma controls the decreasing speed around zero. The -1 bias ensures that the kernel converges toward 0 at infinity (see Figure 1). This kernel has a particular behavior that allows a varying speed of decay around zero and a moderate decrease toward infinity. We also investigated the duality between similarity in feature space and the spatial behavior of kernels, and showed that KMOD allows good discrimination between patterns while preventing the closeness information of far-apart points from vanishing. Although empirical, the experiments we performed using KMOD were encouraging (see Table 2). On the UCI Breast Cancer database, for example, KMOD achieves the best results yet established.
σ are two positive reals that control the decreasing behavior of the kernel. In particular, σ is a scale space parameter that defines a gate surface around zero whereas γ controls the decreasing speed around zero. The -1 bias ensures the kernel converges toward 0 at infinity (see Figure 1). This kernel has a particular behavior that allows a varying speed of decay around zero and a moderate decrease toward infinity. We also undertook the existing duality between similarity in feature space and spatial behavior of kernels. We show that KMOD allows at once good discrimination between patterns while maintaining the closeness information of far apart points from vanishing. Although empirical, some of the experiments we perform using KMOD were encouraging (see Table 2). On the Breast Cancer Database of UCI, for example, KMOD achieves the best results yet established.
−4
1 1.4
kmod expo rbf
0.9
x 10
KMOD intersecting exp. RBF at far distance 1.2
kmod expo rbf
0.8
1
0.6
correlation
correlation
0.7
0.5
0.4
0.3
0.8
0.6
0.4
0.2 0.2
0.1
0
0
5
10
15
20
25
30
spatial distance
a
35
40
45
50
0 900
1000
1100
1200
1300
1400
1500
spatial distance
b
Fig. 1. a Correlation in feature space vs. spatial distance in input space; b KMOD preserving the far points closeness information
Table 2. Performance of KMOD (column 1) versus state-of-the-art classifiers on the Breast Cancer database [15, 3]. A plus sign beside a value indicates that the error is significantly different from KMOD's

classifier   KMOD svm    RBF svm     RBF network   Adaboost     Adaboost Reg
error        25.4±4.4    26.0±4.7    +27.6±4.7     +30.4±4.7    26.5±4.5
In order to assess KMOD, as well as any SVM kernel, we undertook the problem of automatically selecting the best kernel parameters, so that practitioners can compare and choose suitable kernels given different application data. Next, we address a systematic method for model selection in Support Vector Machines.
3 Model Selection
Let us assume that any kernel k depends on one or several parameters encoded into a vector \theta = (\theta_1, \dots, \theta_n). Support Vector Machines then consider a class of functions parameterized by \alpha, b and \theta:

f_{\alpha, b, \theta}(x) = \sum_i \alpha_i y_i k_\theta(x, x_i) + b ,    (4)
where \alpha is the vector containing the multipliers \alpha_i and y_i represents the target of support vector x_i (the multipliers of non-support vectors are zero). Optimizing the SVM hyper-parameters is a model selection problem that requires adapting multiple parameter values at the same time. The parameters to tune are those embedded in the kernel function, such as the \sigma parameter of an RBF kernel or the couple (\gamma, \sigma) of the KMOD kernel (see Table 1). In addition, the optimization may also consider the trade-off parameter C, which can have a strong effect on the SVM behavior for hard classification tasks. Usually, model selection is done by minimizing an estimate of the generalization error or, by default, a known upper bound of that error. As in the Neural Network area, empirical estimation of the generalization error suggests using either a validation error estimate, a procedure which requires reducing the amount of data used for learning, or a leave-one-out error estimate (the extreme case of K-fold cross-validation), which gives a precise estimate of the true error [4]. However, leave-one-out error estimation is time-consuming and may be avoided if enough data can be retained for the validation set: the larger the validation set, the tighter the variance of the estimated error. This estimate is given by

T = \frac{1}{N} \sum_i \Psi(-y_i f(x_i)) ,    (5)

where \Psi is a step function (the Heaviside function) and N is the size of the validation set. A gradient descent approach assumes the error estimate in
Eq. 5 to be differentiable; unfortunately, the step function is not. To circumvent this drawback it is possible to use a contracting function of the form \Psi(x) = \frac{1}{1 + \exp(-Ax + B)} [10, 13]. A very convenient way to choose the values of the constants A and B is to estimate posterior probabilities [10], whereby a smooth approximation of the test error is obtained. Next, we describe the method.
3.1 Probability Estimation: A Two-Parameter Identification Procedure
Recently, many researchers have considered the problem of probability estimation for SVM classifiers. The methods they proposed are of varying levels of complexity. Sollich, for example, proposed in [18] a Bayesian framework to tackle two of the outstanding challenges in SVM classification: how to obtain predictive class probabilities rather than the conventional deterministic class label predictions, and how to tune hyper-parameters. This very attractive method interprets Support Vector Machines as maximum a posteriori solutions to inference problems with Gaussian process priors. Earlier, Wahba proposed in [8] to use a logistic function of the form

P(y = 1 \mid x) = \frac{1}{1 + \exp(-f(x))} ,

where f(x) is the SVM output (without threshold) and y = \pm 1 represents the target of the data example x. Platt in [10] used a slightly modified logistic function given by

P(y = 1 \mid f) = \frac{1}{1 + \exp(Af + B)} .    (6)

The two-parameter contracting function he proposed maps the SVM output values to the corresponding posteriors. The method is easy to implement and requires a non-linear optimization of the pair of parameters (A, B) in such a way that the negative log-likelihood

-\sum_i \big[ t_i \log(p_i) + (1 - t_i) \log(1 - p_i) \big]

over the validation data points is minimized, where p_i = \frac{1}{1 + \exp(Af_i + B)} is the inferred posterior probability and t_i = \frac{y_i + 1}{2}, so that t_i = 1 if the input vector x_i belongs to class C_1 and t_i = 0 if x_i belongs to class C_2. For the rest of the paper, we shall refer to y_i as the bipolar target (+1, -1) and to t_i as the binary target (1, 0). The constant B in Eq. 6 allows the threshold probability of 0.5 to correspond to a nonzero value of the SVM output. Using a contracting function such as the one in Eq. 6 makes it possible to estimate the probability of error on the data and circumvents the need for the Heaviside function in Eq. 5; it follows that the gradient of the error can now be approximated. We tried out the above-mentioned algorithm using a Newton method to adapt the sigmoid of Eq. 6 [10]. Figure 2 shows the separation frontier and the inferred contracting function for a linear SVM trained on the iris3v12 data set. Below, we shall integrate this two-parameter identification procedure into the overall optimization process.
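As a rough illustration of the (A, B) fit of Eq. (6), the sketch below minimizes the negative log-likelihood with a general-purpose SciPy optimizer rather than the Newton iteration of [10]; the helper name and starting values are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_sigmoid(f, t):
    """Fit P(y=1|f) = 1 / (1 + exp(A f + B)) to SVM outputs f and binary targets t."""
    f, t = np.asarray(f, dtype=float), np.asarray(t, dtype=float)

    def nll(params):
        A, B = params
        p = 1.0 / (1.0 + np.exp(A * f + B))
        p = np.clip(p, 1e-12, 1 - 1e-12)          # numerical safety
        return -np.sum(t * np.log(p) + (1 - t) * np.log(1 - p))

    res = minimize(nll, x0=np.array([-1.0, 0.0]), method="Nelder-Mead")
    return res.x  # A, B
```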
Fig. 2. a Separation frontier; b Posterior probability versus SVM output value
3.2 Kernel Optimization: A Quasi-Newton Scheme
Different approaches to non-linear optimization have shown varying levels of efficiency. Among these methods (gradient descent, conjugate gradient, Newton methods, ...), the Quasi-Newton approach has shown fast convergence and stable optimization for Neural Networks. Even though a gradient descent procedure is sufficient to find satisfactory weight values for an MLP or an RBF network, adapting possible hyper-parameters requires an estimation of the gradient after each training process, after which a downhill step toward a local minimum can be taken [12]. This procedure is time-consuming and needs many training processes before a feasible solution can be found. In fact, this algorithm has two more drawbacks that must be pointed out: a downhill direction toward a minimum is not guaranteed, and the algorithm is very sensitive to noise, so any discontinuity can lead to an arbitrary step along the error surface. The Quasi-Newton procedure circumvents these disadvantages by ensuring a downhill search direction through the use of second-order information, and by computing a feasible amplitude for the step along the search direction. Practically, it proceeds in two steps. First, the search direction is chosen; this step requires approximating the Hessian matrix of the parameters. Second, a line search minimization along the search direction finds the best amplitude for the computed step. These two features may drastically speed up the convergence of the optimization process. We describe below the method used to optimize the SVM kernels. We can formulate the probability of error for a given data example x_i as

E_i = P(y_i \neq z_i) = p_i^{1 - t_i} (1 - p_i)^{t_i} ,

where z_i = \mathrm{sign}(f_i), f_i = f(x_i) is the corresponding SVM output value and p_i is the estimated posterior probability. For a validation set of size N, the average estimate of the error over the validation data set can be written as

E = \frac{1}{N} \sum_{i=1}^{N} E_i = \frac{1}{N} \sum_{i=1}^{N} p_i^{1 - t_i} (1 - p_i)^{t_i} .
To approximate the gradient of the error we shall assume that the current vector of kernel parameter values is sufficiently close to a local minimum. As the derivative of the error w.r.t. \theta vanishes at the minimum, it follows that we can approximate this gradient as

\frac{\partial E}{\partial \theta} = \frac{\partial E}{\partial \alpha} \frac{\partial \alpha}{\partial \theta} ,    (7)

where \alpha = (\alpha_1, \dots, \alpha_k) represents the vector of multiplicative parameters and k equals the number of support vectors. On the other hand, the components \frac{\partial E}{\partial \alpha_j} can be expanded as

\frac{\partial E}{\partial \alpha_j} = \frac{1}{N} \sum_{i=1}^{N} \frac{\partial E_i}{\partial p_i} \frac{\partial p_i}{\partial f_i} \frac{\partial f_i}{\partial \alpha_j} ,    (8)

where

\frac{\partial E_i}{\partial p_i} = -p_i^{1 - t_i} t_i (1 - p_i)^{t_i - 1} + (1 - t_i)(1 - p_i)^{t_i} p_i^{-t_i} = \begin{cases} +1 & \text{if } t_i = 0 \\ -1 & \text{if } t_i = 1 \end{cases}    (9)

The gradient of the posterior probability w.r.t. the raw SVM output is given by

\frac{\partial p_i}{\partial f_i} = -A \, p_i^2 \exp(A f_i + B) ,    (10)

and the gradient of the raw SVM output w.r.t. \alpha_j is given by

\frac{\partial f_i}{\partial \alpha_j} = y_j K_\theta(x_j, x_i) ,    (11)

where y_j = \pm 1 is the bipolar target of the support vector x_j. Once the derivative of the error w.r.t. the multiplier vector \alpha is computed, the next step consists of estimating the derivative of \alpha w.r.t. the hyper-parameters \theta. Notice that we may include the SVM bias b in the vector \alpha, so that \alpha = (\alpha_1, \dots, \alpha_k, b). Moreover, it is shown that
\frac{\partial \alpha}{\partial \theta} = -H^{-1} \frac{\partial H^T}{\partial \theta} \alpha ,    (12)

where H = \begin{pmatrix} K^Y & Y \\ Y^T & 0 \end{pmatrix} and the components K^Y_{ij} = y_i y_j K(x_i, x_j) [13]. The vector Y is the target vector corresponding to the set of support vectors and Y^T is its transpose. H is an (\ell_{SV} + 1) \times (\ell_{SV} + 1) matrix, \ell_{SV} being the number of support vectors. In the following, we shall refer to the matrix H as the kernel's Hessian; this matrix is different from the Hessian used by the Quasi-Newton procedure, which we shall denote H'.
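Putting Eqs. (7)-(12) together, the gradient of the error estimate can be assembled as in the sketch below. This is our own assembly, under the assumption that the derivatives of the kernel Hessian w.r.t. each hyper-parameter are available; all argument names are ours.

```python
import numpy as np

def error_gradient(alpha_b, H, dH_dtheta, K_val, y_sv, t_val, f_val, A, B):
    """Approximate dE/dtheta via the chain rule of Eqs. (7)-(12) (a sketch).

    alpha_b   : (m+1,) multipliers of the m support vectors followed by the bias b
    H         : (m+1, m+1) kernel Hessian [[K^Y, Y], [Y^T, 0]]
    dH_dtheta : sequence of (m+1, m+1) derivatives of H w.r.t. each hyper-parameter
    K_val     : (N, m) kernel values K(x_j, x_i) between support vectors j and
                validation examples i
    y_sv      : (m,) bipolar targets of the support vectors
    t_val, f_val : binary targets and raw SVM outputs on the validation set
    A, B      : sigmoid parameters of Eq. (6)
    """
    N = len(f_val)
    p = 1.0 / (1.0 + np.exp(A * f_val + B))
    dE_dp = np.where(t_val == 0, 1.0, -1.0)           # Eq. (9)
    dp_df = -A * p ** 2 * np.exp(A * f_val + B)       # Eq. (10)
    # Eq. (11): df_i/dalpha_j = y_j K(x_j, x_i); the extra column of ones is df_i/db
    df_dab = np.hstack([K_val * y_sv, np.ones((N, 1))])
    dE_df = (dE_dp * dp_df) / N                       # per-example weights of Eq. (8)
    Hinv = np.linalg.inv(H)
    grad = np.empty(len(dH_dtheta))
    for k, dH in enumerate(dH_dtheta):
        dab_dtheta = -Hinv.dot(dH).dot(alpha_b)       # Eq. (12)
        grad[k] = dE_df.dot(df_dab.dot(dab_dtheta))   # Eq. (7)
    return grad
```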
Herein we give the optimization algorithm using a Quasi-Newton scheme:

1. Initialize \theta to some value.
2. Train the SVM with fixed \theta.
3. Infer the parameters A and B for the given SVM model and validation set (a non-linear optimization procedure).
4. Estimate the probability of error.
5. Calculate the gradient of that error, \partial E(\alpha, \theta) / \partial \theta.
6. Calculate the Hessian H'.
7. Update \theta using \Delta\theta = -\lambda H' \, \partial E(\alpha, \theta) / \partial \theta,

where \lambda is the amplitude of the step along the search direction and H' is an n \times n matrix, n being the dimension of the vector \theta. The line search minimization algorithm used is a variant of the Golden Section Search described in [14]. Besides, it is also possible to choose other criteria for the optimization process. For example, one may consider the minimization of an analytic upper bound on the capacity h (also called the VC dimension [19]) given by

h < R^2 \|w\|^2 ,    (13)

where R is the radius of the smallest sphere enclosing the training data in feature space, and w is determined by the support vector algorithm and is computed in feature space (see Eq. 1). This criterion was previously used by Schölkopf in [16] to choose among different degrees of a polynomial kernel. Chapelle et al. [13] also minimized this criterion, as well as the span bound, using a gradient descent method.
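The seven steps above can be organized as in the following high-level sketch. The three callables are placeholders for the procedures described in this section ("train an SVM with fixed θ", "fit (A, B)", "estimate the validation error"), and a library BFGS routine stands in for the paper's own Quasi-Newton and Golden Section Search implementation.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_hyperparameters(theta0, train_svm, fit_sigmoid, validation_error,
                             max_iter=10):
    """Hypothetical outer loop for the empirical-error minimization scheme."""
    def objective(theta):
        model = train_svm(theta)               # step 2
        A, B = fit_sigmoid(model)              # step 3
        return validation_error(model, A, B)   # step 4: estimate of E

    # Steps 5-7 (gradient, Hessian approximation, line search) are delegated
    # here to a generic Quasi-Newton (BFGS) routine for brevity.
    result = minimize(objective, x0=np.asarray(theta0, dtype=float),
                      method="BFGS", options={"maxiter": max_iter})
    return result.x
```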
3.3 Experiments
The core of the SVM solver used is based on the shrinking algorithm implemented in SVM^light and detailed in [9]. To invert the kernel Hessian H in Eq. 12, we used an LU decomposition method. This procedure has a computational complexity of O(n^3), n being the dimension of the square matrix H. In addition, some numerical problems can be encountered if H is badly conditioned, i.e. if some of the matrix eigenvalues are very close to zero. In fact, since H is only guaranteed to be positive semi-definite, it may happen that the Hessian is singular. One solution consists of adding a very small constant to the diagonal elements of H. Another consists of using a variant of the Cholesky decomposition algorithm adapted to semi-definite matrices [21]. This problem was mainly observed for spread-out kernels, i.e. very small values of \sigma in the case of the KMOD kernel and of a in the case of the RBF kernel (Table 1). For the experiments, we first considered synthetic two-class data: we produced 2000 data points from two overlapping spherical Gaussians with known Bayes error (0.017). Each data point has two attributes and the classes are balanced. For the optimization, we retained a subset of 1000 examples for training, 500 examples for validation and 500 examples for testing. We ran the simulation using the KMOD kernel with an initial vector of values for (\gamma, \sigma) equal
to (1/2.72, 1/2.72) and an initial value of C equal to 1000 (the optimization of C was carried out as in [13]). In order to evaluate the convergence behavior of the empirical error minimization method, we first ran the simulation using a gradient descent method. The estimate of the error to minimize is computed on the validation examples after each adaptation of the hyper-parameters, the SVM model having first been trained on the training data. Along the optimization process, we recorded the estimate of the error, the margin value, the number of support vectors and the test error. The latter is the ratio of the number of misclassified examples to the total number of test examples. The obtained plots are reported in Figure 3. Figure 3a shows that the algorithm minimizes the estimate of the probability of error, even though some noise can be observed: for example, around iteration 60 we observe a small increase of the error estimate. This confirms the relative sensitivity of gradient descent to a noisy error surface. In fact, performing a step assumes that the trained model, i.e. the support vectors and the multipliers \alpha_i, remains unchanged along the search direction path. However, large gradient values across the error surface invalidate this hypothesis, so a downhill descent is not ensured. The test error plot of Figure 3b
Fig. 3. Optimizing hyper-parameters using a gradient descent method and KMOD kernel: a Variation of the objective function; b Variation of the test error; c Variation of support vectors number; d Variation of the margin
consolidates the adopted objective function. Notice the correspondence between the minima of the objective and those of the test error, as well as the noise on the test error curve. Despite this relative instability, the error is reduced to 0.022 after 100 iterations of the optimization procedure. In Figure 3c we report the variation of the number of support vectors. The curve shows that the optimization criterion considerably reduces the number of support vectors, from 187 to 42. This is a key feature, since it is known that

\ell_{SV} / l    (14)

is a pessimistic bound on the true generalization error that one should minimize to ensure good performance (where \ell_{SV} is the number of support vectors and l is the number of training data). The curve in Figure 3d shows the variation of the margin along the optimization procedure; notice the relative increase of its value in the last iterations. We also experimented with the Quasi-Newton method on the same classification task. We report in Figure 4 the corresponding plots, obtained for an initial C = 1000. The curves show that the algorithm is able to reduce the error while converging in very few iterations: the procedure converges after only 5 iterations. Plots a and d of the same figure show that the objective function, as well as the test error, is minimized after four iterations. The final test error is 0.022, the same value as that achieved by the gradient descent procedure. Notice the reduction of the number of support vectors after just one iteration of the optimization procedure (Figure 4b); the algorithm stabilizes at approximately 40 support vectors. Figure 4c shows the variation of the classification error on the validation examples. Moreover, notice that the objective being minimized never increases during the optimization. Besides, we chose to minimize separately two more criteria. The first is the VC dimension bound, in which we approximate the squared radius R^2 by \max_{i \in SV} \|\Phi(x_i) - \Phi(o)\|^2. The second criterion is the margin value. We report in Figures 5 and 6 the corresponding variation of the objective function, the test error and the number of support vectors for both criteria. While the objective functions are minimized, neither criterion reduces the test error. In fact, the number of support vectors even increases from approximately 180 to 700, which dramatically increases Vapnik's bound (given in Eq. 14). One may conclude that margin maximization alone is not suitable for our purpose. This is to be expected, since the data distribution is not taken into account without a precise estimate of the radius R of the enclosing sphere. On the other hand, the approximation of the radius we considered does not reduce the error either. This leads us to the conclusion that the approximated VC dimension is a very pessimistic upper bound on the SVM capacity h and is an ineffective criterion for the optimization. By contrast, the empirical estimate computed on a validation set lowers the error, with an impressive reduction of the number of support vectors as well; the classification time is thus greatly decreased.
Fig. 4. Optimizing hyper-parameters using a Quasi-Newton method and KMOD kernel: a Variation of the objective function; b Variation of support vectors number; c Variation of the validation error; d Variation of the test error
Fig. 5. Minimizing \|w\| using a gradient descent method and KMOD kernel: a Variation of the objective function; b Variation of the test error; c Variation of support vectors number
4 Hand-Written Digits Recognition
The Support Vector Machine is a binary classifier, useful for two-class data only. However, k-class pattern recognition problems (with k \ge 3), such as the digit recognition task, can be solved using a voting scheme based on combining many binary decision functions. One possible approach is to consider a collection of k binary classification problems: k classifiers are constructed, one for each class, the ith classifier building a hyper-plane between class i and the k-1 other classes, and a majority vote across the classifiers' hyper-planes is then applied to classify a new example. Alternatively, k(k-1)/2 classifiers can be constructed, separating the classes from each other, and an appropriate voting scheme is used similarly. Clearly, a digit recognition system using this strategy requires building 45 different models, one for each pair of classes. This scheme was already used to solve multi-class recognition problems with linear decision functions, as in the Ho-Kashyap classifier. It is commonly referred to as the "Pairwise strategy", in contrast to the well-known "One Against Others strategy" [11, 3]. In order to test the optimization method, we used the NIST digit image database along with different kernel models, namely the KMOD, RBF and polynomial kernels, and considered only the pairwise strategy of learning. For that, we carried out 45 training processes to build all the pair models. During classification, an appropriate combination scheme consists of finding the class k for which all the pair models (k, j) with 0 \le j \le 9 have a positive output; the example to classify is rejected if no such class k is found (a sketch of this rule is given at the end of this section). The optimization of the kernel parameters is done on each pair model following the scheme already described (cf. §3.2). As a result, each SVM has its own kernel parameter values, so the kernel profiles vary depending on the pair of classes to separate. This is a strong feature that obviously improves the resulting decision frontiers. We used a subset of 18,000 images from the hsf 123 part for training, 2,000 supplementary images for validation and 10,000 images from the hsf 7 part for testing. From each image we extract 272 features that well characterize local and
Fig. 6. Minimizing the approximated VC dimension using a gradient descent method and KMOD kernel: a Variation of the objective function; b Variation of the test error; c Variation of support vectors number
morphological shapes; these values are presented to every SVM [1]. In order to assess the method, we decided to start the optimization from the best kernel parameters previously obtained empirically in [3]. The duration of training varies depending on the kernel used, its parameter values, the trade-off parameter C and, obviously, the size of the data. On average, the entire training took about 70 hours on a SUN Ultra-SPARC at 500 MHz with 256 MB of memory. We counted 2 to 9 iterations for the Quasi-Newton procedure to converge on each pair model. For all experiments, we considered an initial value of C equal to 1000. Table 3 shows the testing rates obtained before and after the optimization. Notice the increase in recognition rate of nearly 0.5% for the optimization strategy. Note also the considerable reduction of the number of support vectors for the optimized system, which shrinks the complexity of the model and lowers the bound on the generalization error. Moreover, it is worth mentioning that KMOD does slightly better than the other kernels in general. In order to assess the significance of the results we performed a z-normal test between KMOD and the other kernels. This test does not take into account the variability across different testing sets; we assume that 10,000 testing examples are sufficient to overcome this limitation. The plus sign beside the values in the table indicates whether or not the performance is significantly different from that of KMOD. Finally, recall that KMOD is significantly better than the RBF kernel and the polynomial kernel of degree 3 (a statistical test at the 0.95 confidence level was carried out).
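The pairwise combination rule used in this section (accept class k only if every pair model (k, j) votes for k, otherwise reject) can be sketched as follows. The dictionary of callables `pair_models` is a hypothetical stand-in for the 45 trained SVMs; it is assumed that `pair_models[(k, j)](x)` is positive when x is assigned to class k rather than class j.

```python
def classify_pairwise(x, pair_models, n_classes=10):
    """One-against-one combination: return the class k for which all pair
    models (k, j) give a positive output, or None to reject the example."""
    for k in range(n_classes):
        if all(pair_models[(k, j)](x) > 0
               for j in range(n_classes) if j != k):
            return k
    return None   # rejected: no class wins against all the others
```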
5 Summary
In a multi-class classification problem, the distribution of the data can vary widely from one class to another. It is thus very important to fit the inferred decision frontiers to the classes, i.e. one must select the appropriate model for each classification task with respect to the difficulty of separating the data. This is particularly true for the SVM classifier, which embeds hyper-parameters for which optimal values must be found. We proposed an empirical error based optimization that uses a Quasi-Newton method to adapt the parameters. We have shown experimentally that the criterion we optimize minimizes the number of support vectors. Moreover, the Quasi-Newton approach proved to converge
Table 3. Testing recognition rates and average number of support vectors (per model) on the NIST database using the Quasi-Newton method

Kernel              Optimized system          Not optimized system
                    recog. rate    SV         recog. rate    SV
KMOD                98.98          198        98.56          527
polynomial (d=4)    98.81          203        98.40          513
polynomial (d=3)    +98.25         385        +97.88         677
RBF                 +98.51         222        +98.03         540
much faster than the simple gradient descent. It also prevents any divergence of the optimization for badly conditioned Hessian matrices. We also proposed an optimization scheme applied to a multi-class task; for this purpose, we adopted a pairwise strategy of learning. On the NIST database, the optimization improves our previously obtained recognition rate by 0.5%. Finally, it would be interesting to test the optimization method on more difficult data so that one can study and compare the behavior of different kernels.
Acknowledgment This research was supported by the NSERC of Canada and the FCAR program of the Ministry of Education of Quebec.
References
[1] N. E. Ayat, M. Cheriet, and C. Y. Suen. Un système neuro-flou pour la reconnaissance de montants numériques de chèques arabes. In CIFED, pages 171-180, Lyon, France, Jul. 2000.
[2] N. E. Ayat, M. Cheriet, and C. Y. Suen. KMOD - a new support vector machine kernel for pattern recognition. Application to digit image recognition. In ICDAR, pages 1215-1219, Seattle, USA, Sept. 2001.
[3] N. E. Ayat, M. Cheriet, and C. Y. Suen. KMOD - a two-parameter SVM kernel for pattern recognition. To appear in ICPR 2002, Quebec City, Canada, 2002.
[4] Y. Bengio. Gradient-based optimization of hyper-parameters. Neural Computation, 12(8):1889-1900, 2000.
[5] B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, 1992.
[6] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[7] R. Courant and D. Hilbert. Methods of Mathematical Physics. Interscience, 1953.
[8] G. Wahba. The bias-variance trade-off and the randomized GACV. Advances in Neural Information Processing Systems, 11(5), November 1999.
[9] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, chapter 11. 1999.
[10] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3), October 1999.
[11] U. Kreßel. Pairwise classification and support vector machines. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, chapter 15, pages 255-268. 1999.
[12] J. Larsen, C. Svarer, L. N. Andersen, and L. K. Hansen. Adaptive regularization in neural network modeling. In Neural Networks: Tricks of the Trade, pages 113-132, 1996.
[13] O. Chapelle and V. Vapnik. Choosing multiple parameters for support vector machines. Advances in Neural Information Processing Systems, 03(5), March 2001.
[14] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C, the Art of Scientific Computing. Cambridge University Press, second edition, 1992.
[15] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 43(3):287-320, 2001.
[16] B. Schölkopf. Support Vector Learning. PhD thesis, Universität Berlin, Berlin, Germany, 1997.
[17] B. Schölkopf, C. Burges, and A. Smola. Introduction to support vector learning. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, chapter 1. 1999.
[18] P. Sollich. Bayesian methods for support vector machines: Evidence and predictive class probabilities. Machine Learning, 46(1/3):21, 2002.
[19] V. Vapnik. The Nature of Statistical Learning Theory. NY, USA, 1995.
[20] V. Vapnik. An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10(5), September 1999.
[21] D. S. Watkins. Fundamentals of Matrix Computations. Wiley, New York, 1991.
Face Detection Based on Support Vector Machines

Dihua Xi and Seong-Whan Lee

Center for Artificial Vision Research, Korea University
Anam-dong, Seongbuk-ku, Seoul 136-701, Korea
{dhxi,swlee}@image.korea.ac.kr
Abstract. Face detection is a key problem in building an automatic face system for tasks such as face recognition and authentication. A number of approaches have been proposed for face detection. Recently, a novel statistical machine learning method, the support vector machine, has been employed. Generally, the current SVM-based methods can be divided into two categories: component-based and whole-face-based. It is difficult for the component-based methods to extract small faces, because not enough information exists for each component. On the other hand, the whole-face-based methods are too computationally expensive to build an effective system. In this paper we present a fast system, named the wavelet-SVM method, to extract faces over a wide range of scales from grey-scale images, or from color images with a preprocessing step using a TSL color model. The system is not only accurate and effective, but is also greatly sped up by applying a TSL B-G color model and a multiresolution wavelet decomposition.
1 Introduction
Human face cognition is currently an active research topic in computer vision. The research has been conducted by computer scientists, statisticians, mathematicians, psychophysicists, neuroscientists, and others. The growing interest in human face detection and recognition is mainly driven by the demand for applications such as video conferencing, intelligent human-computer interfaces, non-intrusive identification and verification for credit cards, automatic teller machine transactions, and more. The related research yields problems that belong to the following categories, whose objectives are briefly outlined:
– Face detection: to determine whether or not there are any faces in an arbitrary given single image, and to return the location and extent of each face if some faces are identified.
– Face tracking: to continuously estimate the location and possibly the orientation of a face in an image sequence in real time, the face having been located in the first frame by another method, such as face detection.
This work was supported by Creative Research Initiatives of the Korean Ministry of Science and Technology.
– Face authentication: to decide whether a test face is identical to a reference face in a database, i.e. to return whether or not the test face is the reference face.
– Face recognition: to find the reference face in a face database that is most similar to a given test face, i.e. to return whom the test face belongs to.

Over the past 20 years many techniques for face detection [1], tracking [2], authentication [3] and recognition [4] have been developed. Face detection and recognition have been studied more extensively than tracking and authentication. Compared to recognition, face detection is a more fundamental problem, which has to be solved for any face cognition system. Face tracking applies only to video sequences and must come after face detection. Face authentication and face recognition are highly interrelated, and some approaches can be applied to both; a well-known example is the so-called dynamic link architecture [3].

An abundance of approaches for face detection have been developed in recent years; recent surveys of face detection methods were given by Yang [5] and Hjelmas [1]. Among this explosive growth of methods, an important one, support vector machines (SVMs) [6, 7], has become a new paradigm for solving these problems. The SVM is a statistical learning method first proposed by Vapnik [6], and is now established as one of the standard tools for machine learning and data mining. In recent years, many researchers have turned their attention to the application of SVMs to face detection [8, 9, 10, 11, 12]. Generally, the SVM-based methods for face detection can be divided into two categories according to the training data used. Component-based methods use component SVMs trained on the corresponding components, such as the eyes, mouth, nose, brows, nose bridge, etc. Whole-face-based methods employ an SVM trained on whole face images. The component-based methods therefore cannot be applied to detect small faces, because not enough information is available to distinguish a component from the background, while the whole-face-based methods are too computationally expensive to process large images, especially when large faces are present.

This paper presents a system, named the wavelet-SVM method, which is based on multiresolution wavelet decomposition and SVMs and draws on the advantages of both categories of SVM-based methods. The extraction of human faces is translated into searching for small faces with a whole-face SVM in the small wavelet sub-images, and then verifying the result with the component-based SVMs in the larger sub-images. The advantages of this system are: i) it keeps the accuracy of both the component-based and whole-face-based methods; ii) it greatly speeds up the system, because a) small SVMs and sub-images at different scales are used, and b) the input vector for the SVMs in the wavelet sub-images is sparse, since most of its components vanish. For color images, the search area is greatly reduced by applying a preprocessing step based on the TSL B-G color model. The experimental results show that our method keeps the performance of SVM-based methods while being much faster than other methods.
The rest of this paper is organized as follows: Section 2 provides the basic notation of support vector machines (SVMs) and a survey of face detection methods based on SVMs. Our face detection system, based on multiresolution wavelet decomposition and SVMs, is proposed in Section 3, together with a preprocessing step using the TSL B-G color model that reduces the search area when the method is applied to color images. The experimental results are illustrated in Section 4. Finally, the summary and conclusions are given in Section 5.
2 Related Works on Face Detection Based on SVMs
The problem of face detection is a central issue which cannot be bypassed in any automatic face cognition system. To identify all faces in an image with a complex background, the basic task is to decide whether a given rectangular region is a face or not. A good solution of the problem involves building a classifier by training the system with a machine learning method on a database. The SVM method, developed in the last decade, is a novel and efficient tool for both training and classification. The training on examples plays an important role in object classification. Most methods for training a classifier, such as neural networks [13] and radial basis function (RBF) networks, are based on the principle of empirical risk minimization, i.e. minimizing the training error. As a result, the classifier obtained may be entirely unsuitable for classifying unseen test patterns, although it may achieve the lowest training error. In contrast, the support vector machine is an implementation of the structural risk minimization principle proposed by Vapnik [6, 7], and many publications and much progress have appeared recently. In this section, we present an overview of face detection methods based on SVMs. We first sketch the basic notation and development of SVM methods, and then concentrate on an overview of current face detection by SVMs.
2.1 Fundamentals of Support Vector Machines
Given a training set of N data points \{(x_i, y_i)\}_{i=1}^{N} with input pattern x_i \in \mathbb{R}^n and output y_i \in \{-1, +1\} indicating the class, suppose some hyperplanes w^T x + b = 0 (called separating hyperplanes) can separate the positive from the negative examples. The goal of SVM training is to find the optimal hyperplane, the one maximizing the margin between the two classes, defined as the perpendicular distance from the separating hyperplane to the examples of each class nearest to it. The optimal hyperplane can be estimated by minimizing \|w\|^2, subject to the constraints

w^T x_i + b \ge +1, \quad \text{if } y_i = +1,
w^T x_i + b \le -1, \quad \text{if } y_i = -1.    (1)

These can be combined into one set of inequalities:

y_i (w^T x_i + b) - 1 \ge 0, \quad \forall i .    (2)
It has been proven that the optimal hyperplane is unique (Theorem 10.1 in [7]). The minimization of ||w||^2 can be transferred to a Lagrange dual problem: maximize

   Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j x_i^T x_j     (3)

subject to (i) α_i ≥ 0; (ii) Σ_i α_i y_i = 0. This can be solved by Kuhn-Tucker theory. The SVM algorithm controls the solution complexity irrespective of dimensionality: only some data points, i.e. the support vectors, are used in the actual solution. Once we have trained a support vector machine, for a given test pattern x we simply determine on which side of the hyperplane it lies and assign the corresponding class label by the sign of w^T x + b. In practice, a training set may not be separable by a hyperplane; this case is called non-separable data, and may happen when some accidental examples exist. When the above algorithm is applied to non-separable data, it finds no feasible solution. One solution to the problem is the C-SVM, which instead minimizes ||w||^2 + C Σ_i ξ_i subject to

   y_i (w^T x_i + b) ≥ 1 − ξ_i,  ∀ i     (4)

where ξ_i ≥ 0. The Lagrange dual problem of the C-SVM still maximizes Eq. (3), but subject to (i) 0 ≤ α_i ≤ C; (ii) Σ_i α_i y_i = 0. The solution is given by w = Σ_{i=1}^{N_s} α_i y_i s_i, where the s_i are the support vectors and N_s is the number of support vectors. Besides the C-SVM, the non-separable case can be handled by projecting the input data space into a high (possibly infinite) dimensional feature space H, where the problem has a higher probability of being linearly separable, through a nonlinear function φ : R^d → H. Notice that the training algorithm then depends only on φ^T(x_i) φ(x_j). Suppose there exists a kernel function K(x, y) = φ^T(x) φ(y); for a given test pattern x, its class label can then be estimated by the sign of

   f(x) = w^T φ(x) + b = Σ_i α_i y_i φ^T(s_i) φ(x) + b = Σ_i α_i y_i K(s_i, x) + b     (5)

where the s_i are the support vectors. One can see that the solution by kernel functions increases the computational complexity. Apart from the non-separable problem, another purpose of the kernel function is to enlarge the margin between the two classes, so that a better classifier may be obtained. Five different SVMs are commonly used; their names and corresponding kernel functions are: (i) Linear: x^T y; (ii) Polynomial: (x^T y + 1)^q; (iii) GRBF: exp(−||x − y||^2 / (2σ^2)); (iv) Sigmoid: tanh(κ · x^T y − θ); (v) ERBF: exp(−||x − y|| / (2σ^2)).
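To make the kernel formulation concrete, the following sketch implements the most common kernels listed above and the decision function of Eq. (5). It is an illustrative sketch only, not the implementation used in the paper; the support vectors, coefficients, and bias are assumed to come from a separately trained machine.

```python
import numpy as np

def linear_kernel(x, y):
    return x @ y

def polynomial_kernel(x, y, q=2):
    return (x @ y + 1.0) ** q

def grbf_kernel(x, y, sigma=1.0):
    # Gaussian radial basis function: exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, y, kappa=1.0, theta=0.0):
    return np.tanh(kappa * (x @ y) - theta)

def svm_decision(x, support_vectors, alphas, labels, b, kernel=grbf_kernel):
    """Evaluate f(x) = sum_i alpha_i y_i K(s_i, x) + b and return the class sign."""
    f = sum(a * y * kernel(s, x)
            for a, y, s in zip(alphas, labels, support_vectors)) + b
    return np.sign(f)
```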
2.2 Face Detection by Support Vector Machines
The first application of SVM method to face detection was developed by Osuna et al. [9] in 1997, only about two years after the novel statistical machine learning method, SVM, was developed by Vapnik [6]. C-SVM and polynomial SVMs were employed to train a very large face database, and then applied to face detection. They used a 19 × 19 window pattern cut of the scaled input grey image as the vector xi of SVMs. Papageorgiou et al. [14, 8] employed the SVM approach to the detection of a wide range of objects like faces, people, and cars. The wavelet representation of the input image was applied for both training and classification. Li et al. made effort on multi-view based face detection and recognition [15, 11]. Ng et al. [12] constructed a multi-view face model using SVMs. They extended SVMs to model the 2D appearance of human faces which undergo nonlinear change across the view sphere. Their model enables simultaneous multi-view face detection and pose estimation at near frame rate. Ai et al. [16] proposed a face detection algorithm based on template matching and SVM using the fast training algorithm SMO [17]. Qi et al. [18] proposed a face detection system by integrating the independent component analysis (ICA) with the SVMs. It enlarged the separation margin and reduced the number of support vectors in training and finally produced fewer test errors. Fazekas [19] constructed a new SVM kernel function which is based on Walsh function. The experimental results showed that the algorithm enlarged the margin when compared to the Linear and Polynomial SVMs. Romdhani et al. Romdhani et al. [10] demonstrated a method to reduce computational complexity by the use of a sequential reduced support vector evaluation. In summary, a face detection system based on SVMs can be divided into two main stages: SVM training (learning) and face searching (classification) as illustrated in Figure 1. To identity a face standing in the complex background of an image, we must first understand how to distinguish the face and non-face regions. This problem is the focal point of a face detection system. The SVM machine offers a good solution to make distinctions of face and non-face regions. The SVMs should be first trained by a number of face (positive examples) and non-face images (negative examples), this processing is called training or learning. The training face database may contain a large number of typical human faces which may be under different light conditions, poses, races, etc. The nonface examples may be constructed from complex backgrounds. We have shown that the training vector of a SVM should have the same length in section 2.1. Therefore, all the faces in the database should be normalized in size, for example the distance between left and right eyes should be approximate for all the training faces. Then, using the trained SVMs to locate faces in any given image, the scale, pose, location, rotation, race, luminous of these faces may be variable whether in the same image or in different images. The differences of face scales require changes in the size of the candidate rectangle and then resize the candidate to the same as the trained face. For the alteration of face location, one has to search in the whole input image.
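The whole-image search just described can be sketched as a multi-scale sliding-window loop. The sketch below is a generic illustration, not the cited systems' code: it assumes a trained classifier of the kind shown in Section 2.1, a fixed 19 × 19 window as in [9], an 8-bit grey-level image, and arbitrary choices of stride and scale factor.

```python
import numpy as np
from PIL import Image

def detect_faces(gray, classify, win=19, scale_step=1.2, stride=2):
    """Slide a win x win window over a uint8 grey-level image at several scales.
    `classify` maps a flattened, normalized window to +1 (face) or -1 (non-face)."""
    detections = []
    scale = 1.0
    img = Image.fromarray(gray)
    while min(img.size) >= win:
        arr = np.asarray(img, dtype=np.float32)
        h, w = arr.shape
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                patch = arr[y:y + win, x:x + win]
                patch = (patch - patch.mean()) / (patch.std() + 1e-6)  # illumination normalization
                if classify(patch.ravel()) > 0:
                    # map the hit back to original image coordinates
                    detections.append((int(x * scale), int(y * scale), int(win * scale)))
        scale *= scale_step
        img = img.resize((max(1, int(gray.shape[1] / scale)),
                          max(1, int(gray.shape[0] / scale))))
    return detections
```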
** PFV=Pattern Feature Vector (x or xi for SVM)
Fig. 1. Overview of a face detection system based on SVMs
The design of a face detection system requires careful attention to the following issues: how to define the pattern feature vector, and how to choose the proper SVMs. Defining a proper test pattern vector x_i for SVM training and classification is a key factor for improving accuracy and reducing computational complexity. The vector x_i may simply consist of the pixel-by-pixel intensities of the candidate rectangle [9], with intensity normalization performed on the rectangle to overcome illumination changes. However, such a vector is too large for an efficient algorithm and too redundant for distinguishing faces from other objects. To reduce the redundancy, several approaches may be employed. The wavelet representation of an input image is used in [14, 8]; it reduces the redundancy of the pattern feature vector and the computational complexity, because the wavelet decomposition coefficients contain many more zero components. More details about the wavelet decomposition are given in the next section. Another solution is to reduce the vector length with a preprocessing step such as ICA [18]. It should be noted that ICA causes a loss of information useful for face description, which may lead to failures to identify or distinguish faces from the cluttered background; the ICA vectors therefore have to be chosen carefully. Li [11] reduced the complexity by reducing the number of training vectors using the eigenface method. Another important issue is how to select the proper SVMs to obtain efficient results; different SVMs may correspond to different feature vectors and goals. In summary, one has to consider both the speed and the effectiveness of the whole system when detecting faces with an SVM approach. One factor that affects the accuracy of the SVM is how the training examples are selected. A method named bootstrapping [20] is a very effective solution [16]; the idea of bootstrapping is to start from small sets of positive and negative examples and then increase their size by iterating a recursive step. When the vectors are long and the database contains an abundance of faces, it takes a long time to train the SVMs. Platt [17] developed a fast algorithm for training
SVMs, called Sequential Minimal Optimization (SMO), which makes training feasible for complex applications. Recently, some researchers [10, 16] have adopted this method in their face detection systems.
3 Face Detection Using Wavelet-SVM Method
In this section, we propose a face detection system for color images based on wavelets and SVMs. As we have mentioned, most current face detection systems based on SVMs are computationally very expensive, so it is impossible to build a fast system, for the following reasons: the feature vectors are too large, and the whole image must be searched line by line and pixel by pixel. In our system, we first reduce the search area by a preprocessing step based on the TSL color model. Then, the multiresolution wavelet approach is used to build a coarse-to-fine system, which is another way to reduce the computational complexity. SVM training and searching are based on the sub-images obtained from the multiresolution wavelet decomposition. The preprocessing by the TSL model uses color information, while the other two stages use only the luminance information.
3.1 Preprocessing Using TSL B-G Color Model
Color is powerful fundamental information that can be used at an early stage to detect objects in complex scene images. Recently, many researchers have tried to use color information for face detection, and a large number of articles have been published, such as [21, 22]. Such methods may fail when the color of the background is similar to that of the skin. Some researchers have also used the color cue as an early stage to speed up their systems [18]. The most widely used color model is the RGB representation, in which different colors are defined by combinations of red, green, and blue. Besides the RGB color model, several other models are currently used in face detection. Terrillon et al. [23] recently presented a comparative study of several widely used color spaces for face detection, comparing the normalized TSL (tint-saturation-luminance), r-g and CIE-xy chrominance spaces, as well as CIE-DSH, HSV, YIQ, YES, CIE-L*u*v*, and CIE-L*a*b*. In their face detection tests, the normalized TSL space provided the best results. A modified TSL color model is therefore selected as the preprocessing step in our face detection system.
TSL Color Model. The TSL color model represents a pixel color by tint, saturation, and luminance, which can be calculated from the RGB values [23]:
   T = arctan(r'/g')/(2π) + 1/4,  if g' > 0;   arctan(r'/g')/(2π) + 3/4,  if g' < 0;   0,  if g' = 0     (6)
   S = [ (9/5)(r'^2 + g'^2) ]^(1/2)
   L = 0.299 R + 0.587 G + 0.114 B

where r' = r − 1/3 and g' = g − 1/3, with r = R/(R+G+B) and g = G/(R+G+B). R, G, and B are the red, green, and blue components of the RGB representation of a pixel. The values of T, S, and L are normalized to the range [0, 1]. We call this model the TSL R-G color model. In our system, the component S is used for the segmentation of skin-like and non-skin regions. The component L is used to extract the edge information by the multiresolution wavelet decomposition, and for face detection by the SVMs. According to our experiments, better results are obtained when using the TSL B-G color model:

   S = [ (9/5)(b'^2 + g'^2) ]^(1/2)     (7)

where b' = b − 1/3 and b = B/(R+G+B).
Skin Region Segmentation. The saturation S of the TSL B-G color model is used to distinguish skin color from other colors. According to our experiments, the saturation range is divided into three sets: (i) skin color: 0.2 ≤ S < 0.44; (ii) not far from skin color: 0.44 ≤ S < 0.48; (iii) non-skin colors: S ≥ 0.48 or S < 0.2. Two examples segmented by the TSL B-G color model are illustrated in Fig. 2. About 30 faces are included in the upper image and only one face in the lower image. All the face regions are extracted, even though both examples have complex backgrounds. Some regions that contain no faces are also marked as skin regions; faces are extracted from them later by the wavelet-SVM method. However, compared to searching the entire input image, we only need to process the white regions, which occupy a small proportion of the whole image. Therefore, we can greatly reduce the cost of the subsequent search with the SVMs.
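A minimal sketch of this preprocessing step is shown below, assuming the TSL B-G definitions of Eqs. (6)–(7) and the saturation thresholds quoted above. The input is an RGB image as a NumPy array; the handling of zero-sum pixels is our own assumption.

```python
import numpy as np

def tsl_bg_saturation(rgb):
    """Compute the TSL B-G saturation S of Eq. (7) for an H x W x 3 RGB image."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9            # avoid division by zero (assumption)
    g = rgb[:, :, 1] / total
    b = rgb[:, :, 2] / total
    g1, b1 = g - 1.0 / 3.0, b - 1.0 / 3.0
    return np.sqrt(9.0 / 5.0 * (b1 ** 2 + g1 ** 2))

def segment_skin(rgb):
    """Label pixels as skin / near-skin using the thresholds from Sect. 3.1."""
    s = tsl_bg_saturation(rgb)
    skin = (s >= 0.2) & (s < 0.44)
    near_skin = (s >= 0.44) & (s < 0.48)
    return skin, near_skin
```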
3.2 Feature Vector Construction
In this section, we build a flexible facial coordinate system and define feature vectors to describe a human face. The flexible coordinate system specifies the facial shape, while the feature vectors specify the texture. The flexible facial coordinate system is designed to describe the facial geometry and to be invariant to changes of facial scale. We set its origin at the center of the left eye, and set the distance between the centers of the eyes to unity. Suppose
Fig. 2. Image preprocessing by TSL color model filter. The result is on the right: white–possible skin color region
that the screen coordinates of the centers of the left and right eyes are (x_le, y_le) and (x_re, y_re), and let d = sqrt((y_re − y_le)^2 + (x_re − x_le)^2). Then the facial coordinates of the eyes are (0, 0) and ((x_re − x_le)/d, (y_re − y_le)/d). If a face is almost frontal and upright, d ≈ x_re − x_le, so the flexible coordinate of the right eye is approximately (1, (y_re − y_le)/d). In general, the flexible coordinate of any point with screen coordinate (x, y) is ((x − x_le)/d, (y − y_le)/d). In summary, a frontal facial shape can be defined once the distance between the eyes is given. We have shown that defining proper feature vectors for an SVM is very important for building an accurate and efficient SVM-based system. The computational cost is very high if the intensity values of the original rectangle are used directly. To reduce this cost, the wavelet decomposition is used in our system. Using the multiresolution wavelet, an image can be decomposed into a sequence of sub-images containing the frequency information of different directions at different scales [24]. Suppose f(x, y) is an image and let f_LL^0(x, y) = f(x, y); then

   f_LL^n(x, y) = f_LL^{n+1}(x, y) ⊕ f_LH^{n+1}(x, y) ⊕ f_HL^{n+1}(x, y) ⊕ f_HH^{n+1}(x, y).     (8)
The LH and HL sub-images, which contain the horizontal and vertical detail information of the image, are used in our research. The LL sub-image contains the low-frequency information in both directions and can be decomposed again. Notice that the width and height of a sub-image at level n are half of those at level n − 1. Generally speaking, in our system a given image f(x, y) is decomposed into a series of sub-images as:
   input f(x, y) → level 1: f_LL^1(x, y), f_LH^1(x, y), f_HL^1(x, y) → · · · → level N: f_LL^N(x, y), f_LH^N(x, y), f_HL^N(x, y)
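The decomposition of Eq. (8) can be sketched with unnormalized Haar filters. This is a generic one-level Haar decomposition for images with even sides, given as an illustration; it is not necessarily the exact filter bank used by the authors, and the LH/HL labeling convention is an assumption.

```python
import numpy as np

def haar_decompose(f):
    """One level of Haar wavelet decomposition of a 2-D array with even sides.
    Returns (LL, LH, HL, HH), each of half the width and height."""
    a = f[0::2, 0::2].astype(np.float64)   # corners of each 2x2 block
    b = f[0::2, 1::2].astype(np.float64)
    c = f[1::2, 0::2].astype(np.float64)
    d = f[1::2, 1::2].astype(np.float64)
    ll = (a + b + c + d) / 4.0             # low-pass in both directions
    lh = (a + b - c - d) / 4.0             # detail along one direction
    hl = (a - b + c - d) / 4.0             # detail along the other direction
    hh = (a - b - c + d) / 4.0             # diagonal detail
    return ll, lh, hl, hh

def multiresolution(f, levels):
    """Iterate the decomposition on the LL band, as in Eq. (8)."""
    pyramid, ll = [], f
    for _ in range(levels):
        ll, lh, hl, hh = haar_decompose(ll)
        pyramid.append((lh, hl))
    return ll, pyramid
```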
Based on the wavelet sub-images, two series of vectors are constructed in our system. The first set, the componential vectors, describes componential features: for a given component, the responses within the componential rectangle in the LH and HL sub-images are used to define the feature vector. The size of a componential vector is very small, even for a large face image. When the facial size is small, however, the componential vectors cannot be distinguished from other regions. We therefore use another set of vectors, the facial vectors, corresponding to the whole face rectangle, which includes the brows, eyes, and mouth. Briefly, a facial feature consists of the componential and facial vectors at all levels. Figure 3 illustrates an example of the facial and componential vectors corresponding to the LH (top of figure 3) and HL (bottom of figure 3) sub-images of a human face image. We use the componential vectors of the eyes and mouth at levels 1 and 2, shown on the right in figure 3, and the facial vectors at levels 3 (32 × 32 pixels) and 4 (16 × 16 pixels), shown on the left in figure 3.
3.3 Implementation of Face Detection and Facial Feature Extraction
After the feature vectors have been defined, face detection by an SVM-based method consists of two steps, as mentioned in Section 2.1: training the SVMs, and then searching for faces in an image with the trained SVMs. This section describes the algorithms for training and searching.
Training Support Vector Machines. Bootstrapping is used to train our SVMs: training starts from a small number of faces and increases its size and power by iterating a recursive step. The system includes two sets of SVMs:
– Facial SVMs at level l: these correspond to the whole-face rectangle vectors in the LH and HL sub-images at level l, and describe the properties of the whole face in the LH and HL sub-images at each level.
– Componential SVMs of component i at level l: these correspond to a specific component (left eye, right eye, or mouth) and describe the componential features. Each uses a low-dimensional vector, even for a large face. These C-SVMs are usually used to extract the components after a large face has been detected, and to search for component candidates when the face is too large for the facial SVMs to process efficiently.
In summary, there are two SVMs (corresponding to LH and HL, respectively) for each componential or facial SVM at each level. The vector size for training
Fig. 3. Whole facial vectors and componential vectors used in our system (top: vectors for the LH sub-images; bottom: vectors for the HL sub-images)
an SVM must be fixed. Therefore, all training faces are first normalized according to the distance between the eyes, and rectangles of equal size are used for all face and non-face images at a given level. Notice that the vector length at level n is four times that at level n + 1. The bootstrapping approach works as follows. Each SVM training run starts with 5 positive and 5 negative examples; for example, if a vector of the left eye is a positive example, then moving the corresponding rectangle to a different position with a strong response yields a negative one. Training on this small set gives an initial SVM, which is then used to classify other examples from the database. Each new example is classified, and false negatives are added as new positive examples and false positives as new negative examples. This training is carried out for all categories of feature vectors: left eye, right eye, mouth, and the whole facial vectors. SVMs at different scales are obtained from the sub-images at different levels.
Face Detection and Facial Feature Extraction. For a given image, the overall flowchart of our face detection system is illustrated in figure 4, and a schematic sketch of the pipeline is given after the figure. The system contains three stages. For a color input image, we first use the saturation component to extract the regions that are similar to skin color (Section 3.1). Then, the multiresolution wavelet decomposition (Section 3.2) is applied to those extracted regions. Finally, the face-based and component-based SVMs are applied to decide whether an extracted region includes a face (or faces).
Fig. 4. Flowchart of our face detection system: the saturation of the input color image is used for skin region extraction and candidate region detection; the luminance of each candidate region is decomposed by the multiresolution wavelet decomposition (MWD); a face model and the highest decomposition level are selected, and classification with the proper SVM model proceeds iteratively from level N down to level 1; extracted faces are added to the face list, and the next region is processed until no region remains
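The three stages of this flowchart can be summarized in a short, schematic pipeline. All helper callables below (skin segmentation, region grouping, wavelet decomposition, and the per-level classifiers) are passed in as parameters: their names and interfaces are our own assumptions, not the authors' code.

```python
def detect_faces_wavelet_svm(rgb_image, segment_skin, group_regions,
                             decompose, classify_face, verify_components,
                             max_level=3):
    """Schematic coarse-to-fine wavelet-SVM face detection for a color image."""
    faces = []
    skin_mask, _ = segment_skin(rgb_image)               # stage 1: TSL B-G preprocessing
    for region in group_regions(skin_mask):              # candidate regions from the skin mask
        luminance, levels = decompose(rgb_image, region, max_level)  # stage 2: MWD of L channel
        for level in range(levels, 0, -1):                # stage 3: search from coarse to fine
            hit = classify_face(luminance, level)         # whole-face SVMs on LH and HL sub-images
            if hit is None:
                break                                      # region rejected at this scale
            if level == 1 or verify_components(luminance, level - 1, hit):
                faces.append(hit)                          # confirmed by componential SVMs
                break
    return faces
```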
Applying the approach proposed in Section 3.1, we first extract all pixels whose saturation is similar to that of facial skin. The extracted pixels may form irregular shapes, which therefore need to be organized into candidate regions by merging scattered regions and ignoring small isolated ones. For the lower example in figure 2, the five extracted candidate regions and the corresponding areas in the original image are illustrated in the upper row of figure 5. We then only need to search for human faces in each of the extracted candidate regions. The wavelet decomposition is applied to each candidate region, and the highest decomposition level is decided by the size and shape of the region. Using the trained SVMs, we judge whether a skin-colored region contains a human face by an iteration from the smallest to the largest sub-images, shown in the dashed area in figure 4. Furthermore, we can extract the three main components (eyes and mouth) of an extracted face. Whether a region contains a face, and of what size, is initially unknown; we define the range of possible face sizes according to the shape and size of the candidate region, and this range is used to select the proper SVM models. In our experiments, we assume that the distance between the eyes is no less than 16 pixels. The wavelet decomposition is applied only to the luminance L of a region, and the highest decomposition level is chosen so that the facial size in the smallest sub-image is not less than 8 pixels. One example of the LH, HL and
Fig. 5. Example of the face detection system. Top left: similar skin color regions extracted by TSL B-G color model. Top right: corresponding regions to the similar skin regions. Bottom left: LH (top), HL (left) and LL (diagonal) sub images by multi wavelet decomposition. Bottom right: extracted face and components
LL sub-images of the candidate regions is shown in the bottom left of figure 5. In fact, we use at most 3 levels for region 5, 1 level for region 2, and 2 levels for regions 1, 3, and 4. Because we do not know the size of the face contained in a region, we have to assume a wide range of possible values, which should be defined according to the task. We consider all possible face models and match each model in the region to extract the face(s). The processing starts from the largest possible face and then reduces the size. After a face has been extracted, the area it occupies is removed from the candidate region, and the search continues until no large enough region is left or all possible face-model sizes have been matched. To match a face model (a candidate vector corresponding to a candidate rectangular area) in a region, we start the search in the sub-images at a level at which the distance between the eyes of the face model is no less than 8 pixels. For example, if the distance between the eyes in the face model is 42 pixels, then we search for the face only in the level 2 (10 pixels) and then level 1 (21 pixels) sub-images. We choose the trained SVMs according to this distance and resize the candidate vector accordingly. Moving the candidate rectangle over the region and applying the selected SVM classifier, a human face may be present when a positive response is obtained. Only when positive responses are obtained in all LH and HL sub-images at all levels do we decide that a human face is located. The search runs from the highest level down to level 1,
with the LH classifier applied before the HL classifier at each level. If the preceding classification passes (is positive), we try the next classifier. We use the facial SVMs to search for the face in a region; after a face has been extracted, each component is found with the componential SVMs in a similar way. After all candidate regions have been searched, all faces in the image have been extracted. The bottom-right image in figure 5 shows the search result for this simple example. Another problem is how to choose a proper SVM. We tested our system with Linear, Polynomial, and GRBF SVMs and found the GRBF SVMs to be the best; more SVMs will be tested in the future.
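The level-selection rule just described (search only at levels where the model eye distance maps to at least 8 pixels) can be written down directly; the function below is an illustrative sketch of that rule.

```python
def levels_for_eye_distance(eye_distance, max_level=3, min_pixels=8):
    """Return the wavelet levels (coarse to fine) at which a face model with the
    given inter-eye distance should be searched; level l halves the distance l times."""
    levels = []
    for level in range(max_level, 0, -1):
        if eye_distance / (2 ** level) >= min_pixels:
            levels.append(level)
    return levels

# Example from the text: an eye distance of 42 pixels gives levels 2 (about 10 px)
# and 1 (21 px), so the search skips level 3 (about 5 px).
```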
4 Experimental Results and Analysis
Our method for face detection has been implemented in Visual C++. We tested the above algorithm on over 400 true-color images in two sets: test set A contains 325 high-quality images with only one face per image, and test set B contains 136 images with two or more faces per image. These images were taken under different lighting conditions, and most of them have complicated backgrounds. The training data for the classifiers started from five grey-level images and eventually contained 400 synthetic grey-level face images and 400 non-face images, both of original size 128 × 128. The wavelet decomposition goes up to level 3, i.e., the size of the smallest sub-images is 16 × 16. The component-based SVMs for the eyes and mouth are trained only on the first and second decompositions; the whole face-based SVMs are trained on the second and third decompositions. First, the preprocessing based on the TSL B-G color model is applied to the input image; it greatly decreases the area that has to be searched for human faces. The areas that may contain human faces are only about 5–50% of the whole image, depending on how much of the background color is similar to skin. This step may be skipped when a grey-scale input image is processed. Afterwards, the grey-scale image is decomposed into a sequence of sub-images by the Haar wavelet up to level 3. The search for human faces starts at level 3, and the trained whole-face SVMs are applied to the regions marked by the TSL color model (or to the whole image when the preprocessing is skipped) at all levels. The size of a human face candidate is between 16 and 32 pixels, which means that the whole-face SVMs need to be trained only on levels 2 and 3. The extracted human face candidates are confirmed at the following level(s) by the component SVMs. Note that the range of face candidate sizes in the sub-images at all levels is 16–32 pixels; theoretically, we can therefore extract human faces whose size is between 32 and 128 pixels in the original image. We tested our system both with and without the TSL preprocessing; this is meaningful because the system without TSL preprocessing can also be applied to grey-scale images. To show the performance of the proposed wavelet-SVM system, we compared it with SVM-based methods using the component pattern and the whole-face pattern. Table 1 illustrates the performance results. As mentioned above, the component-based method is effective in the detection of large human
faces but weak in the detection of small faces. To make the experimental results comparable, we report the results separately for two categories of human faces: small faces and large faces. We measure the detection rate (DR), corresponding to the correct detection of faces, and the false rate (FR), corresponding to false alarms of faces at wrong positions. We tested our system with and without the TSL preprocessing, because in some cases only grey-scale images are used, to save storage. When the preprocessing is used, the search for faces is performed only in the regions whose color is similar to skin color; otherwise, the initial search for human faces by the whole face-based SVMs has to be carried out over the whole sub-image.
Table 1. Performance of the wavelet-SVM face detection system

                                      Test Set A                      Test Set B
                             Small Face      Large Face      Small Face      Large Face
                             DR      FR      DR      FR      DR      FR      DR      FR
 Our system with TSL         58.2%   10.2%   96.5%   0.5%    38.7%   21.5%   73.1%   15.7%
 Our system without TSL      63.5%    9.2%   98.1%   0.3%    40.2%   19.5%   75.4%   13.4%
 Component-based SVM         21.0%   25.4%   97.9%   0.4%    15.6%   24.3%   74.5%   12.5%
 Whole face-based SVM        61.0%    9.5%   94.5%   0.8%    39.6%   17.8%   72.3%   14.2%
*DR—Detect Rate; FR—False Rate *Small (Large) Face indicates the face whose size is less (more) than 50 pixels
According to the comparison of performance in Table 1, one can see that our wavelet-SVM system keeps the effectiveness of both the component-based SVM for small-face detection and the whole face-based SVM for large-face detection. On the other hand, it is very important to develop a fast SVM system, because SVM-based systems are usually computationally very expensive. We compared the CPU time of our proposed wavelet-SVM system with that of the whole face-based SVM system; the result is shown in Table 2. One can see that our system is much faster than the whole face-based SVM system, whether or not the TSL B-G preprocessing is used. The time cost is still acceptable without preprocessing because the initial search by the whole-face SVMs uses only the small vectors of small face candidates in the small sub-images. In summary, our wavelet-SVM face detection system keeps the effectiveness of both the component-based and whole face-based SVM methods (Table 1), while greatly decreasing the computational cost (to only about 1/20–1/100 of that of the whole face-based SVM method).
Table 2. Comparison of the average CPU time of our system and the whole face-based SVM method, whose CPU time is set to unity

                                  Test Set A                 Test Set B
                             Small Face  Large Face    Small Face  Large Face
 Our system with TSL           0.022       0.008         0.027       0.012
 Our system without TSL        0.037       0.015         0.040       0.021
5 Conclusions and Further Research
We have presented SVM-based algorithms for face detection, together with a face detection system based on the multiresolution wavelet decomposition and support vector machines (the wavelet-SVM method). The following is a concise summary of the main points of this paper.
– Face detection is currently a very active research area and the technology has come a long way. The SVM-based method is a novel, promising, and accurate solution to this problem.
– To design an effective and efficient SVM-based algorithm, care must be taken to define a proper pattern feature vector that distinguishes face from non-face regions. Another problem is how to select the best SVM.
– Most algorithms based on SVMs are still too computationally expensive for real-time processing, but this is likely to change with coming improvements in computer hardware and the development of more effective algorithms. There are two main ways to reduce the computational complexity: reducing the vector size of the SVMs by preprocessing such as ICA, or reducing the redundancy of the vector, for example with a wavelet representation.
– A face detection system for color images based on the TSL B-G color model, wavelets, and SVMs has been presented in this paper. Compared to other SVM-based methods, the effectiveness is kept, while the computational complexity is greatly reduced through:
• extraction of skin-colored regions by the TSL B-G color model, a preprocessing step that shrinks the area in which the computationally expensive SVM search has to be run;
• the multiresolution wavelet decomposition, which decreases the redundancy, since most values in the LH and HL sub-images are zero, and which makes it possible to search for faces from coarse to fine — especially effective when the candidate region is large.
A current limitation of the wavelet-SVM based system (like the component-based method) is that it is not effective for the detection of side-view faces, especially when one of the three components (eyes and mouth) used in our system is missing. In the future, we will try to improve its performance on side-view faces. We are also trying to apply
the wavelet-SVM method to face recognition, which is another important and challenging advanced task in face processing.
References [1] Hjelmas, E., Low, B. K.: Face detection: A survey. Computer Vision and Image Understanding 83 (2001) 236–274 371 [2] Chen, Y. S., Su, C. H., Chen, J. H., Chen, C. S., Hung, Y. P., Fuh, C. S.: Videobased eye tracking for autostereoscopic displays. Optical Engineering 40 (2001) 2726–2734 371 [3] Tefas, A., Kotropoulos, C., Pitas, I.: Using support vector machines to enhence the performance of elastic graph matching for frontal face authentication. IEEE Trans. on Pattern Analysis and Machine Intelligence 23 (2001) 735–746 371 [4] Guo, G., Li, S. Z., Chan, K. L.: Support vector machines for face recognition. Image and Vision Computing 19 (2001) 631–638 371 [5] Yang, M. H., Kriegman, D., Ahuja, N.: Detecting faces in images: A survey. IEEE Trans. on Pattern Analysis and Machine Intelligence 24 (2002) 34–58 371 [6] Vapnik, V. N.: The Nature of Statistical Learning Theory. Springer, New York (1995) 371, 372, 374 [7] Vapnik, V. N.: Statistical Learning Theory. John Wiley & Sons, New York (1998) 371, 372, 373 [8] Papageorgiou, C. P., Poggio, T.: A trainable system for object detection. International Journal of Computer Vision 38 (2000) 15–33 371, 374, 375 [9] Osuna, E., Freund, R., Girosi, F.: Training support vector machines: application to face detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico (1997) 130–136 371, 374, 375 [10] Romdhani, S., Torr, P., Scholkopf, B., Blake, A.: Computationally efficient face detection. In: 8th IEEE International Conference on Computer Vision. Volume 2., Vancouver, Canada (2001) 695–700 371, 374, 376 [11] Li, Y., Gong, S., Sherrah, J., Liddell, H.: Multi-view face detection using support vector machines and eigenspace modelling. In: 4th International Conference on Knowledge-Based Intelligent Engineering Systems & Allied Technologies. Volume 1., University of Brighton, UK (2000) 241–244 371, 374, 375 [12] Ng, J., Gong, S.: Multi-view face detection and pose estimation using a composite support vector machine across the view sphere. In: International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, Corfu, Greece (1999) 14–21 371, 374 [13] Podolak, I. T.: Feedforward neural network’s sensitivity to input data representation. Computer Physics Communications (1999) 181–188 372 [14] Papageorgiou, C., Oren, M., Poggio, T.: A general framework for object detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, Canada (1998) 555–562 374, 375 [15] Li, Y., Gong, S., Liddell, H.: Support vector regression and classification based multi-view face detection and recognition. In: 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France (2000) 300–305 374 [16] Ai, H., Liang, L., Xu, G.: Face detection based on template matching and support vector machines. In: International Conference on Image Processing. Volume 1., Thessaloniki, Greece (2001) 1006–1009 374, 375, 376
[17] Platt, J. C.: Sequential minimal optimization: A fast algorithm for tranining support vector machines. Technical Report MSR-TR-98-14, Microsoft Research, USA (1998) 374, 375 [18] Qi, Y., Doermann, D., DeMenthon, D.: Hybrid independent component analysis and support vector machine learning scheme for face detection. In: IEEE International Conference on Acoustics, Speech, and Signal Processing. Volume 3., Salt Lake City, Utah, US (2001) 1481–1484 374, 375, 376 [19] Fazekas, A., Kotropoulos, C., Buciu, I., Pitas, I.: Support vector machines on the space of walsh functions and their properties. In: 2nd IEEE Region 8-EURASIP Symposium on Image and Signal Processing and Analysis, Pula, Croatia (2001) 43–48 374 [20] Vetter, T., Jones, M. J., Poggio, T.: A bootstrapping algorithm for learning linear models of object classes. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico (1997) 40–46 375 [21] Fan, J., Yau, D., Elmagarmid, A., Aref, W.: Automatic image segmentation by integrating color-edge extraction and seeded region growing. IEEE Trans. on Image Processing 10 (2001) 1454–1466 376 [22] Wang, Y., Yuan, B.: A novel approach for human face detection from color images under complex background. Pattern Recognition 34 (2001) 1983–1992 376 [23] Terrillon, J. C., Shirazi, M., Fukamachi, H., Akamatsu, S.: Comparative performance of different skin chrominance models and chrominance spaces for the automatic detection of human faces in color images. In: 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France (2000) 54–61 376 [24] Mallat, S.: A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. on Pattern Analysis and Machine Intelligence 11(7) (1989) 674–693 378
Detecting Windows in City Scenes Bj¨orn Johansson and Fredrik Kahl Centre for Mathematical Sciences, Lund University, Box 118, SE-221 00 Lund, Sweden {bjornj,fredrik}@maths.lth.se
Abstract. In this paper we present an object detection system for city environments. We focus on the problem of automatically detecting windows on buildings. Several possible applications of the detection system are given, such as recognition of buildings, pose estimation, rectification and 3D reconstruction. Experimental validations on real images are also provided. The system is capable of detecting windows in images at several different orientations and scales. The approach is based on learning from examples using support vector machines. Since the system is trainable, the extension to detecting other objects in the scene is straightforward. The performance of the system has been evaluated on an independent test set, and the results show that the object category "window" can be reliably detected under various poses and lighting conditions.
1 Introduction
In the near future, many mobile phones will be equipped with a camera. This opens up new possibilities for computer vision. One such application is a city guide, where the location and orientation of a person may be estimated from an image of the surroundings. In this paper, we show how this task may be accomplished by detecting the windows present in a given image, followed by a recognition stage and pose estimation, when models of the surrounding buildings are available. The focus of the paper is on detecting windows from learned image templates in a city scene. A detection system is presented which locates windows with various appearances and poses and under different lighting conditions. Further, several possible applications are given for which the detection system can be utilized, such as rectification, 3D reconstruction (including wide baseline matching) and, as mentioned, recognition and pose estimation. Object recognition is a hard problem and has been the subject of intensive research for many years, e.g. [4]. In the case of buildings, the recognition problem can be simplified if all windows can be reliably located. For example, the invariants of the window positions can be used to index a large database of models, cf. [14]. A further simplifying circumstance is that often the windows are located on a small number of planes. Recognizing planar objects has been studied in e.g. [13].
Fig. 1. Examples of windows in the training database. The template size is 64 × 48
The proposed system is trainable, i.e. it learns from examples. This makes the system flexible, and it is straightforward to extend it to other objects of interest; for example, in many situations there may not be a sufficient number of windows. The learning technique is based on support vector machines [2, 17], which have been successfully applied to, for example, face detection [11], text categorization [6] and pedestrian detection [12].
2 The Detection System
In this section, the architecture of the system is described. There are three main parts: feature extraction, support vector learning and detection in new images.
2.1 Feature Extraction
The first issue to be addressed is that of representation of the data. We have scaled and registered the images of windows to a template of size 64 × 48 pixels. In Figure 1 some examples of typical windows that we want to detect are shown. The aspect ratio is not changed when an image is rescaled. The window frame is placed at the top left corner. As the system learns from examples, or more precisely, it learns from features extracted from the examples, one has to decide which features to use. Wavelet features have proven to be useful for many applications of object detection, e.g. [10, 12]. Inspired by these results, we have also chosen to use a wavelet basis, but it is modified to suit the application we have in mind. The first set of features extracted from the templates uses the Haar wavelet [9],

   [ 1 ... 1  -1 ... -1 ]
   [ :     :   :      : ]     (1)
   [ 1 ... 1  -1 ... -1 ]

with filter kernels of sizes 16 × 16 and 32 × 32. In contrast to the wavelet transform, the kernels are shifted by 1/4 of the support of the kernel. Both horizontal and vertical kernels are applied, which results in a total of 264 features. It is evident from the examples in Figure 1 that the frame of the window is a common characteristic. In order to capture the window frame more accurately
in the feature set, we have used filter kernels of the following type:

   [ 1 ... 1  0 ... 0  -1 ... -1 ]
   [ :     :  :     :   :      : ]     (2)
   [ 1 ... 1  0 ... 0  -1 ... -1 ]

as well as its transpose. The submatrix of 1's (and −1's) was chosen to be of size 16 × 8. The number of zeroes was chosen to match the horizontal and vertical border frames for the two filter orientations. This resulted in 72 additional elements in the feature vector. The above filters assume gray-scale images. In order to incorporate colour variations as well, we apply the filters to each of the three colour channels R, G and B, and then the one with maximum norm is kept. Thus, in total a window template is represented by a 264 + 72 = 336 feature vector. This representation will be evaluated on a large training set in Section 3.
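The kernels of Eq. (1) amount to signed sums over rectangular halves of a patch. The sketch below evaluates them directly over a 64 × 48 template with the quarter-support shift described above; the stride and border handling are our own assumptions, which happen to yield 264 responses, but the exact scheme of the authors may differ.

```python
import numpy as np

def haar_response(img, y, x, h, w):
    """Response of the Eq. (1) kernel of size h x w at (y, x): left half +1, right half -1."""
    patch = img[y:y + h, x:x + w].astype(np.float64)
    half = w // 2
    return patch[:, :half].sum() - patch[:, half:].sum()

def template_features(template, kernel_sizes=(16, 32)):
    """Collect Eq. (1) responses over a 64 x 48 template, shifting each kernel by
    a quarter of its support, in both the horizontal and vertical orientations."""
    feats = []
    for k in kernel_sizes:
        step = k // 4
        for y in range(0, template.shape[0] - k + 1, step):
            for x in range(0, template.shape[1] - k + 1, step):
                feats.append(haar_response(template, y, x, k, k))
                feats.append(haar_response(template.T, x, y, k, k))  # transposed (vertical) kernel
    return np.array(feats)
```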
2.2 Support Vector Machines
In recent years, support vector machines (SVM) have become very popular in statistical learning theory [2, 17]. In contrast to traditional learning techniques, such as neural networks, which only minimize the training set error, the SVM optimizes both the training error and the complexity of the classifier at the same time. This makes it possible to capture the statistics of sparse, high-dimensional spaces with relatively few examples. Another attractive feature is that the problem can be solved with quadratic programming, whereby the optimal solution is always guaranteed. Again, compare this with neural networks, where a nonlinear optimization problem is solved with back propagation. The SVM classification is based on computing the sign of the following function:

   f(x) = Σ_{i=1}^{N_s} α_i y_i K(s_i, x) + b,     (3)
where x is the feature pattern to be classified, the s_i are the support vectors, N_s is the number of support vectors, K(·, ·) is the kernel function and, finally, α_i, y_i, b relate to the trained decision surface. In all our experiments, we have used a polynomial kernel of degree three and the software provided by [8] to learn the parameters in the classification function (3).
2.3 Detecting Windows in New Images
Given a new image, the objective is to determine the positions of all windows in the image. We use a direct approach by shifting the template window of size 64 × 48 over the whole image. The procedure is repeated at different scales. One example is illustrated in Figure 2 where the white rectangles show the detected windows.
Fig. 2. An image processed by the detection system. The white rectangles mark the detected windows
This brute force method is time consuming, as the classification function (3) has to be evaluated at each template position in the input image at several scales. There are several possible methods to speed up the computations, such as reducing the number of features used to represent the data [12] or reducing the number of support vectors [1], making it possible to run in near real-time. The reduction of the feature set is further discussed in Section 3. Our implementation, which is done in Matlab, may also be sped up by using the FFT for the filter calculations. Currently, the computation time for one image of size 640 × 480 with all 336 wavelet features is approximately 30 minutes.
3 Evaluation
The system has been trained on a database consisting of 1290 window templates (including mirror images) and 13972 templates of negative examples. See Figure 1 for some examples of window templates. The negative examples were collected in two phases. First, a preliminary version of the system was trained on a subset of the negative examples. The resulting system was applied to images containing no windows. The falsely detected windows in these images were then added to the database of negative examples, and a second training of the system was done on the complete database. In order to evaluate the performance of the system, an additional set of 150 window templates and 1000 negative templates was collected. These templates were not used in the training of the system. By varying the threshold for the classification function f (x) in (3), the detection rate and the number of false positives can be varied. Ideally, the detection rate should be 100% and the false alarm rate 0%.
For comparison, four different ways of extracting features from the templates were tested, cf. Section 2.1.
– Colour templates with all 336 wavelet features.
– Gray-scale templates with all 336 wavelet features.
– Colour templates with 264 wavelet features, with filters only of the first type (1).
– Colour templates with 72 wavelet features, with filters only of the second type (2).
The result is graphed in Figure 3. As can be seen, the colour and gray-scale versions of the system with all 336 wavelet features perform equally well on the test data, keeping the detection rate over 95% even with a very low false alarm rate. As expected, when the size of the feature set is reduced the results get worse, but they are still acceptable.
Fig. 3. Receiver operating characteristic (ROC) curves (detection rate versus false positive rate) for the different feature sets: 336 colour, 336 gray, 264 colour, and 72 colour
4 Applications
The detection system described above may be used in numerous applications. We discuss a few here and show some experimental results.
4.1 Rectification
In order to do metric measurements in a perspective image of a plane, the image first has to be rectified. This corresponds to a transformation of the original image to an image that would have been obtained from a fronto-parallel view.
The frame of a window is usually rectangular and has two horizontal and two vertical edges. In addition, windows are often located next to each other at the same height, or below each other. These properties make it natural to rectify an image by first detecting the windows and then finding the map which makes the windows rectangular and located below or next to each other. In the literature there are several suggestions for finding this map, cf. [7, 16]. To the left in Figure 4 an image of a building is shown, and to the right the automatically rectified image.
Fig. 4. Left: Original image. Right: Rectified image
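One way to implement such a rectifying map, once a window has been detected and the four corners of its frame estimated, is to fit the homography that sends those corners to a rectangle and warp the image with it. The sketch below uses OpenCV; the corner coordinates, the file names, and the target aspect ratio are hypothetical placeholders, and this is only one of the mapping strategies referenced above [7, 16].

```python
import cv2
import numpy as np

# Image coordinates of one detected window frame (top-left, top-right,
# bottom-right, bottom-left) -- hypothetical values for illustration.
src = np.float32([[312, 145], [388, 152], [383, 241], [309, 230]])

# Target rectangle with the window's assumed real aspect ratio (here 3:4).
w, h = 75, 100
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography from the four correspondences
image = cv2.imread("building.jpg")               # placeholder file name
rectified = cv2.warpPerspective(image, H, (w, h))
cv2.imwrite("rectified_window.jpg", rectified)
```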
4.2 Recognition and Pose Estimation
Using a single image of a building in a city scene, we want to determine which buildings are present and estimate the relative position of the observer. We assume that we have models of a number of buildings. The models consist of a metric template indicating where the windows are located. We begin by detecting all windows in the image. From a number of invariants based on the window positions, a hypothesis building is chosen from the database. The homography between the image and the template of the current building is then estimated. Knowing the calibration of the camera, the position and orientation may now be calculated from the homography with the algorithm described in [15]. Figure 5 shows 11 images of a building. In each image, the position of the matched template is plotted. The white parts of the template indicate smooth areas and the black parts indicate where the edges are located. The estimated camera positions using the template positions are plotted in Figure 6, together with a reconstruction based on a traditional structure from motion algorithm [5] with manually selected point correspondences. Notice the good alignment between the two methods.
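For the plane-based pose step, a standard recovery (one of several possible, and not necessarily the algorithm of [15]) extracts the rotation and translation from the homography H between the metric template and the image, given the camera calibration matrix K.

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover rotation R and translation t of a calibrated camera from the
    homography mapping a planar metric template (Z = 0) into the image."""
    A = np.linalg.inv(K) @ H
    h1, h2, h3 = A[:, 0], A[:, 1], A[:, 2]
    lam = 1.0 / np.linalg.norm(h1)          # scale fixed by requiring ||r1|| = 1
    r1, r2, t = lam * h1, lam * h2, lam * h3
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # Re-orthogonalize R via SVD, since noise makes its columns only approximately orthonormal.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```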
Fig. 5. The 11 images of a building used in the experiments. The estimated positions of the template are highlighted in each image
Fig. 6. A top view of the camera positions (asterisks) estimated from 11 images of a building (which is the bottom, middle rectangle) compared to a reconstruction based on manually selected point correspondences (circles)
4.3 3D Reconstruction
A challenging application would be to create 3D reconstructions of large scale environments based on recognition. A number of features, e.g. windows, doors, trees, cars, etc., may be recognized, and complete 3D models of these features could be used in the reconstruction. Similar ideas have been investigated in [3]. Recognition algorithms may further be used in traditional 3D reconstruction. A difficult problem is to find point correspondences, particularly if the images have a wide baseline. If, e.g., the windows are detected in a number of images, these may be used to simplify the correspondence problem.
5 Conclusions
A system with the capability of detecting windows in images has been presented. The system is based on support vector machines for learning the appearance of window templates. An experimental evaluation has shown good performance, making it applicable in man-made environments. Several applications of the detection system have been described, such as recognition, pose estimation and rectification, including experimental validations. The extension to incorporate other objects in the detection system is straightforward, and this opens up an even wider range of application scenarios.
References [1] Christopher J. C. Burges. Simplified support vector decision rules. In International Conference on Machine Learning, pages 71–77, 1996. 391 [2] Christopher J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121–167, 1998. 389, 390 [3] A. Dick, P. Torr, S. Ruffle, and R. Cipolla. Combining single view recognition and multiple view stereo for architectural scenes, 2001. 395 [4] W. Grimson. Object Recognition by Computer. MIT Press, 1990. 388 [5] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000. 393 [6] Thorsten Joachims. Text categorization with support vector machines: learning with many relevant features. In Claire N´edellec and C´eline Rouveirol, editors, Proceedings of ECML-98, 10th European Conference on Machine Learning, pages 137–142, Chemnitz, DE, 1998. Springer Verlag, Heidelberg, DE. 389 [7] D. Liebowitz and A. Zisserman. Metric rectification for perspective images of planes. In In Proceedings of the Conference on Computer Vision and Pattern Recognition, 1998. 393 [8] J. Ma and S. Ahalt. Osu svm classifier matlab toolbox (ver 2.00), 2001. Available at http://eewww.eng.ohio-state.edu/˜maj/osu svm. 390 [9] Stephane G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674–693, 1989. 389
[10] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio. Pedestrian detection using wavelet templates. In Proc. Conf. Computer Vision and Pattern Recognition, pages 193–199, San Juan, Puerto Rico, 1997. 389 [11] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: an application to face detection. In Proc. Conf. Computer Vision and Pattern Recognition, pages 130–136, San Juan, Puerto Rico, 1997. 389 [12] C. Papageorgiou and T Poggio. Trainable pedestrian detection. In Proceedings of International Conference on Image Processing, Kobe, Japan, volume 4, pages 35–39, 1999. 389, 391 [13] C. Rothwell, A. Zisserman, D. Forsyth, and J. Mundy. Planar object recognition using projective shape representation. In Proc. Int. Conf. on Computer Vision, pages 57–99, 1995. 388 [14] C. Rothwell, A. Zisserman, J. L. Mundy, and D. A. Forsyth. Efficient model library access by projectively invariant indexing functions. In Proc. Conf. Computer Vision and Pattern Recognition, pages 109–114, Champaign, Illinois, USA, 1992. 388 [15] Peter Sturm. Algorithms for plane-based pose estimation. In Proc. Conf. Computer Vision and Pattern Recognition, pages 706–711, Hilton Head Island, South Carolina, 2000. 393 [16] T. Tuytelaars, L. Van Gool, M. Proesmans, and T. Moons. The cascaded hough transform as an aid in aerial image interpretation. In In Proc. ICCV, pages 67–72, 1998. 393 [17] V. Vapnik. The nature of statistical learning theory. Springer Verlag, 1995. 389, 390
Support Vector Machine Ensemble with Bagging Hyun-Chul Kim, Shaoning Pang, Hong-Mo Je, Daijin Kim, and Sung-Yang Bang Department of Computer Science and Engineering Pohang University of Science and Technology San 31, Hyoja-Dong, Nam-Gu, Pohang, 790-784, Korea {grass,invu71,dkim,sybang}@postech.ac.kr
Abstract. Even though the support vector machine (SVM) has been proposed to provide a good generalization performance, the classification result of a practically implemented SVM is often far from the theoretically expected level, because implementations are based on approximate algorithms due to the high complexity in time and space. To improve the limited classification performance of the real SVM, we propose to use SVM ensembles with bagging (bootstrap aggregating). Each individual SVM is trained independently, using training samples chosen randomly via a bootstrap technique. The trained SVMs are then aggregated to make a collective decision in several ways, such as majority voting, LSE (least squares estimation)-based weighting, and double-layer hierarchical combining. Various simulation results for IRIS data classification and hand-written digit recognition show that the proposed SVM ensemble with bagging greatly outperforms a single SVM in terms of classification accuracy.
1 Introduction
The support vector machine is a new and promising classification and regression technique proposed by Vapnik and his group at AT&T Bell Laboratories [1]. The SVM learns a separating hyperplane that maximizes the margin and produces good generalization ability [2]. Recent theoretical research work has solved many of the difficulties of using the SVM in practical applications [3, 4]. By now, it has been successfully applied in many areas, such as face detection, hand-written digit recognition, and data mining. However, the SVM has two drawbacks. First, since it is originally a model for binary-class classification, a combination of SVMs has to be used for multi-class classification. There are methods for combining binary classifiers for multi-class classification [6, 7], but when they are applied to SVMs, the performance does not seem to improve as much as in the binary case. Second, since training an SVM is time-consuming for large-scale data, approximate algorithms have to be used [2]. Using approximate algorithms reduces the computation time, but degrades the classification performance. To overcome the above drawbacks, we propose to use SVM ensembles. We expect that the SVM ensemble can improve the classification performance
greatly compared with a single SVM, for the following reason. Each individual SVM is trained independently on randomly chosen training samples, so the correctly classified area in the space of data samples of each SVM is limited to a certain region. We can expect that a combination of several SVMs will expand the correctly classified area incrementally, which implies an improvement of the classification performance by the SVM ensemble. Likewise, we also expect the SVM ensemble to improve the classification performance in the multi-class case. The idea of an SVM ensemble was proposed in [8], where the boosting technique was used to train each individual SVM and another SVM was used to combine them. In this paper, we propose an SVM ensemble based on the bagging technique, where each individual SVM is trained on training samples chosen randomly via a bootstrap technique, and the independently trained SVMs are aggregated in various ways, such as majority voting, LSE-based weighting, and double-layer hierarchical combining. This paper is organized as follows. Section 2 describes the theoretical background of the SVM. Section 3 describes the SVM ensemble, the bagging method, and three different combination methods. Section 4 shows the simulation results when the proposed SVM ensemble is applied to classification problems such as IRIS data classification and hand-written digit recognition. Finally, a conclusion is drawn.
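The bagging scheme outlined above (bootstrap resampling of the training set, one SVM per replicate, majority voting) can be sketched with off-the-shelf components. The snippet below uses scikit-learn purely for illustration: the kernel, ensemble size, and data split are arbitrary choices and not the authors' setup; note also that in older scikit-learn versions the base estimator argument is named `base_estimator` rather than `estimator`.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Each base SVM is trained on a bootstrap replicate; prediction is by majority voting.
ensemble = BaggingClassifier(estimator=SVC(kernel="rbf", C=1.0),
                             n_estimators=10, bootstrap=True, random_state=0)
ensemble.fit(X_tr, y_tr)

print("single SVM :", SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr).score(X_te, y_te))
print("SVM bagging:", ensemble.score(X_te, y_te))
```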
2 Support Vector Machines
The classical empirical risk minimization approach determines the classification decision function by minimizing the empirical risk

$$R = \frac{1}{L} \sum_{i=1}^{L} |f(x_i) - y_i|, \qquad (1)$$
where L and f are the number of examples and the classification decision function, respectively. For SVM, determining an optimal separating hyperplane that gives a low generalization error is the primary concern. Usually, the classification decision function in the linearly separable problem is represented by

$$f_{w,b}(x) = \mathrm{sign}(w \cdot x + b). \qquad (2)$$
In SVM, the optimal separating hyperplane is determined as the one giving the largest margin of separation between the different classes. This optimal hyperplane bisects the shortest line between the convex hulls of the two classes, and it is obtained by solving the following constrained minimization problem:

$$\min \; \frac{1}{2} w^T w, \qquad \text{subject to } y_i (w \cdot x_i + b) \ge 1. \qquad (3)$$
For the linearly non-separable case, the minimization problem needs to be modified to allow misclassified data points. This modification results in a soft margin classifier that allows but penalizes errors by introducing a new set of variables $\xi_i, \; i = 1, \ldots, l$, as the measurement of violation of the constraints:

$$\min \; \frac{1}{2} w^T w + C \Big( \sum_{i=1}^{L} \xi_i \Big)^k, \qquad \text{subject to } y_i (w^T \varphi(x_i) + b) \ge 1 - \xi_i, \qquad (4)$$
where C and k are used to weight the penalizing variables $\xi_i$, and $\varphi(\cdot)$ is a nonlinear function which maps the input space into a higher dimensional space. Minimizing the first term in Eq. (4) corresponds to minimizing the VC-dimension of the learning machine, and minimizing the second term in Eq. (4) controls the empirical risk. Therefore, in order to solve problem Eq. (4), we need to construct a set of functions and implement the classical risk minimization on that set of functions. Here, a Lagrangian method is used to solve the above problem, and Eq. (4) can be written as

$$\max \; F(\Lambda) = \Lambda \cdot \mathbf{1} - \frac{1}{2} \Lambda \cdot D \Lambda, \qquad \text{subject to } \Lambda \cdot y = 0; \; \Lambda \le C; \; \Lambda \ge 0, \qquad (5)$$
where $\Lambda = (\lambda_1, \ldots, \lambda_L)$ and $D = y_i y_j x_i \cdot x_j$. For the binary classification, the decision function Eq. (2) can be rewritten as

$$f(x) = \mathrm{sign}\Big( \sum_{i=1}^{l} y_i \lambda_i^* K(x, x_i) + b^* \Big), \qquad (6)$$

where $\lambda_i^*$ are the optimal Lagrange multipliers and $K(x, x_i) = \varphi(x) \cdot \varphi(x_i)$ is the kernel function, whose computation can be simplified by the kernel trick [9]. For the multi-class classification, we can extend the SVM in the following two ways. One method is called the "one-against-all" method [6], where we have as many SVMs as the number of classes. The ith SVM is trained from the training samples such that the examples contained in the ith class have "+1" labels and the examples contained in the other classes have "-1" labels. Then, Eq. (4) is modified into

$$\min \; F = \frac{1}{2} (w^i)^T w^i + C \Big( \sum_{t=1}^{L} (\xi^i)_t \Big)^k$$
$$y_t\big((w^i)^T \varphi(x_t) + b^i\big) \ge 1 - (\xi^i)_t, \; \text{if } x_t \in C_i,$$
$$y_t\big((w^i)^T \varphi(x_t) + b^i\big) \le 1 - (\xi^i)_t, \; \text{if } x_t \in \bar{C}_i,$$
$$(\xi^i)_t > 0, \quad t = 1, \ldots, L, \qquad (7)$$
where $C_i$ is the set of data of class i. The decision function of (7) becomes

$$f(x) = \mathrm{sign}\Big( \max_{j \in \{1,2,\ldots,C\}} \big( (w^j)^T \varphi(x) + b^j \big) \Big), \qquad (8)$$
where C is the number of classes. Another method is called the "one-against-one" method [7]. When the number of classes is C, this method constructs C(C − 1)/2 SVM classifiers. The ijth SVM is trained from the training samples such that the examples contained in the ith class have "+1" labels and the examples contained in the jth class have "-1" labels. Then, Eq. (4) is modified into

$$\min \; F = \frac{1}{2} (w^{ij})^T w^{ij} + C \Big( \sum_{t=1}^{L} (\xi^{ij})_t \Big)^k$$
$$y_t\big((w^{ij})^T \varphi(x_t) + b^{ij}\big) \ge 1 - (\xi^{ij})_t, \; \text{if } x_t \in C_i,$$
$$y_t\big((w^{ij})^T \varphi(x_t) + b^{ij}\big) \le 1 - (\xi^{ij})_t, \; \text{if } x_t \in C_j,$$
$$(\xi^{ij})_t > 0, \quad t = 1, \ldots, L. \qquad (9)$$

The class decision in this type of multi-class classifier can be performed in the following two ways. The first is based on the "Max Wins" voting strategy, in which the C(C − 1)/2 binary SVM classifiers vote for their preferred classes, and the class receiving the maximum number of votes is the final classification decision. The second method uses a tournament match, which reduces the classification time to a logarithmic scale.
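As an illustration of the "Max Wins" rule, the following sketch (in Python, which is not the language used by the authors; all names are hypothetical) tallies the votes of the C(C − 1)/2 pairwise classifiers, assuming each pairwise decision function returns +1 for class i and −1 for class j:

```python
import numpy as np

def max_wins_vote(pairwise_classifiers, x, num_classes):
    """'Max Wins' voting over the C(C-1)/2 pairwise SVMs.

    pairwise_classifiers maps a pair (i, j) with i < j to a decision
    function f_ij(x) returning +1 for class i and -1 for class j.
    """
    votes = np.zeros(num_classes, dtype=int)
    for (i, j), f_ij in pairwise_classifiers.items():
        if f_ij(x) > 0:
            votes[i] += 1          # classifier (i, j) voted for class i
        else:
            votes[j] += 1          # classifier (i, j) voted for class j
    return int(np.argmax(votes))   # class with the maximum number of votes
```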
3 Support Vector Machine Ensemble
An ensemble of classifiers is a collection of several classifiers whose individual decisions are combined in some way to classify the test examples [10]. It is known that an ensemble often shows much better performance than the individual classifiers that make it up. Hansen et al. [11] explain why an ensemble shows better performance than its individual classifiers as follows. Assume that there is an ensemble of n classifiers $\{f_1, f_2, \ldots, f_n\}$ and consider a test example x. If all the classifiers are identical, they are wrong on the same examples, and the ensemble shows the same performance as the individual classifiers. However, if the classifiers are different and their errors are uncorrelated, then when $f_i(x)$ is wrong, most of the other classifiers may still be correct, and the result of majority voting can be correct. More precisely, if the error of an individual classifier is p < 1/2 and the errors are independent, then the probability $p_E$ that the result of majority voting is incorrect is

$$p_E = \sum_{k > n/2} \binom{n}{k} p^k (1-p)^{n-k} \; < \; \sum_{k > n/2} \binom{n}{k} \Big(\frac{1}{2}\Big)^k \Big(\frac{1}{2}\Big)^{n-k} = \sum_{k > n/2} \binom{n}{k} \Big(\frac{1}{2}\Big)^n.$$

When the number of classifiers n is large, the probability $p_E$ becomes very small.
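As a small numerical illustration of this argument (a sketch only; the example numbers below are ours, not taken from the paper), the probability of a wrong majority vote can be evaluated directly:

```python
from math import comb

def majority_vote_error(n, p):
    """Probability that a strict majority of n independent classifiers,
    each with individual error rate p, are wrong at the same time."""
    k_min = n // 2 + 1  # smallest number of wrong classifiers that forms a majority
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

# For instance, 11 classifiers with individual error 0.2 give
# majority_vote_error(11, 0.2) ≈ 0.012, far below the individual error rate.
```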
The SVM has been known to show good generalization performance, and it is easy to learn exact parameters for the global optimum [2]. Because of these advantages, an ensemble of SVMs might not be considered a method for greatly improving the classification performance. However, since the practical SVM is implemented using approximated algorithms in order to reduce the computational complexity of time and space, a single SVM may not learn exact parameters for the global optimum. Sometimes, the support vectors obtained from learning are not sufficient to classify all unknown test examples completely. So, we cannot guarantee that a single SVM always provides the globally optimal classification performance over all test examples. To overcome this limitation, we propose to use an ensemble of support vector machines. The arguments mentioned above about the general ensemble of classifiers can also be applied to the ensemble of support vector machines. Figure 1 shows a general architecture of the proposed SVM ensemble. During the training phase, each individual SVM is trained independently on its own replicated training data set via the bootstrap method explained in Section 3.1. All constituent SVMs are then aggregated by the various combination strategies explained in Section 3.2. During the testing phase, a test example is applied to all SVMs simultaneously and a collective decision is obtained based on the aggregation strategy. On the other hand, the advantage of using the SVM ensemble over a single SVM can be achieved equally in the case of multi-class classification. Since the SVM is originally a binary classifier, many SVMs should be combined for multi-class classification, as mentioned in Section 2.
Fig. 1. A general architecture of the SVM ensemble
The SVM classifier for multi-class classification does not show as good performance as that for binary-class classification. So, we can also improve the classification performance in the multi-class case by taking an SVM ensemble in which each SVM classifier is designed for the multi-class classification.
3.1 Constructing the SVM Ensembles Using Bagging
In this work, we adopt the bagging technique [12] to construct the SVM ensemble. In bagging, several SVMs are trained independently via a bootstrap method and then aggregated via an appropriate combination technique. Usually, we have a single training set $TR = \{(x_i; y_i) \mid i = 1, 2, \ldots, l\}$, but we need K training sample sets to construct the SVM ensemble with K independent SVMs. As a statistical matter, we need to make the training sample sets as different as possible in order to obtain a higher improvement from the aggregation result. To do this, we use the bootstrap technique as follows. Bootstrapping builds K replicate training data sets $\{TR_k^B \mid k = 1, 2, \ldots, K\}$ by repeatedly resampling, with replacement, from the given training data set TR. Each example $x_i$ in the given training set TR may appear multiple times or not at all in any particular replicate training data set. Each replicate training set is used to train one SVM, as sketched below.
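A minimal sketch of this bootstrap step (Python/NumPy; the function and parameter names are ours, not the authors'):

```python
import numpy as np

def bootstrap_replicates(X, y, K, m=None, seed=0):
    """Build K replicate training sets TR_k^B by drawing m indices with
    replacement from the single training set TR = (X, y)."""
    rng = np.random.default_rng(seed)
    m = m if m is not None else len(X)   # the paper draws fewer samples than |TR|, e.g. 60 of 90
    replicates = []
    for _ in range(K):
        idx = rng.integers(0, len(X), size=m)   # indices drawn with replacement
        replicates.append((X[idx], y[idx]))
    return replicates
```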
3.2 Aggregating Support Vector Machines
After training, we need to aggregate the several independently trained SVMs in an appropriate combination manner. We consider two types of combination techniques: linear and nonlinear. The linear combination methods, which form a linear combination of several SVMs, include majority voting and LSE-based weighting; majority voting and LSE-based weighting are often used for bagging and boosting, respectively. The nonlinear method, which forms a nonlinear combination of several SVMs, is the double-layer hierarchical combining that uses another upper-layer SVM to combine several lower-layer SVMs.

Majority Voting. Majority voting is the simplest method for combining several SVMs. Let $f_k$ (k = 1, 2, . . . , K) be the decision function of the kth SVM in the SVM ensemble and $C_j$ (j = 1, 2, . . . , C) denote the label of the jth class. Let $N_j = \#\{k \mid f_k(x) = C_j\}$, i.e., the number of SVMs whose decisions are the jth class. Then, the final decision of the SVM ensemble $f_{mv}(x)$ for a given test vector x under majority voting is determined by

$$f_{mv}(x) = \arg\max_j N_j. \qquad (10)$$
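Putting the bootstrap step of Section 3.1 and Eq. (10) together, a sketch using scikit-learn's SVC (a modern library, not the implementation used by the authors; class labels are assumed to be non-negative integers) might look like:

```python
import numpy as np
from sklearn.svm import SVC

def train_bagged_svms(replicates, degree=2, C=1.0):
    """Train one polynomial-kernel SVM per bootstrap replicate."""
    return [SVC(kernel='poly', degree=degree, C=C).fit(Xk, yk)
            for Xk, yk in replicates]

def predict_majority_vote(svms, X):
    """Eq. (10): each SVM votes for a label; the most frequent label wins."""
    votes = np.stack([svm.predict(X) for svm in svms])           # shape (K, n_samples)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```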
The LSE-Based Weighting. The LSE-based weighting treats the several SVMs in the SVM ensemble with different weights. Often, the weights of the SVMs are determined in proportion to their classification accuracies [13]. Here, we propose to learn the weights using the LSE method as follows. Let $f_k$ (k = 1, 2, . . . , K) be the decision function of the kth SVM in the SVM ensemble, trained on the replicate training data set $TR_k^B = \{(x_i; y_i) \mid i = 1, 2, \ldots, L\}$. The weight vector w can be obtained as the least-squares solution $w = A^{+} y$, where $A = (f_i(x_j))_{K \times L}$, $y = (y_j)_{1 \times L}$, and $A^{+}$ denotes the (pseudo-)inverse of A. Then, the final decision of the SVM ensemble $f_{LSE}(x)$ for a given test vector x under the LSE-based weighting is determined by

$$f_{LSE}(x) = \mathrm{sign}\big( w \cdot [(f_i(x))_{K \times 1}] \big). \qquad (11)$$
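A sketch of this weighting step (Python/NumPy; the names are ours, and binary ±1 labels are assumed so that the sign in Eq. (11) is meaningful):

```python
import numpy as np

def lse_weights(svms, X_train, y_train):
    """Least-squares weights w solving A w ≈ y, where column i of A holds
    the i-th SVM's real-valued output on the training examples."""
    A = np.column_stack([svm.decision_function(X_train) for svm in svms])
    w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return w

def predict_lse(svms, w, X):
    """Eq. (11): sign of the weighted sum of the individual SVM outputs."""
    A = np.column_stack([svm.decision_function(X) for svm in svms])
    return np.sign(A @ w)
```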
The Double-Layer Hierarchical Combining. We can use another SVM to aggregate the outputs of the several SVMs in the SVM ensemble. This combination thus consists of a double layer of SVMs arranged hierarchically, where the outputs of the several SVMs in the lower layer feed into a super SVM in the upper layer. This type of combination looks similar to the mixture of experts introduced by Jordan et al. [14]. Let $f_k$ (k = 1, 2, . . . , K) be the decision function of the kth SVM in the SVM ensemble and F be the decision function of the super SVM in the upper layer. Then, the final decision of the SVM ensemble $f_{SVM}(x)$ for a given test vector x under the double-layer hierarchical combining is determined by

$$f_{SVM}(x) = F\big( f_1(x), f_2(x), \ldots, f_K(x) \big), \qquad (12)$$

where K is the number of SVMs in the SVM ensemble.
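A sketch of this double-layer scheme (again Python with scikit-learn, which the authors did not use; the kernel choice for the super SVM here is our own assumption):

```python
import numpy as np
from sklearn.svm import SVC

def train_hierarchical_combiner(svms, X_train, y_train):
    """Eq. (12): train a super SVM on the outputs of the K lower-layer SVMs."""
    meta = np.column_stack([svm.decision_function(X_train) for svm in svms])
    return SVC(kernel='poly', degree=2).fit(meta, y_train)

def predict_hierarchical(svms, combiner, X):
    meta = np.column_stack([svm.decision_function(X) for svm in svms])
    return combiner.predict(meta)
```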
3.3 Extension of the SVM Ensemble to the Multi-class Classification
In Section 2, we explained two extension methods, the one-against-all and the one-against-one methods, for applying the SVM to the multi-class classification problem. We can use these extension methods equally in the case of the SVM ensemble for the multi-class classification. For the C-class classification problem, we can have two types of extensions according to the level at which the SVM ensemble is inserted: (1) the binary-classifier-level SVM ensemble and (2) the multi-classifier-level SVM ensemble. The binary-classifier-level SVM ensemble consists of C SVM ensembles in the case of the "one-against-all" method or C(C − 1)/2 SVM ensembles in the case of the "one-against-one" method, and each SVM ensemble consists of K independent SVMs. So, the SVM ensemble is built at the level of binary classifiers. We obtain the final decision from the decision results of the many SVM ensembles via either the "Max Wins" voting strategy or the tournament match. The multi-classifier-level SVM ensemble consists of K multi-class classifiers, and each multi-class classifier consists of C binary classifiers in the case of
the "one-against-all" method or C(C − 1)/2 binary classifiers in the case of the "one-against-one" method. So, the SVM ensemble is built at the level of multi-class classifiers. We obtain the final decision from the decision results of the many multi-class classifiers via an appropriate aggregating strategy of the SVM ensemble.
4 Simulation Results
To evaluate the efficacy of the proposed SVM ensemble using the bagging technique, we performed two different classification experiments: the IRIS data classification and the UCI hand-written digit recognition. We compared four different classification methods: a single SVM and three SVM ensembles with the different aggregating strategies, namely majority voting, LSE-based weighting, and double-layer hierarchical combining.
4.1 IRIS Data Classification
The IRIS data set [15, 16] is one of the best known databases in the pattern recognition literature. It contains 3 classes, each consisting of 50 instances, where each class refers to a type of iris plant. One class is linearly separable from the other two, but those two are not linearly separable from each other. We used the one-against-one method for the multi-class extension and took the binary-classifier-level SVM ensemble for the classification. So, for the 3-class classification problem with the one-against-one extension method, there are three binary-classifier-level SVM ensembles ($SVM_{C_1,C_2}$, $SVM_{C_1,C_3}$, $SVM_{C_2,C_3}$), where each SVM ensemble consists of 5 independent SVMs. The final decision is obtained from the decision results of the three SVM ensembles via a tournament matching scheme. Each SVM used a 2nd-degree polynomial kernel function. We randomly selected 90 data samples for the training set. For bootstrapping, we randomly re-sampled 60 data samples with replacement from the training data set. We trained each SVM independently over its replicated training data set and aggregated the trained SVMs via the three different combination methods. We tested the four different classification methods using the test data set consisting of the remaining 60 IRIS data. Table 1 shows the classification results of the four different classification methods for the IRIS data classification.
Table 1. The classification results of IRIS data classification

Method                Classification rate
Single SVM            96.73%
Majority voting       98.0%
LSE-based weighting   98.66%
Hierarchical SVM      98.66%
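For readers who want to reproduce the flavour of this experiment, the following sketch uses scikit-learn (not the authors' software); its built-in one-against-one handling and standard bagging replace the paper's tournament scheme, so the numbers will not match Table 1 exactly:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
train_idx = rng.choice(len(X), size=90, replace=False)    # 90 random training samples
test_idx = np.setdiff1d(np.arange(len(X)), train_idx)     # remaining 60 for testing

ensemble = BaggingClassifier(
    estimator=SVC(kernel='poly', degree=2),   # 2nd-degree polynomial kernel
    n_estimators=5,                           # 5 SVMs per ensemble
    max_samples=60,                           # 60 bootstrap samples each
    random_state=0,
).fit(X[train_idx], y[train_idx])

print('test accuracy:', ensemble.score(X[test_idx], y[test_idx]))
```

Note that the `estimator` keyword is called `base_estimator` in older scikit-learn releases.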
4.2 UCI Hand-Written Digit Recognition
We used the UCI hand-written digit data [16]. Some digits from the database are shown in Figure 2. Among the UCI hand-written digits, we randomly chose 3,828 digits as the training data set and the remaining 1,797 digits as the test data set. The original image of each digit has a size of 32 × 32 pixels. It is reduced to a size of 8 × 8 pixels, where each pixel is obtained as the average of a 4 × 4 block of pixels in the original image, so each digit is represented by a feature vector of size 64 × 1.
Fig. 2. Examples of hand-written digits in UCI database
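The 4 × 4 block averaging described above can be written compactly (a NumPy sketch under our own naming, not the original preprocessing code):

```python
import numpy as np

def block_average(img_32x32):
    """Reduce a 32x32 bitmap to 8x8 by averaging non-overlapping 4x4 blocks,
    then flatten it to the 64-dimensional feature vector used above."""
    img = np.asarray(img_32x32, dtype=float).reshape(8, 4, 8, 4)
    return img.mean(axis=(1, 3)).reshape(64)
```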
We used the one-against-one method for the multi-class extension and took the binary-classifier-level SVM ensemble for the classification. So, for the 10-class classification problem with the one-against-one extension method, there are 45 binary-classifier-level SVM ensembles ($SVM_{C_0,C_1}$, $SVM_{C_0,C_2}$, . . . , $SVM_{C_8,C_9}$), where each SVM ensemble consists of 11 independent SVMs, and the tournament scheme is used for the final decision. Each SVM used a 2nd-degree polynomial kernel function. For bootstrapping, we randomly re-sampled 2,000 digit samples with replacement from the training data set consisting of 3,828 digit samples. We trained each SVM independently over its replicated training data set and aggregated the trained SVMs via the three different combination methods, namely unweighted (majority) voting, weighted voting using the LSE, and a combining SVM. A single SVM was applied for comparison. We tested the four different classification methods using the test data set consisting of 1,797 digits.
Table 2 shows the classification results of four different classification methods for the hand-written digit data classification.
Table 2. The classification results of UCI hand-written digit recognition

Method                Classification rate
Single SVM            96.99%
Majority voting       97.55%
LSE-based weighting   97.82%
Hierarchical SVM      98.01%
5 Conclusion
Usually, the practical SVM is implemented based on approximation algorithms to reduce the cost of time and space, so the obtained classification performance is far from the theoretically expected level. To overcome this limitation, we addressed the SVM ensemble that consists of several independently trained SVMs. For training each SVM, we generated many replicated training sample sets via the bootstrapping technique. All the SVMs independently trained over the replicated training sample sets were then aggregated by three combination techniques: majority voting, LSE-based weighting, and double-layer hierarchical combining. We also extended the SVM ensemble to the multi-class classification problem in two ways: the binary-classifier-level SVM ensemble and the multi-classifier-level SVM ensemble. The former builds the SVM ensemble at the level of binary classifiers and the latter at the level of multi-class classifiers. The former has C SVM ensembles in the case of the "one-against-all" method or C(C − 1)/2 SVM ensembles in the case of the "one-against-one" method, where each SVM ensemble consists of K independent SVMs. The latter consists of K multi-class classifiers, where each multi-class classifier consists of C binary classifiers in the case of the "one-against-all" method or C(C − 1)/2 binary classifiers in the case of the "one-against-one" method. We evaluated the classification performance of the proposed SVM ensemble on two different multi-class classification problems: the IRIS data classification and the hand-written digit recognition. The SVM ensembles outperform a single SVM in both applications in terms of classification accuracy. Among the three aggregation methods, the classification performance is superior in the order of the double-layer hierarchical combining, the LSE-based weighting, and the majority voting. In the future, we will consider other aggregation schemes such as boosting for constructing the SVM ensemble.
Acknowledgements The authors would like to thank the Ministry of Education of Korea for its financial support toward the Electrical and Computer Engineering Division at POSTECH through its BK21 program. This research was also partially supported by the Brain Science and Engineering Research Program of the Ministry of Science and Technology, Korea, and by the Basic Research Institute Support Program of the Korea Research Foundation.
References
[1] Cortes, C., Vapnik, V.: Support vector networks. Machine Learning 20 (1995) 273–297 397
[2] Burges, C.: A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2(2) (1998) 121–167 397, 400
[3] Joachims, T.: Making large-scale support vector machine learning practical. In: Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge, MA (1999) 397
[4] Platt, J.: Fast training of support vector machines using sequential minimal optimization. In: Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge, MA (1999) 397
[5] Weston, J., Watkins, C.: Support vector machines for multi-class pattern recognition. In: Proceedings of the 7th European Symposium on Artificial Neural Networks (1999)
[6] Bottou, L., Cortes, C., Denker, J., Drucker, H., Guyon, I., Jackel, L. D., LeCun, Y., Müller, U., Säckinger, E., Simard, P., Vapnik, V.: Comparison of classifier methods: a case study in handwriting digit recognition. In: Proceedings of the 13th International Conference on Pattern Recognition. IEEE Computer Society Press (1994) 77–87 397, 399
[7] Knerr, S., Personnaz, L., Dreyfus, G.: Single-layer learning revisited: a stepwise procedure for building and training a neural network. In: Fogelman, J. (ed.): Neurocomputing: Algorithms, Architectures and Applications. Springer-Verlag (1990) 397, 400
[8] Vapnik, V.: The Nature of Statistical Learning Theory. Springer, New York (1999) 398
[9] Schölkopf, B., Smola, A., Müller, K.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation 10(5) (1998) 1299–1319 399
[10] Dietterich, T.: Machine learning research: four current directions. The AI Magazine 18(4) (1998) 97–136 400
[11] Hansen, L., Salamon, P.: Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (1990) 993–1001 400
[12] Breiman, L.: Bagging predictors. Machine Learning 24(2) (1996) 123–140 402
[13] Kim, D., Kim, C.: Forecasting time series with genetic fuzzy predictor ensemble. IEEE Transactions on Fuzzy Systems 5(4) (1997) 523–535 403
[14] Jordan, M., Jacobs, R.: Hierarchical mixtures of experts and the EM algorithm. Neural Computation 6(5) (1994) 181–214 403
[15] Fisher, R.: The use of multiple measurements in taxonomic problems. Annals of Eugenics 7, Part II (1936) 179–188 404
[16] Bay, B.: The UCI KDD Archive [http://kdd.ics.uci.edu]. Irvine, CA: University of California, Department of Information and Computer Science (1999) 404, 405
A Comparative Study of Polynomial Kernel SVM Applied to Appearance-Based Object Recognition
Eulanda Miranda dos Santos and Herman Martins Gomes
Departamento de Sistemas e Computação, Universidade Federal da Paraíba
58109-970 Campina Grande PB, Brazil
{eulanda,hmg}@dsc.ufpb.br
Abstract. This paper investigates the performance of Support Vector Machines with linear, quadratic and cubic kernels in the problem of recognising 3D objects from 2D views. It describes an experiment using the complete set of images from the Columbia Coil100 image database. Image views were randomly selected from the object classes. Previous works used only subsets of the classes, from which only a few training and testing set sizes were extracted and object views were usually too close to each other, which may have artificially increased the recognition rates. In our experiments, we observed that the degree of the polynomial kernel played a minor role in the final results. Moreover, although recognition rates were slightly inferior to those of previous work, a clearer picture of the SVM performance on the Coil100 image database has been produced.
1 Introduction
There is a recent interest in the application of Support Vector Machines (SVM), a new class of learning algorithms derived from Statistical Learning Theory [20], to an ever growing class of Computer Vision problems. One of the reasons for this interest is that SVM are showing remarkable performance in a number of difficult learning tasks, ranging from handwritten digit recognition [6] to the problem of recognising 3D objects from their 2D views [12]. In this paper we are particularly interested in the second class of learning tasks above, which may also be referred to, in the Computer Vision literature, as appearance-based recognition [10,8,4], as opposed to geometric-based recognition [7], since there is no explicit construction of geometric models and the learning process is based primarily on the colour or intensity information found in the images. The Columbia Coil100 image database [9] is one of the best known databases to experiment with appearance-based recognition.
This work was supported by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) and DSC/COPIN/UFPB, Brazil.
It is not surprising that a reasonable number of authors have already studied the behaviour of SVM under different experimental conditions utilising this database. However, most of the previous work used only subsets of the available classes and views and worked with only a few training/testing set sizes. Although the results gave some insights into the excellent performance of SVM when compared to other algorithms, experiments using the full set of images would give a clearer picture of this performance. Moreover, most experiments were limited to linear kernel SVM. The role of the kernel in the SVM algorithm can be roughly explained as a way of mapping (usually non-linearly separable) input data into a new (linearly separable) high dimensional space. This paper builds upon the previous works by performing a study involving all classes and all possible training/testing set sizes with randomly selected views from the Coil100 image database. It also compares three different polynomial kernels: linear (degree 1), quadratic (degree 2) and cubic (degree 3). Results showed that the degree of the polynomial kernel plays a minor role in the final recognition rates, which confirms the experimental results on handwritten digit recognition reported by Vapnik in his book [21]. The following section presents an overview of SVM, followed by a review of the literature related to this work. Then, a brief description of the Columbia image database is presented. The software library used to perform the experiments is described afterwards. The main sections of this paper contain a description of the experiments and results. Finally, we present some conclusions and perspectives for future work.
2 SVM Overview
SVM belong to the class of kernel-based learning machines, which includes, among other members, Gaussian Processes, Kernel Fisher Discriminants and Kernel Principal Component Analysis. In recent years, this class of machines has shown successful results in applications from various research fields. SVM have particularly been used for pattern recognition, regression estimation, density estimation, novelty detection, and others [21]. This overview concentrates specifically on the pattern recognition use. Pattern recognition is usually accomplished through functions (defining hyperplanes) that divide the feature space into regions, according to the number of classes. SVM classifiers construct a special class of hyperplanes known as optimal separation hyperplanes. In the simplest case, with only two different classes to learn, an optimal separation hyperplane can be defined as the one with the maximal margin of separation between the pair of classes. The margin can be seen as the sum of the distances from the hyperplane to the closest training points of each of the two classes. Figure 1 illustrates this idea. In the binary classification task, with data points $x_i$ (i = 1, . . . , n) and corresponding labels $y_i \in \{-1, 1\}$, a linear kernel SVM is based on the class of hyperplanes:
Fig. 1. (a) A separation hyperplane with small margin and (b) an optimal separation hyperplane
$$(w \cdot x) + b = 0 \qquad (1)$$
where w is the weight vector ($w \in R^N$), x is the input vector and b is the bias ($b \in R$). The decision functions are of the form:

$$f(x) = \mathrm{sgn}((w \cdot x) + b) \qquad (2)$$

The geometric margin is equal to $\frac{2}{\|w\|}$. The optimal hyperplane can be found through a minimisation of the vector w:

$$\text{Minimise } \; \frac{1}{2} \|w\|^2$$
$$\text{Subject to } \; y_i (w \cdot x_i + b) \ge 1 \; \forall i$$
Maximise W (α) =
l
αi −
i=1
Subject to
l
l 1 yi yj αi αj (xi .xj ) 2 i,j=1
yi αi = 0,
i=1
αi ≥ 0, i = 1, . . . , l The data points xi appear inside an inner product. SVM can be generalised to a non-linear formulation by mapping the data points xi into a new highdimensional space Φ(xi ), generally called feature space. This mapping can be
defined implicitly by a kernel: $K(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j)$. Provided there is a suitable choice of kernel, the data can become linearly separable in the feature space despite being non-separable in the original input space. SVM can allow misclassified data through the introduction of slack variables $\xi_i$ that relax the hard-margin constraints. This produces the penalty term $C \sum_{i=1}^{n} \xi_i$, where C is the regularisation constant that determines the trade-off between the size of the margin and the number of training errors. By substituting the constant C and the kernel function, the resulting problem becomes:

$$\text{Maximise } \; \sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{l} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$$
$$\text{Subject to } \; 0 \le \alpha_i \le C, \; i = 1, \ldots, l, \qquad \sum_{i=1}^{l} \alpha_i y_i = 0$$
Different kernel functions can be used. Some of the possibilities are the RBF kernel, $K(x_i, x_j) = e^{-\|x_i - x_j\|^2 / 2\sigma^2}$; the feedforward neural network kernel, $K(x_i, x_j) = \tanh(\beta x_i \cdot x_j + b)$; and the polynomial kernel, $K(x_i, x_j) = [(x_i \cdot x_j) + 1]^d$, where d is the polynomial degree. We experimented with the polynomial kernel using d = 1, 2, 3. In order to apply SVM to multi-class recognition problems, several strategies can be employed. The most common involves breaking the problem into several binary classification tasks, also known as the one-against-one approach. This approach constructs k(k − 1)/2 binary classifiers, where k is the number of classes, and then combines the outputs of all classifiers into a single output. Other strategies to perform multi-class SVM can be found in [22] and [19]. The software library LIBSVM, used in this work, employs the one-against-one strategy.
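For concreteness, the three kernels above can be written out as follows (a Python sketch with our own naming; in practice one simply selects the kernel in the SVM library, e.g. with coef0 = 1 to obtain the "+1" term of the polynomial kernel in scikit-learn):

```python
import numpy as np
from sklearn.svm import SVC

def rbf_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2))

def neural_net_kernel(xi, xj, beta=1.0, b=0.0):
    return np.tanh(beta * np.dot(xi, xj) + b)

def polynomial_kernel(xi, xj, d=2):
    return (np.dot(xi, xj) + 1) ** d

# Polynomial-kernel SVMs of degree 1, 2 and 3, as compared in this paper:
classifiers = {d: SVC(kernel='poly', degree=d, coef0=1) for d in (1, 2, 3)}
```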
3 Related Work
Pontil and Verri [12] presented one of the first works on the application of SVM to appearance-based object recognition using the Coil100 image database. In their work, not all 100 object classes were used at once; instead, they designed experiments involving only random selections of 32 objects at a time in order to reduce the combinatorics related to having to run many binary classification experiments. Moreover, the training set for a given class was constructed using 36 views (or poses) of the object, in steps of 10 degrees. The remaining 36 views were used for testing. This may have simplified the problem, since nearby images are very similar to each other (see Figure 3 for reference). The Coil100 database was generated in a controlled environment, in which the object's pose (or view) was the only parameter allowed to vary. This was intended to separate the most relevant factor related to the object's appearance (its pose)
from the other aspects of the problem, like the presence of occlusion, background clutter, lighting conditions, noise, translation, rotation etc. A number of authors have added new complexity to the problem by synthetically modifying the images in order to allow experiments with uniformly distributed noise [12,14], object translation [12], occlusions [12,16] and background clutter [13,14,15]. In terms of technique comparison, Roth et al. [16] compared Nearest Neighbour, linear kernel SVM and SNoW (Sparse Network of Winnows, a sparse network of linear units over a common pre-defined or incrementally learnt feature space). Although all Coil100 object classes were used, their experiments considered only four training set sizes, having 4, 8, 18 and 36 views each. Roobaert [14] compared the recognition performance as well as computational costs of SVM, Nearest Neighbour and DirectSVM (a heuristic implementation of SVM), having concluded that DirectSVM had performance similar to SVM, but required substantially less computational resources and training time. However, they used a 30 object version of the image library, and considered only a subset of training views in their experiments (36, 12, 8, 4 and 2). In another work, Roobaert and co-workers [15] evaluated the performance of SVM compared to a Nearest Neighbour classifier and the Columbia image recognition system.
4 Coil100 Image Database
The Columbia image database Coil100 [9], used in our experiments, is available at www.cs.columbia.edu/CAVE. As discussed in the introduction, this database is one of the best for investigating appearance-based recognition algorithms. It contains 7200 colour images of 100 different objects. The images were acquired by placing the objects on the centre of a turntable and taking pictures from a fixed camera position. A total of 72 poses, taken every 5 degrees on the turntable, was acquired for each object. The images were then normalised to a fixed retina of 128x128 pixels. Murase and Nayar [8] give more details about the way the images were acquired. Figure 2 shows some of the Coil100 image objects, all seen at the 10 degree angle. Since the camera position is fixed (only the objects are rotated), some objects may appear with different sizes within the retina depending upon the angle. Figure 3 shows some images of object 44, with poses varying from 260 to 300 degrees.
5 LIBSVM Software Library
LIBSVM is a software library developed by Chang and Lin [1] at the National Taiwan University, which incorporates pattern classification (C-SVM, ν-SVM [17]), distribution estimation (one-class SVM [18]) and regression (ε-SVM and ν-SVM regression). The source code is available in C++ and Java. There is also a front-end implemented in Matlab, which is called OSU SVM. We used LIBSVM in C++, which led to faster training and testing times.
Fig. 2. Some objects of the Coil100 image database
Fig. 3. Views between the angles 260° and 300° of object 44
The algorithm implemented is a simplification of the Sequential Minimal Optimisation (SMO) algorithm [11] and SVMLight [5]. We chose LIBSVM for a number of reasons: it supports multi-class recognition (one-against-one strategy), the sources are available in C++, most common kernels can be used, only one result file is generated per training, and the code and memory management are more efficient than some other tools investigated, like the Matlab SVM toolbox [3] and SVMTorch [2].
6 Experiments
The overall design of the experiments is presented here and the individual processes are discussed in subsequent sections. Figure 4 shows the four main processes directly related to the experiments: (a) Image Pre-Processing, (b) Training/Testing Set Construction, (c) Classifier Training and (d) Performance Evaluation.
Fig. 4. Experiments architecture: Image Pre-Processing → Training and Testing Set Construction → Classifier Training → Performance Evaluation
6.1 Pre-processing
In order to reduce the size of the input patterns, and consequently the computational resources needed, all images were converted to 256 grey levels by taking the average of the original r, g, b colour information. Then, the images were resized to a retina of 32x32 pixels. The works described in [12], [16] and [15] applied very similar pre-processing steps. Note that, although the above pre-processing reduces the computation time and memory requirements of the SVM algorithm, some object classes may become more difficult to discriminate, since the input space patterns move closer to each other due to the reduction in the level of information encoded in the images.
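A minimal sketch of this pre-processing (Python with NumPy and Pillow, neither of which was used in the original experiments; the file path is hypothetical):

```python
import numpy as np
from PIL import Image

def preprocess(path):
    """Average the r, g, b channels into grey levels, resize to a 32x32
    retina, and linearise into the 1024-dimensional input pattern."""
    rgb = np.asarray(Image.open(path).convert('RGB'), dtype=float)
    grey = rgb.mean(axis=2)                                    # average of r, g, b
    small = Image.fromarray(grey.astype(np.uint8)).resize((32, 32))
    return np.asarray(small, dtype=float).reshape(-1)          # 32*32 = 1024 values
```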
6.2 Training and Testing Set Construction
All 100 object classes were used, together with the 72 poses per class. A total of 71 training and testing set pairs were constructed per class, each pair having T training poses and 72 − T testing poses (T = 1, 2, . . . , 71). Training poses were randomly selected, which made our experiments more realistic; the remaining poses were used for testing. Since nearby poses are highly correlated, training with such poses can artificially increase the recognition rates. In [12], for example, a sequential selection was used.
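The construction of one such training/testing pair can be sketched as follows (illustrative Python; the names are ours):

```python
import numpy as np

def split_poses(T, num_poses=72, seed=0):
    """Randomly select T training poses for an object; the remaining
    72 - T poses form the testing set (T = 1, ..., 71)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_poses)
    return perm[:T], perm[T:]   # training pose indices, testing pose indices
```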
6.3 Classifier Training
The images, pre-processed as described above, were linearised (transformed into a vector of 32x32 = 1024 values) and used directly as training/testing patterns according to the syntax required by svm-train, the program in the LIBSVM package responsible for training. We applied the standard SVM classifier and experimented with 3 different polynomial kernels: linear (degree 1), quadratic (degree 2) and cubic (degree 3). The relation between the number of support vectors SV obtained and the corresponding training set size can be seen in Figure 5. Since we have 100 classes, the number of training patterns in the figure is shown as 100*T. Note that, as the number of training poses T increases, the number of support vectors SV also increases, but not in the same proportion, which confirms the observation of Pontil and Verri [12] that the fraction of support vectors is higher when a high-dimensional input space is combined with a small number of training exemplars.
6.4 Performance Evaluation
When the training of all classifiers was finished, we moved on to the performance evaluation process. For each testing set of size 72-T, where T is the corresponding training set size, we ran the program svm-predict (part of LIBSVM) to classify the untrained testing poses according to the support vectors obtained in the previous stage. The actual number of testing patterns in the graph is 7200-100*T, for the same reasons as in Figure 5.
Fig. 5. Number of created support vectors (SV) versus the training set size (100*T) for the three kernel functions investigated (linear, quadratic and cubic)
Figure 6 presents a graph containing the recognition rates for each training set size T. Among the 3 kernels investigated, the quadratic kernel presented a performance slightly superior to the others. It is important to note that the differences between the kernel performances are minimal, which confirms the results of Vapnik [20,21], in which the kind of kernel (linear and radial basis function) did not cause much change in the final performance in a problem of handwritten digit recognition. The average recognition rate was 82.00% for the quadratic polynomial kernel, 81.48% for the cubic and 81.20% for the linear. The best rate was 94.00%, obtained for the cubic kernel with T = 71, and the worst rate, 43.42%, was also obtained for the cubic kernel, with T = 1. Table 1 summarises these results. Note that the worst recognition rates happened when using the smallest training sets, and the rates gradually increased toward larger training set sizes, as expected. Likewise, the best recognition rates happened with the largest training sets (T = 71).
7 Conclusions
This work presented the experimental results of a study comparing the performance of an SVM classifier with three different kernel functions: linear, quadratic and cubic.
Fig. 6. Recognition curves for each of the kernels: linear, quadratic and cubic
Table 1. Worst, best and average recognition rates obtained for three different kernels: linear, quadratic and cubic

Kernel      Worst Rate       Best Rate        Average Rate
Linear      44.34% (T=1)     91.00% (T=71)    81.20%
Quadratic   43.80% (T=1)     92.00% (T=71)    82.00%
Cubic       43.42% (T=1)     94.00% (T=71)    81.48%
The Coil100 image database was used as a test bed for the experiments. Our results confirmed an observation from previous works, e.g. [20,21], that the kind of kernel plays a minor role in the final classification results. All three kernels investigated in this paper presented performances very close to each other; only a small superiority of the quadratic polynomial kernel was observed, which yielded an average recognition rate of 82.00%. The recognition rates were overall smaller than the ones obtained by previous works, like [12] and [16]. This was already expected, since we used all 100 classes available for training and built training and testing sets of all possible sizes. Moreover, we randomly selected views to build these training and testing sets,
therefore reducing the risk of systematically selecting views that are too easy or too difficult to learn or classify. In this paper, we were interested in studying the behaviour of the SVM performance under training sets of varying sizes, in other words, under a varying problem difficulty (a larger training set meaning an easier problem to solve). Indeed, it was possible to see in Figure 6 that the best recognition rates were associated with the largest training set sizes. The implementation of a k-fold cross-validation study could enhance the accuracy of our results, at the cost of a considerable increase in the number of experiments to perform. This was left as future work. We are currently developing a comparison between the performance of SVM and a standard Neural Network algorithm, under the same experimental conditions described in this paper.
References
1. Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm. 412
2. R. Collobert and S. Bengio. SVMTorch: Support vector machines for large-scale regression problems. Journal of Machine Learning Research, 1:143–160, 2001. 413
3. S. Gunn. Support vector machines for classification and regression. Technical Report ISIS-1-98, Department of Electronics and Computer Science, University of Southampton, 1998. 413
4. C. Y. Huang, O. I. Camps, and T. Kanungo. Object recognition using appearance-based parts and relations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 877–883, Puerto Rico, June 1997. IEEE Computer Society Press. 408
5. T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169–184. MIT Press, 1999. 413
6. Y. LeCun, L. Jackel, L. Bottou, A. Brunot, C. Cortes, J. Denker, H. Drucker, I. Guyon, U. Müller, E. Säckinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In Proceedings of the International Conference on Artificial Neural Networks, pages 53–60, 1995. 408
7. J. Mundy, A. Liu, N. Pillow, A. Zisserman, S. Abdallah, S. Utcke, S. Nayar, and C. Rothwell. An experimental comparison of appearance and geometric model based recognition. In International Workshop on Object Representations in Computer Vision, 1996. In association with ECCV 1996. 408
8. H. Murase and S. K. Nayar. Visual learning and recognition of 3-D objects from appearance. International Journal of Computer Vision, 14:5–24, 1995. 408, 412
9. S. K. Nayar, S. A. Nene, and H. Murase. Real-time 100 object recognition system. In Proceedings ARPA Image Understanding Workshop, pages 1223–1227, San Francisco, USA, February 1996. Tech report CUCS-019-95. 408, 412
10. S. A. Nene, S. K. Nayar, and H. Murase. SLAM: Software library for appearance matching. In Proceedings of the ARPA Image Understanding Workshop, Monterey, November 1994. 408
11. J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185–208. MIT Press, 1999. 413
12. M. Pontil and A. Verri. Support vector machines for 3D object recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(6):637–646, 1998. 408, 411, 412, 414, 416
13. D. Roobaert. Improving the generalization of linear support vector machines: an application to 3D object recognition with cluttered background. In Proceedings of the SVM Workshop at the 16th International Conference on Artificial Intelligence, Stockholm, Sweden, August 1999. 412
14. D. Roobaert. DIRECTSVM: a fast and simple support vector machine perceptron. In IEEE to be known, pages 356–365, 2000. 412
15. D. Roobaert, P. Nillus, and J. O. Eklundh. Comparison of learning approaches to appearance-based 3D object recognition with and without cluttered background. In Proceedings of the Fourth Asian Conference on Computer Vision, Taipei, Taiwan, January 2000. 412, 414
16. D. Roth, M. H. Tang, and N. Ahuja. Learning to recognise objects. In Proc. CVPR, pages 724–731, 2000. 412, 414, 416
17. B. Schölkopf. Support Vector Learning. PhD thesis, University of Berlin, 1997. 412
18. B. Schölkopf, J. Platt, et al. Estimating the support of a high-dimensional distribution. Technical Report 99-87, Microsoft Research, 1999. 412
19. Friedhelm Schwenker. Hierarchical support vector machines for multi-class pattern recognition. In Proceedings of the Fourth International Conference on Knowledge-Based Intelligent Engineering Systems, September 2000, pages 561–565, Brighton, UK, 2000. 411
20. V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995. 408, 415, 416
21. V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 2nd edition, 1999. 409, 415, 416
22. J. Weston and C. Watkins. Support vector machines for multiclass pattern recognition. In Proceedings of the Seventh European Symposium on Artificial Neural Networks, 1999. 411
Author Index
Ayat, Nedjem-Eddine . . . . . . . . . . 354 Bang, Sung-Yang . . . . . . . . . . . . . . 397 Barla, Annalisa . . . . . . . . . . . . . . . . . 83 Bengio, Samy . . . . . . . . . . . . . . . . . . . . 8 Bengio, Yoshua . . . . . . . . . . . . . . . . . . 8 Bigun, Josef . . . . . . . . . . . . . . . . . . . 249 Bileschi, Stanley M. . . . . . . . . . . . .135 Blanz, Volker . . . . . . . . . . . . . . . . . . 334 Boratyn, Grzegorz M. . . . . . . . . . .198 Byun, Hyeran . . . . . . . . . . . . . 213, 268 Caputo, B. . . . . . . . . . . . . . . . . . . . . . .97 Cauwenberghs, Gert . . . . . . 120, 278 Chakrabartty, Shantanu . . . . . . . 278 Chang, Ming-Wei . . . . . . . . . . . . . . 159 Cheriet, Mohamed . . . . . . . . . . . . . 354 Collobert, Ronan . . . . . . . . . . . . . . . . 8 Ding, Xiaoqing . . . . . . . . . . . . . . . . 260 Dong, Jian-xiong . . . . . . . . . . . . . . . .53 Dork´ o, Gy. . . . . . . . . . . . . . . . . . . . . . .97 Franceschi, Emanuele . . . . . . . . . . . 83 Fugate, Mike . . . . . . . . . . . . . . . . . . 186 Fumera, Giorgio . . . . . . . . . . . . . . . . 68 Gattiker, James R. . . . . . . . . . . . . 186 Genov, Roman . . . . . . . . . . . . . . . . .120 Gerstner, Wulfram . . . . . . . . . . . . . 249 Gomes, Herman Martins . . . . . . . 408 Heisele, Bernd . . . . . . . . . . . . 135, 334 Huang, Jennifer . . . . . . . . . . . . . . . .334 Itakura, Fumitada . . . . . . . . . . . . . 171 Je, Hong-Mo . . . . . . . . . . . . . . . . . . .397 Johansson , Bj¨ orn . . . . . . . . . . . . . 388 Jung, Keechul . . . . . . . . . . . . . . . . . 293 Juszczak, Piotr . . . . . . . . . . . . . . . . . 40 Kahl, Fredrik . . . . . . . . . . . . . . . . . . 388 Kang, Seonghoon . . . . . . . . . . . . . . 268 Kim, Daijin . . . . . . . . . . . . . . . . . . . .397
Kim, Hyun-Chul . . . . . . . . . . . . . . . 397 Kim, Jin Hyung . . . . . . . . . . . . . . . 293 Kim, Kwang In . . . . . . . . . . . . . . . . 293 Kitamoto, Asanobu . . . . . . . . . . . . 237 Krzy˙zak, Adam . . . . . . . . . . . . . . . . . 53 Lee, Seong-Whan . . . . 213, 268, 370 Li, Jianmin . . . . . . . . . . . . . . . . . . . . 342 Li, Zeyu . . . . . . . . . . . . . . . . . . . . . . . 321 Lin, Chih-Jen . . . . . . . . . . . . . . . . . . 159 Lin, Fuzong . . . . . . . . . . . . . . . . . . . .342 Ma, Yong . . . . . . . . . . . . . . . . . . . . . . 260 Milanova, Mariofanna . . . . . . . . . .198 Mukherjee, Neelanjan . . . . . . . . . . . . 1 Mukherjee, Sayan . . . . . . . . . . . . . . . . 1 Nakajima, Chikahito . . . . . . . . . . . 112 Narasimhamurthy, Anand M. . . 144 Navarrete, Pablo . . . . . . . . . . . . . . . . 24 Niemann, H. . . . . . . . . . . . . . . . . . . . . 97 Odone, Francesca . . . . . . . . . . . . . . . 83 Pang, Shaoning . . . . . . . . . . . . . . . . 397 Pontil, Massimiliano . . . . . . . . . . . 112 Roli, Fabio . . . . . . . . . . . . . . . . . . . . . .68 R¨ uping, Stefan . . . . . . . . . . . . . . . . .310 Santos, Eulanda Miranda dos . . 408 Sekhar, C. Chandra . . . . . . . . . . . .171 Sharma, Rajeev . . . . . . . . . . . . . . . .144 Smeraldi, Fabrizio . . . . . . . . . . . . . 249 Smolinski, Tomasz G. . . . . . . . . . . 198 Solar, Javier Ruiz del . . . . . . . . . . . 24 Suen, Ching Y. . . . . . . . . . . . . 53, 354 Takeda, Kazuya . . . . . . . . . . . . . . . 171 Tang, Shiwei . . . . . . . . . . . . . . . . . . .321 Tax, David M. J. . . . . . . . . . . . . . . . 40 Verri, Alessandro . . . . . . . . . . . . . . . 83
Walawalkar, L. . . . . . . . . . . . . . . . . 144 Weng, Ruby C. . . . . . . . . . . . . . . . . 159 Wrobel, Andrzej . . . . . . . . . . . . . . . 198 Xi, Dihua . . . . . . . . . . . . . . . . . . . . . . 370
Yan, Shuicheng . . . . . . . . . . . . . . . . 321 Yeasin, Mohammad . . . . . . . . . . . . 144 Zhang, Bo . . . . . . . . . . . . . . . . . . . . . 342 Zurada, Jacek M. . . . . . . . . . . . . . . 198