ADVANCES IN IMAGING AND ELECTRON PHYSICS VOLUME 134
EDITOR-IN-CHIEF
PETER W. HAWKES CEMES-CNRS Toulouse, France
ASSOCIATE EDITORS
BENJAMIN KAZAN Palo Alto, California
HONORARY ASSOCIATE EDITOR
TOM MULVANEY
Advances in
Imaging and Electron Physics
Edited by
PETER W. HAWKES CEMES-CNRS Toulouse, France
VOLUME 134
Elsevier Academic Press 525 B Street, Suite 1900, San Diego, California 92101-4495, USA 84 Theobald’s Road, London WC1X 8RR, UK
This book is printed on acid-free paper. Copyright ß 2005, Elsevier Inc. All Rights Reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the Publisher. The appearance of the code at the bottom of the first page of a chapter in this book indicates the Publisher’s consent that copies of the chapter may be made for personal or internal use of specific clients. This consent is given on the condition, however, that the copier pay the stated per copy fee through the Copyright Clearance Center, Inc. (www.copyright.com), for copying beyond that permitted by Sections 107 or 108 of the U.S. Copyright Law. This consent does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating new collective works, or for resale. Copy fees for pre-2005 chapters are as shown on the title pages. If no fee code appears on the title page, the copy fee is the same as for current chapters. 1076-5670/2005 $35.00 Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone: (þ44) 1865 843830, fax: (þ44) 1865 853333, E-mail:
[email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting ‘‘Customer Support’’ and then ‘‘Obtaining Permissions.’’ For all information on all Academic Press publications visit our Web site at www.books.elsevier.com ISBN: 0-12-014776-9 PRINTED IN THE UNITED STATES OF AMERICA 05 06 07 08 09 10 9 8 7 6 5 4 3 2
1
CONTENTS
Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Future Contributions . . . . . . . . . . . . . . . . . . . . . . . .
vii ix xi
Circulant Matrix Representation of Feature Masks and Its Applications Rae-hong Park and Byung Ho Cha I. II. III. IV. V.
Introduction . . . . . . . . . . Mathematical Preliminaries . . Edge and Feature Detection in Advanced Topics . . . . . . . Conclusions . . . . . . . . . . References . . . . . . . . . . .
. . . . the . . . . . .
. . . . . . DFT . . . . . . . . .
. . . . . . . . . . Domain . . . . . . . . . . . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
2 3 12 43 65 67
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. 70 . 74 . 80 . 85 . 99 . 106 . 109
Phase Problem and Reference-Beam Diffraction Qun Shen I. II. III. IV. V. VI.
Introduction and Overview . . . . . . . . Phase-Sensitive DiVraction Theory . . . . Geometry and Symmetry Considerations . Experiment Procedure and Examples . . . Retrieval of Individual Phases. . . . . . . Discussion and Conclusions . . . . . . . . References . . . . . . . . . . . . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
Fractal Encoding Domenico Vitulano I. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 II. Fractals Catch Natural Information . . . . . . . . . . . . . . . 115 III. Improving the Classical Fractal Transform . . . . . . . . . . . . 127 v
vi
CONTENTS
IV. Fractals Meet Other Coding Transforms . . . . . . V. Conclusions . . . . . . . . . . . . . . . . . . . . . . Appendix: Another Way of Fractal Encoding: Bath Fractal Transform . . . . . . . . . . . . . . . . . . References. . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . .
164 169
. . . . . . . . . . . .
169 170
Morphologically Debiased Classifier Fusion: A Tomography-Theoretic Approach David Windridge I. II. III. IV. V.
Introduction . . . . . . . . . . . . . . . . . . . . . . . . Methodology of Tomographic Classifier Fusion . . . . . . Postcombination Tomographic Filtration . . . . . . . . . An Example Application . . . . . . . . . . . . . . . . . . Dimensionality Issues: Empirical and Theoretical Constraints on the Relative Performance of Tomographic Classifier Fusion . . . . . . . . . . . . . . . . . . . . . . VI. Morphology-Centered Classifier Combination: Retrospect, Prospect . . . . . . . . . . . . . . . . . . . . References. . . . . . . . . . . . . . . . . . . . . . . . . .
. . . .
. . . .
. . . .
182 190 220 235
. . .
244
. . . . . .
261 265
CONTRIBUTORS
Numbers in parentheses indicate the pages on which the authors’ contributions begin.
Byung Ho Cha (1), Department of Electronic Engineering, Sogang University, Seoul 100–611, Korea Rae-hong Park (1), Department of Electronic Engineering, Sogang University, Seoul 100–611, Korea Qun Shen (69), Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439, USA Domenico Vitulano (113), Instituto per le Applicazioni del Calcolo, Consiglio Nazionale delle Richerche, Viale del Policlinico 137, 00161-Rome, Italy David Windridge (181), School of Electronics and Physical Science, University of Surrey, Guildford Surrey, GU2 7XH, United Kingdom
vii
PREFACE
This new volume opens with a contribution to the continuing task of putting image processing routines on a sound mathematical footing. R.-H. Park and B.H. Cha first present the properties of circulant matrices and their special properties, which make them valuable for representing masks. They then go on to explore more advanced topics, notably orthogonal transform-based interpretation and optical control systems. The phase problem is another ongoing task in many branches of physics. In x-ray crystallography, many ways of obtaining the phase of the structure factor have been devised and a recent addition to the armoury is reference-beam diVraction, introduced by Q. Shen who describes the procedure in detail here. The author takes us systematically through phase-sensitive diVraction theory, geometry, experimental procedures and examples and retrieval of individual phases. The third contribution, by D. Vitulano, explains how fractals are used in coding theory. After explaining the technique at length, there is a very useful section on the relations between fractal coding and the more traditional approaches to coding, including vector quantization. The volume ends with a description of a tomography-theoretic approach to classifier fusion by D. Windridge. This is an attempt to construct a general theoretical framework for constructing multiple classifier systems. I am convinced that this very complete presentation of these ideas will help to make them more widely recognised. In conclusion, I thank all the authors for the trouble they have taken to make their material accessible to a wide readership and list forthcoming contributions to these Advances. Peter Hawkes
ix
FUTURE CONTRIBUTIONS
G. Abbate New developments in liquid-crystal-based photonic devices S. Ando Gradient operators and edge and corner detection A. Asif Applications of noncausal Gauss-Markov random processes in multidimensional image processing C. Beeli Structure and microscopy of quasicrystals M. Bianchini, F. Scarselli, and L. Sarti Recursive neural networks and object detection in images G. Borgefors Distance transforms A. Bottino Retrieval of shape from silhouette A. Buchau Boundary element or integral equation methods for static and timedependent problems B. Buchberger Gro¨bner bases J. Caulfield Optics and information sciences C. Cervellera and M. Muselli The discrepancy-based approach to neural network learning T. Cremer Neutron microscopy H. Delingette Surface reconstruction based on simplex meshes A. R. Faruqi Direct detection devices for electron microscopy xi
xii
FUTURE CONTRIBUTIONS
R. G. Forbes Liquid metal ion sources J. Y.-l. Forrest Grey systems and grey information E. Fo¨ rster and F. N. Chukhovsky X-ray optics A. Fox The critical-voltage eVect P. Geuens and D. van Dyck (vol. 136) S-matrix theory for electron channelling in high-resolution electron microscopy G. Gilboa, N. Sochen, and Y. Y. Zeevi (vol. 136) Real and complex PDE-based schemes for image sharpening and enhancement L. Godo & V. Torra Aggregation operators A. Go¨ lzha¨ user Recent advances in electron holography with point sources H. Harmuth and B. MeVert (vol. 137) Dogma of the continuum and the calculus of finite diVerences in quantum physics K. Hayashi X-ray holography M. I. Herrera The development of electron microscopy in Spain D. Hitz Recent progress on HF ECR ion sources H. Ho¨ lscher and A. Schirmeisen (vol. 135) Dynamic force microscopy and spectroscopy D. P. Huijsmans and N. Sebe Ranking metrics and evaluation measures K. Ishizuka Contrast transfer and crystal images
FUTURE CONTRIBUTIONS
xiii
K. Jensen Field-emission source mechanisms L. Kipp Photon sieves G. Ko¨ gel Positron microscopy T. Kohashi Spin-polarized scanning electron microscopy W. Krakow Sideband imaging B. Lencova´ Modern developments in electron optical calculations R. Lenz (vol. 138) Aspects of colour image processing W. Lodwick Interval analysis and fuzzy possibility theory R. Lukac Weighted directional filters and colour imaging L. Macaire, N. Vandenbroucke, and J.-G. Postaire Color spaces and segmentation M. Matsuya Calculation of aberration coeYcients using Lie algebra L. Mugnier, A. Blanc, and J. Idier Phase diversity K. Nagayama (vol. 137) Electron phase microscopy A. Napolitano (vol. 135) Linear filtering of generalized almost cyclostationary signals S. A. Nepijko, N. N. Sedov, and G. Scho¨ nhense (vol. 136) Measurement of electric fields on the object surface in emission electron microscopy
xiv
FUTURE CONTRIBUTIONS
M. A. O’Keefe Electron image simulation J. OrloV and X. Liu (vol. 138) Optics of a gas field-ionization source D. Oulton and H. Owens Colorimetric imaging N. Papamarkos and A. Kesidis The inverse Hough transform K. S. Pedersen, A. Lee, and M. Nielsen The scale-space properties of natural images E. Rau Energy analysers for electron microscopes H. Rauch The wave-particle dualism E. Recami Superluminal solutions to wave equations ˇ eha´ cˇ ek, Z. Hradil, J. Perˇina, S. Pascazio, P. Facchi, and M. Zawisky J. R Neutron imaging and sensing of physical fields G. Ritter Lattice-based artifical neural networks J.-F. Rivest Complex morphology G. Schmahl X-ray microscopy G. Scho¨ nhense, C. M. Schneider, and S. A. Nepijko Time-resolved photoemission electron microscopy F. Shih General sweep mathematical morphology R. Shimizu, T. Ikuta, and Y. Takai Defocus image modulation processing in real time
FUTURE CONTRIBUTIONS
xv
S. Shirai CRT gun design methods K. Siddiqi and S. Bouix (vol. 135) The Hamiltonian approach to computer vision N. Silvis-Cividjian and C. W. Hagen Electron-beam-induced deposition T. Soma Focus-deflection systems and their applications Q. F. Sugon Geometrical optics in terms of CliVord algebra W. Szmaja Recent developments in the imaging of magnetic domains I. Talmon Study of complex fluids by transmission electron microscopy I. J. Taneja (vol. 138) Divergence measures and their applications M. E. Testorf and M. Fiddy Imaging from scattered electromagnetic fields, investigations into an unsolved problem R. Thalhammer (vol. 135) Virtual optical experiments M. Tonouchi Terahertz radiation imaging N. M. Towghi Ip norm optimal filters Y. Uchikawa Electron gun optics K. Vaeth and G. Rajeswaran Organic light-emitting arrays J. Valde´ s (vol. 138) Units and measures, the future of the SI D. Walsh (vol. 138) The importance-sampling Hough transform
xvi
FUTURE CONTRIBUTIONS
G. G. Walter Recent studies on prolate spheroidal wave functions C. D. Wright and E. W. Hill Magnetic force microscopy B. Yazici Stochastic deconvolution over groups M. Yeadon Instrumentation for surface studies T. Zouros Hemispherical deflector analysers
ADVANCES IN IMAGING AND ELECTRON PHYSICS, VOL. 134
Circulant Matrix Representation of Feature Masks and Its Applications RAE-HONG PARK AND BYUNG HO CHA Department of Electronic Engineering, Sogang University, Seoul 100–611, Korea
I. Introduction . . . . . . . . . . . . . . . . . . II. Mathematical Preliminaries. . . . . . . . . . . . . A. Vector-Matrix Representation . . . . . . . . . . 1. Background of Vector-Matrix Representation . . . 2. Special Matrices . . . . . . . . . . . . . . B. DFT Domain Interpretation . . . . . . . . . . . 1. Circulant Matrix Interpretation in the DFT Domain 2. Eigenvalue Analysis in the DFT Domain . . . . . C. Orthogonal Transforms . . . . . . . . . . . . . D. Summary . . . . . . . . . . . . . . . . . . III. Edge and Feature Detection in the DFT Domain . . . . A. Edge Detection . . . . . . . . . . . . . . . . 1. Compass Gradient Edge Masks and Their Eigenvalue 2. Frei–Chen Edge Masks . . . . . . . . . . . . 3. Complex-Valued Edge Masks . . . . . . . . . B. Feature Detection . . . . . . . . . . . . . . . 1. Compass Roof Edge Masks . . . . . . . . . . 2. Frei–Chen Line Masks . . . . . . . . . . . . 3. Complex-Valued Feature Masks . . . . . . . . C. Summary . . . . . . . . . . . . . . . . . . IV. Advanced Topics . . . . . . . . . . . . . . . . A. Orthogonal Transform-Based Interpretation . . . . . 1. DCT and DST Interpretation . . . . . . . . . 2. DHT Interpretation . . . . . . . . . . . . . 3. KLT Interpretation . . . . . . . . . . . . . B. Application to Other Fields . . . . . . . . . . . 1. Optical Control System . . . . . . . . . . . . 2. Information System . . . . . . . . . . . . . C. Results and Discussions . . . . . . . . . . . . D. Summary . . . . . . . . . . . . . . . . . . V. Conclusions . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
2 3 4 4 6 8 8 10 11 12 12 12 13 25 33 35 35 36 37 43 43 43 44 48 54 59 59 61 63 65 65 67
1 ISSN 1076-5670/05 DOI: 10.1016/S1076-5670(04)34001-2
Copyright 2005, Elsevier Inc. All rights reserved.
2
PARK AND CHA
I. Introduction In this article, we present a circulant matrix interpretation of the edge and feature detection of images in the frequency domain and its further applications in various fields. This article presents a unified framework of previously published papers on the circulant matrix representation of edge detection and feature detection. We have reviewed the literature and unified the contexts based on a common vector-matrix representation of the circulant matrix. The spatial-domain relationship between the circulant matrix and the convolution in the linear time invariant (LTI) system is presented as a key idea of this work. Using the input-output relationship, we have presented the LTI system in vector-matrix form using a circulant matrix. The circulant matrix has several useful properties in the discrete Fourier transform (DFT) domain and it is related to edge detection by compass gradient masks and Frei–Chen masks. We have presented the eigenvalue interpretation in the one-dimensional (1D) DFT domain, which makes the analysis of the compass gradient masks and Frei–Chen masks in the DFT domain simple. Similarly, feature masks are analyzed in the DFT domain. As an extension to the feature masks, the discrete cosine transform (DCT), discrete sine transform (DST), and discrete Hartley transform (DHT) masks are interpreted in the context of Frei–Chen masks. Also, the DHT interpretation using the singular value decomposition (SVD), based on the Karhunen–Loeve transform (KLT), is introduced by the circulant symmetric matrix. The rest of this article is structured as follows: Section II reviews the basic matrix theory, especially special matrices such as circulant and circulant symmetric matrices, and a general case of orthogonal transforms. We explain the relationship between the convolution operation in the LTI system and the Toeplitz matrix. According to the properties of a circulant matrix, we can use eigenvalue analysis and diagonalization property in the DFT domain (Davis, 1994; Gray, 2000; Park and Choi, 1989, 1992). The circulant symmetric matrix also is related to representation of covariance matrices (Lay, 2003; Uenohara and Kanade, 1998). In Section III, edge or line detection, one of fundamental steps in computer vision and pattern analysis, is presented, where edges or lines as features represent abrupt discontinuities in gray level, color, texture, motion, and so on (Gonzalez and Wintz, 1977; Jain, 1989; Rosenfeld and Kak, 1982). Twodimensional (2D) 3 3 compass gradient masks such as the Prewitt, Sobel, and Kirsch masks, each of which consists of a set of eight directional edge components, have been commonly used in edge detection for their simplicity (Park and Choi, 1989, 1992; Yu, 1983). For a given pixel, at which 3 3
CIRCULANT MATRIX REPRESENTATION OF MASKS
3
masks are centered, the relationship between eight intensity values of neighboring pixels and their kth (0 k 8) directional edge strength values are presented. The relationship can be expressed in vector-matrix form, where neighboring pixels are defined as the pixels covered by the 3 3 masks and diagonalized by the DFT matrix. In addition, edge detection by the orthogonal set of the 3 3 Frei–Chen masks is proposed based on a vector space approach (Frei and Chen, 1977; Park, 1990, 1998a; Park and Choi, 1990). Edge detection using the Frei–Chen masks is achieved by mapping the intensity vector using a linear transformation and then detecting edges based on the angle between the nine-dimensional (9D) intensity vector and its projection onto the edge subspace (Park, 1990). The 1D DFT domain interpretation of 3 3 compass gradient edge masks can be extended to complex-valued mask cases, making use of the circularity of the complexvalued weight matrix. Complex-valued compass gradient Prewitt and Sobel edge masks are expressed, in the 1D spatial and frequency domains, in terms of the two types of real-valued Frei–Chen masks (Park, 1998b, 1999a). The relationship between the compass roof masks and the Frei–Chen line masks also is presented (Park, 1998a). Generalization to analysis of N-directional complex-valued feature masks is presented (Park, 2002c). Simulation results with the synthetic image show the validity of the proposed interpretation of the edge and feature masks. In Section IV, we can interpret four edge masks derived from the eightpoint DHT. The DHT is real valued and computationally fast; thus it has been applied to various signal processing and interpretation applications. Basis functions of the eight-point DHT formulate the 3 3 DHT masks that are closely related to the Frei–Chen masks (Bracewell, 1986; Park et al., 1998). Similarly, the DCT and DST edge masks can be derived from the eight-point DCT and DST basis functions, respectively (Park, 1999b). We present DCT and DST basis functions first and then DHT basis functions. In addition, DHT basis functions are closely connected with KLT basis functions in advanced applications (Park, 2000, 2002a,b). Simulations with the synthetic and real (Lena) images show the validity of the proposed interpretation of various edge and feature masks. Section V concludes the work.
II. Mathematical Preliminaries This section reviews the matrix representation of a general matrix and special kinds of matrices such as circulant matrices and symmetric circulant matrices. Properties of diagonalization and orthogonality of these matrices
4
PARK AND CHA
in the DFT domain are explained in detail. Finally, we present N-point orthogonal transforms. A. Vector-Matrix Representation This section describes a general representation of a matrix and matrix properties, such as transpose, inverse, symmetry, diagonalization, and orthogonality. In addition, special matrices such as circulant matrices and symmetric circulant matrices are explained as mathematical tools for signal processing, especially related to the convolution in the LTI system. 1. Background of Vector-Matrix Representation A matrix is a concise and useful way of representing a linear transformation. The transformations given by the equations g1 ¼ a11 f1 þ a12 f2 þ a13 f3 þ . . . þ a1N fN g2 ¼ a21 f1 þ a22 f2 þ a23 f3 þ . . . þ a2N fN g3 ¼ a31 f1 þ a32 f2 þ a33 f3 þ . . . þ a3N fN .. . gN ¼ aN1 f1 þ aN2 f2 þ aN3 f3 þ . . . þ aNN fN are represented in vector-matrix form by 2 3 2 g1 a11 a12 a13 6 g2 7 6 a21 a22 a23 6 7 6 6 g3 7 6 a31 a32 a33 6 7¼6 6 .. 7 6 .. .. .. 4 . 5 4 . . . gN
aN1
aN2
aN3
... ... ...
a1N a2N a3N .. .
. . . aNN
32
f1 7 6 f2 76 7 6 f3 76 76 .. 54 .
ð1Þ
3 7 7 7 7: 7 5
ð2Þ
fN
Eq. (2) can be expressed simply by g ¼ Af;
ð3Þ
which represents the input-output relationship of the linear system that will be explained in detail in Section III. Note that A denotes the linear system transformation with f (g) denoting the input (output) of the linear system. In Eq. (3), the N N matrix A has several useful properties under the proper constraints. The transpose of the N N matrix A is denoted by At. The ith row of A is equal to the ith column of At, i.e., ðAt Þij ¼ Aji :
5
CIRCULANT MATRIX REPRESENTATION OF MASKS
2
a11 6 a21 6 6 A ¼ 6 a31 6 .. 4 .
a12 a22 a32 .. .
a13 a23 a33 .. .
... ... ...
a1N a2N a3N .. .
aN1
aN2
aN3
...
aNN
3
2
a11 7 6 a12 7 6 7 6 7 and At ¼ 6 a13 7 6 .. 5 4 .
a21 a22 a23 .. .
a31 a32 a33 .. .
... ... ...
aN1 aN2 aN3 .. .
a1N
a2N
a3N
...
aNN
3 7 7 7 7 7 5
ð4Þ where Aij signifies the (i, j)th element of matrix A. If the N N matrix A satisfies At ¼ A, the matrix A is called a symmetric matrix. The inverse of the N N matrix A is denoted by A1. If there exists an N N matrix A1 such that A1 A ¼ IN and AA1 ¼ IN , the N N matrix A is invertible, where IN denotes the N N identity matrix. Let A ¼ ½a0 a1 . . . aN1 represent the N N matrix, with the term in brackets denoting the component of the matrix. Then the orthonormal matrix A satisfies 2 t 3 2 3 a0 1 0 ... 0 6 at 7 60 1 ... 07 6 1 7 ¼ IN At A ¼ 6 . 7½ a0 a1 . . . aN1 ¼ 6 ð5Þ .. 7 4 ... ... 4 .. 5 .5 0 0 ... 1 at N1
where A A ¼ IN and At ¼ A1 , i.e., ( 1 for i ¼ j t : ai aj ¼ 0 otherwise t
ð6Þ
The N N matrix A is diagonalizable if and only if the matrix A has n linearly independent eigenvectors. In fact, A ¼ PDP1 , with a diagonal matrix D, if and only if the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are eigenvalues of A that correspond, respectively, to the eigenvectors in P. According to the eigenvalue decomposition theorem, the input-output equation g ¼ Af in Eq. (3) can be written by g ¼ PDP1 f: Multiplying both sides by P
1
gives
P1 g ¼ DP1 f: 1
0
ð7Þ
ð8Þ
1
With g0 ¼ P g and f ¼ P f, Eq. 8 can be rewritten as g0 ¼ Df 0
ð9Þ
where g0 and f0 are transforms of g and f, respectively. Transformation
6
PARK AND CHA
reduces the number of coeYcients, providing the simple representation of a system in the transform domain. 2. Special Matrices As shown in Figure 1, the input-output relationship is given by g ¼ Hf
ð10Þ
where f and g are N-dimensional (N-D) column vectors and the matrix H denotes a linear transformation of the LTI system. H is linear if H½k1 f 1 þ k2 f 2 ¼ k1 Hf 1 þ k2 Hf 2
ð11Þ
where k1 and k2 are constants and f1 and f2 are any two inputs. If we assume k1 ¼ k2 ¼ 1, we can write H½f 1 þ f 2 ¼ Hf 1 þ Hf 2 ;
ð12Þ
which is called the additivity. If we assume k2 ¼ 0, we can write H ½k1 f 1 ¼ k1 Hf 1 ;
ð13Þ
which is called the homogeneity. The input-output relationship is said to be position-invariant if gðx x0 Þ ¼ H½ f ðx x0 Þ
ð14Þ
where x represents an independent position variable, H[] denotes the linear input-output transformation, and x0 denotes a constant. In a positioninvariant system, the shifted input gives the shifted output. The discrete convolution formulation is based on the assumption that the sampled functions are periodic with a period N. Let f(x), g(x), and h(x) are an input, output, and impulse response of the linear position-invariant system with the length equal to Nf and Nh, respectively. Then overlap in the individual periods of the resulting convolution is avoided by choosing N Nf þ Nh 1, with the resulting length equal to N by zero-padding. Therefore its convolution is given by gðxÞ ¼
N 1 X
f ðnÞhðx nÞ
n¼0
where x ¼ 0; 1; 2; . . . ; N 1.
Figure 1. Model of the linear system.
ð15Þ
CIRCULANT MATRIX REPRESENTATION OF MASKS
7
If Eq. (15) is interpreted as the linear position-invariant system, we define linear convolution. Then the matrix H is represented by 3 2 h0 h1 h2 h3 h4 h5 . . . hNþ1 6 h1 h0 h1 h2 h3 h4 . . . hNþ2 7 7 6 6 h2 h h h h h3 . . . hNþ3 7 1 0 1 2 7 6 6 h3 h2 h1 h0 h1 h2 . . . hNþ4 7 7 6 ð16Þ H ¼ 6 h4 h3 h2 h1 h0 h1 . . . hNþ5 7 7 6 7 6 h5 h h h h h . . . h 4 3 2 1 0 Nþ6 7 6 6 .. .. 7 .. .. .. .. .. 4 . . 5 . . . . . hN1 hN2 hN3 hN4 hN5 hN6 . . . h0 where H is a Toeplitz matrix (constant value along each diagonal) and hi ; N þ 1 i N 1, is an element of the Toeplitz matrix. The system is completely defined by the 2N 1 impulse response coeYcients. As an extension, if two convolving sequences are periodic, then their convolution is also periodic (i.e., circular convolution). In this case, we define circular convolution and the matrix H is represented by 3 2 h0 hN1 hN2 hN3 hN4 hN5 . . . h1 6 h1 h0 hN1 hN2 hN3 hN4 . . . h2 7 7 6 6 h2 h1 h0 hN1 hN2 hN3 . . . h3 7 7 6 6 h3 h2 h1 h0 hN1 hN2 . . . h4 7 7 6 ð17Þ H ¼ 6 h4 h3 h2 h1 h0 hN1 . . . h5 7 7 6 7 6 h5 h4 h3 h2 h1 h0 . . . h6 7 6 6 .. .. 7 .. .. .. .. .. 4 . . 5 . . . . . hN1
hN2
hN3
hN4
hN5
hN6
. . . h0
where H is a circulant matrix and hi ; 0 i N 1, is an element of the circulant matrix. In the circulant matrix, each column is obtained by a circulant shift of the preceding column, and the first column is a circulant shift of the last column. That is, the elements of each row of H are identical to those of the previous row, but are moved one position to the right and wrapped around. The circulant matrix is evidently determined by the first row (or column) (Davis, 1994; Gray, 2000). Let the matrix X ¼ ½x0 x1 . . . xN1 be constructed by the column vector xi ; 0 i N 1. Because of the circular nature of X and definition of the N N inner product matrix, the matrix V defined by 2 t 3 x0 6 xt 7 6 1 7 6 t 7 t x 7 V¼XX¼6 6 2 7½ x0 x1 x2 . . . xN1 6 .. 7 4 . 5 xtN1
8
PARK AND CHA
2
xt0 x0 6 xt x 6 1 0 6 t x x ¼6 6 2 0 6 .. 4 . xtN1 x0
xt0 x1 xt1 x1 xt2 x1 .. . xtN1 x1
3 xt0 xN1 xt1 xN1 7 7 7 t x2 xN1 7 7 7 .. 5 . . . . xtN1 xN1
xt0 x2 xt1 x2 xt2 x2 .. . xtN1 x2
... ... ...
ð18Þ
has the following properties: (1) Toeplitz (constant value along each diagonal), (2) symmetric (V ¼ Vt ), (3) circulant (each row is a circular shift of the previous one), and (4) real. This matrix V is defined as the circulant symmetric matrix (Lay, 2003; Uenohara and Kanade, 1998). B. DFT Domain Interpretation In this section, first, we present the property of an N N circulant matrix in the DFT domain. Second, we introduce eigenvalue computation of the N N circulant matrix in the DFT domain. 1. Circulant Matrix Interpretation in the DFT Domain For the circulant matrix H, hi has the property of periodicity, that is, hi ¼< hNþi >N , where < >N denotes the modulo-N operation. According to the property of a circulant matrix, the matrix multiplication is satisfied (Gonzalez and Wintz, 1977): H wk ¼ rk wk
ð19Þ
where 2 6 6 6 6 6 6 6 H¼6 6 6 6 6 6 6 4
3
h0
hN1
hN2
hN3
hN4
hN5
. . . h1
h1 h2
h0 h1
hN1 h0
hN2 hN1
hN3 hN2
hN4 hN3
h3 h4
h2 h3
h1 h2
h0 h1
hN1 h0
hN2 hN1
h5 .. .
h4 .. .
h3 .. .
h2 .. .
h1 .. .
h0 .. .
. . . h2 7 7 7 . . . h3 7 7 . . . h4 7 7 7 . . . h5 7 7 . . . h6 7 7 7 .. 7 . 5
hN1
hN2
hN3
hN4
hN5
hN6
. . . h0
2p 2p 2p rk ¼ h0 þ hN1 exp j k þ hN2 exp j 2k þ . . . þ h1 exp j ðN 1Þk N N N
ð20Þ
CIRCULANT MATRIX REPRESENTATION OF MASKS
2p wk ¼ 1 exp j k N
9
t pffiffiffiffiffiffiffi 2p 2p exp j 2k . . . exp j ðN 1Þk ð j ¼ 1Þ: ð21Þ N N
This expression indicates that wk is an eigenvector of the circulant matrix H and rk is its corresponding eigenvalue. Suppose that we form an N N matrix W by using the N eigenvectors of H as columns; that is, W ¼ ½ w0
w1
w2
. . . wN1
exp j 2p ... exp j 2p N N 2 2p 2p 61 exp j N 2 exp j N 4 ... 6 ¼6 .. .. 6 .. 4. . 2p . 1 exp j 2p ðN 1Þ exp j N 2ðN 1Þ . . . N 2
1
3 exp j 2p N ðN 1Þ 7 exp j 2p N 2ðN 1Þ 7 7: .. 7 . 5 2 exp j 2p N ðN 1Þ ð22Þ
The (k, i)th element of W, denoted as Wki ; 0 k; i N 1, is given by
2p ð23Þ Wki ¼ exp j ki : N Due to the orthogonality property of the complex exponential, the (k, i)th element of the inverse matrix W1, is given by
1 2p 1 ð24Þ Wki ¼ exp j ki : N N It can be verified by using Eqs. (23) and (24): WW1 ¼ W1 W ¼ IN :
ð25Þ
Therefore, the circulant matrix H can be diagonalized by using W and W1, H ¼ WDW1 or D ¼ W1 HW
ð26Þ
where D is a diagonal matrix whose diagonal elements Dkk are the eigenvalues of H; i.e., Dkk ¼ rk . Accordingly, the diagonal elements of Dkk can be obtained from Eq. (21) by using expð jð2p=NÞðN lÞkÞ ¼ expðjð2p=NÞlkÞ:
2p 2p Dkk ¼ rk ¼ h0 þ hN1 exp j k þ hN2 exp j 2k N N
2p þ . . . þ h1 exp j ðN 1Þk N
10
PARK AND CHA
2p 2p ¼ h0 þ h1 exp j k þ h2 exp j 2k N N
2p þ . . . þ hN1 exp j ðN 1Þk N
N 1 X 2p ¼ hi exp j ki : N i¼0
ð27Þ
According to Eq. (27), the eigenvalue rk, the (k, k)th element of a diagonal matrix D, is the kth coeYcient of the 1D DFT of the first column vector h ¼ ½h0 h1 h2 . . . hN1 t . 2. Eigenvalue Analysis in the DFT Domain In this section, we explain two methods for calculating eigenvalues of the circulant matrix H. First, any circulant matrix H of size N N can be written as H ¼ c0 IN þ c1 J þ c2 J2 þ . . . þ cN1 JN1
ð28Þ
where the coeYcients (c0 ; c1 ; c2 ; . . . ; cN1 ) give the first row of the circulant matrix H, and J is the circulant matrix with first row given by (0, 1, 0, . . ., 0). Since JN ¼ IN , the Nth principal roots of unity are the eigenvalues of the permutation matrix J, i.e.,
2p ð29Þ Wk ¼ exp j k ; 0 n N 1: N Then the eigenvalues of the circulant matrix H can be rewritten as (Deo and Krishnamoorthy, 1989) rk ¼ c0 þ c1 Wk þ c2 Wk2 þ . . . þ cN1 WkN1 :
ð30Þ
In other words, the eigenvalues of H could be obtained by the above polynomial from the principal roots of unity. The relationship given by Eq. (27) implies the fact that rk is the kth coeYcient of the 1D DFT of the first column of H. In a second method to calculate eigenvalue of the circulant matrix H, we should solve the characteristic equation detðsIN HÞ ¼ 0
ð31Þ
where det() signifies the determinant of a matrix and s is the eigenvalue of a circulant matrix H. If a circulant matrix H is expressed as " # A D H¼ ð32Þ C B
CIRCULANT MATRIX REPRESENTATION OF MASKS
with a nonsingular matrix A, its determinant can be obtained as " # A D det ¼ det A det½B CA1 D : C B
11
ð33Þ
C. Orthogonal Transforms A 1D signal can be represented by orthogonal series of basis functions. For a 1D sequence g(n), a general 1D N-point orthogonal transform is represented by f ðkÞ ¼
N 1 X
gðiÞaðk; iÞ;
0kN 1
i¼0
gðiÞ ¼
N 1 X
ð34Þ f ðkÞa ðk; iÞ;
0 i N 1:
k¼0
Eq. (34) is rewritten in vector-matrix form: f ¼ Ag g ¼ ðA Þt f
ð35Þ
t
where A1 ¼ ðA Þ and this property is called unitary. We briefly explain the unitary property of orthogonal transforms such as the DFT, DCT, DST, and DHT and their interrelationships. General cases of the N-point transforms are presented, which will help the reader to understand better the next sections. For a real-valued sequence gðiÞ; 0 i N 1, the N-point DCT C(k) and DST S(k) are defined by
N 1 X 2p CðkÞ ¼ ki ; 0 k N 1 gðiÞcos N i¼0 ð36Þ
N1 X 2p SðkÞ ¼ ki ; 0 k N 1 gðiÞsin N i¼0 respectively. The DCT and DST of a real-valued sequence are also real valued. Their computation does not involve complex-valued operations, which is a potential advantage if the transform is to be explicitly computed. The DHT and DFT are explicitly related to the DCT and DST. By definition, the DHT H(k) is a sum of the DCT C(k) and DST S(k) (Bracewell, 1986):
12
PARK AND CHA
HðkÞ ¼
N 1 X i¼0
gðiÞ cas
2p ki N
¼ CðkÞ þ SðkÞ;
0k N 1
ð37Þ
where casy ¼ cosy þ siny. The real and imaginary parts of the DFT F(k) are equal to the DCT C(k) and DST S(k):
N 1 X 2p FðkÞ ¼ gðiÞexp j ki ¼ CðkÞ jSðkÞ; 0 k N 1: ð38Þ N i¼0
D. Summary The material presented in this section is applied to edge and feature detection in Section III and to orthogonal transforms in Section IV. To unify various representations, we have introduced the circulant matrix that has been given by the input-output relationship in vector-matrix form. In the DFT domain, eigenvector and eigenvalue analysis has been presented through diagonalization of the circulant matrix. This approach is applied to the 1D DFT interpretation of compass gradient masks and Frei–Chen masks in Section III. The orthogonal transforms and circulant symmetric matrix, in the context of Frei–Chen masks, gives a new interpretation of feature masks, which is applied to various fields in Section IV.
III. Edge and Feature Detection in the DFT Domain In Section II, we reviewed the property of the circulant matrix in terms of the DFT. This section focuses on 1D DFT domain interpretation of edge and feature detection using compass gradient operators and Frei–Chen orthogonal masks, which provides a basis of understanding of the DCT, DST, and DHT masks in Section IV. A. Edge Detection We focus on the edge detection of compass gradient and Frei–Chen masks. In addition, making use of the circularity of the weight matrix, we generalize the interpretation in the context of compass gradient edge masks to complex-valued mask cases.
CIRCULANT MATRIX REPRESENTATION OF MASKS
13
1. Compass Gradient Edge Masks and Their Eigenvalue Interpretation 2D 3 3 Compass gradient masks, whose sums of the weights are zero, are commonly used in edge detection. They usually have eight directional components as shown in Figure 2 (Robinson, 1977). The neighboring pixels are defined as the pixels that are covered by a 3 3 mask except for the center
Figure 2. 3 3 Compass gradient masks (Sobel, Prewitt, and Kirsch masks).
14
PARK AND CHA
pixel (i, j). The intensity values of neighboring pixels are represented by p0 ; p1 ; . . . ; p7 in the clockwise direction from the top left pixel, as shown in Figure 3. Let hi ði ¼ 0; 1; . . . ; 7Þ be each weight of the north (N) directional mask of 2D Sobel, Prewitt, and Kirsch masks, as shown in Figure 4, except for the center weight, which is zero. Each weight of seven other directional (NW, W, SW, S, SE, E, and NE) masks can be obtained by rotating each mask in the counterclockwise direction (Park and Choi, 1989). The edge value of each directional mask, which is the sum of the multiplication of the weight of each directional mask and the intensity value of the corresponding neighboring pixel, can be represented by a matrix multiplication 32 3 2 3 2 h0 h7 h6 h5 h4 h3 h2 h1 p0 e0 6 e 1 7 6 h1 h0 h7 h6 h5 h4 h3 h2 7 6 p1 7 76 7 6 7 6 6 e 2 7 6 h2 h1 h0 h7 h6 h5 h4 h3 7 6 p2 7 76 7 6 7 6 6 e 3 7 6 h3 h2 h1 h0 h7 h6 h5 h4 7 6 p3 7 7 76 7 ¼ Hp 6 6 ð39Þ e¼6 7¼6 76 7 6 e 4 7 6 h4 h3 h2 h1 h0 h7 h6 h5 7 6 p4 7 6 e 5 7 6 h5 h4 h3 h2 h1 h0 h7 h6 7 6 p5 7 76 7 6 7 6 4 e 6 5 4 h6 h5 h4 h3 h2 h1 h0 h7 5 4 p6 5 e7 h7 h6 h5 h4 h3 h2 h1 h0 p7
Figure 3. Pixel values in a 3 3 mask.
Figure 4. 3 3 Mask in terms of hi’s.
CIRCULANT MATRIX REPRESENTATION OF MASKS
15
where e is a column vector consisting of edge values of eight-directional components (e0 ; e1 ; . . ., and e7 are the N, NW, . . ., and NE directional edge values, respectively); p is a column vector consisting of p0 ; p1 ; . . . ; p7 ; and H, in the form of a circulant matrix, is a mapping matrix from p into e. Eight-directional compass gradient masks are represented in matrix form, as shown in Figure 5. For example, each directional mask of Sobel masks can be expressed in the form of weights hi’s, which can be expressed in the form of the 8 8 matrix H, as shown in Figure 6. Figure 7 shows the edge detection results when each directional mask of N, NW, W, SW, NE, E, SE, and S Sobel masks is applied to a synthetic image, in which an original image, edge images detected by N, NW, W, SW, NE, E, SE, and S masks, and a final edge image by eight-directional masks are shown in the clockwise direction. The final edge map by eight-directional masks is obtained by combining eight-directional masking images in Figure 7(b)–7(i) with the thresholding value of 128. Similarly, each directional mask of a Prewitt (Kirsch) masks and its matrix representation in terms of hi’s are shown in Figure 8 (see Figure 10). In the same way, results by Prewitt (Kirsch) masks are illustrated in Figure 9 (see Figure 11). According to the property of the circulant matrix in the DFT domain, the circulant matrix H can be diagonalized by using W matrix and W1 matrix, H ¼ WDW1
or
D ¼ W1 HW
ð40Þ
where D is a diagonal matrix whose diagonal elements Dkk are the eigenvalues of H. The values of Dkk eigenvalues of H of Sobel, Prewitt, and Kirsch masks, are listed in Table 1. For the interpretation of compass gradient edge masks, Eq. (39) can be rewritten as e ¼ Hp ¼ WDW1 p:
ð41Þ
We can interpret Eq. (41) as follows. First, we start with discrete Fourier transforming the column vector p,
Figure 5. Eight-directional compass gradient mask in matrix form.
16
PARK AND CHA
Figure 6. Matrix representation of the Sobel masks in terms of hi’s.
pˆ ¼ W1 p ¼ FTfpg
ð42Þ
where FT is a 1D DFT operator. In the DFT domain, we multiply the transform coeYcient pˆ element by the constant vector where elements are the eigenvalues rk’s of the circulant matrix H and obtain Dpˆ. The eigenvalues rk’s depend only on the type of the compass gradient mask, as listed in Table 1. Finally, we do inverse DFT (IDFT) Dpˆ and derive the e vector. Figure 12 shows the block diagram of this interpretation in terms of the DFT and IDFT. This interpretation shows that Sobel, Prewitt, and Kirsch masks have a similar structure and the only diVerence is the eigenvalues rk’s, which correspond to the nonlinear filtering coeYcients. It is expected that a new edge mask can be found by changing the eigenvalues rk’s in the DFT domain. Other masks such as average smoothing can be interpreted similarly by rearranging the weights of masks in the form
CIRCULANT MATRIX REPRESENTATION OF MASKS
17
Figure 7. Edge detection results of Sobel masks. (a) Original image; (b–j) edge detection results by (b) N, (c) NW, (d) W, (e) SW, (f ) NE, (g) E, (h) SE, (i) S; ( j) final edge detection result.
of the circulant matrix. However, the redundancy among results may exist if the mask does not have directionality. The DFT used in this analysis requires complicated computation in the digital computer system, but this complicated transform can be performed simply in an optical system. The optical transform is faster and has more information capacity than transforms performed by means of electronic circuitry, and the optical spectrum analyzer can be modified to yield a multichannel 1D spectrum analyzer by adding a cylindrical lens (Yu, 1983). Therefore, if we arrange pixels in the form of p0, p1, . . ., p7 and assign them to each channel of a 1D spectrum analyzer, we can calculate edge values using optical systems based on the new interpretation of the compass gradient edge masks. To consider the 1D case, let p, h, and e be column vectors consisting of eight intensity values, eight weights of the edge mask, and eight directional edge values, respectively (Park, 1998a). Note that eight intensity values and the corresponding weights are scanned counterclockwise from the top left pixel of the 3 3 mask, centered at (i, j). For example, the intensity vector p and the kernel weight vector h of the compass gradient edge mask can be expressed as p ¼ ½ p0
p1
p2
p3
p4
p5
p6
p7 t
ð43Þ
h ¼ ½ h0
h1
h2
h3
h4
h5
h6
h7 t
ð44Þ
where pi and hi, 1 i 8, represent the ith intensity values and their corresponding weights of the mask, respectively. For example, weight
18
PARK AND CHA
Figure 8. Matrix representation of the Prewitt masks in terms of hi’s.
vectors hP, hS, and hK of the Prewitt, Sobel, and Kirsch masks, respectively, can be expressed as hP ¼ ½ 1
1
1
0 1
1 1
0 t
ð45Þ
hS ¼ ½ 1
2
1
0 1
2 1
0 t
ð46Þ
3
3 t :
ð47Þ
hk ¼ ½ 5
5
5
3
3
3
Eight compass gradient masks yield eight column vectors, each of which is a circularly shifted version of h. Thus these eight vectors form the 8 8 circulant matrix H, with the first column vector corresponding to h and each column representing each directional weight vector. Then edge detection by compass gradient edge masks can be expressed in vector-matrix form: e ¼ Hp. The circulant matrix can be diagonalized by the DFT matrix, and
CIRCULANT MATRIX REPRESENTATION OF MASKS
19
Figure 9. Edge detection results of Prewitt masks. (a) Original image; (b–j) edge detection results by (b) N, (c) NW, (d) W, (e) SW, (f ) NE, (g) E, (h) SE, (i) S; ( j) final edge detection result.
the corresponding eigenvalues rk ; 1 k 8, compose the eigenvalue vector r. Note that the eigenvalue vector r is expressed as the 1D DFT of h and that eigenvalue vectors rP, rS, and rK of the Prewitt, Sobel, and Kirsch masks, respectively, can be expressed as rP ¼ ½ 0
rS ¼ 0
pffiffiffi 2u
rK ¼ ½ 0 4u
u 0
0 v 0 v 0 u t pffiffiffi pffiffiffi pffiffiffi t 2v 0 2v 0 2u
j8
4v 8
4v
j8
4u t
ð48Þ ð49Þ ð50Þ
where the subscriptpffiffiffi denotes complex conjugate pffiffiffi and complex constants are given by u ¼ ð2 þ 2Þð1 jÞ and v ¼ ð2 2Þð1 þ jÞ. From Eqs. (48), (49), and (50), it is noted that the three compass gradient edge masks have the similar structure in the 1D frequency domain, with the only diVerence being the corresponding eigenvalues. In another method to calculate eigenvalues of the matrix H, we can solve the characteristic equation (Park and Choi, 1992): detðsI8 HÞ ¼ 0:
ð51Þ
By the way, when we calculate the eigenvalues of circulant matrices for Sobel and Prewitt masks, we may use the special structure of their circulant matrices. The 8 8 circulant matrices for Sobel and Prewitt masks are represented in the following form:
20
PARK AND CHA
Figure 10. Matrix representation of the Kirsch masks in terms of hi’s.
A H¼ A
A A
where 82 1 > > > 6 > > > 62 > > 6 > > 41 > > > < 0 A¼ 2 > 1 > > > > 61 > > 6 > > 6 > > 41 > > : 0
0 1
1 0
3 2 1 7 7 7; for Sobel masks 05
2 1 1 2 1 3 0 1 1 1 0 1 7 7 7; for Kirsch masks: 1 1 05 1
1
1
ð52Þ
21
CIRCULANT MATRIX REPRESENTATION OF MASKS
(a)
( j)
(b)
(c)
(d)
(i)
(h)
(g)
(e)
(f )
Figure 11. Edge detection results of Kirsch masks. (a) Original image; (b–j) edge detection results by (b) N, (c) NW, (d) W, (e) SW, (f ) NE, (g) E, (h) SE, (i) S; ( j) final edge detection result. TABLE 1 Eigenvalues rk of Compass Gradient Edge Masks k
Sobel S
Prewitt P
Kirsch K
0 1 2 3 4 5 6 7
0 ffiffiffi p pffiffiffi 2ð2 þ 2Þð1 jÞ 0 pffiffiffi pffiffiffi 2ð2 2Þð1 þ jÞ 0 pffiffiffi pffiffiffi 2ð2 2Þð1 jÞ 0 ffiffiffi p pffiffiffi 2ð2 þ 2Þð1 þ jÞ
0 pffiffiffi ð2 þ 2Þð1 jÞ 0 pffiffiffi ð2 2Þð1 þ jÞ 0 pffiffiffi ð2 2Þð1 jÞ 0 pffiffiffi ð2 þ 2Þð1 þ jÞ
0 pffiffiffi 4ð2 þ 2Þð1 jÞ j8 pffiffiffi 4ð2 2Þð1 þ jÞ 8 pffiffiffi 4ð2 2Þð1 jÞ j8 pffiffiffi 4ð2 þ 2Þð1 þ jÞ
To determine the eigenvalues of the circulant matrix H, we solve the following characteristic equation using Eq. (33), A A sI4 sI4 A ¼ det det A sI4 A sI4 sI4 A ¼ detðsI4 Þ det½ðsI4 AÞ sI4 ðsI4 Þ1 A ¼ detðsI4 Þ 2det
s 2
I4 A ¼ 0:
ð53Þ
22
PARK AND CHA
Figure 12. Block diagram of the new interpretation of compass gradient edge masks in the DFT domain.
From the first term, we get four zero eigenvalues. From the second one, we obtain four eigenvalues of which values are twice as large as eigenvalues of A. Therefore we determine the eigenvalues of 8 8 circulant matrices H in the case of Sobel and Prewitt masks simply by adding four zeros, and calculating eigenvalues of the 4 4 matrix A and then multiplying their values by 2. In the case of the Kirsch masks, the 8 8 circulant matrix is represented in the following form: A B H¼ ð54Þ B A where 2
5 6 5 A¼6 4 5 3
3 5 5 5
3 3 5 5
3 2 3 3 6 3 3 7 7 and B ¼ 6 4 3 3 5 5 3
3 3 3 3
5 3 3 3
3 5 57 7: 3 5 3
To determine the eigenvalue of the circulant matrix H, we solve the following characteristic equation using Eq. (33): sI A A A sI det 4 ¼ det 4 A sI4 A sI4 sI4 A ¼ detðsI4 Þ det½ðsI4 AÞ sI4 ðsI4 Þ1 A s ¼ detðsI4 Þ 2det I4 A ¼ 0: 2
ð55Þ
CIRCULANT MATRIX REPRESENTATION OF MASKS
23
From the first and second terms, eight eigenvalues are derived. In other words, we determine eight eigenvalues of the 8 8 circulant matrix H in the case of Kirsch masks simply by calculating eigenvalues of 4 4 matrices (A þ B) and (A B). Generally, a square matrix can be decomposed into a symmetric matrix and a skew-symmetric matrix. Suppose that B is an M M symmetric matrix and C is an M M skew-symmetric matrix. If A is an arbitrary M M square matrix, it can be represented as A¼BþC
ð56Þ
where B ¼ ðA þ At Þ=2 and C ¼ ðA ¼ At Þ=2. The circulant matrices of Sobel, Prewitt, and Kirsch masks are also decomposed into symmetric and skewsymmetric matrices as shown in Figure 13. Decomposed matrices are also the circulant matrices. In the 1D DFT domain, symmetric and skewsymmetric matrices give real and imaginary parts of the complex-valued eigenvalue, respectively, as listed in Table 2. Alternately, we calculate
Figure 13. Required matrix form of the compass gradient edge masks (h ¼ 0:5).
24
PARK AND CHA TABLE 2 Eigenvalues for Symmetric and Skew-Symmetric Matrices of Sobel, Prewitt, and Kirsch Masks Sobel S
Prewitt P
Kirsch K
k
Symmetric
Skewsymmetric
Symmetric
Skewsymmetric
Symmetric
Skewsymmetric
0 1 2 3 4 5 6 7
0 pffiffiffi pffiffiffi 2ð2 þ 2Þ 0 pffiffiffi pffiffiffi 2ð2 2Þ 0 pffiffiffi pffiffiffi 2ð2 2Þ 0pffiffiffi pffiffiffi 2ð2 þ 2Þ
0 pffiffiffi pffiffiffi 2ð2 þ 2Þj 0 pffiffiffi pffiffiffi 2ð2 2Þj 0 ffiffiffi p pffiffiffi 2ð2 2Þj 0 ffiffiffi p pffiffiffi 2ð2 þ 2Þj
0 pffiffiffi ð2 þ 2Þ 0 pffiffiffi ð2 2Þ 0 pffiffiffi ð2 2Þ 0 pffiffiffi ð2 þ 2Þ
0 pffiffiffi ð2 þ 2Þj 0 pffiffiffi ð2 2Þj 0 pffiffiffi ð2 2Þj 0 pffiffiffi ð2 þ 2Þj
0 pffiffiffi 4ð2 þ 2Þ 0 pffiffiffi 4ð2 2Þ 8 pffiffiffi 4ð2 2Þ 0 pffiffiffi 4ð2 þ 2Þ
0 pffiffiffi 4ð2 þ 2Þj j8 pffiffiffi 4ð2 2Þj 0 pffiffiffi 4ð2 2Þj j8 pffiffiffi 4ð2 þ 2Þj
eigenvalues of the symmetric and skew-symmetric matrices of Sobel and Prewitt masks using Eq. (53) with corresponding 4 4 matrices, since their matrices are represented as in Eq. (52). Similarly, we obtain the eigenvalues of Kirsch masks by using Eq. (55) with corresponding 4 4 matrices. Let hS;k ; hP;k , and hK;k ; 0 k 7, be the first column of the circulant matrices of Sobel, Prewitt, and Kirsch masks, respectively; then corresponding eigenvalues, rS,k, rP,k, and rK,k are represented by rS;k ¼
N 1 X i¼0
rP;k ¼
N 1 X i¼0
rK;k ¼
N 1 X i¼0
p hS;i exp j ki 4
ð57Þ
p hP;i exp j ki 4
ð58Þ
p hK;i exp j ki : 4
ð59Þ
The kth row of the circulant matrix of the Sobel masks corresponds to the sum of < k 1 >8 th and < k þ 1 >8 th rows of the circulant matrix of the Prewitt masks: hS;k ¼ hP;8 þ hP;8 ;
0 k 7:
ð60Þ
We derive the relationship between the eigenvalues, r0S;k and r0P;k by using Eqs. (57), (58), and (60):
CIRCULANT MATRIX REPRESENTATION OF MASKS
p p rS;k ¼ rP;8 þ rP;8 ¼ rP;k exp j k þ rP;k exp j k 4 p4 ¼ 2rP;k cos k : 4
25
ð61Þ
We also get the kth row of the circulant matrix of Prewitt masks by subtracting the < k þ 4 >8 th row from the kth row of the circulant matrix of the Kirsch masks and dividing it by 8. The Sobel masks are related to the Kirsch masks in a similar way: 1 hP;k ¼ ðhK;k hK;8 Þ; 8
0 k 7:
ð62Þ
We drive the relationship between the eigenvalues rP,k and rK,k by using Eqs. (58), (59), and (62): 1 1 1 rP;k ¼ ðrK;k rK;8 Þ ¼ rK;k ð1 expðjpkÞÞ ¼ rP;k cosð1 cosðpkÞÞ 8 8 8 1 ¼ rP;k cos 1 ð1Þk : 8
ð63Þ
We also derive the relationship between the eigenvalues rS,k and rK,k from Eqs. (61) and (63): p 1 ð64Þ rS;k ¼ rK;k 1 ð1Þk cos k : 4 4 As shown in Table 2, for fixed k, the eigenvalues for three edge masks have the same structure except for the constant factors, which are explained in Eqs. (61), (63), and (64). 2. Frei–Chen Edge Masks The spiral scanning of the 2D intensity pattern in the clockwise direction from the top left pixel gives a 9D column vector, as shown in Figure 14. The Frei–Chen masks Fi ; 1 i 9, defined on a 3 3 window (Figure 15), span the edge, line, and average subspaces (Frei and Chen, 1977; Park and Choi, 1990). The four-dimensional (4D) edge subspace is spanned by the isotropic average gradient masks F1 and F2, and the ripple masks F3 and F4. The 4D line subspace is spanned by the directional masks F5 and F6, and the nondirectional Laplacian masks F7 and F8. The average mask F9 forms the average subspace. We can form the 9D column vector fi for each mask Fi. Then the nine vectors define the 9 9 Frei–Chen matrix F. To make the matrix F orthogonal, we should make the column vectors fi orthonormal.
26
PARK AND CHA
Figure 14. 9D Vector description of a 3 3 neighborhood.
Figure 15. Orthogonal set of the Frei–Chen masks (a ¼
pffiffiffi 2).
Since the vectors pffiffiffi fi are orthogonal, we just normalize them with normalization factors: 2 2 for F1, F2, F3, and F4; 2 for F5 and F6; 6 for F7 and F8; and 3 for F9. We denote the orthonormal Frei–Chen vector by vi and the orthogonal matrix V (Park, 1990):
CIRCULANT MATRIX REPRESENTATION OF MASKS
27
3 3 2 vt1 c d c 0 c d c 0 0 6 vt 7 6 27 6 c 0 c d c 0 c d 07 7 6 t7 6 6 v3 7 6 0 c d c 0 c d c 07 7 6 7 6 7 6 vt 7 6 6 4 7 6 d c 0 c d c 0 c 0 7 7 6 t7 6 v 7 6 ð65Þ d 0 d 0 d 0 d 0 7 V¼6 7 6 57¼6 0 7 6 vt 7 6 d 0 d 0 d 0 d 0 0 7 6 67 6 7 6 t7 6 6 v7 7 6 e f e f e f e f g 7 7 6 t7 6 6 v 7 4 f e f e f e f e g5 4 85 vt9 f f f f f f f f f pffiffiffi where c ¼ 1=2 2; d ¼ 1=2; e ¼ 1=6; f ¼ 1=3, and g ¼ 2=3. Edge detection using the Frei–Chen masks is done by mapping the intensity vector by the linear transformation V and then detecting edges based on the angle between the intensity vector and its projection onto the edge subspace. This can be done by thresholding the value of 2
4 P
ðvi p0 Þ2
i¼1
ðp0 p0 Þ2
ð66Þ
where vip0 denotes the inner product of the vectors vi and p0 . The result of edge detection using the Frei–Chen masks is shown in Figure 16, in which the threshold value used in this experiment is 0.12. Determining the optimal thresholding for edge detection in noisy images is explained in Abdou and Pratt (1979).
Figure 16. Edge detection results by Frei–Chen masks. (a) Original image, (b) final result.
28
PARK AND CHA
We briefly explained the edge detection method by Frei–Chen masks. Hereafter, we consider the relationship between the Frei–Chen space and the eight-dimensional (8D) DFT space. We call hi (the noncenter weights in the 3 3 normalized Frei–Chen masks scanned in the clockwise direction with the length equal to 8, a power of 2) the modified version of the normalized Frei–Chen weight vector vi. The exclusion of the center pixel can be represented in vector-matrix form. The modified 8D intensity vector pM can be written as t pM pM pM pM pM pM pM p M ¼ pM 5 7 0 1 2 3 4 6 2
1 60 6 60 6 60 ¼6 60 6 60 6 40 0
0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0
0 0 0 0 0 0 1 0
3 0 07 7 07 7 07 7p0 ¼ ½ I8 07 7 07 7 05 0
0 0 0 0 0 0 0 1
0 p0 ¼ IM p0
ð67Þ
where I8 and 0 are the 8 8 identity matrix and the 8D zero vector, respectively. IM is the 8 9 modification matrix that maps the nine pixel values into the 8D vector space by excluding the center pixel. ˜ t defined by the modified 8D weight vectors hi can Similarly, the matrix H be written as ˜ t ¼ ½ h1 H
h2
h3
h4
h5
h6 F
h7
h8
The 8D Fourier transform vector t of the data p
h9 ¼ IM Vt : M
ð68Þ
is defined by
tF ¼ TF pM
ð69Þ
where tFk ¼
N 1 X i¼0
1 ki ; pM exp j i N
0 k N 1:
TF is the unitary matrix mapping into the 8D DFT space. Since pM is real valued, we have tFk ¼ ðtFNk Þ . We can relate the DFT vector Q of the 8D intensity vector pM to the 9D resulting convolution vector b of the normalized Frei–Chen masks by the linear transformation TA: tF ¼ TA b:
ð70Þ
29
CIRCULANT MATRIX REPRESENTATION OF MASKS
Using Eqs. (67) and (69), we can write TF IM p0 ¼ TA Vp0 :
ð71Þ
A
Then the linear transformation T is given by TA ¼ TF IM V1 :
ð72Þ t
Due to the orthonormality of V (i.e., V1 ¼ V ) and Eq. (68), we can write ˜t TA ¼ TF IM Vt ¼ TF H 2 0 0 0 6 a ja a þ ja 0 6 6 6 0 0 0 6 6 0 0 j2 6 ¼6 6 0 0 0 6 6 0 0 j2 6 6 4 0 0 0
0 0
0 0
0 0
g 0
g 0
0
j2
2
0
0
2 0
0 0
0 0
0 2
0 2
2 0
0 j2
0 2
0 0
0 0
3 z 07 7 7 07 7 07 7 7 07 7 07 7 7 05
ð73Þ
a þ ja a ja 0 0 0 0 0 0 0 pffiffiffi where a ¼ 2; g ¼ 2=3, and z ¼ 8=3. The transformation matrix TA is equal ˜ t, which can be expressed as to the DFT of the modified Frei–Chen matrix H t
˜ ¼ TF IM Vt ¼ ½TF TF H
0 Vt :
ð74Þ
Eq. (74) shows that the DFT of the given matrix with the center weight excluded is mathematically equivalent to the mapping of the original matrix with the mapping matrix determined by the DFT vectors and the zero vectors. Eq. (73) explains the relationship between the subspaces of the modified Frei–Chen basis vectors and their spatial frequency components. The h1 and h2 weight vectors give the fundamental frequency components only. The h3 and h4 weight vectors exhibit only the third harmonic components. In other words, the edge subspaces of the original/modified Frei–Chen weight vectors correspond to the fundamental and third harmonic components of the clockwise-scanned 8D intensity vectors. Note that the edge responses of the normalized edge masks are zero. Also the h5, h6, h7, and h8 weight vectors, spanning the line subspace, represent the second and fourth harmonic components. Note that the zero-frequency components of h7 and h8 are due to the zero center weights in the modified discrete Laplacian masks. The average mask h9 gives the zero-frequency component only.
30
PARK AND CHA
From Eq. (69), the complex-valued DFT matrix TF can be written as 3 2 1 1 1 1 1 1 1 1 61 t 0 t 1 t 0 t7 7 6 7 61 0 1 0 1 0 1 0 7 6 7 6 1 t 0 t 1 t 0 t F 7 6 T ¼6 7 1 1 1 1 1 1 1 1 7 6 7 6 1 t 0 t 1 t 0 t 7 6 41 0 1 0 1 0 1 05 1 t 0 t 1 t 0 t 3 2 0 0 0 0 0 0 0 0 6 0 t 1 t 0 t 1 t7 7 6 6 0 1 0 1 0 1 0 17 7 6 6 0 t 1 t 0 t 1 t7 7 ð75Þ þ j6 60 0 0 0 0 0 0 07 7 6 60 t 1 t 0 t 1 t 7 7 6 40 1 0 1 0 1 0 1 5 0 t 1 t 0 t 1 t pffiffiffi where t ¼ cosðp=4Þ ¼ 1= 2. Examination shows that the row vectors of TF are related to the weight vectors vti , i.e., 22 3 2 t 33 3vt9 0 66 2gt 7 6 2gt 77 7 66 7 1 6 2 77 66 7 77 6 66 2vt6 7 t 6 2v5 77 66 7 7 6 66 2vt 7 6 2vt 77 4 6 7 6 F 77 6 3 T ¼ 66 t 7 IM 7 þ j6 t 77 66 2v7 2vt8 7 7 6 0 66 7 77 6 66 2vt4 7 6 2vt 77 66 7 6 3 77 66 2vt 7 4 2vt 57 5 44 5 6 5
2gt2
2gt1
2
0 62 6 6 60 6 60 6 ¼6 60 6 60 6 60 4 2
0 j2 0 0 0 0 0 j2
0 0 0 j2 0 j2 0 0
0 0 0 0 0 j2 2 0 0 0 2 0 0 j2 0 0
0 0 2 0 0 0 2 0
0 0 0 0 2 0 0 0
2 3 3 vt1 0 3 6 t7 6 v2 7 0 07 76 t 7 76 v 7 0 0 76 3t 7 76 v 7 6 47 0 07 76 t 7 M v 7I ¼ QMIM 7 2 0 76 6 5t 7 76 v 7 0 0 76 6 7 76 vt 7 6 7 0 07 56 7t 7 4 v8 5 0 0 vt9
ð76Þ
CIRCULANT MATRIX REPRESENTATION OF MASKS
31
where g1 denotes the circularly shifted version of the weight vector v1 by 1 (i.e., g1k ¼ v1n with n ¼< k þ 1 >8 ) (note that the symbol < >8 denotes the modulo-8 operation). Q is the matrix defined by the normalized Frei–Chen weight vectors and their circular shifts. The matrix IM signifies the relationship between the DFT basis vector and the normalized Frei–Chen weight vectors. The weight vectors of the Frei–Chen masks and the 8D DFT basis vectors are closely related, as shown in Eq. (76). From our observations so far, we can design a set of eight orthogonal masks that span the edge, line, and average subspaces. From Eq. (69), we can express tF1 as tF1 ¼ ½1
t jt
j
t jt
¼ ðd1 pM Þ þ jðd2 pM Þ:
1
t þ jt
j
t þ jt pM
ð77Þ
The weight vectors d1 and d2 can be arranged into 2D masks. Note that, neglecting the constant factor, d1 and d2 are versions of v1 and v2 shifted circularly by 1. Similarly, to obtain the tF3 component, we need two mask vectors d3 and d4, which are equivalent to v4 and v3, respectively. Thus, a set of mask vectors d1, d2, d3, and d4 for the edge subspace can be obtained. In a similar way, a set of masks d5, d6, and d7 for the line subspace and d8 for the average subspace are obtained. The normalized set of eight designed mask vectors ni is shown in matrix form: 3 2 d c 0 c d c 0 c 6 0 c d c 0 c d c7 7 6 6 d c 0 c d c 0 c 7 7 6 6 0 c d c 0 c d c7 7 ð78Þ N¼6 6d 0 d 0 d 0 d 07 7 6 6 0 d 0 d 0 d 0 d7 7 6 4 c c c c c c c c 5 c c c c c c c c pffiffiffi where c ¼ 1=2 2 and d ¼ 1=2. The proposed eight masks are orthogonal masks and they are in the same category as the Frei–Chen masks. We observe that the intensity vector pM 1 1 1 0 t e ¼ ½1 1 1 0 or combinations of its circular shifts span the edge subspace and have only tF1 and tF3 components. Their ratio jtF1 =tF3 j is equal to ð1þtÞ=ð1 tÞ ¼ 5:38, where || denotes the absolute value of a vector. Also, the input vector t pM l ¼ ½1 1 1 0 1 1 1 0 or combinations of its circular shift span the line subspace and have only tF2 and tF4 components, whose ratio jtF2 =tF4 j is equal to 1. Of course, the uniform pattern pM a ¼ ½1 1 1 1 1 1 1 1 t gives only the nonzero dc component. We can detect edges based on a decision rule similar to Eq. (66):
32
PARK AND CHA
2 2 2 2 2 2 2 tF1 þ tF3 tF þ tF þ tF þ tF 5 7 1 3 ¼ > Th 7 7 P P 2 tF tF 2 k k k¼0
ð79Þ
k¼0
where Th is an arbitrary threshold. In the previous section, we mentioned that the idea of new edge mask design was motivated by the fact that the compass gradient edge masks have a similar structure in the 1D frequency domain (Park, 1998a). The idea is to derive a new set of compass gradient edge masks that has a unifying property in terms of eigenvalues. The simplest but most meaningful processing in the frequency domain is multiplication—circular convolution operation in the spatial domain. We can easily specify corresponding mask operations in the 1D spatial domain, which is equivalent to multiplication operations in the 1D frequency domain. For example, element-by-element multiplication of rP by rP yields another eigenvalue vector represented by t rnew ¼ 0 u2 0 v2 0 v 2 0 u 2 ð80Þ pffiffiffi pffiffiffi where u ¼ ð2 þ 2Þð1 jÞ and v ¼ ð2 2Þð1 þ jÞ. Eq. (80) corresponds to the new set of compass gradient edge masks hnew ¼ ½ 4
6
4 0
4
6
4
0 t
ð81Þ
obtained by circular convolution of two weight vectors of the Prewitt masks in the 1D spatial domain. From Eqs. (48), (49), and (50), it is noted that the second and eighth elements in the eigenvalue vector have the largest value, that is, the fundamental frequency component is the largest. Note that the first element corresponds to the zero-frequency value, which is zero. Hermitian property of the eigenvalue is obvious since the weights of the compass gradient edge masks are real valued. Thus, infinite element-by-element multiplication of the eigenvalue vector of the three compass gradient edge masks in the 1D frequency domain yields the same eigenvalue vector, in which only the second and eighth elements are nonzero and other elements are zero. The resulting eigenvalue vector is expressed as r0new ¼ ½ 0
w
0
0 0
0
0
w t
ð82Þ
where w is a nonzero normalization constant. This eigenvalue vector corresponds to a new set of compass gradient edge masks, expressed as t pffiffiffi pffiffiffi h0new ¼ 1 ð83Þ 2 1 0 1 2 1 0 where w ¼ 4ð1jÞ for simplicity. The weight vector in Eq. (83) and its circularly shifted version correspond to the first-type Frei–Chen edge masks
33
CIRCULANT MATRIX REPRESENTATION OF MASKS
containing only the fundamental frequency component of the weight vector. The weight vector specified by t pffiffiffi pffiffiffi 00 hnew ¼ 1 2 1 0 1 ð84Þ 2 1 0 and its circularly shifted version form the second-type Frei–Chen edge masks. The corresponding eigenvalue vector is expressed as 00
rnew ¼ ½ 0
0 0
w
0
w
0 0 t :
ð85Þ
Note that the second-type Frei–Chen edge masks contain only the third harmonic frequency component. The first-type and second-type Frei–Chen masks constitute the edge subspace described above. These two types of edge components lead to the decision rule of edge detection expressed in the 1D frequency domain in Eq. (79). Thus, four Frei–Chen edge masks and their corresponding weights can be interpreted by the 1D frequency domain analysis. 3. Complex-Valued Edge Masks We can generalize the 1D interpretation of the compass gradient edge masks to complex-valued cases. An optical implementation of real-time edge enhancement filters has been presented (Corecki and Trolard, 1998), in which, for example, the 3 3 complex-valued Sobel mask Sc can be approximated by the complex addition: 2 3 2 3 2 3 1 0 1 1 2 1 1 j j2 1 j c h v S ¼ S þ jS ¼ 4 2 0 2 5 þ j 4 0 0 0 5 ¼ 4 2 0 2 5 1 0 1 1 2 1 1 þ j j2 1þj ð86Þ where Sh and Sv denote the derivatives approximated by the Sobel masks along the horizontal and vertical directions, respectively. The optical realization of the complex-valued mask uses an optical correlator with a matched filter whose impulse response is a weighted sum of eight delta functions. With the kernel matrix Sc, we can construct the complex-valued kernel weight vector hcS of the complex-valued compass Sobel edge masks (Park, 1998b): hcS ¼ ½ 1 j
j2
1j
2
1þj
Then the corresponding eigenvalue vector
rcS
j2
1 þ j
2 t :
ð87Þ
is expressed as
¼ ½ 0 0 0 4ða 1Þð1 þ jÞ 0 0 0 4ða þ 1Þð1 þ jÞ t ð88Þ pffiffiffi where a ¼ 2. For example, the complex edge value at (i, j) is obtained from convolution of the complex-valued Sobel mask Sc with the 3 3 intensity mask, which is centered at (i, j). Also it yields the edge strength rcS
34
PARK AND CHA
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi Mði; jÞ ¼ ðReðhcS Þt pÞÞ2 þðImðhcS Þt pÞÞ2 and orientation of the edge normal c yði; jÞ ¼ tan1 ðImðhS Þt pÞ=ReðhcS Þt pÞÞ, where Re(x) and Im(x) denote real and imaginary parts of the complex edge value x, respectively. In a similar way, we can write the complex-valued kernel weight vector hcP of the complex-valued compass Prewitt edge masks as hcP ¼ ½ 1 j
j
1j
1þj
1
0
0 2ða 1Þð1 þ jÞ 0
1 þ j
rcP
expressed as
0
0
with the corresponding eigenvalue vector rcP ¼ ½ 0
j
1 t
2ða þ 1Þð1 þ jÞ t :
ð89Þ
ð90Þ
By the way, two types of the Frei–Chen edge masks, each of which consists of two masks, constitute the 4D edge subspace in the 9D Frei–Chen space. The same interpretation can be applied to each type of the complex-valued compass Frei–Chen edge masks: 0
hcF ¼ ½ 1 j 00
hcF ¼ ½ 1 j
ja 1 j ja 1 j
a 1þj a
1þj
ja 1 þ j ja
1 þ j
a t
ð91Þ
a t
ð92Þ
where the superscripts 0 and 00 denote two diVerent types of the complexvalued compass Frei–Chen edge masks. The corresponding eigenvalue vec0 00 tors rcF and rcF are expressed as 0
rcF ¼ ½ 0 00
rcF ¼ ½ 0
0
0
0 0
0
0
0
0 8ð1 þ jÞ 0
8ð1 þ jÞ t
ð93Þ
0 t :
ð94Þ
0
0
Note that both eigenvalue vectors of the complex-valued compass Frei– Chen edge masks have only one nonzero component, which represents the fact that each complex-valued kernel weight vector is the same as the specific vector component of the DFT weight matrix except for the constant factor, with each column/row vector of the DFT matrix orthogonal to each other. Thus, the eigenvalue vector of the complex-valued compass Prewitt and Sobel edge masks can be expressed as a weighted sum of the two types of complex-valued compass Frei–Chen eigenvalue vectors, in the frequency domain: 0 00 a ða þ 1ÞrcF þ ða 1ÞrcF c rP ¼ ð95Þ 4 0 00 ða þ 1ÞrcF ða 1ÞrcF c rS ¼ : ð96Þ 2 Or equivalently, the complex-valued kernel weight vectors of the complexvalued compass Prewitt and Sobel edge masks can be represented as a linear
CIRCULANT MATRIX REPRESENTATION OF MASKS
35
combination of those of the complex-valued compass Frei–Chen edge masks, in the spatial domain: 0 00 a ða þ 1ÞhcF þ ða 1ÞhcF c ð97Þ hP ¼ 4 0 00 ða þ 1ÞhcF ða 1ÞhcF c hS ¼ : ð98Þ 2 Note that all the eigenvalue vectors of the complex-valued compass Prewitt, Sobel, and Frei–Chen edge masks have (1 þ j) terms in common, which can be eliminated by shifting clockwise the kernel weight vectors by unity. The corresponding eigenvalue vectors are obtained using the circular shift property of the DFT. Then, the real eigenvalue vectors are constructed, in which the corresponding complex-valued compass kernel weight vectors satisfy the Hermitian property. In a similar way, we can interpret four edge masks derived from the eight-point DHT, since basis functions of the eightpoint DHT formulate the 3 3 DHT masks that are closely related to the Frei–Chen masks, which are explained in Section IV. B. Feature Detection We focus on the feature detection of compass roof edge masks and Frei–Chen line masks and extend to complex-valued feature mask cases of directional filtering. 1. Compass Roof Edge Masks Roof edge masks can be interpreted in the context of circularity of compass feature masks. For example, four roof edge masks are defined as follows (Lee et al., 1993): 2 3 2 3 1 1 1 1 1 3 1 6 1 6 7 7 Rv ¼ 4 3 0 3 5 Rld ¼ 4 1 0 1 5 12 12 1 1 1 3 1 1 2 3 2 3 ð99Þ 1 3 1 3 1 1 1 6 1 6 7 7 0 1 5 Rh ¼ 4 1 0 1 5 Rrd ¼ 4 1 12 12 1 3 1 1 1 3 where the subscripts v, ld, h, and rd denote vertical, left-diagonal, horizontal, and right-diagonal, respectively. By neglecting a constant factor, we will formulate the compass roof edge masks with the kernel matrix R:
36
PARK AND CHA
2
1 R¼4 3 1
1 0 1
3 1 3 5: 1
ð100Þ
We can construct the kernel weight vector hR of the compass roof edge masks (Park, 1999a): hR ¼ ½ 1
1
1
3 1
1 1
3 t :
ð101Þ
Then, the corresponding eigenvalue vector rR is expressed as rR ¼ ½ 0 0
j8
0 8
0
j8
0 t :
ð102Þ
Note that any matrix in Eq. (99) can be used to construct the kernel weight vector, with the corresponding eigenvalue vector obtained using the shift property of the DFT. 2. Frei–Chen Line Masks Two types of Frei–Chen line masks, each of which consists of two masks, constitute the 4D line subspace in the 9D Frei–Chen space (Frei and Chen, 1977). The same interpretation can be applied to one type of Frei–Chen line masks, consisting of two line masks L1 and L2: 2 3 2 3 1 0 1 0 1 0 L1 ¼ 4 0 0 0 5 L2 ¼ 4 1 0 1 5: ð103Þ 1 0 1 0 1 0 Similarly, we can construct the kernel weight vector hL of the Frei–Chen line masks, with the kernel matrix Li (Park, 1999a): hL ¼ ½ 1
0
1 0
1
0 1
0 t :
ð104Þ
Then, the corresponding eigenvalue vector rL is expressed as rL ¼ ½ 0
0
4 0
0
0
4
0 t :
ð105Þ
The other type of the Frei–Chen line masks, with nonzero center weights, is not suitable for this frequency domain analysis. Note that hi ¼ hiþ4 ; 0 i 3, for the compass gradient edge masks such as Sobel, Prewitt, and Kirsch masks in Section III.A.3, whereas hi ¼ hiþ4 ; 0 i 3, for the compass roof edge and Frei–Chen line masks. For the latter cases, the first four row vectors of the weight matrix H are the same as the next four row vectors. The third, fifth, and seventh elements of the eigenvalue vectors of the compass roof edge masks are nonzero, whereas the third and seventh elements of the compass Frei–Chen line masks are nonzero. Note that dc and
CIRCULANT MATRIX REPRESENTATION OF MASKS
37
odd-order harmonic components are zero, because of the periodicity inherent in the weight vector. In this case, we can construct the vector-matrix relationships with the reduced data size, i.e., h# R ¼ ½ 1
1
1
h# L ¼ ½ 1 0
1
3 t
ð106Þ
0 t
ð107Þ
where the superscript # denotes the reduced vector-matrix representation, and the subscripts R and L signify the compass roof masks and Frei–Chen line masks, respectively. The corresponding intensity kernel vector is given by p# 4 ¼ ½ p0 þ p4
p1 þ p5
p 2 þ p6
p3 þ p7 t :
ð108Þ
Then, we have the corresponding eigenvalue vectors: r# R ¼ ½0
j4
r# L ¼ ½0
2 0
4
j4 t
ð109Þ
2 t :
ð110Þ
Note that the eigenvalue vectors of Eqs. (109) and (110) in reduced vector representation are subsampled versions of Eqs. (102) and (105), respectively by a factor of two neglecting the normalization constant. By the unified eigenvalue analysis of compass feature masks in the 1D frequency domain, the compass roof edge and Frei–Chen line masks are investigated. 3. Complex-Valued Feature Masks Let pðxÞ and gcn ðxÞ denote the original image and the nth conjugate image filtered (convolved) by the nth mask un ðxÞ, respectively, 1 n N (Paplinski, 1998). Then the complex-valued edge strength ec ðxÞ can be regarded as a vector addition of conjugate images: ec ðxÞ ¼
N X
gcn ðxÞ expð jan Þ
ð111Þ
n¼1
with its magnitude and direction corresponding to a standard edge magnitude and orientation, respectively (Abdou and Pratt, 1979). The edge strength ec ðxÞ can be rewritten as a convolution of the original image pðxÞ and the complex-valued edge filter uc ðxÞ, which is specified by uc ðxÞ ¼
N X
un ðxÞ expð jan Þ:
ð112Þ
n¼1
Note that uc ðxÞ is a sum of appropriately rotated filter components. The complex-valued mask U c ðxÞ can be constructed by a pair of real-valued
38
PARK AND CHA
masks U r ðxÞ and U i ðxÞ, where U r ðxÞ and U i ðxÞ correspond to the real and imaginary parts of U c ðxÞ, respectively, regardless of the number of filtering directions N. A simple example of defining a gradient magnitude and orientation in two orthogonal directions is based on the estimation of horizontal (real) and vertical (imaginary) components of the intensity gradient vector using horizontal and vertical Sobel masks. For 3 3 masks with zero center weights, the 8D weight vector un is formulated by scanning eight weights counterclockwise from the center right weight of the mask, with u1 denoting the kernel weight vector for compass feature mask formulation. Similarly, p and g are defined. Complex-valued ec ðxÞ and uc ðxÞ yield complex-valued vectors ec and uc, respectively. Equivalently, we can express the complex-valued edge mask Uc in matrix form (Park, 2002c): Uc ¼
N X
Un expð jan Þ ¼ Ur þ jUi
ð113Þ
n¼1
where an is assumed to be 2pðn 1Þ=N for the nth feature mask Un. The 3 3 masks Uc and Un with zero center weights correspond to 8D weight vectors uc and un, respectively. Thus, N real-valued directional masks Un, 1 n N, are combined to yield one complex-valued mask Uc ¼ Ur þ jUi , where Ur and Ui represent real and imaginary masks of Uc, or equivalently, correspond to 8D weight vectors ur and ui, respectively. For simplicity, 3 3 Ndirectional feature masks are analyzed, with N ¼ 8; 4; and 2, in which an is assumed to be 2pðn 1Þ=N; 1 n N. a. N ¼ 8 Cases. Figure 17 shows 3 3 real-valued compass gradient edge masks Gn, with G1 representing the kernel mask, where the constant b specifies the type p offfiffiffiedge masks: b ¼ 1 for Prewitt masks, b ¼ 2 for Sobel masks, and b ¼ 2 for Frei–Chen isotropic average gradient masks. Figure 18 shows 3 3 Frei–Chen kernel masks: FRP1 for the ripple edge
Figure 17. 3 3 Compass gradient edge masks.
CIRCULANT MATRIX REPRESENTATION OF MASKS
Figure 18. 3 3 Frei–Chen kernel masks (a ¼
39
pffiffiffi 2).
Figure 19. 3 3 Kirsch and roof kernel masks.
mask and FL1 for the directional line mask. Note that a set of compass feature masks can be constructed by rotating the kernel mask by an incremental angle of p/4 as shown in Figure 17, where eight compass gradient edge masks Gn ; 1 n N ¼ 8, are constructed from the kernel mask Gl. Similarly, Figure 19 shows the 3 3 Kirsch kernel mask K1 and the roof kernel mask R1. Let hc, kc, and rc be 8D complex-valued weight vectors corresponding to the complex-valued edge mask Gc, Kirsch mask Kc, and roof edge mask Rc, respectively. Similarly, let f cRP and f cL be complex-valued weight vectors corresponding to the complex-valued Frei–Chen ripple edge mask FcRP and line mask FcL , respectively. Then 8D complex-valued weight vectors, with an equal to pðn 1Þ=4; 1 n N ¼ 8, can be written as hc ¼ 2ða þ bÞ½ q0
q1
q2
q3
q4
q5
q6
t
q7 ¼ 2ða þ bÞw
ð114Þ
f cRP ¼ f cL ¼ rc ¼ 0
ð115Þ
kc ¼ 8ð1 þ aÞw
ð116Þ
where b specifies the mask type, and the complex-valued phase term q is pffiffiffi equal to q ¼ expð jp=4Þ ¼ ða=2Þð1þjÞ with a ¼ 2. w is a 1D DFT column vector consisting of Nth roots of unity, and 0 is an 8D zero column vector whose elements are all equal to zero. Note the periodicity of the mask weights: odd symmetry Un;i ¼ Un;iþ4 for G1 and FRP1 masks, and even symmetry Un;i ¼ Un;iþ4 for FL1 and R1
40
PARK AND CHA
masks. The evenness or oddness can be understood in the context of the 3 3 masks. For FL1 and R1 masks, the corresponding complex-valued vectors are zero due to even symmetry of the mask weights. For the FRP1 mask, the complex-valued weight vector f cRP is a zero vector due to the fact that the mask weight is closely related to the irrational factor in the complex-valued phase term q. Note that the weight magnitude of the complex-valued masks Gc and Kc at each neighboring pixel is the same (nonzero constant), with the phase angle increased by the same amount of pn/4, if we scan the weights counterclockwise from the center right weight. The complex-valued masks Gc and Kc show the same magnitude for all neighboring pixels—eight peaks along all directions: horizontal, vertical, diagonal, and anti-diagonal directions. With b ¼ 1, the resulting complex-valued Prewitt and Kirsch weight vectors are the same, neglecting the constant term. For better understanding, we give a numeric example, in which compass gradient edge masks shown in Figure 17 are considered. Figure 20 shows a 3 3 image containing a vertical edge. Edge values computed using masks Un ¼ Gn ; 1 n N ¼ 8 with b ¼ 1 (Prewitt masks), at the center of the image mask are given by 300, 200, 0, 200, 300, 200, 0, and 200, respectively. The mask G1 yields the maximum value, representing the edge orientation. Using Eq. (113), p weffiffiffi can obtain the real edge value of 600 þ 400a, as expected, where a ¼ 2. Or equivalently, we can obtain the edge value using the complex-valued edge mask Gc ¼ Gr þ jGi , where Gr and Gi denote real and imaginary parts of the mask Gc, respectively. Figure 21 shows Gr and Gi of the compass gradient edge masks, respectively (note the constant factor (a þ b) ), corresponding to Frei–Chen masks if the normalization constant is neglected. Convolving the image in Figure 20 with masks (with b ¼ 1) in Figure 21 gives the same numerical results: Gr ¼ 600 þ 400a and Gi ¼ 0. b. N ¼ 4 Cases. With pðn 1Þ=2; 1 n N ¼ 4, we can write the corresponding complex-valued weight vectors as hc ¼ ½ ba0
aq1
f 0RP ¼ ½ q0
ba2 q5
q2
aq3
ba4
aq5
q7
q4
q1
ba6 q6
aq7
q3
t
aq1
a2
aq3
a4 c
ð117Þ ð118Þ
f cL ¼ rc ¼ 0 k c ¼ ½ a0
t
ð119Þ aq5 c
a6
t
aq7
ð120Þ
where q ¼ expð jp=4Þ ¼ ða=2Þð1þjÞ and h and k give two weight magnitude values depending on the direction from the center pixel to the neighboring
CIRCULANT MATRIX REPRESENTATION OF MASKS
41
Figure 20. Example of a 3 3 image containing a vertical edge.
Figure 21. 3 3 Real and imaginary masks (a ¼
pffiffiffi 2).
pixel considered: horizontal/vertical or diagonal/anti-diagonal. pffiffiffi For the Frei– Chen isotropic average gradient masks Hn with b ¼ a ¼ 2, hc gives the same weight magnitude at all neighboring pixels, that is, isotropic characteristics of weights. The Frei–Chen ripple edge weight vector fRP also has the same weight magnitude at all neighboring pixels, with the phase angle increased by the same amount of 5 pn/4, if we scan the weights counterclockwise from the center right weight. Note that the resulting complex-valued weight vectors f c1 and rc are trivial. Also, if b ¼ 1, two complex-valued masks Gc and Kc are the same (i.e., the Prewitt gradient masks Pn and the Kirsch edge masks Kn yield the same complex-valued weight vector, neglecting the constant factor). Note the periodicity of the mask weights: odd symmetry Un;i ¼ Un;iþ4 for G1 and FRP1 masks, and even symmetry Un;i ¼ Un;iþ4 for FL1 and R1 masks. For even symmetry cases such as in compass gradient edge masks Hn, defining a gradient magnitude and orientation in two orthogonal directions is based on the estimation of horizontal and vertical components of the intensity gradient vector using two (horizontal and vertical) edge masks. Using the periodicity inherent in the weight vector, for example, for odd symmetry, we can construct the vector-matrix formulation with the reduced data size, that is, the 8D weight vector un is reduced to the 4D weight vector u# n u# n ¼ ½ un;0
un;1
un;2
un;3 t
ð121Þ
where the superscript # denotes the reduced vector representation, and the corresponding intensity vector is given by p# odd ¼ ½ p0 p4
p1 p5
p2 p6
p3 p7 t :
ð122Þ
42
PARK AND CHA
Note that the weight vector of Eq. (121) and the intensity vector of Eq. (122) in reduced vector representation are obtained by taking the first four elements of the original weight vector representation, respectively. The reduced complexvalued weight vectors hc# and f c# RP with odd symmetry yield hc# ¼ 2½ bq0 0 f c# RP ¼ 2½ q
aq1
t
bq2
q5
q2
aq3
ð123Þ
t
ð124Þ
q7 ;
which are constructed by taking the first four elements of hc and f cR , respectively. Similarly, for even symmetry, the corresponding intensity vector is given by p# even ¼ ½ p0 þ p4
p1 þ p5
p2 þ p6
The reduced complex-valued weight vectors yield c# f c# ¼ ½0 0 L ¼r
p3 þ p7 t :
ð125Þ
f c# L
and rc# with even symmetry
0
0 t ;
ð126Þ
which are formulated by taking the first four elements of respectively.
f cL
and rc,
c. N ¼ 2 Cases. With an ¼ pðn 1Þ; 1 n N ¼ 2, we can write the corresponding complex-valued weight vectors as
kc ¼ 8½ 1
1
hc ¼ 2hc1
ð127Þ
f cRP ¼ 2f cRP1
ð128Þ
f cL ¼ rc ¼ 0
ð129Þ
0 1
1 1
0
1 t :
ð130Þ
For N ¼ 2 cases, the corresponding complex-valued weight vectors are degenerate, resulting in real-valued ones (e.g., the kernel weight vector itself, neglecting the constant factor, vertical Prewitt weight vector, or zero weight vector). Note the symmetry of the mask weights: Un;i ¼ Un;iþ4 for G1 and FRP1 masks, in which the formulated complex-valued edge vector is the realvalued kernel edge vector mask itself, and Un;i ¼ Un;iþ4 for FL1 and R1 masks, in which the combined edge weight vector is a zero vector due to the symmetry of the mask weight. The complex-valued feature mask for the compass Kirsch masks corresponds to the vertical Prewitt mask, neglecting the constant factor. Also note that h with b ¼ 1 is equivalent to kc, neglecting the constant factor. Tridirectional filters with three directions are described for gradient calculation, in which tridirectional filtering involves a vector sum of three halves of
CIRCULANT MATRIX REPRESENTATION OF MASKS
43
such filters (Paplinski, 1998). The filters are described by the 4 4 complexvalued matrix and applied to posterior eye capsule images and an ordinary image for edge detection. Also, the concept of directional filtering can be applied to finding edges of various orientations in fingerprint images (Jain et al., 1999). An optical implementation of real-time edge enhancement filters has been presented (Corecki and Trolard, 1998), in which, for example, the 3 3 complex-valued Sobel mask Sc with b ¼ 2 is used. The optical realization of the complex-valued mask uses an optical correlator with a matched filter whose impulse response is a weighted sum of eight delta functions. The complexvalued gradient edge masks are analyzed in the 1D frequency domain. C. Summary This section focuses on edge detection and feature detection, where the feature detection is represented by the extended form of edge detection. We have interpreted compass gradient masks and Frei–Chen masks in the case of N ¼ 8. This case also has been extended to complex-valued mask cases. Most interpretations are related to the circulant matrix and eigenvalue analysis in the DFT domain. In addition, it has been shown that Frei–Chen masks introduced as orthogonal masks have a similar structure as the compass gradient masks in the DFT domain. Compass gradient masks and Frei–Chen masks are closely related to orthogonal DCT, DST, and DHT masks, which are presented in Section IV. IV. Advanced Topics This section describes the DCT, DST, and DHT masks in terms of Frei– Chen orthogonal masks and presents their properties of edge detection. We focus especially on the DHT masks and show the relationship between DHT masks and KLT using the properties of the circulant symmetric matrix through SVD. Finally, their relationship is applied to the optical control system and information system. A. Orthogonal Transform-Based Interpretation We describe the DCT, DST, and DHT masks in terms of Frei–Chen masks and present the similarity between these orthogonal masks and Frei–Chen masks. Interpretation of the DCT, DST, and DHT masks is followed by the presentation of the relationship between the DHT and KLT.
44
PARK AND CHA
1. DCT and DST Interpretation The N-point for DCT and DST are defined in Section II. Especially, the case S of N ¼ 8 is presented here (Park, 1999b). Let TC k and Tk be the kth 3 3 DCT and DST masks, 0 k 7, formulated by circularly scanning along the clockwise direction, with eight DCT weights cosð2pkn=NÞ and DST weights sinð2pkn=NÞ, respectively, from the top left position (n ¼ 0) of a 3 3 window and its center weight equal to zero. Note that the orthogonal Frei–Chen masks span the 9D space, whereas the orthogonal DCT and DST masks form the 8D space. Figure 22 shows five 3 3 orthogonal DCT masks generated by the eight-point even DCT basis functions, with the center weight equal to zero. We can express the relationship between the DCT masks TC k and the Frei–Chen masks Fk as M TC 0 ¼ F9
1 1 C TC 1 ¼ T7 ¼ F1qccw ¼ F2qcw a a C TC ¼ T ¼ F ¼ F 5qcw 6 2 6 1 C TC 3 ¼ T5 ¼ F4 a 1 TC 4 ¼ ðF7 F8 Þ 3
ð131Þ
pffiffiffi where a ¼ 2; FM 9 is the modified average mask generated by setting the center weight of F9 to zero, and F1qccw (F2qcw) denotes the mask obtained by rotating F1 (F2) by 45 in the counterclockwise (clockwise) direction. Figure 23 shows four 3 3 orthogonal DST masks generated by the eight-point odd DST basis functions, with the center weight equal to zero. Similarly, we can express the relationship between the DST masks TSk and the Frei–Chen masks Fk as 1 1 TS1 ¼ TS7 ¼ F1qcw ¼ F2qccw a a TS2 ¼ TS6 ¼ F5 1 TS3 ¼ TS5 ¼ F3 a S S T0 ¼ T4 ¼ 0:
ð132Þ
According to Eqs. (131) and (132), the DCT/DST masks span the edge, line, and average subspaces as the Frei–Chen masks do. Weight masks C C C S S S S TC 1 ¼ T7 and T3 ¼ T5 ðT1 ¼ T7 and T3 ¼ T5 ) form the edge subspace, S C C S with T1 ¼ T7 ðT1 ¼ T7 ) representing the isotropic average gradient
CIRCULANT MATRIX REPRESENTATION OF MASKS
Figure 22. 3 3 Orthogonal DCT masks (a ¼
pffiffiffi 2).
Figure 23. 3 3 Orthogonal DST masks (a ¼
pffiffiffi 2).
45
46
PARK AND CHA
S C S mask and with TC 3 ¼ T5 ðT3 ¼ T5 Þ denoting the ripple mask. Similarly, S C C C weight masks T2 ¼ T6 and T4 ðT2 ¼ T6 S Þ form the line subspace, with S C S C TC 2 ¼ T6 ðT2 ¼ T6 ) signifying the directional mask and with T4 S C corresponding to the nondirectional Laplacian mask. Note that T0 ðT0 ¼ TS4 Þ represents the modified average (zero) subspace by setting the center pixel of the 3 3 window to zero. The proposed DCT/DST masks (constructed by selecting DCT and DST masks) are similar to the Frei–Chen masks, in the sense that they are similar to each other, except for the exclusion of the center weight, normalization factor, circular shift, and linear combination of masks. Note that the average subspace is defined in the 8D and 9D spaces for the DCT masks and the Frei–Chen masks, respectively, and that the zero subspace is defined in the 8D space for the DST masks. The 3 3 complex-valued DFT masks can be constructed by combining 3 3 DCT and DST masks, based on the property given by Eq. (38). Let us investigate on edge detection using 3 3 edge masks. We can define the 8D column vector fk and the input intensity column vector p ¼ ½ p0 p1 p2 p3 p4 p5 p6 p7 t by circularly scanning eight weights of the 3 3 Frei–Chen masks Fk and eight graylevel values of the 3 3 window in the clockwise direction from the top left position, respectively. For the Frei–Chen masks, edge detection is performed by the angle yf defined between the 8D intensity vector p and its projection to the edge subspace: vffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi uP u ðf pÞ2 uk2E k F ð133Þ yf ¼ cos1 u uP t 7 2 ðf k pÞ k¼0
where fk p represents the inner product of fk and p, and EF ¼ f1; 3; 5; 7g denotes a set of indices forming the edge subspace, that is, F1, F3, F5, and F7 form the edge subspace. S Similarly, we can define the 8D column vector tC k (tk ) by circularly scanC ning eight weights of the 3 3 DCT (DST) mask Tk (TSk ) in the clockwise direction from the top left position. From Eqs. (131) and (132), we can write ! 7 7 7 X X 1 X 2 C 2 S 2 ðpn Þ ¼ T þ Tk ð134Þ 8 k¼0 k n¼0 k¼0 by Parseval’s theorem of the DFT. With eight hybrid masks TCS k ; 0 k 7, selected from among the 3 3 DCT/DST masks, we can construct the C C CS C C CS 8D orthogonal space: four masks (TCS 1 ¼ T1 ¼ T7 ; T3 ¼ T3 ¼ T5 ; T5 S S CS S S ¼ T5 ¼ T3 , and T7 ¼ T7 ¼ T1 ) form the edge subspace; three masks C C CS C CS S S (TCS 2 ¼ T2 ¼ T6 ; T4 ¼ T4 , and T6 ¼ T6 ¼ T2 ) form the line subspace;
CIRCULANT MATRIX REPRESENTATION OF MASKS
47
C and TCS 0 ¼ T0 form the average subspace. In Figures 22 and 23, notations for eight orthogonal DCT/DST masks are also listed. Similar to cases for the Frei–Chen masks, edge detection by the proposed DCT/DST masks TCS k is performed by the angle ycs defined between the 8D intensity vector p and its projection to the edge subspace: vffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi u P CS u ðt pÞ2 uk2E k CS u ð135Þ ycs ¼ cos1 u 7 t P CS ðtk pÞ2 k¼0
CS CS where tCS k p represents the inner product of tk and p, with tk signifying CS the vector notation corresponding to the mask Tk , and ECS ¼ f1; 3; 5; 7g CS CS CS denotes a set of indices forming the edge subspace (i.e., tCS 1 ; t3 ; t5 , and t7 form the edge subspace). Computer simulations with several test images show that edge detection performance of the 3 3 orthogonal hybrid DCT/DST masks is similar to that of the Frei–Chen edge masks in Figure 24. Our experiments include a
CS Figure 24. Result of hybrid DCT/DST masks. (a) Original image, (b) (TCS 1 ; T7 ), CS CS CS CS CS CS CS CS (c) (T1 ; T7 ; T3 ; T5 ), (d) (T1 ; T7 ; T2 ; T6 ).
48
PARK AND CHA
CS CS CS CS CS half of an edge subspace (TCS 1 ; T7 ), edge subspace (T1 ; T7 ; T3 ; T5 ), CS CS CS CS and half of edge subspace plus line subspace (T1 ; T7 ; T2 ; T6 ).
2. DHT Interpretation The N-point DHT is described in Section II. The case of N ¼ 8 is considered here (Park et al., 1998). Let TH k ; 0 k N 1, be the kth DHT mask, formulated by circularly scanned (in the clockwise direction) eight DHT weights, casð2pki=NÞ, of TH k from the top left position of a 3 3 window, with its center weight equal to zero. Note that the orthogonal Frei–Chen edge masks span the 9D space, whereas the orthogonal DHT masks form the 8D space. We can express the relationship between the DHT masks and Frei–Chen masks as M H H H TH 0 ¼ F7þ8 ; T1 ¼ F1 ; T2 ¼ F56 ; T3 ¼ F3qcw 1 H H H TH 4 ¼ F78 ; T5 ¼ F4qcw ; T6 ¼ F56qccw ; T7 ¼ F2 3
ð136Þ
M M where FM 7þ8 and F78 ðF56 ) denote F7 þ F8 and F7 F8 ðF5 F6 ), respecM M tively, with masks F7 and F8 being generated by setting center weights of F7 and F8 to zero, respectively. Masks F3qcw (F4qcw) and F56qccw represent the modified masks of F3 (F4) and F5 F6 by rotating by 45 in the clockwise and counterclockwise directions, respectively. According to Eq. (136), the DHT masks span the edge, line, and average subspaces similar to those of Frei–Chen masks. The 8D DHT masks can be used as proper measures explaining the local properties of the 3 3 neighH H borhood: ‘‘edginess,’’ ‘‘lineness,’’ and ‘‘uniformity.’’ Weight masks TH 1 ; T3 ; T5 , H H H and T7 form the edge subspace, with T1 and T7 representing isotropic H average gradient masks, and with TH 3 and T5 denoting ripple masks. SimiH H H H larly, weight masks T2 , T4 , and T6 form the line subspace, with TH 2 and T6 H signifying directional masks, and with T4 corresponding to the nondirectional Laplacian mask. Note that TH 0 represents the modified average subspace excluding the center pixel of the 3 3 neighborhood. The proposed DHT masks are similar to the Frei–Chen masks, in the sense that two sets of masks are similar except for the exclusion of the center weight, normalization factor, circular shift, and linear combination of operators. Note that the average subspaces are defined in the 8D and 9D spaces for the DHT masks and Frei–Chen masks, respectively. Figure 25 shows the result of an isotopic H average gradient edge (TH 1 ; T7 ), isotopic average gradient edge plus ripple H H H H edge (T1 ; T7 ; T3 ; T5 ), and isotropic average gradient edge plus line edge H H H (TH 1 ; T7 ; T2 ; T6 ). We can define the 8D column vector tH k and the input intensity vector p by circularly scanning eight weights of the 3 3 DHT mask TH k and eight
CIRCULANT MATRIX REPRESENTATION OF MASKS
49
H H H H H Figure 25. Result of DHT masks. (a) Original image, (b) (TH 1 ; T7 ), (c) (T1 ; T7 ; T3 ; T5 ), H H H (d) (TH ; T ; T ; T ). 1 7 2 6
graylevel values of the 3 3 window in the clockwise direction from the top left position, respectively. Similar to the Frei–Chen masks, edge detection by the proposed DHT masks is determined by the angle yh defined between the 8D intensity vector p and its projection to the edge subspace: vffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi uP H 2 u uk2E ðtk pÞ H ð137Þ yh ¼ cos1 u uP t 7 H 2 ðtk pÞ k¼0
H where tH k p represents the inner product of tk and p, and EH denotes a set of H H indices forming the edge subspace; EH ¼ f1; 3; 5; 7g, that is, tH 1 ; t3 ; t5 , and tH form the edge subspace. 7 Let p, un (0 n 7), and g be 8D column vectors consisting of intensity values of neighboring pixels, weights of the nth feature mask, and feature strength values detected by eight feature masks, respectively. The subscript 0 in feature mask notation u0 represents the kernel weight vector from which other compass weight vectors un, 1 n 7, are constructed by circularly
50
PARK AND CHA
Figure 26. 3 3 Image (left) and its corresponding kernel mask (right).
shifting the weights n times. Note that eight neighboring intensity values pi and their corresponding weights ui are scanned counterclockwise from the center right pixel of the 3 3 mask. Figure 26 shows a 3 3 image represented by the intensity vector p and the 3 3 compass feature mask U0 corresponding to the kernel weight vector u0: p ¼ ½ p0 u0 ¼ ½ u 0
p1 u1
p2 u2
P3
P4
P5
p6
p7 t
ð138Þ
u3
u4
u5
u6
u7 t
ð139Þ
where pi and ui, 0 i 7, represent the intensity value of the ith neighboring pixel and its corresponding weight of the kernel feature mask U0, respectively. The feature strength vector g can be defined by g ¼ ½ g0
g1
g2
g3
g4
g5
g6
g7 t :
ð140Þ
Eight 3 3 compass feature masks Un, 1 n 7, generate eight 8D column vectors un, each of which is a circularly shifted version of the kernel weight vector u0. Thus, these transposed vectors form the 8 8 circulant weight matrix Hu with the first row vector corresponding to the transposed kernel weight vector ut0 and each row representing each transposed directional weight vector: Hu ¼ ½u0 u1 u2 u3 u4 u5 u6 u7 t . Then feature detection by compass feature masks can be expressed in vector-matrix form: 32 3 2 3 2 p0 g0 u0 u7 u6 u5 u4 u3 u2 u1 6 g1 7 6 u1 u0 u7 u6 u5 u4 u3 u2 76 p1 7 76 7 6 7 6 6 g2 7 6 u2 u1 u0 u7 u6 u5 u4 u3 76 p2 7 76 7 6 7 6 6 g3 7 6 u3 u2 u1 u0 u7 u6 u5 u4 76 p3 7 7 76 7: ð141Þ 6 6 g ¼ 6 7 ¼ Hu p ¼ 6 76 7 6 g4 7 6 u4 u3 u2 u1 u0 u7 u6 u5 76 p4 7 6 g5 7 6 u5 u4 u3 u2 u1 u0 u7 u6 76 p5 7 76 7 6 7 6 4 g6 5 4 u6 u5 u4 u3 u2 u1 u0 u7 54 p6 5 g7 u7 u6 u5 u4 u3 u2 u1 u0 p7 The circulant matrix Hu can be diagonalized by the DFT matrix, and the corresponding eigenvalues rk, 0 k 7, construct the eigenvalue vector r.
CIRCULANT MATRIX REPRESENTATION OF MASKS
51
The eigenvalue vector r is expressed as the eight-point DFT of the first column of the matrix U, or equivalently, as the eight-point inverse DFT of u0. The eigenvectors of the circulant matrix are always the same regardless of the specific forms of the matrix. The only characteristics distinguishing one circulant matrix from another are its eigenvalues. If we generate the kernel masks as in Figures 27 and 28, with ui ¼ uNi, 0 i N 1 ¼ 7, the formulated matrix Hu becomes a symmetric circulant matrix: 3 2 u0 u1 u2 u 3 u4 u3 u2 u1 6 u1 u0 u1 u 2 u3 u4 u3 u2 7 7 6 6 u2 u1 u0 u 1 u2 u3 u4 u3 7 7 6 6 u3 u2 u1 u 0 u1 u2 u3 u4 7 7 ¼ Ht : 6 Hu ¼ 6 ð142Þ u 7 6 u4 u3 u2 u 1 u0 u1 u2 u3 7 6 u3 u4 u3 u 2 u1 u0 u1 u2 7 7 6 4 u2 u3 u4 u 3 u2 u1 u0 u1 5 u1 u2 u3 u 4 u3 u2 u1 u0 Figure 27 shows the 3 3 compass gradient edge masks, such as Prewitt (P0), Sobel (S0), Kirsch (K0), and roof (R0) masks, where the subscript 0 signifies the kernel masks. Compass feature masks Un can be generated by counterclockwise rotating the kernel mask U0 by np/4, where 1 n 7. Similarly, Figure 28 shows 3 3 Frei–Chen kernel masks, with FG0, FRP0, and FL0 representing the gradient edge mask, the ripple edge mask, and the directional line mask, respectively. The real-valued column vectors fk ¼ ½cosð2pki=NÞ , 0 k; i N 1, can be formulated, where the term in brackets denotes the component of the column vector. To diagonalize the circulant matrix Hu, the complex-valued
Figure 27. 3 3 Gradient edge kernel masks.
Figure 28. 3 3 Frei–Chen kernel masks (a ¼
pffiffiffi 2).
52
PARK AND CHA
eigenvectors wk ¼ ½expð j2pki=NÞ ; 0 k; i N1, are required (Gonzalez and Woods, 1992). In this case, a new interpretation of the 3 3 compass feature masks can be obtained in the real-valued DHT domain, rather than in the complex-valued DFT domain. For eYcient computation, the realvalued basis vectors for the DHT can be used because of the symmetry property of the weights. With the symmetry property of the weights ui ¼ uNi ; 0 i N 1 ¼ 7, the corresponding eigenvalues rk are real valued and given by
N
N 1 1 X X 2p 2p ki : ð143Þ rk ¼ rNk ¼ ui exp j ki ¼ ui cos N N i¼0 i¼0 With N ¼ 8 for our compass feature masks, we can explicitly write realvalued eigenvalues as r0 ¼ ðu0 þ u4 Þ þ 2ðu1 þ u2 þ u3 Þ pffiffiffi r1 ¼ ðu0 u4 Þ þ 2ðu1 u3 Þ ¼ r7 r2 ¼ ðu0 þ u4 Þ 2u2 ¼ r6 pffiffiffi r3 ¼ ðu0 u4 Þ 2ðu1 u3 Þ ¼ r5 r4 ¼ ðu0 þ u4 Þ 2ðu1 u2 þ u3 Þ:
ð144Þ
Table 3 lists the real-valued eigenvalues for various 3 3 feature masks, such as Prewitt (P), Sobel (S), Kirsch (K), roof (R), Frei–Chen gradient edge (FG), Frei–Chen ripple edge (FRP), and Frei–Chen line (FL) masks. These compass feature masks have the same eigenvectors, with the only diVerence being their eigenvalues. From Table 3, it is noted that the three Frei–Chen masks (FG, FRP, and FL) and the roof edge mask R can be regarded as the fundamental masks, in the sense that other feature masks are expressed as linear combinations of these masks. For example, the Prewitt and Sobel masks can be expressed in terms of FG and FRP. Similarly, the Kirsch masks can be expressed as a linear combination of masks FG, FRP, FL, and R. TABLE 3 Real-Valued Eigenvalues of Various 3 3 Compass Feature Masks Mask Eigenvalue
P
S
K
R
FG
FRP
FL
r0 r1 ¼ r7 r2 ¼ r6 r3 ¼ r5 r4
0 pffiffiffi 2þ 2 0 pffiffiffi 2 2 0
0 pffiffiffi 4þ2 2 0 pffiffiffi 42 2 0
0 pffiffiffi 8þ8 2 0 pffiffiffi 88 2 8
0 0 8 0 8
0pffiffiffi 4 2 0 0 0
0 0 0pffiffiffi 4 2 0
0 0 4 0 0
53
CIRCULANT MATRIX REPRESENTATION OF MASKS
The structure of Prewitt and Sobel masks are the same in the sense that their eigenvalues are the same, neglecting the constant factor. Four zeros are resulted from the dependency of edge strength: gn ¼ gnþ4 ; 0 n 3. Eigenvalues and eigenvectors of Frei–Chen gradient edge and ripple edge masks (FG and FRP) are diVerent from those of Prewitt masks. It is noted that r0 denotes the average value of the weights, which is equal to zero. Also note that edge masks (Prewitt, Sobel, Kirsch, Frei–Chen gradient edge, and Frei-Chen ripple edge masks) give nonzero r1 ¼ r7 and/or r3 ¼ r5, whereas the Kirsch, roof, and Frei–Chen line masks yield nonzero r2 ¼ r6 and/or r4. Similarly, the basis vectors of other orthonormal masks (e.g., masks derived from the DCT, DST, and DHT basis functions) can be obtained in terms of those listed in Table 3 (e.g., using the relationships between the masks and Frei–Chen masks). The eigenvalues of the complex-valued masks also can be expressed in terms of the real-valued eigenvalues listed in Table 3. The circulant matrix Hu can be written as Hu ¼ WDW1
ð145Þ
where D is a diagonal matrix whose main diagonal elements are equal to rk. The complex-valued DFT matrix W1 is constructed by columnwise stacking of eight eigenvectors wk ; 0 k 7. Similarly, we can construct the realvalued DCT matrix TC and the DST matrix TS by columnwise stacking of fk ¼ ½cosð2pki=NÞ and j k ¼ ½cosð2pki=NÞ ; 1 k; i 8, respectively. It is easy to show that the DCT matrix TC alone, constructed by fk, does not diagonalize Hu. For the symmetric circulant matrix Hu with real-valued eigenvalues rk, we can write D ¼ W1 Hu W ¼ TC Hu TC þ TS Hu TS 1
C
S
C
S
C
S
ð146Þ S
C
using W ¼ T jT ; W ¼ T þ jT , and T Hu T þ T Hu T ¼ 0. Both the real-valued column vectors fk and jk are required to diagonalize Hu: the complex-valued eigenvectors wk or the real-valued column vectors lk ¼ fk þ jk. For eYcient computation, real-valued column vectors can be derived. From Eq. (146), we can rewrite the diagonal matrix D as D ¼ TC Hu TC þ TS Hu TS ¼ ðTH Þ1 Hu TH
ð147Þ
where TH ¼ ðTH Þ1 ¼ TC þ TS, neglecting the normalization constant N. The DHT matrix TH is obtained by columnwise stacking of lk ¼ fk þ j k :
2p 2p 2p ki þ sin ki ¼ cas ki ð148Þ lk ¼ fk þ j k ¼ cos N N N
54
PARK AND CHA
with the same corresponding eigenvalues rk in Eq. (143), where casy ¼ cosy þ siny. The corresponding column vectors are the basis functions for the DHT. With Eqs. (142) and (147), feature detection can be considered as a series of real-valued operations: DHT of the intensity vector p, multiplication of the eigenvalues rk in the DHT domain, and then the inverse DHT: g ¼ Hu p ¼ ðTH DðTH Þ1 Þp ¼ ðTH Þ1 DðTH pÞ
ð149Þ
where the eigenvalues for various types of feature masks are listed in Table 3. The real-valued DHT domain interpretation is applied to various 3 3 compass feature masks, such as edge, line, and roof masks, by using the symmetry property of the weight vector components. This interpretation can be eYciently applied to real-time processing of real-valued signals. Future research will focus on the extension of the DHT domain interpretation to various filters and directional filtering. 3. KLT Interpretation The orthonormal basis functions can be derived using the SVD (Jain, 1989). SVD analysis is used to correctly present the real-valued eigenvectors for diagonalization of the inner product matrix. Assume that X is the M N matrix with its rank (g) satisfying the inequality g M, N. The matrices XtX and XXt are symmetric and have the identical eigenvalues rk, 0 k g 1. Then wv,k and wu,k satisfying Xt Xwv;k ¼ rk wv;k ; 0 k g 1
ð150Þ
XXt wu;k ¼ rk wu;k ; 0 k g 1
ð151Þ
can be obtained, where wv,k and wu,k are the kth orthonormal eigenvectors of XtX and XXt, respectively, with the same corresponding eigenvalues rk. Using the SVD on X, the matrix X can be written as pffiffiffiffi X ¼ Wu LWtv ð152Þ g ffiffiffiffi matrices, in which wv,k and wu,k where Wv and Wu denote N g and M p pffiffiffiffi signify the kth columns, respectively, and L ¼ diagf rk g is a g g diagonal matrix. The N N vector inner product matrix V ¼ Xt X can be expressed as Wv LWtv , from which Wv and L can be directly determined, noting that Wv contains eigenvectors of V and that L has eigenvalues of V. pffiffiffiffi Next, we can get Wu ¼ XWv = L. Note that the covariance matrix of images is represented as C ¼ XXt ¼ Wu LWtu . The eigenvectors of the matrix with nondegenerate (i.e., distinct) eigenvalues are complete and orthonormal, spanning the entire vector space. For the
CIRCULANT MATRIX REPRESENTATION OF MASKS
55
matrix with degenerate eigenvalues, we have the freedom of replacing the eigenvectors corresponding to degenerate eigenvalues by a linear combination of themselves. Then, we can always perform orthogonalization and obtain a set of eigenvectors that are complete and orthonormal. This section presents two real-valued eigenvector matrix representations derived from the DFT matrix (Park, 2002b). Let the matrix X ¼ ½x0 x1 . . . xN1 be constructed by columnwise stacking of N uniformly rotated images xi, 0 i N 1, where the term in brackets denotes the component (column vector) of the matrix. Because of the nature of X and definition of the N N inner product matrix V ¼ Xt X, V has several properties, as shown in Section II. From properties (2) symmetric (V ¼ Vt ), and (3) circulant, the elements in the first row ½b0 b1 . . . bN1 are such that bm ¼ bNm ; 0 m N 1, where the matrix elements bm are defined by bm ¼ xti xim . Because V is real and symmetric, its eigenvalues are real. Eigenvectors of any circulant matrix can be taken as wk ¼ ½expð j2pkm=NÞ ; 0 k; m N 1, where wk represent basis column vectors of the DFT matrix. Their conjugate vectors wk ¼ ½expðj2pkm=NÞ ; 0 k; m N 1, are similarly defined, where denotes complex conjugation. The real-valued vectors, the sampled cosines, fk ¼ ½cosð2pkm=NÞ ; 0 k; m N 1, are defined by taking real parts of wk . With the symmetry property of bm ¼ bNm , the corresponding eigenvalues rk for wk are given by
N
N 1 1 X X 2p 2p km : ð153Þ rk ¼ rNk ¼ bm exp j km ¼ bm cos N N m¼0 m¼0 In general, the eigenvalues of a circulant matrix are complex, whereas those of a real symmetric circulant matrix (which we have here) are real. The real-valued eigenvalues rk for the real symmetric circulant matrix V are given by the DFT of the first column (row) of the matrix V, the autocorrelation element vector bm ¼ ½bm ; 0 m N 1 (Jain et al., 1999), or equivalently, by the inner products of bm and fk. The matrix V can be written as V ¼ WLW1
ð154Þ
where L is the diagonal matrix with the eigenvalues rk along the diagonal. The DFT matrix W1 is constructed by columnwise stacking of N eigenvectors wk ; 0 k N 1, and its inverse DFT matrix is given by W/N. Similarly, we can construct the matrices TC and TS by columnwise stacking of fk ¼ ½cosð2pkm=NÞ and j k ¼ ½sinð2pkm=NÞ ; 0 k N 1, respectively. Note that vectors fk (jk) correspond to the real (imaginary) parts of the complex-valued DFT vectors wk , that is, TC ¼ ReðW1 Þ and TS ¼ ImðW1 ). It is easy to show that the matrix TC, constructed by fk,
56
PARK AND CHA
does not diagonalize V because of interdependency of fk vectors (fk ¼ fNk ). With distinct eigenvalues, eigenvectors corresponding to eigenvalues span the orthonormal space. With the repeated eigenvalue rk ¼ rNk, we can construct a new pair of orthonormal and complete eigenvectors. Each pair of eigenvectors having the duplicated eigenvalue rk ¼ rNk can be transformed by 1 1 1 ð155Þ TV ¼ 2 j j with the complex-valued matrix ½wk wNk t converted into the real-valued matrix ½fk j k t , where the real-valued eigenvectors fk and jk can be obtained from the SVD of Xt X ¼ Wv LWtv . With rk ¼ rNk , the eigenvector matrix consisting of the N real-valued eigenvectors is given by Wv ¼ ½ wv;0 wv;1 . . . wv;N1 " ½f0 f1 . . . fN=2 j 1 j 2 . . . j N=21 for even N ¼ ½f0 f1 . . . fðN1Þ=2 j 1 j 2 . . . j ðN1Þ=2 for odd N
ð156Þ
where it can be constructed by the SVD, neglecting the normalization constants. Note that the transformation TV is not unique. For the symmetric circulant matrix V with the real-valued rk, we can write L ¼ W1 VW ¼
1 C ðT VTC þ TS VTS Þ N
ð157Þ
using W1 ¼ TC jTS ; W ¼ ð1=NÞðTC þ jTS Þ, and TC VTS ¼ TS VTC ¼ 0. Both fk and jk are needed to diagonalize V. Optimal representation should be made with the complex-valued eigenvectors wk , or equivalently with both the real-valued eigenvectors fk and jk, by which the diagonal matrix L is constructed. The corresponding eigenvectors wu,k of the covariance matrix V ¼ XXt are represented as the linear combinations of the input image xi: 1 X 1 N wu;k ¼ pffiffiffiffi ðwv;k Þi xi rk i¼0
ð158Þ
where (wv,k)i signifies the ith element of the eigenvector wv,k. As mentioned previously, for the repeated eigenvalue cases, construction of the eigenvector matrix is not unique. The transformation TU " # 1 1þj 1j U T ¼ ð159Þ 2 1j 1þj
CIRCULANT MATRIX REPRESENTATION OF MASKS
converts the complex-valued matrix ½wk ½ fk þ j k
57
wNk t into the real-valued matrix:
fNk þ j Nk t ¼ ½ lk
lNk t ;
ð160Þ
which leads to the DHT interpretation. The DHT is the real part minus the imaginary part of the DFT (Bracewell, 1986). It is real valued and computationally fast; thus, the DHT has been applied to various signal processing and interpretation applications. It is shown that basis functions of the eightpoint DHT construct the 3 3 DHT masks that are closely related to the Frei–Chen masks. From Eq. (157), we can rewrite the diagonal matrix L as L¼
1 C ðT VTC þ TS VTS Þ ¼ TH VðTH Þ1 N
ð161Þ
where (TH Þ1 ¼ ð1=NÞTH ¼ ð1=NÞðTC þ TS ). The DHT matrix TH is obtained by columnwise stacking of lk. The eigenvectors lk can be expressed as
2p 2p km þ sin km lk ¼ fk þ j k ¼ cos N N
ð162Þ pffiffiffi 2p 2p p ¼ cas km ¼ 2 cas km N N 4 with the corresponding eigenvalues rk. Neglecting the constant factor and with Dk ¼ p=4, the derived eigenvectors lk are equal to the real parts of the DFT basis functions with some oVset phase Dk. Then the corresponding eigenvectors of the covariance matrix V can be represented as the DHT of the input image xi, or equivalently as the inner products of xi and the real parts fk of the DFT basis functions with some oVset phase: sffiffiffiffi
1 1 X X 1 N 2p 2N 2p p ki xi ¼ ki wu;k ¼ pffiffiffiffi cas cos ð163Þ xi : rk i¼0 N rk i¼0 N 4 Due to the repeated eigenvalues, the eigenvector matrix of V is not unique. Instead of the complex-valued eigenvector matrix W1, we can construct the real-valued eigenvector matrix Wv by the SVD or TH from the DHT, in which both real-valued eigenvector matrices are interrelated by a linear transformation. The L basis vector calculations for approximation of N uniformly rotated images are summarized as follows. (1) Compute the autocorrelation um of the rotated images. (2) Compute rk using Eq. (153). Order rk by decreasing order of magnitude. (3) Construct the real-valued eigenvectors wu,k using Eq. (158) or Eq. (163), whose corresponding rk are the largest k. Note that two sets of basis vectors obtained from Eqs. (158) and (163) are related by a simple linear transformation.
58
PARK AND CHA
As an example, for N ¼ 4 we show that fk ; 0 k 3, are not linearly independent and cannot diagonalize V. Let V be given by 2 3 b0 b1 b2 b1 6 b1 b0 b1 b2 7 7 V¼6 ð164Þ 4 b2 b1 b0 b1 5 b1 b2 b1 b0 where bm ¼ b4m ; 0 m 3, denote autocorrelation elements. Then TC and TS constructed by eigenvectors fk and j k ; 0 k 3, are expressed as 2 3 1 1 1 1 61 0 1 07 7 TC ¼ ½ f0 f1 f2 f3 ¼ 6 4 1 1 1 1 5 1 0 1 0 and 2
TS ¼ ½ j 0
j1
j2
0 60 j3 ¼ 6 40 0
0 1 0 1
3 0 0 0 1 7 7; 0 05 0 1
ð165Þ
respectively. Note that TS TC ¼ TC TS ¼ 0 and TC VTS ¼ TS VTC ¼ 0. The columns of both matrices are not linearly independent, thus either TC or TS cannot diagonalize V. However, the matrices 2 3 1 1 1 0 61 0 1 17 7 T Q ¼ ½ tQ ¼ ½ f0 f1 f2 j 1 ¼ 6 tQ tQ tQ 4 1 1 0 1 2 1 1 05 1 0 1 1 and T H ¼ ½ l0
l1
l2
l3 ¼ ½ f0 þ j 0 f1 þ j 1 f2 þ j 2 2 3 1 1 1 1 61 1 1 1 7 7 ¼6 4 1 1 1 1 5 1 1 1 1
f3 þ j 3
diagonalize V, that is 1 V ¼ ðTQ Þ1 LTQ ¼ TH RðTH Þ1 ¼ ðTC RTC þ TS RTS Þ 4 ¼ diag fr0 ¼ b0 þ 2b1 þ b2 ; r1 ¼ b0 b2 ;
ð166Þ
CIRCULANT MATRIX REPRESENTATION OF MASKS
r2 ¼ b0 2b1 þ b2 ; r3 ¼ r1 ¼ b0 b2 g:
59 ð167Þ
Note that 2
1 ðTH Þ1 ¼ TH 4
and ðTQ Þ1
1 16 2 ¼ 6 441 0
1 0 1 2
1 2 1 0
3 1 07 7: 1 5 2
ð168Þ
The eigenvector matrix of V with repeated eigenvalues is not unique. Two representations for constructing it are explained: TQ and TH. The matrix TH can be constructed from the matrix TQ, or vice versa (i.e., Q Q Q Q Q l0 ¼ tQ 0 ; l1 ¼ t0 þ t3 ; l2 ¼ t2 , and l3 ¼ t1 t3 ). The two eigenvectors l1 H and l3 in T having the same eigenvalue r1 ¼ r3 can be related to the two Q Q eigenvectors tQ 1 and t3 in T by the 2 2 transformation matrix: 1 1 T¼ ð169Þ 1 1 which is the two-point DHT matrix. Pairs of basis vectors obtained from two conventions, corresponding to the degenerate eigenvalues, are related by T. B. Application to Other Fields In this section, we discuss advanced topics that are related to circulant systems and DHT masks. We introduce two examples—optical control system and information system. The primary goal of this section is to show that our previous interpretation can be applied to various fields. 1. Optical Control System Exploiting the circularity of the mirror and a suitable model of atmospheric distortion, the control system is divided into a number of smaller decoupled control problems (Miller and Grocott, 1999). The decoupled nature of the control problem permits significant computation reduction when implementing the control system for real-time applications. A system is represented by the cross-interactions between a particular system and the remaining subsystems, and the self-interactions. A system is circular if all of the subsystems are identical, having the same self-dynamics and crossinteractions. In such a system, the origin of the system is arbitrary, invariant to the circular ordering of the subsystems. This property inherent in a circular system allows a high degree of decoupling in the adaptive optics control design and implementation.
60
PARK AND CHA
In general, 2 y0 6 y1 6 6 y ¼ 6 y2 6 .. 4 .
adaptive optics systems can 3 2 a0 aN1 aN2 7 6 a1 a0 aN1 7 6 7 6 a2 a1 a0 7¼6 7 6 .. .. .. 5 4 . . .
yN1
aN1
aN2
aN3
be described by 32 x0 a1 6 a2 7 76 x1 6 a3 7 76 x2 .. 76 .. .. . 54 . . a0
the relationship 3 7 7 7 7 ¼ Ax; 7 5
ð170Þ
xN1
where x and y represent the input and output vectors, respectively; N signifies the size of the vectors; and A denotes the circulant matrix showing the circularity of the subsystems. The circulant matrix is diagonalized by the DFT matrix; thus, the analysis can be done in the frequency domain and used to reduce the computation time in real-time implementations. The objective of this section is to point out that the further real-valued symmetry assumption of A ¼ At makes it possible to analyze the control system in the real-valued DHT domain rather than in the complex-valued DFT domain (Park, 2000). Note that the orthonormal DFT basis vectors are used for diagonalization of the symmetric circulant matrix A. Or equivalently, for eYcient computation, the real-valued basis vectors for the DHT can be used, utilizing the symmetry of the real-valued matrix A. Note that the following mathematical formulation is valid for the realvalued symmetric circulant matrix A. The matrix A is constructed by columnwise stacking of N vectors ai ; 0 i N 1. In the N N circulant matrix A ¼ [aki], the (k, i)th element depends only on | k i |, where the term in brackets denotes the component of the matrix. To diagonalize the circulant matrix, the complex-valued eigenvectors wk ¼ ½expð j2pki=NÞ , 0 k; i N 1, are used, where the term in brackets denotes the component of the column vector. Their conjugate vectors wk ¼ ½expðj2pki=NÞ , 0 k; i N 1, are similarly defined. The complex-valued basis vectors wk for the DFT are the orthonormal eigenvectors. The real-valued vectors fi ¼ ½cosð2pki=NÞ ; 0 k; i N 1, are defined by taking the real parts of wk . For the symmetric matrix A ¼ At, the corresponding eigenvalues rk for wk ; 0 k N 1, are given by rk ¼ rNk ¼
N 1 X
ai expðj2pki=NÞ ¼
i¼0
N 1 X
ai cosð2pki=NÞ:
i¼0
The matrix A can be written as previously stated: A ¼ WDW1 : The DFT matrix W1 ¼ ½w0 w1 . . . wN1 ] (inverse DFT matrix W) is constructed by columnwise stacking of N eigenvectors wk ðwk Þ; 0 k N 1. It
61
CIRCULANT MATRIX REPRESENTATION OF MASKS
is noted that W1 ¼ WZ, where the superscript Z signifies the Hermitian, or complex conjugate, transpose. Similarly, we can construct the modified DCT matrix TC and the modified DST matrix TS by columnwise stacking of fk ¼ ½cosð2pki=NÞ and j k ¼ ½sinð2pki=NÞ ; 0 k; i N 1, respectively. We can write D ¼ W1 AW ¼ TC ATC þ TS ATS :
ð171Þ
Note that the real-valued matrix W given by Eq. (156): ( ½f0 f1 fN=2 j 1 j 2 j N=21 for even N W¼ ½f0 f1 fðN1Þ=2 j 1 j 2 j ðN1Þ=2 for odd N consisting of the N real-valued vectors, diagonalizes A (i.e., W1 AW is a diagonal matrix). For eYcient computation, real-valued eigenvectors diagonalizing A can be constructed. From Eq. (171), we can rewrite the diagonal matrix D as D ¼ TC ATC þ TS ATS ¼ ðTH Þ1 ATH H
H 1
C
ð172Þ
S
where T ¼ ðT Þ ¼ T þ T neglecting the normalization constant N. Note that the diagonal matrix D is real valued for the real-valued matrix A. The DHT matrix TH is formulated by columnwise stacking of the realvalued eigenvectors li. Using Eq. (172) and neglecting the normalization constant N, the symmetric circulant matrix A can be expressed as A ¼ ðTH Þ1 DTH :
ð173Þ
Thus y can be obtained by first computing the DHT of x, multiplying eigenvalues rk representing the decoupled scalar constants in the DHT domain, and finally performing the IDHT. For real-valued x, y, and A, the fast computations are possible in the real-valued DHT domain. Equivalently, the relationship between individual terms in x and y can be expressed as a circular convolution. Noting that a0 is even since A is a symmetric circulant matrix and using Eq. (173), the multiplication relationship in the DHT domain for each decoupled subsystem is explained. 2. Information System The circulant Gaussian channel was presented as an example to compute canonical correlations and directional cosines and to derive Shannon’s capacity theorem (Scharf and Mullis, 2000). In the circulant Gaussian channel, the DFT representations are employed using the property of the circulant matrix, in which the unitary DFT basis vectors are used for
62
PARK AND CHA
In the circulant Gaussian channel, the DFT representations are employed using the property of the circulant matrix that the unitary DFT basis vectors diagonalize the circulant covariance matrices. This section presents a simpler real-valued representation by the DHT, in place of the complex-valued representation by the DFT (Park, 2002b). In other words, the real-valued basis vectors of the DHT can be used, further utilizing the symmetry property of real-valued covariance matrices. This simple DHT representation can be applied to various problems, such as adaptive filtering and filter design, whenever the matrix is real, symmetric, and circulant. As an example in edge detection, utilizing the special structure of the 3×3 real-valued compass gradient edge masks, the relationship between the DFT interpretation and the DHT interpretation was investigated.

The N×N covariance matrix is real valued, circulant, and symmetric (i.e., K = Kᵗ). The matrix K = [k_0 k_1 ⋯ k_{N−1}] can be expressed in terms of the covariance element vectors k_i = [k(⟨k − i⟩_N)], 0 ≤ i, k ≤ N − 1, where the terms in brackets denote the elements of the column vectors k_i and ⟨·⟩_N signifies the modulo-N operation. To diagonalize any circulant matrix, the complex-valued eigenvectors w_k = [exp(j2πki/N)], 0 ≤ k, i ≤ N − 1, are required. Their conjugate vectors w_k* = [exp(−j2πki/N)] are similarly defined. The real-valued vectors, the sampled cosines φ_i = [cos(2πki/N)], 0 ≤ i, k ≤ N − 1, are defined by taking the real parts of w_k, with the corresponding eigenvalues ρ_k given by the DFT of the first column of the matrix K. The covariance matrix K can be written as

K = W D W⁻¹,    (174)
where D is the diagonal matrix with the eigenvalues ρ_k along the diagonal. The N eigenvectors w_k, 0 ≤ k ≤ N − 1, are the columns (basis vectors) of the unitary DFT matrix W⁻¹, that is, W⁻¹ = [w_0 w_1 ⋯ w_{N−1}], and its inverse W is given by W = (1/N)[w_0* w_1* ⋯ w_{N−1}*]. In the same way as in the previous section, φ_k = [cos(2πki/N)] and ψ_k = [sin(2πki/N)], 0 ≤ k, i ≤ N − 1, construct the N×N matrices. For the symmetric circulant matrix K, we can write

D = W⁻¹ K W.    (175)
According to the analysis in Section IV.A.3, we can rewrite Eq. (175) as

D = W⁻¹ K W = T^H K (T^H)⁻¹ = (1/N)(T^C K T^C + T^S K T^S),    (176)

where T^H = T^C + T^S denotes the real-valued DHT matrix, (T^H)⁻¹ = (1/N)T^H, and T^C K T^S = T^S K T^C = 0. The latter equalities can be verified using the facts that T^C T^S = T^S T^C = 0 and that the matrices D = W⁻¹ K W, T^C K T^S, and T^S K T^C are real valued.
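A short numerical check of Eq. (176), under the same assumptions (K real, symmetric, and circulant; T^C and T^S the modified DCT/DST matrices defined above), might look as follows; the matrix size and covariance values are arbitrary illustrations.

```python
import numpy as np

N = 8
n = np.arange(N)
TC = np.cos(2 * np.pi * np.outer(n, n) / N)   # modified DCT matrix
TS = np.sin(2 * np.pi * np.outer(n, n) / N)   # modified DST matrix

k = np.array([5.0, 2.0, 1.0, 0.5, 0.2, 0.5, 1.0, 2.0])  # even first column
K = np.array([[k[(m - i) % N] for i in range(N)] for m in range(N)])

# cross terms vanish, and D is diagonal with the (real) DFT of k on the diagonal
assert np.allclose(TC @ K @ TS, 0.0) and np.allclose(TS @ K @ TC, 0.0)
D = (TC @ K @ TC + TS @ K @ TS) / N
assert np.allclose(D, np.diag(np.fft.fft(k).real))
```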
The symmetric circulant Gaussian channel leads to simpler representations than the DFT ones for the computation of canonical correlations and the related quantities. Let the measurement y = x + n be the sum of the signal x and the channel noise n. Assume that the real-valued covariance matrices K_xx = [k′_0 k′_1 ⋯ k′_{N−1}] and K_nn = [k″_0 k″_1 ⋯ k″_{N−1}] are symmetric and circulant, and K_yy = K_xx + K_nn, where K_xx, K_nn, and K_yy are the covariance matrices of x, n, and y, respectively. Also assume that the cross-covariance matrix K_xy is equal to K_xx. These real-valued symmetric circulant matrices have rather simple DHT representations (Park, 2002b):

K_xx = (T^H)⁻¹ M_xx T^H = (1/N) T^H M_xx T^H = (1/N)(T^C M_xx T^C + T^S M_xx T^S)
K_nn = (T^H)⁻¹ M_nn T^H = (1/N) T^H M_nn T^H = (1/N)(T^C M_nn T^C + T^S M_nn T^S)
K_xy = (T^H)⁻¹ M_xx T^H = (1/N) T^H M_xx T^H = (1/N)(T^C M_xx T^C + T^S M_xx T^S)
K_yy = (T^H)⁻¹(M_xx + M_nn) T^H = (1/N) T^H (M_xx + M_nn) T^H
     = (1/N){T^C (M_xx + M_nn) T^C + T^S (M_xx + M_nn) T^S},    (177)

where M_xx (M_nn) represents the diagonal line spectrum matrix of x (n):

M_xx = diag{M_xx(k)} = diag{ Σ_{i=0}^{N−1} k′_i cos(2πki/N) },
M_nn = diag{M_nn(k)} = diag{ Σ_{i=0}^{N−1} k″_i cos(2πki/N) },   0 ≤ i, k ≤ N − 1.

The coherence matrix is also symmetric circulant, and the canonical correlation matrix consists of ratios that might loosely be called voltage ratios. With these DHT matrix representations, we follow the same derivation steps for various quantities of interest, such as the direction cosines/sines, the error covariance matrix, and the channel capacity.

C. Results and Discussions

This section shows experimental results for the various masks presented in this article: edge masks, feature masks, and orthogonal masks. We have applied the masks to a synthetic image to show the validity of the proposed interpretation. We have also applied the masks to a real image; the 256×256 Lena image quantized to 8 bits is used as the test image, as shown in Figure 29. The resulting images are obtained by using different
Figure 29. Original Lena image (256×256, 256 levels).
Figure 30. Resulting images by edge masks. (a) Prewitt masks, (b) Sobel masks, (c) Kirsch masks, (d) Frei–Chen masks.
threshold values, which are selected experimentally to obtain the optimal edge detection results. Figure 30 shows the resulting images for the edge masks: Prewitt, Sobel, Kirsch, and Frei–Chen. In the case of the Prewitt, Sobel, and Kirsch masks, all eight directional results are combined and displayed using the same threshold value of 128. Results of the Frei–Chen masks in the edge subspace are displayed with a threshold value of 0.03. Figure 31 displays the resulting images for the feature masks: roof and line masks. These images are obtained using the same threshold value of 20. They can be interpreted in the context of compass feature masks in the DFT domain. Figure 32 shows the resulting images for the orthogonal masks: DCT, DST, DCT/DST, and DHT masks. All results consist of the isotropic average gradient edge plus the ripple edge and are processed with a threshold value of 1.55. They are similar to each other; in particular, the results of the DHT masks are very similar to those of the Frei–Chen masks shown in Figure 30. A minimal code sketch of this mask-and-threshold procedure follows the figure captions below.
Figure 31. Resulting images by feature masks. (a) Roof masks, (b) line masks.
Figure 32. Resulting images by orthogonal masks. (a) DCT masks, (b) DST masks, (c) DCT/DST masks, (d) DHT masks.
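For readers who want to reproduce results like those in Figure 30, the sketch below applies the eight compass Prewitt masks to an 8-bit grayscale array and thresholds the maximum absolute response at 128, as in the text. The circular rotation order of the boundary weights is one common convention and is an assumption here, not taken from this article.

```python
import numpy as np

def compass_prewitt_edges(img, threshold=128):
    base = np.array([[ 1.0,  1.0,  1.0],
                     [ 0.0,  0.0,  0.0],
                     [-1.0, -1.0, -1.0]])
    # positions of the eight boundary weights, in circular order
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks, m = [], base.copy()
    for _ in range(8):
        masks.append(m)
        vals = [m[p] for p in ring]
        m = m.copy()
        for p, v in zip(ring, vals[-1:] + vals[:-1]):   # rotate weights by 45 degrees
            m[p] = v
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for m in masks:                                     # maximum compass response
        resp = sum(m[a, b] * img[a:H - 2 + a, b:W - 2 + b]
                   for a in range(3) for b in range(3))
        out = np.maximum(out, np.abs(resp))
    return (out > threshold).astype(np.uint8)

test = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
edges = compass_prewitt_edges(test)
```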
D. Summary

This section has shown that DCT, DST, and DHT masks can be used for edge and feature detection and that their relationships can be applied to various advanced applications. We have derived the relationship between the 3×3 DCT/DST masks and the Frei–Chen masks. In addition, we have shown that the real-valued DHT domain interpretation can be applied to various 3×3 compass feature masks. The relationship between the DHT masks and the KLT has been illustrated, and it has been successfully applied to other fields such as the optical control system and the information system.
V. Conclusions

The properties of special matrices such as circulant and circulant symmetric matrices are fundamental to understanding the overall context of this article. The input–output relationship represented by a matrix is related to
the convolution equation in LTI systems. In addition, the transform matrix between input and output has been expressed as a circulant or circulant symmetric matrix. It is important to note that these matrices are related to 3×3 edge and feature detection masks, such as compass gradient masks and orthogonal masks, which can be generalized to the N×N case.

We have focused on the diagonalization property and eigenvalue analysis of the N×N circulant matrix in the DFT domain. In addition, we have briefly introduced the N-point DCT, DST, and DHT masks, from which we have unified the mathematical background for the interpretation of edge and feature detection. The circulant matrix and its frequency-domain interpretation have been used to explain useful properties of 3×3 compass gradient edge masks such as the Sobel, Prewitt, and Kirsch masks. This interpretation has been extended to a simple eigenvalue computation method for the circulant matrices of the Sobel, Prewitt, and Kirsch masks. It is related to the interpretation of the Frei–Chen masks in terms of the 8D DFT basis vectors. We have shown that compass gradient edge masks have a similar structure in the 1D frequency domain. By unifying the eigenvalue analysis in the 1D frequency domain, the Frei–Chen edge masks and their irrational weights are well explained. In addition, the complex-valued compass gradient edge masks have been analyzed in the 1D frequency domain. The complex-valued compass Prewitt and Sobel masks have been represented in terms of the two types of complex-valued compass Frei–Chen edge masks, in both the spatial and frequency domains.

By the unified eigenvalue analysis of compass feature masks in the 1D frequency domain, the compass roof edge and Frei–Chen line masks have been investigated. The directional filter formulation has been applied to various 3×3 compass feature masks, such as edge, line, and roof masks, with different numbers of directions N. The interpretation of the 3×3 compass gradient masks and Frei–Chen masks in the DFT domain is the framework of this work. We have derived the relationships between the 3×3 DCT/DST masks and the Frei–Chen masks. In addition, the real-valued DHT domain interpretation has been applied to various 3×3 compass feature masks, such as edge, line, and roof masks, by using the symmetry property of the weight-vector components. The connection between the DHT masks and the KLT has been discussed, and it has been successfully applied to other advanced fields.

In summary, we have presented a circulant matrix interpretation of edge and feature detection in the frequency domain and its further applications. The key topics of this review article include the relationship between the circulant matrix and the convolution operation in the LTI system, the properties of the circulant matrix in the DFT domain, the relationship
between compass gradient masks and Frei–Chen masks, the relationship between the DCT, DST, and DHT masks and the compass gradient or Frei–Chen masks, and the relationship between the DHT masks and the KLT. Further research will focus on the development of adaptive signal processing algorithms and their application to various filtering problems.
References

Abdou, I. E., and Pratt, W. K. (1979). Quantitative design and evaluation of enhancement/thresholding edge detectors. Proc. IEEE 67, 753–763.
Bracewell, R. N. (1986). The Hartley Transform. New York: Oxford University Press.
Corecki, C., and Trolard, B. (1998). Optoelectronic implementation of adaptive image preprocessing using hybrid modulations of Epson liquid crystal television: Applications to smoothing and edge enhancement. Optical Engineering 37, 924–930.
Davis, P. J. (1994). Circulant Matrices. New York: Chelsea Publishing.
Deo, N., and Krishnamoorthy, M. S. (1989). Toeplitz networks and their properties. IEEE Trans. Circuits and Systems 36, 1089–1092.
Frei, W., and Chen, C. (1977). Fast boundary detection: A generalization and a new algorithm. IEEE Trans. Computers 26, 988–998.
Gonzalez, R. C., and Wintz, P. (1977). Digital Image Processing. Reading, MA: Addison-Wesley.
Gonzalez, R. C., and Woods, R. E. (1992). Digital Image Processing. Reading, MA: Addison-Wesley.
Gray, R. M. (2000). Toeplitz and circulant matrices: A review [online]. Available at: http://www-ee.stanford.edu/gray/toeplitz.pdf.
Jain, A. K. (1989). Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall.
Jain, A., Prabhakar, K. S., and Hong, L. (1999). A multi-channel approach to fingerprint classification. IEEE Trans. Pattern Analysis and Machine Intelligence 21, 348–395.
Lay, D. C. (2003). Linear Algebra and Its Applications, 3rd ed. New York: Addison-Wesley.
Lee, X., Zhang, Y.-Q., and Leon-Garcia, A. (1993). Information loss recovery for block-based image coding techniques: A fuzzy logic approach. IEEE Trans. Image Processing 4, 259–273.
Miller, D. W., and Grocott, S. C. O. (1999). Robust control of the multiple mirror telescope adaptive secondary mirror. Optical Engineering 38, 1276–1289.
Paplinski, A. P. (1998). Directional filtering in edge detection. IEEE Trans. Image Processing 7, 611–615.
Park, R.-H. (1990). A Fourier interpretation of the Frei-Chen masks. Pattern Recognition Letters 11, 631–636.
Park, R.-H. (1998a). 1-D frequency domain analysis of the Frei-Chen edge masks. Electronics Letters 34, 535–537.
Park, R.-H. (1998b). 1-D frequency domain interpretation of complex compass gradient edge masks. Electronics Letters 34, 2021–2022.
Park, R.-H. (1999a). 1-D frequency domain interpretation of compass roof edge and Frei-Chen line masks. Pattern Recognition Letters 20, 281–284.
Park, R.-H. (1999b). Interpretation of eight-point discrete cosine and sine transforms as 3×3 orthogonal edge masks in terms of the Frei-Chen masks. Pattern Recognition Letters 20, 807–811.
Park, R.-H. (2000). Comments on 'Robust control of the multiple mirror telescope adaptive secondary mirror.' Optical Engineering 39, 3321–3322.
Park, R.-H. (2002a). Comments on 'Optimal approximation of uniformly rotated images: Relationship between Karhunen-Loève expansion and discrete cosine transform.' IEEE Trans. Image Processing 11, 322–324.
Park, R.-H. (2002b). Comments on 'Canonical coordinates and the geometry of inference, rate, and capacity.' IEEE Trans. Signal Processing 50, 1248–1249.
Park, R.-H. (2002c). Complex-valued feature masks by directional filtering of 3×3 compass feature masks. Pattern Analysis and Applications 5, 363–368.
Park, R.-H., and Choi, W.-Y. (1989). A new interpretation of the compass gradient edge operators. Computer Vision, Graphics, Image Processing 47, 259–265.
Park, R.-H., and Choi, W.-Y. (1990). Comments on 'A three-module strategy for edge detection.' IEEE Trans. Pattern Analysis and Machine Intelligence 12, 223–224.
Park, R.-H., and Choi, W.-Y. (1992). Eigenvalue analysis of compass gradient edge operators in Fourier domain. J. Circuits Systems Computers 2, 67–74.
Park, R.-H., Yoon, K.-S., and Choi, W.-Y. (1998). Eight-point discrete Hartley transform as an edge operator and its interpretation in the frequency domain. Pattern Recognition Letters 19, 569–574.
Robinson, G. S. (1977). Edge detection by compass gradient masks. Computer Graphics and Image Processing 6, 492–501.
Rosenfeld, A., and Kak, A. C. (1982). Digital Picture Processing, 2nd ed., Vol. 2. New York: Academic Press.
Scharf, L. L., and Mullis, C. T. (2000). Canonical coordinates and the geometry of inference, rate, and capacity. IEEE Trans. Signal Processing 48, 824–831.
Uenohara, M., and Kanade, T. (1998). Optimal approximation of uniformly rotated images: Relationship between Karhunen-Loève expansion and discrete cosine transform. IEEE Trans. Image Processing 7, 116–119.
Yu, F. T. S. (1983). Optical Information Processing. New York: Addison-Wesley.
ADVANCES IN IMAGING AND ELECTRON PHYSICS, VOL. 134
Phase Problem and Reference-Beam Diffraction

QUN SHEN
Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439, USA
I. Introduction and Overview
    A. Phase Problem: Existing Methods
    B. Three-Beam Diffraction
    C. Reference-Beam Diffraction
    D. Outline of This Review
II. Phase-Sensitive Diffraction Theory
    A. N-Beam Dynamical Theory
    B. Second-Order Born Approximation
    C. Expanded Distorted-Wave Approximation
III. Geometry and Symmetry Considerations
    A. Geometric Three-Beam Condition
    B. Lorentz Factor and Angle Correction
    C. Polarization Effect
    D. Symmetry Considerations
        1. In-Out Symmetry
        2. Coupling Inversion Symmetry
        3. Triplet Inversion Symmetry
IV. Experiment Procedure and Examples
    A. Special Five-Circle Diffractometer
    B. Data Collection
        1. Orientation of Reference Reflection
        2. Reference-Beam Oscillation
    C. Data Processing
        1. Indexing, Integration, and Scaling
        2. Triplet-Phase Curve Fitting
        3. Rejection of Unreliable Phases
    D. Examples and Results
        1. Example on Proteins
        2. Precision of Measured Phases
        3. Enantiomorph Determination
        4. Example on a Quasicrystal
V. Retrieval of Individual Phases
    A. Direct-Methods Approach
    B. Recursive Reference-Beam Phasing
    C. Phase Extension and Iterative Algorithms
    D. Identification of Enantiomorph
VI. Discussion and Conclusions
References
I. Introduction and Overview

X-ray diffraction from crystalline materials is a widely used method in many fields of modern science, such as structural biology and materials science, for solving and analyzing crystal structures with atomic-scale resolution. A typical x-ray diffraction or crystallography experiment involves measuring a large number of diffraction peaks or Bragg reflections while rotating or oscillating a crystal specimen. The intensity recorded for each Bragg reflection depends on the magnitude of its structure factor F_H, which is defined as the Fourier transform coefficient of the charge density ρ(r) inside the unit cell of volume V_c:

ρ(r) = (1/V_c) Σ_H F_H e^{iH·r}.

In general, the structure factor F_H is a complex number with both a magnitude and a phase, and both are needed to determine the atomic positions in a crystal. Because the recorded intensity depends only on the magnitude, the phase of a structure factor is lost in a typical diffraction experiment. This is the fundamental phase problem in diffraction, and its general solution remains one of the most difficult parts of a structure determination, especially for biological macromolecular crystals (Vainshtein, 1981).

A. Phase Problem: Existing Methods

Ever since the discovery of x-ray diffraction from crystals, there have been constant efforts to find better ways to solve the phase problem. Over the years, many powerful practical methods have been developed. To date, all practical methods leading to solutions of the phase problem in crystallography can be grouped into three categories. In the first category are various mathematical techniques, such as the ab initio method of Patterson search (Patterson, 1934) and the direct methods (Hauptman, 1986; Hauptman and Karle, 1953). The direct methods rely largely on the overdetermination provided by intensity measurements of a great number of Bragg reflections and use a probability distribution of possible phases to solve a crystal structure. While very powerful for small-molecule structures, application of these mathematical or statistics-based methods to larger crystal structures remains difficult and is still an active area of research (Miller et al., 1993; Weeks et al., 1993).

In the second category of crystallographic phasing techniques are various chemical methods based on heavy-atom derivatives and replacements (Blundell and Johnson, 1976), using x-ray dispersion corrections in
the heavy-atom scattering factor in either single- or multiple-wavelength anomalous diffraction (Hendrickson, 1991). With these chemical methods, a crystal structure is solved with the additional phase information provided by the anomalous-scatterer substructure. In general, the chemistry-based techniques often require complex and time-consuming chemical or biochemical treatments to bond anomalous scatterers in proteins and other biological systems. Finally, for macromolecular structures involving a partially known subunit, the method of molecular replacement (Rossmann, 1972) has been widely used. This method is limited to a family of compounds with previously solved partial molecular structures.

B. Three-Beam Diffraction

In recent years, there have been considerable efforts to find a physical solution to the phase problem, namely, to obtain the phases of the Fourier components directly from diffraction experiments. One promising physical solution is multiple-beam or three-beam Bragg diffraction, which is based on the interference among simultaneously excited Bragg reflections. The idea was first proposed by Lipscomb (1949) half a century ago and was demonstrated by Colella (1974) in a computer simulation and by Post (1977) in an experiment on perfect crystals. Early works also included those by Hart and Lang (1961) on three-beam Pendellösung fringes and by Ewald and Heno (1968) on theoretical aspects of three-beam dispersion surfaces. The method was further developed in the 1980s by several groups in both theory (Chang, 1984; Hummer and Billy, 1986; Juretschke, 1982; Shen, 1986; Thorkildsen, 1987) and experiment (Chang, 1982; Chapman et al., 1981; Hummer et al., 1990; Schmidt and Colella, 1985; Shen and Colella, 1987; Shen and Finkelstein, 1990; Tischler and Batterman, 1986; Tischler et al., 1985) to show that the technique can be applied not only to perfect crystals but also to real, mosaic crystals. Over the past decade, the three-beam interference effects have been shown to be visible even for complex crystals such as quasicrystals (Lee and Colella, 1993) and proteins (Chang and Tang, 1988; Chang et al., 1991; Colella, 1995; Hummer et al., 1991; Mathiesen et al., 1998; Mo et al., 2002; Weckert and Hummer, 1997; Weckert et al., 1993).

The conventional method of performing a three-beam experiment involves exciting one Bragg reflection, H, and then rotating the crystal around the scattering vector H to bring another reflection, G, into its diffraction condition. Reflection H is called the main reflection and G the secondary or detour reflection (Umweganregung), as termed in the original work of
Renninger (1937). A third reflection, H−G, called the coupling reflection, is involved to bring the G-diffracted beam back along the H-diffracted direction. This method allows the measurement of three-beam interference profiles only one reflection at a time, which is very inefficient and time consuming (Weckert and Hummer, 1997), making it almost impossible to measure the large number of phases required to solve a complex macromolecular crystal structure.

To overcome the difficulties in multiple-beam Bragg diffraction experiments, a phase-sensitive reference-beam diffraction (RBD) technique has been developed (Pringle and Shen, 2003; Shen, 1998, 1999a; Shen and Wang, 2003; Shen et al., 2000a,b, 2002). This new method incorporates the principle of multiple-beam diffraction into the most common crystallographic data collection technique, the oscillating crystal method, and allows parallel collection of many three-beam interference profiles simultaneously. It therefore provides a way to measure both the magnitudes and the phases of a large number of Bragg reflections in a time period similar to that of existing crystallographic techniques such as multiple-wavelength anomalous diffraction (Hendrickson, 1991).

C. Reference-Beam Diffraction

As illustrated in Figure 1, the RBD technique is a simple conceptual modification (Shen, 1998, 1999a; Shen et al., 2000a,b) of the conventional oscillation camera setup in the direct-beam geometry. Instead of being perpendicular to the incident x-ray beam, the oscillation axis in RBD geometry is tilted by the Bragg angle θ_G of a strong reference reflection, G, which is aligned to coincide with the oscillation axis ψ. In this way, reflection G can be kept fully excited throughout the crystal oscillation, and the intensities of all Bragg reflections recorded on an area detector during such an oscillation
Figure 1. Schematic illustrations of (a) the oscillation method for conventional data collection in crystallography, and (b) the reference-beam method for collecting phase-sensitive diffraction data. CCD, charge-coupled device.
can be influenced by the interference with the G-reflected reference wave and thus are sensitive to the relative phases of the reflections involved. A complete reference-beam interference profile is measured by taking multiple exposures while stepping the angle θ through the G-reflection rocking curve. In this procedure, the reference reflection G serves as a single detour or secondary reflection that is common to all primary reflections recorded on the area detector.

An easy way to understand the phase sensitivity is to realize that the G-reflected wave k_G is coherently split from the incident wave k_0 and can be viewed as a new incident wave (Shen, 1998). This new incident wave can produce its own diffracted beams during an oscillation. For each Bragg reflection H excited by the original incident beam k_0, there exists a reflection H−G excited by k_G, whose wavevector k_{H−G} is parallel to the original k_H of the H reflection. The two sets of diffraction patterns, one excited by k_0 and the other by the reference beam k_G, coincide in space and interfere with each other, producing a phase-sensitive image on the area detector.

As in any interference phenomenon, the interference intensity in RBD depends on the relative phase difference δ between the k_0-excited wave k_H and the k_G-excited wave k_{H−G}. According to kinematic diffraction theory, any Bragg-reflected wave has a phase shift of α_H, which is the phase of the structure factor F_H for reflection H. Therefore, for each reflection H, the k_0-excited wave k_H has a phase shift of α_H, and the k_G-excited wave k_{H−G} has a phase shift of α_{H−G} plus the α_G that already exists in the G wave. Consequently, the interference in the RBD process is sensitive to the following phase difference (Shen, 1998):

δ = α_{H−G} + α_G − α_H.    (1)

This phase difference is independent of the choice of origin in a unit cell and is often called the invariant triplet phase in crystallography (Vainshtein, 1981).
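The origin independence of Eq. (1) is easy to verify numerically. The toy calculation below evaluates δ from kinematic structure factors of an invented two-atom cell (the atoms, scattering factors, and indices are illustrative assumptions only): shifting the origin changes each individual α but leaves δ unchanged.

```python
import numpy as np

def F(hkl, atoms):
    # kinematic structure factor: F_H = sum_j f_j exp(2*pi*i H . x_j)
    return sum(f * np.exp(2j * np.pi * np.dot(hkl, x)) for f, x in atoms)

def triplet_phase(H, G, atoms):
    # delta = alpha_{H-G} + alpha_G - alpha_H, Eq. (1)
    HmG = np.subtract(H, G)
    d = np.angle(F(HmG, atoms)) + np.angle(F(G, atoms)) - np.angle(F(H, atoms))
    return np.angle(np.exp(1j * d))          # wrap into (-pi, pi]

atoms = [(6.0, np.array([0.10, 0.20, 0.30])),   # fractional coordinates
         (8.0, np.array([0.40, 0.10, 0.70]))]
H, G = (3, 1, 1), (1, 1, 1)
shift = np.array([0.25, 0.00, 0.50])             # arbitrary origin shift
moved = [(f, x + shift) for f, x in atoms]
d1, d2 = triplet_phase(H, G, atoms), triplet_phase(H, G, moved)
assert np.isclose(np.exp(1j * d1), np.exp(1j * d2))   # delta is origin-invariant
```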
D. Outline of This Review

This article provides a comprehensive review of the RBD technique, covering both theory and experimental procedures. Section II outlines the theoretical considerations that are necessary to describe an RBD process, to quantitatively fit the RBD intensity profiles, and to retrieve the phase information. In particular, we compare two approximate approaches, a second-order Born approximation (Chang and Tang, 1988; Shen, 1986) and an expanded distorted-wave approximation (Shen, 1999b, 2000; Shen and Huang, 2001), with the results of an exact N-beam dynamical theory (Colella, 1974). These approximate theories provide the basis for simple analytical expressions that
can be used in an automated RBD data analysis procedure for a large number of Bragg reflections. Section III focuses on the typical experimental setup of an RBD experiment, including geometric considerations such as the Lorentz and polarization factors that may affect the interference intensities, as well as symmetry properties of RBD configurations. A specially designed five-circle diffractometer is also described in this section. In Section IV, we present several examples that demonstrate the details of the RBD data collection technique and the data reduction and analysis methods. The procedures have been established in such a way that existing crystallographic software packages can be applied whenever feasible, to make the RBD technique as automated as possible. Finally, in Section V, we present strategies for using the experimentally measured phases that may lead to a structural solution and discuss some of the current problems in the RBD experiment and their potential solutions in the near future.
II. Phase-Sensitive Diffraction Theory

One of the main advantages of x-ray diffraction analysis of materials structures is that the diffracted intensities measured in experiments can be easily interpreted by a simple kinematic theory (Warren, 1969), which is based on the first-order Born approximation in which single scattering events are the predominant mechanism. The kinematic theory, however, is intrinsically limited by the phase problem of diffraction and is generally insensitive to the phases of the scattering amplitudes. Until recently, the only existing phase-sensitive diffraction theory (Figure 2) was the so-called dynamical theory (Batterman and Cole, 1964; Colella, 1974), which includes all possible interactions among multiply excited Bragg reflection waves inside a crystal. This theory is briefly reviewed below. Unfortunately, the dynamical theory in the case of multiple Bragg waves is rather complicated in its mathematical formulation. Thus, two recently developed approximate diffraction theories (Shen, 1986, 1999b, 2000) are described next, giving rise to phase-sensitive analytical expressions that describe interference profiles in experiments. Some of these approximations yield results that agree very well with the full dynamical theory.

A. N-Beam Dynamical Theory

Since RBD is intrinsically a three-beam diffraction process, it can be fully described by the N-beam dynamical theory developed by Colella (1974), with a slight modification. Instead of calculating the diffracted intensity for
Figure 2. Categories of phase-sensitive x-ray diffraction theories.
the "aligned" reflection G, the intensity computation is performed for the reflection H that sweeps through the Ewald sphere. Figure 3(b) shows an example of such calculations for GaAs with G = (004) and H = (317), both in thick-crystal Bragg geometry with G along the surface normal. The intensities are plotted on a logarithmic scale as a function of the oscillation angle Δψ_H = ψ − ψ_H and the rocking angle Δθ_G = θ − θ_G of the G reflection. This intensity contour map illustrates the general behavior of three-beam diffraction in reciprocal space in reference-beam geometry. As the primary reflection H sweeps through its diffraction condition by crystal rotation around ψ_H, the diffraction intensity I_H recorded on the area detector is the intensity integrated over ψ. This integrated intensity, normalized to its two-beam value when G is not excited, is shown in Figure 3(a). It exemplifies the typical three-beam interference profile measured in an RBD experiment. The shape of the interference profile is sensitive to the phases of the reflections involved, as already illustrated in Colella's original article (1974). Thus, the N-beam theory can be used to analyze RBD diffraction data and extract the phase information.
Figure 3. N-beam dynamical theory calculations of RBD intensities for GaAs (317) with G = (004) as the reference reflection in symmetric Bragg geometry. (a) Integrated intensity I_H versus rocking angle Δθ_G. (b) Intensity contour map as a function of oscillation angle ψ and rocking angle θ.
There are, however, two drawbacks to the rigorous N-beam dynamical theory. First, it is not apparent what exact phase dependence exists in the interference effect. Second, the considerable computational procedures make it difficult to adapt the theory to any automated data analysis, especially for the large number of Bragg reflections that can be recorded on an area detector in an RBD experiment. We therefore seek alternative approaches that may not be as exact as the N-beam theory but nonetheless provide analytical expressions with an apparent phase dependence.

B. Second-Order Born Approximation

According to the standard scattering theories in quantum mechanics and electrodynamics (see, e.g., Jackson, 1974), a scattered x-ray wave field D(r) from a crystal can be represented by a Born approximation series, with its 0th-order solution D^(0) being the incident wave, the first-order solution D^(1)
being a singly scattered wave in the usual kinematic or two-beam approximation, the second-order solution D^(2) representing a doubly scattered wave with three-beam interactions, and so on (Shen, 1986):

D(r) = D^(0) + D^(1) + D^(2) + ⋯.    (2)
It follows that the conventional oscillating-crystal geometry in Figure 1(a) is based on observing the two-beam reflections D^(1) with the rotation axis lined up along the incident beam D^(0), while the reference-beam geometry in Figure 1(b) is automatically set up to observe the three-beam interactions D^(2) with the rotation axis chosen to diffract a reference beam D^(1).

Using the Born series Eq. (2), it can be shown (Shen, 1998) that an RBD process can be described within the second-order Born approximation, and that the RBD interference effect is sensitive to the triplet phase δ = α_{H−G} + α_G − α_H, with the α's being the phases of the corresponding structure factors. In addition to δ, the phase of the G-reflected wave can be tuned by rocking the tilt angle θ through the G-reflection rocking curve, much as in an x-ray standing-wave experiment (Batterman, 1964). Observation of the intensity I_H as a function of θ for each of the Bragg reflections recorded on the area detector yields a complete interference profile I_H(θ), which is given by a normalized intensity (Shen, 1998, 1999a):

I_H(θ) = 1 + 2 |F_{H−G}/F_H| √R_G(θ) cos[δ + ν_G(θ)],    (3)

where R_G(θ) is the reflectivity and ν_G(θ) is the dynamical phase shift of the reference reflection G, and I_H(θ) has been normalized to the two-beam intensity. The second-order Born approximation is known to agree very well with the N-beam results in the wings of a three-beam interference profile (Shen, 1986), but it is expected to break down near the center of the rocking curve, where G is fully excited; a Takagi–Taupin approach may be used in this case (Thorkildsen and Larsen, 2002). It is worth noting that once the reference reflection G is chosen in an experiment, both R_G(θ) and ν_G(θ) in Eq. (3) are known and common to all reflections recorded on the area detector, and therefore any difference in I_H(θ) between two recorded reflections is due entirely to the difference in their triplet phases δ. In practice, the dynamical phase shift ν_G(θ) can be approximated by a hyperbolic tangent function (Shen, 1999a),

ν_G(θ) = (π/2){1 − tanh[(θ − θ_G)/Δ]},

which closely resembles the true phase function in dynamical diffraction theory, convoluted with an experimental resolution and/or mosaic-spread function of half-width Δ and centered at θ_G. The reflectivity curve R_G(θ) can be approximated by a Lorentzian with the same center and half-width Δ as in ν_G(θ).
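This working model, Eq. (3) with the tanh phase and a Lorentzian reflectivity, can be sketched in a few lines. The peak reflectivity is normalized to unity here, and all numerical values are illustrative assumptions.

```python
import numpy as np

def rbd_born_profile(theta, theta_G, width, ratio, delta):
    """Eq. (3): normalized I_H(theta); ratio = |F_{H-G}/F_H|, delta in radians."""
    nu = 0.5 * np.pi * (1.0 - np.tanh((theta - theta_G) / width))
    R = 1.0 / (1.0 + ((theta - theta_G) / width) ** 2)   # Lorentzian, peak = 1
    return 1.0 + 2.0 * ratio * np.sqrt(R) * np.cos(delta + nu)

theta = np.linspace(-0.02, 0.02, 401)                    # rocking angle (degrees)
profiles = {d: rbd_born_profile(theta, 0.0, 0.004, 0.3, np.radians(d))
            for d in (0, 90, 180, -90)}                  # four triplet phases
```

The asymmetry of each computed profile reverses with the sign of cos δ, which is exactly the phase sensitivity exploited in the experiment.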
C. Expanded Distorted-Wave Approximation

The result based on the second-order Born approximation [Eq. (3)] includes only the interference term in the diffracted intensity; a phase-insensitive magnitude-squared term has been omitted since it is of higher order. For a more rigorous description of RBD, an expanded distorted-wave approximation (EDWA) has been developed (Shen, 1999b, 2000; Shen and Huang, 2001). This approach not only provides the best physical interpretation of the RBD process but also yields a quantitatively accurate analytical description of the RBD intensities, almost identical to the full N-beam dynamical theory (Colella, 1974).

The EDWA approach closely follows the algorithm of the conventional distorted-wave Born approximation for x-ray surface scattering studies (Sinha et al., 1988; Vineyard, 1982), with an important revision: a sinusoidal Fourier component G is added to the distorting component of the dielectric susceptibility (Figure 4). This sinusoidal component represents the perturbing G-reflection charge density, and the resulting distorted wave is in fact composed of two waves, the O and G waves. Instead of the Fresnel theory used for surface studies, a two-beam dynamical theory (e.g., Batterman and Cole, 1964) is used to evaluate these distorted waves, and the subsequent scattering of these waves is again handled by the Born approximation.
Figure 4. Schematic illustrations of distorting susceptibility or density component for (a) distorted-wave approximation for surface scattering, and (b) expanded distorted-wave approximation for RBD.
The final result is that the diffracted wave field for the H reflection recorded in an RBD oscillation image is given by (Shen, 1999b, 2000; Shen and Huang, 2001)

D_H = D_H^(1) [r_0 + |F_{H−G}/F_H| r_G e^{iδ}],    (4)

where D_H^(1) is the conventional first-order Born wave field. For diffraction from macromolecular crystals, which are essentially Laue transmission cases, the distorted wave fields r_0 and r_G in Eq. (4) are given by

r_0 = cos(Aη_G) + i sin(Aη_G),
r_G = i sin(Aη_G)/η_G,

where η_G is a normalized angle parameter and A is the Pendellösung length, as defined in the conventional two-beam dynamical theory (Batterman and Cole, 1964). Based on the above equations, the diffracted intensity I_H for a given primary reflection H recorded on a reference-beam oscillation image, normalized to its two-beam intensity, is given by the following analytical expression (Shen, 2000; Shen and Huang, 2001):

I_H(θ) = 1 − p [sin(2Δθ)/(2Δθ)] sin δ + p (sin²Δθ/Δθ) cos δ + (p²/4)(sin Δθ/Δθ)²,    (5)

where Δθ = π(θ − θ_G)t/d_G, t is the average crystal domain size, d_G is the d-spacing of the G reflection, p is the interference amplitude, and δ is the triplet phase. For a symmetric G reflection, p is given by

p = [r_e λ t / (V_c cos θ_G)] |F_G F_{H−G} / F_H|,    (6)

where r_e = 2.818 × 10⁻⁵ Å is the classical radius of the electron, V_c is the unit-cell volume, and F_H is the structure-factor amplitude for reflection H. It has been shown that the distorted-wave theory of Eq. (5) is much improved over the second-order Born approximation (Shen, 1986), and it is in almost perfect agreement with the full N-beam dynamical theory (Colella, 1974) for macromolecular crystals, because the typical crystal size of t < 0.5 mm is much smaller than the Pendellösung period, or extinction length, for these light-element crystals (Shen and Huang, 2001).

It should be pointed out that Eq. (5) includes both the phase-sensitive and the phase-insensitive terms and is valid for all measured Bragg reflections and for the entire excitation range of the reference reflection G in a reference-beam diffraction experiment.
Figure 5. Calculated reference-beam interference profiles with different phases using Eq. (5) for the (3 1 1)/(1 1 1) reflection of tetragonal lysozyme. Parameters used in the calculation: average crystal domain size t = 10 μm, wavelength λ = 1.033 Å, structure factors F(3 1 1) = 1592, F(1 1 1) = 4997, F(2 2 2) = 2069.
An example of the EDWA calculation is shown in Figure 5 for the tetragonal lysozyme (3 1 1)/(1 1 1) reflection with triplet phase δ = 135° (solid curve). Lysozyme is a small protein with space group P4₃2₁2 and unit cell a = b = 78.54 Å, c = 37.77 Å. The phase sensitivity is illustrated by several calculated interference profiles with other triplet-phase values. The calculations are performed at λ = 1.033 Å. It has been shown (Shen, 1999b; Shen and Huang, 2001) that the intensities given by Eq. (5) agree very well with the N-beam theory for both weak and strong primary reflections. This theory may therefore be used in RBD data analyses to improve the curve-fitting results that yield the triplet-phase values.
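A Figure 5-style profile can be approximated directly from Eqs. (5) and (6) with the parameters quoted in the caption. The sketch below does this; the tetragonal d-spacing formula and the sign conventions follow the reconstruction of Eq. (5) given above and should be read as illustrative, not as the published code.

```python
import numpy as np

r_e = 2.818e-5                 # classical electron radius (Angstrom)
lam, t = 1.033, 1.0e5          # wavelength; 10-micron domain size (Angstrom)
a, c = 78.54, 37.77            # tetragonal lysozyme cell (Angstrom)
V_c = a * a * c
F_H, F_G, F_HG = 1592.0, 4997.0, 2069.0        # |F(311)|, |F(111)|, |F(222)|

d_G = 1.0 / np.sqrt(2.0 / a**2 + 1.0 / c**2)   # d-spacing of G = (1 1 1)
theta_G = np.arcsin(lam / (2.0 * d_G))
p = r_e * lam * t / (V_c * np.cos(theta_G)) * (F_G * F_HG / F_H)   # Eq. (6)

delta = np.radians(135.0)                       # triplet phase of the solid curve
dtheta = np.radians(np.linspace(-0.02, 0.02, 801))
x = np.pi * dtheta * t / d_G                    # Delta-theta of Eq. (5)
s1, s2 = np.sinc(x / np.pi), np.sinc(2.0 * x / np.pi)   # sin x/x, sin 2x/(2x)
I_H = (1.0 - p * s2 * np.sin(delta)
           + p * x * s1**2 * np.cos(delta)      # sin^2(x)/x = x * (sin x/x)^2
           + 0.25 * p**2 * s1**2)
```

With these caption values, p evaluates to roughly 0.08, that is, a several-percent interference modulation, consistent with the profiles plotted in Figure 5.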
III. Geometry and Symmetry Considerations

This section considers a few geometric factors in the RBD setup and symmetry-related observations made during RBD data collection.

A. Geometric Three-Beam Condition

The three-beam diffraction process that governs an RBD interference involves the reference reflection G, a primary reflection H, and a coupling reflection H−G.
The geometrical condition that the H reflection must satisfy in the reference-beam diffraction process is exactly the same as the one found in the literature for conventional three-beam diffraction (Caticha-Ellis, 1975; Cole et al., 1962). If θ_H is the Bragg angle for H and θ_G that for G, then the scattering plane defined by k_0 and G must form a specific rotation (oscillation) angle ψ with respect to the plane formed by H and G in order for all three reciprocal nodes, O, G, and H, to lie on the sphere of reflection simultaneously. It can be shown that this rotation angle is given by (Shen, 2003)

cos ψ_H = (sin θ_H − cos β sin θ_G) / (sin β cos θ_G),    (7)

where β is the angle between the H and G reciprocal vectors and ψ is defined as the angle between the scattering plane formed by G and the incident wavevector k and the plane formed by G and H. Eq. (7) is essentially the same as that for conventional three-beam diffraction. When θ_G equals zero, Eq. (7) reduces to the condition for the conventional oscillation method. Using the Bragg conditions for H and G, it is straightforward to solve for the wavelength λ as a function of the rotation angle ψ_H:

λ = 2 d_G sin β |cos ψ_H| / √[(d_G/d_H − cos β)² + sin²β cos²ψ_H].    (8)
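Eqs. (7) and (8) translate directly into code. The following sketch (angles in radians; the function and variable names are our own) is a plain transcription of the two formulas.

```python
import numpy as np

def rbd_rotation_angle(theta_H, theta_G, beta):
    # Eq. (7): rotation angle psi_H at which H meets its Bragg condition
    c = ((np.sin(theta_H) - np.cos(beta) * np.sin(theta_G))
         / (np.sin(beta) * np.cos(theta_G)))
    return np.arccos(np.clip(c, -1.0, 1.0))

def rbd_wavelength(psi, d_H, d_G, beta):
    # Eq. (8): wavelength at which H is excited at rotation angle psi
    num = 2.0 * d_G * np.sin(beta) * np.abs(np.cos(psi))
    den = np.sqrt((d_G / d_H - np.cos(beta)) ** 2
                  + np.sin(beta) ** 2 * np.cos(psi) ** 2)
    return num / den
```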
Figure 6 shows an example of the λ-versus-ψ dependence over a 360° rotation for a given primary reflection H = (5 1 5) and reference reflection G = (3 1 1) for a tetragonal lysozyme crystal (space group P4₃2₁2, unit cell a = b = 78.5 Å, c = 37.8 Å).
Figure 6. Illustration of the three-beam condition in reference-beam geometry, for lysozyme, H = (5 1 5), G = (3 1 1). Four symmetry-related occurrences are indicated by the solid circles.
˚ , c ¼ 37.8 A ˚ ). Obviously, every primary reflection H has a diVerent ¼ 78.5 A but similar curve, with its own rotation angle zero setting c ¼ 0 defined when reciprocal vector H is within the G scattering plane. B. Lorentz Factor and Angle Correction In an RBD experiment, the intensity of any Bragg reflection H is integrated by sweeping the Ewald sphere through the corresponding reciprocal node H at a velocity determined by the eVective Lorentz factor, which can be obtained by diVerentiating Eq. (7) with respect to yH: DcH ¼
cosyH DyH ; sinb sinc cosyG
ð9Þ
where Δθ_H is an angular width due to mosaicity and crystal domain size. This expression is essentially the same as that given by, for example, Zachariasen (1945), except that Bragg's law for H has been used to convert the dimension in reciprocal space to the corresponding change in θ_H. Eq. (9) provides a way to determine whether a Bragg reflection H recorded in an RBD measurement is only partially recorded within a given oscillation range (a so-called partial), owing to a very wide Δψ_H, for example.

In addition to the Lorentz factor Eq. (9), differentiation of Eq. (7) with respect to θ_G provides a reference-beam correction angle Δψ_G. This is the correction to the rotation angle ψ, as defined in Eq. (7), at which H is excited when the reference reflection G is detuned from its Bragg condition by a deviation angle Δθ_G:

Δψ_G = (cos β − sin θ_H sin θ_G) Δθ_G / (sin β sin ψ_H cos²θ_G).    (10)
ð10Þ
Eq. (10) can be used to explain the tilted trajectory in Figure 3(b) and is useful in estimating the peak position shifts when multiple frames of oscillation data are collected at slightly diVerent yG positions using, for example, a charge-coupled device (CCD) x-ray camera. Although in the majority of cases DcG is tiny and negligible, special care must be taken for those reflections to be close to coplanar (cH 0) and close to the G reciprocal vector (b 0). C. Polarization EVect In general, the incident polarization for H can be diVerent from that for the reference G reflection, unless H is coplanar to G. We now define s as the unit vector perpendicular to the diVraction plane defined by incident beam k and
Assuming that the incident beam consists of a perpendicular (σ) polarization for G, D_0 = D_0 σ, it can be shown that both σ_H and π_H components exist in the incident beam for the H reflection:

D_0 = D_0 (σ_H + p_H π_H) / √(1 + p_H²),

where

p_H = sin ψ_H / (cot β cos θ_G − cos ψ_H sin θ_G).    (11)
Eq. (11) shows that the only situation in which p_H = 0 is the coplanar case ψ_H = 0. It should be noted that the polarization mixing that is present in conventional multiple-beam diffraction (MBD) (Juretschke, 1984; Shen, 1993; Shen and Finkelstein, 1990, 1992) through a double-scattering process occurs only at higher-order scattering processes in the case of RBD if the incident beam is purely σ-polarized for the reference reflection G. The summary chart below indicates whether polarization mixing exists at each level of scattering process for the conventional MBD and the RBD methods.
MBD
RBD
Single (O!H) Double (O!G!H) Triple (O!H!G! H)
No Yes Yes
No No Yes
Since triple scattering is intrinsically weaker than double scattering, a practical implication of the lack of polarization mixing in double scattering in RBD is that it essentially eliminates the effect of asymmetry reversal in the interference profiles that can exist in conventional MBD measurements (Juretschke, 1984). This, in principle, increases the reliability of RBD phase measurements.

D. Symmetry Considerations

Eqs. (7) and (8) define the symmetry properties in RBD geometry for any given primary reflection H, as indicated in Figure 6. These symmetry elements are in addition to the Laue symmetry operations given by the space group of the crystal structure. We now discuss these additional elements in more detail.
1. In-Out Symmetry

As shown in Figure 6, at a given wavelength λ = λ₀ there are four occurrences of a primary reflection H, corresponding to the four interception points of λ = λ₀ with the λ(ψ) curve. Two such cases, related by ±ψ, are due to the fact that cos ψ is an even function; the solutions ±ψ correspond to the same reciprocal node H passing through the Ewald sphere from outside to inside and from inside to outside, respectively. These in-out conditions have been considered in the context of the conventional ψ-scan three-beam method (Chang, 1982), where a sign change takes place in the three-beam interference profiles between the "in" and the "out" cases. In RBD geometry, however, since H is the primary reflection and the reference reflection G is actually the detour excitation, the interference profiles measured in these in-out cases for H/G/H−G are completely identical as long as the same rocking-angle θ_G scan direction is used, which is usually the case. Thus, measurements at ±ψ provide a redundancy of two for any given H/G/H−G triplet with triplet phase δ_H = α_G + α_{H−G} − α_H, where the α's are the individual structure-factor phases.

2. Coupling Inversion Symmetry

Figure 6 illustrates that the λ(ψ) curve for each H reflection consists of two
branches: one centered at ψ = 0° and the other at ψ = 180°, as derived from Eq. (8). It can be shown that the branch at ψ = 180° is in fact due to the Friedel mate G−H of the coupling reflection H−G, which we will term the inverse coupling reflection. The best way to rigorously derive this additional symmetry is to use the vector form of the Bragg condition for the primary reflection H,

|H + k| = k,

where k is the incident wavevector (k = |k|), with its component k_∥ parallel to G given by Bragg's law for the reference reflection G:

k_∥ = −G/2.

With the above two equations, it is easy to obtain the following diffraction condition for H with the G reflection excited:

2 H_⊥·k_⊥ = H_∥ G − H²,    (12)

where G = |G|, H = |H|, and ⊥ and ∥ denote the vector components perpendicular and parallel to G, respectively. Now, considering the reflection M = G − H, it is obvious that

M_⊥ = −H_⊥,   M_∥ = G − H_∥,
and therefore

M_⊥·k_⊥ = H_⊥·(−k_⊥),    (13)
which means that the reciprocal node G−H always passes through the Ewald sphere exactly at Δψ = 180° from where H passes through.

We should note that when G = 0, we recover the situation in the standard rotating-crystal method, where the measurement at Δψ = 180° corresponds to that for the Friedel mate −H. In our present reference-beam situation, if the triplet interference profile for H/G/H−G is recorded at ψ_H, then the interference profile for its coupling Friedel mate, G−H/G/−H, will always appear at ψ_{G−H} = ψ_H + 180°. The triplet phase δ_{G−H} in this case is equal to δ_{G−H} = α_G + α_{−H} − α_{G−H}, which is identical to δ_H = α_G − α_H + α_{H−G} if anomalous dispersion is negligible. However, since the structure-factor magnitude |F_{G−H}| is not necessarily equal to |F_H|, the interference profile for G−H/G/−H can be very different from that for H/G/H−G. We will illustrate this point in Section IV through an example.

3. Triplet Inversion Symmetry

Triplet inversion symmetry refers to the inversion of every reciprocal vector in a three-beam combination H/G/H−G to its Friedel mate, −H/−G/G−H. In this case, the triplet phase measured in the interference profile is δ_{−H} = α_{−G} + α_{G−H} − α_{−H} = −δ_H, assuming again that anomalous dispersion is negligible in the structure factors. The same symmetry transforms the inverse-coupling case G−H/G/−H into H−G/−G/H, which has the same triplet phase −δ_H. The inverse-triplet measurements are useful for two reasons. First, they allow an unambiguous determination of the enantiomorph for noncentrosymmetric structures (Chang et al., 1999a; Shen et al., 2000a; Weckert and Hummer, 1997). Second, by comparing a pair of three-beam interference profiles related by inversion symmetry, a more accurate triplet-phase value can be obtained (Figure 7), because a phase-independent intensity contribution to the profiles of a mosaic crystal can be separated out by the measurement (Chang et al., 1999a; Shen and Wang, 2003; Weckert and Hummer, 1997; Weckert et al., 1993).
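The coupling-inversion identity behind Eqs. (12) and (13) can be verified numerically for arbitrary vectors: the residual of the three-beam condition for H at k_⊥ equals that for M = G − H at −k_⊥. The following sketch, in reciprocal-lattice units with arbitrary test vectors, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
G = np.array([0.0, 0.0, 2.0])
g = G / np.linalg.norm(G)
H = rng.normal(size=3)
M = G - H

def residual(V, k_perp):
    # residual of Eq. (12): 2 V_perp . k_perp - (V_par * G - V . V)
    V_par = np.dot(V, g)
    V_perp = V - V_par * g
    return 2.0 * np.dot(V_perp, k_perp) - (V_par * np.linalg.norm(G) - np.dot(V, V))

k_perp = rng.normal(size=3)
k_perp -= np.dot(k_perp, g) * g          # keep k_perp perpendicular to G
assert np.isclose(residual(H, k_perp), residual(M, -k_perp))   # Eq. (13)
```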
IV. Experiment Procedure and Examples

This section describes in some detail the experimental setup and procedures of an RBD experiment (Pringle and Shen, 2003; Shen and Wang, 2003; Shen et al., 2000b, 2002). The procedure has been developed for optimal
Figure 7. Example of triplet-inversion symmetry-related RBD profiles for lysozyme, H = (3 1 1), G = (1 1 1). Filled circles are experimental data, and the solid curve is a fit to the data using the EDWA theory [see Eq. (5)].
measurements of triplet phases on protein crystals, but in principle it can also be applied to small-molecule structures. Several examples are given in the last subsection.

A. Special Five-Circle Diffractometer

To bring an arbitrary Bragg reflection G into its diffraction condition in the vertical plane, a standard approach is to use three rotations (φ, χ, ω), as in an Eulerian four-circle geometry (Busing and Levy, 1967). In this approach it is possible to realize a rotation around the scattering vector G, usually called the azimuthal rotation ψ, by a combination of (φ, χ, ω) angular settings. However, if this method were adopted for RBD experiments, it would require simultaneous movements of three motors (φ, χ, ω) for any oscillation range Δψ, which lacks the required mechanical precision because of finite motor steps and is difficult to implement in crystallographic software packages for oscillation camera control.

Two advanced six-circle diffractometers have been developed for the purpose of precise control of the azimuthal angle ψ in three-beam diffraction experiments (Thorkildsen et al., 1999; Weckert and Hummer, 1997). These designs could be adapted for reference-beam measurements, but the large size and high cost of these ψ-circle diffractometers would be problematic
for their incorporation into a standard oscillation camera for crystallography, where a compact device is preferred.

We have designed and built a new κ diffractometer for RBD experiments (Pringle and Shen, 2003). As shown in Figure 8, the new five-circle κ diffractometer is built on top of a compact ω-2θ two-circle Huber 424 goniometer (Huber, 2001). A Huber 410 rotation stage is mounted on the ω stage and serves as the oscillation axis ψ, which is orthogonal to the ω-axis. On top of the ψ-circle, a small two-circle (κ, φ) goniometer using two Huber 408s is constructed in a 50° κ geometry and is used to bring any given Bragg reflection G parallel to the ψ-axis. A standard goniometer head with a height of 49 mm can be used on the φ-axis for the sample mount, and a small pin diode is mounted on the 2θ arm for measuring the reference G-reflection rocking curves for alignment purposes.

The complete κ diffractometer has been designed, assembled, and tested in-house at the Cornell High Energy Synchrotron Source (CHESS) (Pringle and Shen, 2003). Figure 8 illustrates the diffractometer in an actual RBD experiment. All rotations on the diffractometer are controlled by stepping motors, except that the oscillation axis ψ is driven by a direct-current servo motor to minimize vibrations induced by a stepping motor. The intrinsic alignment of the multiple axes on the diffractometer is done by iteratively adjusting the positions of the mounting adapter plates between the rotation stages. Using a sharp centering pin and a high-power magnifying telescope, a very low sphere of confusion of 30 μm peak to peak has been achieved for the three inner circles (φ, κ, ψ), and about 50 μm peak to peak if the ω-axis is included. The angles between the different rotation axes are determined by precisely machined
Figure 8. Experimental setup for reference-beam diffraction experiments with a portable five-circle κ diffractometer and an x-ray CCD camera.
adapter plates and have been checked with a properly placed machinist's level to within 30 arc-seconds of the respective design values. These precisions are quite adequate for reference-beam experiments on protein single crystals.

B. Data Collection

With the special five-circle κ diffractometer described in the last section, an RBD experiment on a protein crystal typically proceeds as follows (Figure 9).

1. Orientation of Reference Reflection

When a fresh sample crystal is mounted, an initial oscillation diffraction image is taken at the default position of the κ diffractometer, shown in Figure 8, where all sample rotation angles are defined as zero: φ = κ = ω = ψ = 0. This diffraction image is indexed by a crystallographic analysis program such as DENZO (Otwinowski and Minor, 1997) or MOSFLM (CCP4, 1994), from which the initial orientation matrix U of the crystal with respect to the diffractometer is obtained. We then choose a particular Bragg reflection G as the reference reflection and calculate the alignment angle settings (φ, κ) that are necessary to bring the reciprocal vector G parallel to the oscillation axis ψ.
Figure 9. Schematic procedure of an RBD experiment.
The ψ-axis is then tilted by an ω rotation to the Bragg angle θ_G to bring G into its diffraction condition.

To check the alignment of G, we measure the G rocking curves at four ψ positions: ψ = 0°, 180°, ψ₀, and ψ₀ + 180°, with ψ₀ as close to 90° as possible. We have found that the peak positions in these four rocking curves generally agree with each other to within 0.5°, which is acceptable but not good enough for high-quality protein crystals with extremely narrow mosaicity. This alignment error may originate from the intrinsic indexing accuracy of MOSFLM, or from a slight deviation from the default diffractometer zero position when the initial oscillation image is taken. The misalignment can be corrected by a realignment procedure based on the components of the azimuthal inclination of G obtained from the four ω peak positions in the rocking-curve measurements at ψ = 0°, 180°, ψ₀, and ψ₀ + 180°. Details of this procedure are described by Pringle and Shen (2003). This realignment procedure works very well and can often reduce the misalignment from 0.5° to better than 0.01° in only one iteration.

2. Reference-Beam Oscillation

After the reference reflection G is aligned parallel to the oscillation axis ψ, reference-beam data collection can proceed in a way that closely follows the standard oscillating-crystal method. A slight modification to the oscillation control has been implemented so that phase-sensitive RBD oscillation images can be recorded on a CCD by multiple exposures at several (typically 10–20) θ = θᵢ steps through the G-reflection rocking curve (Figure 10).
Figure 10. Illustration of a typical RBD data set consisting of multiple oscillation images taken at different θ = θᵢ on the reference-reflection rocking curve. The integrated intensities of each Bragg peak as a function of θᵢ provide the phase information.
The rocking curve is measured in the increasing-θ direction, corresponding to the G reciprocal node moving from outside to inside the Ewald sphere. The typical exposure time for each oscillation image is 15 to 30 seconds for lysozyme crystals at the CHESS bend-magnet stations.

It is convenient and important in an RBD measurement to record the reference-reflection intensity on the same θ image series so that a true rocking curve of the G reflection is measured simultaneously. This provides both the center θ_G and the width Δ of the rocking curve, which are needed for the triplet-phase data analysis. Unfortunately, since G is a strong reflection and is aligned along the oscillation axis, its intensity usually oversaturates the pixels on the image plate or the CCD. To overcome this problem, we have installed a small thin attenuator positioned around the G-reflected beam, next to the direct-beam stop in front of the area detector, as the secondary beam stop shown in Figure 8. The thickness of the attenuator is adjusted at the peak of the G rocking curve to prevent intensity saturation. This method has allowed us to faithfully record the reference-reflection intensities simultaneously with the RBD profiles. The procedure is very similar to the rocking-curve measurements in x-ray standing-wave experiments (Batterman, 1964; Bedzyk et al., 1984).

C. Data Processing

With the aim of making RBD a practical technique that can be used not only by x-ray specialists but also by crystallographers with a broad range of backgrounds, we have devoted considerable effort to RBD data collection, processing, and analysis and have developed a procedure that automatically extracts the triplet-phase values for the large number of Bragg reflections collected in an RBD experiment. The analysis procedure consists of the following principal steps.

1. Indexing, Integration, and Scaling

The first step in analyzing a series of RBD images is to index the oscillation pattern. Even though each image is obtained by rotating the crystal around an axis that is not perpendicular to the incident beam, we have found that existing crystallographic software packages such as DENZO (Otwinowski and Minor, 1997) and DPS (Steller et al., 1997) can be reliably used to index RBD patterns automatically. An example is shown in Figure 11. This procedure can also be used to obtain an initial orientation matrix for alignment of the reference reflection.
Figure 11. Example of an indexed RBD pattern using MOSFLM in the CCP4 package (CCP4, 1994). Colored squares around the measured diffraction spots indicate their predicted positions after the pattern has been indexed.
With a diffraction pattern indexed, the integrated intensity of each Bragg reflection is evaluated at each θ step, using the same software package, to provide a series of integrated intensities I_H(θ). These intensities are then scaled using ScalePack in the CCP4 package (CCP4, 1994) to take into account incident-beam variations, among other experimental factors. We have found that the automatic scaling function in the package provides results that agree very well with the experimentally measured incident-beam monitor counts, although the latter have been used in almost all cases in our analyses. The end result of this step is a series of properly scaled integrated intensities I_H(θ), not merged among the symmetry-related peaks, for all recorded Bragg reflections in the experiment.
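As a small illustration of the bookkeeping in this step, the sketch below collects the per-image integrated intensities into per-reflection profiles I_H(θ), sorted by θ. The record layout is a hypothetical stand-in for the actual output of the indexing and scaling programs.

```python
from collections import defaultdict

def build_rbd_profiles(records):
    """records: iterable of (hkl, theta, scaled_intensity) tuples, one per
    reflection per oscillation image (intensities already scaled, unmerged)."""
    profiles = defaultdict(list)
    for hkl, theta, intensity in records:
        profiles[tuple(hkl)].append((theta, intensity))
    for hkl in profiles:            # order each profile in increasing theta
        profiles[hkl].sort()
    return profiles                 # hkl -> [(theta_1, I_1), (theta_2, I_2), ...]
```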
2. Triplet-Phase Curve Fitting

The second step in the RBD data analysis is to extract the triplet-phase values δ from the I_H(θ) data profiles. To do this automatically with minimal operator intervention, we have developed a curve-fitting program that fits each I_H(θ) data series to an RBD interference function [Eq. (5)], based on the phase-sensitive diffraction theory in the distorted-wave approximation (Shen, 1999b, 2000; Shen and Huang, 2001) described in Section II.C. This fitting procedure directly yields the best-fit value of δ for each recorded reflection with only four adjustable parameters (Figure 12): the background (two-beam) intensity I₀, the center θ_G of the G reflection, the amplitude p of the RBD interference, and the triplet phase δ. The width w of the rocking curve is usually measured from the recorded G-reflection intensities and fixed in the fitting procedure. Even though its center θ_G is also known from the simultaneously measured rocking curve, we allow θ_G to vary within a narrow region of w to account for the theoretical approximations involved in arriving at Eq. (5), as well as for possible experimental errors. In some cases, a fifth parameter is used to take into account a sloped baseline intensity, which may exist when the peak is a partially recorded reflection.
Figure 12. Illustration of the four fitting parameters in the triplet-phase curve-fitting procedure.
3. Rejection of Unreliable Phases

Finally, it should be pointed out that fitting all recorded RBD intensity series automatically does not mean that every reflection exhibits the true reference-beam interference pattern. The second step outlined above is merely part of an automated procedure, and the next step is to develop a goodness-of-fit criterion that allows us to reject the bad intensity profiles that, for various reasons, do not actually contain the true interference information. In contrast to some of the existing criteria for observable multiple-beam interference effects, which are based on magnitude ratios of the involved structure factors (Weckert and Hummer, 1997), the rejection criteria developed for the RBD analysis rely mostly on error assessments of the experimental measurements through curve fitting to Eq. (5). The criteria established this way depend on, for example, the χ² value of the best fit, the error bar of the fitting parameter δ, the magnitude of p, and whether θ_G is at its boundary. As shown in the next subsection, the measured median phase error can be reduced from 60°–70° down to less than 45° by the outlier-rejection procedure.
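The fit-and-reject logic can be sketched as follows. Since Eq. (5) itself is not reproduced in this subsection, the model below substitutes a schematic Lorentzian interference profile with the same four parameters (I₀, θ_G, p, δ) merely to illustrate the automated procedure; the thresholds mirror the text (χ² > 7, σ_δ > 55°, θ_G at its boundary), but the code is an illustrative assumption, not the actual analysis program.

```python
import numpy as np
from scipy.optimize import curve_fit

W = 0.05  # rocking-curve width (deg), measured from G intensities and held fixed

def rbd_model(theta, I0, theta_G, p, delta):
    """Schematic stand-in for the RBD interference function, Eq. (5):
    a Lorentzian mixture of dispersive (cos delta) and absorptive
    (sin delta) terms on a flat two-beam background."""
    x = theta - theta_G
    lorentz = 1.0 / (x**2 + W**2)
    interference = p * (np.cos(delta) * W * x + np.sin(delta) * W**2) * lorentz
    return I0 * (1.0 + interference)

def fit_and_screen(theta, I, theta_G0, chi2_max=7.0, sigma_max=np.radians(55)):
    """Fit one I_H(theta) profile and apply the goodness-of-fit rejection."""
    p0 = [np.median(I), theta_G0, 0.1, 0.0]
    # theta_G may vary only within a narrow window around the measured center.
    bounds = ([0, theta_G0 - W, 0, -np.pi],
              [np.inf, theta_G0 + W, np.inf, np.pi])
    popt, pcov = curve_fit(rbd_model, theta, I, p0=p0, bounds=bounds)
    sigma_delta = np.sqrt(pcov[3, 3])          # error bar on delta
    resid = I - rbd_model(theta, *popt)
    chi2 = np.sum(resid**2) / max(len(I) - 4, 1)
    at_boundary = abs(abs(popt[1] - theta_G0) - W) < 1e-6
    ok = (chi2 <= chi2_max) and (sigma_delta <= sigma_max) and not at_boundary
    return popt, ok
```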
D. Examples and Results

1. Example on Proteins

We have performed several RBD experiments on different protein crystals, including lysozyme, thrombin, thaumatin, and uPA, among others, at the F3 and C1 stations at CHESS, using the experimental procedure described in the previous subsection. Typical x-ray wavelengths used were around 1.1 Å. Here we show a complete example of RBD measurements on a small protein, tetragonal lysozyme (space group P4₃2₁2, unit cell a = b = 78.5 Å, c = 37.8 Å). This work has been published (Shen and Wang, 2003; Shen et al., 2002).

A complete RBD data set with a 90° oscillation range was collected on a lysozyme crystal at CHESS, with x-ray wavelength λ = 1.097 Å. In this data set, we chose G = (1 1 1) as the reference reflection. The complete data set, taken at room temperature, consists of 45 series of Δψ = 2° oscillation images at 19 θ-angle positions across the G-reflection rocking curve, with 15 seconds of exposure time for each image. The diffraction resolution is 2.5 Å, which is limited by the size of the CCD detector, with an 86% completeness for the whole data set. A typical θ range is about 0.05° to 0.1°, depending on the crystal mosaicity. The integrated intensities of all recorded Bragg reflections in the data set are deduced using MOSFLM and SCALA in the CCP4 package (CCP4, 1994). These intensities are then sorted according to the θ angle at which the original image is taken, to form a set of RBD profiles for all 14,914 reflections in the data set. The total data collection time for all these profiles is about 12 hours, with only 6 hours of x-ray exposure to the specimen.
With the integrated intensities I_H(θᵢ), i = 1, . . . , 19, we proceed to fit the experimental measurements using the EDWA expression, Eq. (5). Examples of the fits to experimental data are shown in Figure 13, along with the corresponding inverse-beam −H/−G measurements that allow enantiomorph specification and more accurate triplet-phase determination (Chang et al., 1999; Shen et al., 2000a). As pointed out by several authors (Chang et al., 1999a; Weckert and Hummer, 1997; Weckert et al., 1993), the inverse three-beam measurement provides a good way to separate the phase-sensitive interference effect from the phase-independent intensity contribution due to the overall energy-flow balance in a mosaic crystal. In practice, both qualitative (Weckert and Hummer, 1997) and quantitative (Chang et al., 1999b) analytical methods have been used to extract triplet phases from a Friedel pair of inverse-beam–related three-beam cases.

In our work, we have adopted the following simple quantitative curve-fitting procedure to take into account both the direct-beam H/G and the inverse-beam −H/−G cases. The procedure is developed in the framework of an automated curve-fitting routine using Eq. (5). As illustrated in Figure 13, we first perform the same curve fitting for both cases in each pair, yielding amplitudes p and p̄ and triplet phases δ and δ̄ for the two cases. We note that although in most cases δ and δ̄ are consistent with each other, that is, δ̄ ≈ −δ, occasionally this is not true because of the phase-independent intensity contribution. The problem is easily solved by choosing the triplet phase corresponding to the larger-amplitude case in the inverse-beam pair and then assigning the phase of the other case to the negative of the chosen one: if p > p̄, then δ is kept and δ̄ is set to −δ, whereas if p̄ > p, then δ̄ is kept and δ is set to −δ̄. We find that this procedure works very well, as shown in Figure 9, where the chosen phase is indicated by the boxed text in each pair of H/G and −H/−G cases.
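A minimal sketch of the largest-amplitude selection rule just described; the function is a hypothetical illustration, not part of the published analysis code.

```python
def resolve_friedel_pair(p_direct, delta_direct, p_inverse, delta_inverse):
    """Choose the more reliable triplet phase of an H/G, -H/-G pair.

    The direct- and inverse-beam phases should satisfy
    delta_inverse ~= -delta_direct; when they do not, trust the
    member of the pair with the larger fitted interference amplitude.
    Returns (delta, delta_bar)."""
    if p_direct >= p_inverse:
        return delta_direct, -delta_direct
    else:
        return -delta_inverse, delta_inverse
```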
2. Precision of Measured Phases

Figure 14 shows a comparison of the measured triplet phases of tetragonal lysozyme in the data set presented above with the calculated phases based on an entry in the Protein Data Bank (Vaney et al., 1995). The histogram, or occurrence distribution as a function of phase error, defined as the difference between the measured and the calculated phases, of all N = 14,914 measured triplet phases with χ² > 0 resulting from the curve-fitting step, is shown as black squares. The median phase error is Δδ_m = 61° for all 14,914 reflections. The histogram distribution can be viewed as a Gaussian peak centered at zero, which contains the true phase information, on top of a randomly distributed background.
Figure 13. Examples of RBD profiles measured on a tetragonal lysozyme crystal. The solid curves are theoretical fits to the data using Eq. (5). See text for more details.
Figure 14. Comparison of the measured triplet phases using RBD with the calculated values for the tetragonal lysozyme G = (111) data set. The black squares represent the whole data set of 14,914 phases, whereas the red circles represent a subset of 7360 phases obtained by applying the outlier-rejection criteria as shown.
After the curve fitting, we apply a rejection criterion that eliminates unreliable measured phases based on the goodness-of-fit parameters (Shen et al., 2000b). Rejecting all phases with χ² > 7, standard deviation σ_δ > 55°, or θ_G outside the boundaries of its allowed range drastically reduces the random background in the histogram distribution while retaining the Gaussian peak to a large degree, as shown by the circles in Figure 14. The final N = 7360 triplet phases obtained this way have a median phase error of 45° and are much more reliable than the whole data set.

Compared with triplet-phase measurements using the conventional ψ-scan technique (Weckert and Hummer, 1997), the average phase discrepancy of our RBD results is still relatively large. This may be due to several factors. First, compared with the conventional ψ-scan method, the intensity statistics may be poorer for weaker reflections, because the same exposure time is used for both strong and weak reflections as set in the oscillation data collection.
Second, there may be accidental multiple-beam effects due to quartets or higher orders, since all measurements are made at a fixed x-ray wavelength. Finally, some of the observed phase discrepancies may be real, since the phases of low-order reflections such as the (111) may be affected by solvent that is not taken into account in the calculated phases.

3. Enantiomorph Determination

For noncentrosymmetric structures, the enantiomorphic phase information can be determined unambiguously, in the absence of an anomalous signal, by a three-beam or reference-beam experiment (Colella, 1995; Shen et al., 2000a; Weckert and Hummer, 1997; Weckert et al., 1993). This is done by performing inverse-beam RBD measurements of triplet-inverted Friedel pairs, (H, G, H−G) and (−H, −G, G−H). On a standard four-circle diffractometer, the inverse RBD condition can be reached by rotating φ to φ + 180° and χ to χ + 180° simultaneously, as described by Shen et al. (2000a). Using the special κ-diffractometer described in Section IV.A, the triplet-inversion–related measurements can be performed by rotating θ to −θ_G and rotating ψ to ψ + 180°, as shown in Figure 15. This operation is equivalent to a single rotation of θ to θ + 180°, as normally done in multiple-wavelength anomalous diffraction (MAD) experiments; a 180° θ rotation, however, would turn the three inner circles (φ, κ, ψ) upside down and would cause mechanical interference with other equipment in our setup.

An example of the RBD profiles of a Friedel pair from tetragonal lysozyme, (3, 2, 4)/(2, 3, 0) and (−3, −2, −4)/(−2, −3, 0), is shown in Figure 16. The solid curves in Figure 16(a, b) are the automatic best fits to the second-order Born approximation, Eq. (3), yielding triplet-phase values of δ = 77° for the (3, 2, 4) and δ = −116° for the (−3, −2, −4), respectively. For comparison, the two possible phases calculated using Protein Data Bank entry 193L (Vaney et al., 1995) are +91° or −91° for the (3, 2, 4) and −91° or +91° for the (−3, −2, −4), depending on whether the enantiomorph is P4₃2₁2 or P4₁2₁2. Thus, even though the best-fit values are some 20° away from ±91° due to measurement errors, it is still obvious and unambiguous from these RBD-measured triplet phases that the correct enantiomorph has to be P4₃2₁2.

4. Example on a Quasicrystal

In addition to macromolecular crystals, the RBD technique can be useful for other material systems in which a diffraction phase problem exists. An example is an intermetallic quasicrystal, a unique class of materials that has no periodic unit cell yet produces sharp diffraction peaks in an x-ray diffraction experiment. These peaks are generally indexed by six integers representing a six-dimensional reciprocal space.
Figure 15. Illustration of triplet-inverse-beam measurements using the special κ-diffractometer for enantiomorph determination of noncentrosymmetric structures.
To date, only a few attempts have been directed toward finding the atomic positions in a quasicrystal structure (for a recent review, see Elser, 1999). To help solve this problem, we have performed an RBD experiment on an AlPdMn quasicrystal that had been investigated previously by conventional three-beam diffraction (Zhang et al., 1998). The reference reflection used in our experiment was the two-fold reflection G = (0 1 2 0 −1 −2) (a 20/32 reflection). The RBD data were recorded with a CCD detector. In Figure 17, we show a typical diffraction image with an oscillation range of Δψ = 16° and an exposure time of 40 seconds at each θ step. The inset shows the RBD profile of one of the reflections, H = (1 1 2 −1 −1 −2), where the interference effect can be clearly seen. From curve fitting, we obtained a triplet-phase value of δ = 7°. In addition to the Bragg peaks, significant diffuse-scattering signals are also recorded on the same image; in some cases they exhibit a pentagon-shaped contour.
Figure 16. Inverse-reference-beam measurements for enantiomorph determination of P4₃2₁2 lysozyme.
V. Retrieval of Individual Phases

With the automated data-processing steps outlined above, a large number of experimentally measured triplet phases, δ_H = α_G + α_{H−G} − α_H, are deduced, along with weighting factors that indicate the reliability of the phases. The next step is to devise a strategy for using these measured triplet phases to retrieve the individual structure-factor phases α_H and reconstruct the structure. Given that conventional heavy-atom–based methods for phasing protein structures generally require measurements of many tens of thousands of Bragg reflection intensities, it is worthwhile to ask the following questions about the three-beam diffraction approach: (a) What is the best systematic approach for making use of the measured triplet phases? (b) What is the minimum number of triplet phases that need to be measured in order to solve a protein structure? (c) How do measurement errors affect the possibility of a structural solution? These questions are closely related to each other, and ultimately their answers depend on the phasing methods used to deduce the individual phases from a triplet-phase data set.
Figure 17. RBD example on a quasicrystal.
In this section, we discuss several approaches.

A. Direct-Methods Approach

Several authors have discussed the above questions in the framework of the conventional ψ-scan three-beam measurement technique (Weckert and Hummer, 1997), in which the interference profiles are obtained one at a time. Holzer et al. (2000) have shown that if a sufficient number of triplet phases are measured, with substantial overlaps among the individual structure-factor phases in the measured triplet relations, it is possible to deduce the individual phases using a phasing tree, much like the traditional convergence-map technique used in the direct-methods approach. Because of the considerable number of unmeasured triplet phases in the phasing tree, it is necessary to adopt a multi-solution entropy-maximization algorithm (Holzer et al., 2000) to deduce all the individual phases and obtain an electron density map.
Recently, Wang et al. (2001) used a similar maximum-entropy method to solve a small-molecule structure based on several dozen measured triplet phases.

Measured triplet phases can also be applied in a traditional or a "shake-and-bake" direct-methods approach to replace the mathematical estimates with the measured values, leading to structural solutions of small proteins with lower-than-atomic-resolution intensity data (Mo et al., 1996; Weeks et al., 2000). In the work by Weeks et al. (2000), computer simulations using SnB procedures (Miller et al., 1993) were performed on crambin, a small protein containing 327 unique non-H atoms as well as the equivalent of about 75 water molecules. These simulations were aimed at answering three important questions: (1) Is it possible to solve a real crystal structure with only a small number of reference-beam data sets? (2) Are solutions possible at a lower resolution than is possible using intensity-only data? (3) How much phase error is tolerable in experimentally determined triplets?

The main simulation results, using a resolution of 1.5 Å and involving approximately 5200 reflections, show that solutions can be achieved at 1.5 Å for crambin using even a single reference-beam data set, compared with no solutions using intensity-only data. Furthermore, when two or three sets of reference-beam data were used, mean random errors in the triplet phases as large as 30° to 40° could be tolerated without significant reductions in the success rate. Thus, there does not appear to be significant cause for concern that the information content of the reference-beam data sets might be too low as a result of all triplets in a data set sharing a common Bragg reflection.

Solutions were also obtained in conventional direct-methods simulations. However, the success rates were lower, and more than one reference-beam data set was required. Therefore, the dual-space shake-and-bake method appears to be superior for extracting single-phase information from triplet data. These simulation studies indicate favorably the possibility of solving real crystal structures using the phase information obtained by the reference-beam method.
B. Recursive Reference-Beam Phasing

In addition to the direct-methods (Mo et al., 1996; Weeks et al., 2000) and maximum-entropy–based (Holzer et al., 2000; Wang et al., 2001) approaches, other strategies for going from measured triplet phases to individual structure-factor phases are being explored, since certain specific experimental features may be taken into account in these alternative approaches.
One possibility is to take advantage of a triplet occurrence pattern that is unique to the RBD geometry. This unique pattern leads to a new recursive phasing algorithm that allows a straightforward determination of all individual structure-factor phases from as few as four starting single phases.

As illustrated in Figure 18, for any reflection H recorded on an RBD image, H + G is its adjacent reflection along direction G. It is obvious that the reflections G, H, and H + G form a triplet. Thus, if the triplet phases along a single column H + nG (n = . . . , −2, −1, 0, 1, 2, . . .) are all measured, a simple recursive method can be devised to deduce all individual structure-factor phases from the measured triplets, once a single phase in that column is known. In other words, if m triplet phases are measured in a single column, there are only m + 1 individual phases, plus the G-reflection phase α_G (which is common to all columns in the data set), associated with all m triplet phases. Thus, each additional triplet along a column adds only one unknown individual phase. This situation is dramatically different from the conventional ψ-scanning three-beam technique, where each additional three-beam case would in general introduce two unknown individual phases.

We have developed a new recursive phasing algorithm based on this unique triplet occurrence pattern. The new algorithm has been implemented in the program RBD_phasing, and its flowchart is shown in Figure 19.
Figure 18. Illustration of the triplet occurrence pattern that is unique to the RBD geometry; i.e., the three-beam condition is satisfied for every adjacent reflection along any column in the direction of the reference reflection G.
Figure 19. Flow diagram of the recursive phasing algorithm based on reference-beam diffraction.
A preliminary test of the new program has been performed using the measured triplet-phase data set obtained on the tetragonal lysozyme crystal and a set of initial known phases. The initial phases are chosen to be the 191 (hk0) reflections in the data set, since the phases of these reflections are restricted to either 0 or π by the space-group P4₃2₁2 symmetry requirements. With the known (hk0) phases plus the (111) phase taken from PDB entry 193L (Vaney et al., 1995), it is possible to use the RBD_phasing algorithm to deduce new structure-factor phases from the measured subset of 7360 triplet phases. The median phase error for these new individual phases is 66°, which is reasonable given the 45° median error in the triplet phases. An electron density map (Figure 20) is then calculated based on the structure-factor phases from the 7360-reflection RBD data set. This map agrees with the same map obtained using the calculated triplet phases for the same 7360 reflections, with a map-correlation coefficient of 0.70 (Shen and Wang, 2003). Further reduction of the individual phase errors and the corresponding improvement of the electron density map are feasible using density modification and other standard crystallographic refinement techniques, but that topic is not the focus of this review.
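To make the recursion concrete: rearranging the triplet relation δ_H = α_G + α_{H−G} − α_H gives α_H = α_G + α_{H−G} − δ_H, so a column H₀ + nG can be phased sequentially once α_G and α_{H₀} are known. The following sketch is illustrative and does not reproduce RBD_phasing itself.

```python
def phase_column(alpha_G, alpha_start, measured_deltas):
    """Recursively phase one column H0 + n*G of reciprocal space.

    alpha_G        : known phase of the reference reflection G
    alpha_start    : known phase of the first reflection H0 in the column
    measured_deltas: triplet phases delta_H for H = H0+G, H0+2G, ...
                     where delta_H = alpha_G + alpha_{H-G} - alpha_H
    Returns the individual phases along the column."""
    alphas = [alpha_start]
    for delta in measured_deltas:
        # alpha_H = alpha_G + alpha_{H-G} - delta_H
        alphas.append(alpha_G + alphas[-1] - delta)
    return alphas
```

With the 191 known (hk0) phases and the (111) reference phase as starting points, every G-column of the lysozyme data set can be walked in this way, each measured triplet contributing exactly one new individual phase.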
Figure 20. Electron density map at the z = 0 basal plane of tetragonal lysozyme, using the 7360 measured triplet phases and the new recursive phasing algorithm.
Since a relatively large number of known phases are used in the present work, it may be more appropriate to view the new recursive phasing procedure as a phase extension. However, it is entirely possible to reduce substantially the number of initial phases needed in the recursive RBD algorithm. For example, if three RBD data sets are measured with noncoplanar G₁, G₂, and G₃ as the reference reflections, then in principle only four initial phases, those of G₁, G₂, and G₃ plus a single reflection H₀, are needed to phase the whole structure. Using this idea (Shen and Wang, 2003), the phases of all reflections in reciprocal space are progressively determined from a single point H₀, to a line of nodes through G₁, then to a plane of nodes through G₂, and finally to a volume of nodes through the G₃ data set. It is worth noting that the G₁ and G₂ data sets do not need to be complete for the overall method to work. A further reduction of the number of initial phases may also be possible if origin-defining and symmetry-related reflections are taken into account in a systematic manner.
C. Phase Extension and Iterative Algorithms

Direct low-resolution phasing of macromolecular structures has attracted considerable interest in recent years (Guo et al., 1999; Hao et al., 1999). It may be expected that a substantial fraction of low-order reflection phases (d > 10 Å) can be correctly assigned directly for protein crystals, based on the globbic (pseudo-atom) direct methods, or on the low-resolution multidimensional Patterson and molecular-shape transforms. These low-resolution phases form a starting set of individual phases that can be extended by other techniques such as maximum entropy or, in our case, by the measured triplet phases from an RBD experiment. Since the triplet phases measured in RBD experiments would most likely contain a mixture of low-order and higher-resolution individual phases, they naturally provide a way of performing phase extension to medium-resolution (~3–5 Å) reflections through a set of triplet-phase relationships. A low-resolution starting phase set can also serve as a guideline for choosing reference reflections (Gs), since a reflection with a known phase may be preferred for further phase extension through the measured triplets. The low-resolution phases may also serve as the starting individual phases for the recursive phasing method described in the previous section.

Another potential way of making use of the measured triplet phases is an iterative phasing algorithm based on the optical image retrieval of Fienup (1982) and Gershberg and Saxton (1972), with the measured triplet phases as a constraint. The iterative algorithm is a powerful technique for phasing the diffraction patterns of nonperiodic objects (Miao et al., 1999; Sayre, 1980; Shen et al., 2004; Williams et al., 2003). A recent generalization of this iterative method (Elser, 2003) has made it more convenient to incorporate constraints of this type, such as triplet phases.
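As an illustration of how a triplet-phase constraint might be imposed inside such an iterative loop, the sketch below nudges each phase estimate toward the value implied by a measured triplet. This is a hypothetical projection step for illustration only, not Fienup's or Elser's actual algorithm.

```python
import numpy as np

def project_triplet_phases(alpha, triplets, weight=0.5):
    """alpha: dict mapping a reflection index (tuple) to its current phase.
    triplets: list of (H, G, delta) with delta = alpha_G + alpha_{H-G} - alpha_H.
    Returns phases pulled partway toward consistency with the triplets."""
    new_alpha = dict(alpha)
    for H, G, delta in triplets:
        HG = tuple(h - g for h, g in zip(H, G))
        if H not in alpha or G not in alpha or HG not in alpha:
            continue                          # skip triplets with unknown members
        target = alpha[G] + alpha[HG] - delta  # phase implied by the triplet
        # Move part of the way, wrapping the difference into (-pi, pi].
        diff = np.angle(np.exp(1j * (target - alpha[H])))
        new_alpha[H] = alpha[H] + weight * diff
    return new_alpha
```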
D. Identification of Enantiomorph

As demonstrated in Section IV, one of the utilities of RBD phase measurements is to determine the enantiomorph of noncentrosymmetric crystal structures. For a pair of enantiomorphic structures L and R, it is well known that the inversion of all atomic coordinates (xᵢ, yᵢ, zᵢ) to (−xᵢ, −yᵢ, −zᵢ) changes structure L into structure R, and vice versa. This implies that every individual structure-factor phase reverses its sign, and therefore any triplet phase δ_L of the L structure is equal to the negative of the corresponding δ_R of the R structure if both are evaluated in the same coordinate system. This is true even for those space groups that require an additional shift of origin after the inversion operation, since triplet phases are invariant with respect to shifts of origin.
Thus, distinguishing enantiomorphs ultimately reduces to determining the signs of the triplet phases (Weckert and Hummer, 1997), which is exactly the information an inverse-beam RBD measurement can provide. The best sensitivity is given by those three-beam cases with δ = ±90°.

Although a single pair of inverse RBD profiles is enough for the enantiomorph determination of a given crystal, as illustrated in Shen et al. (2000a), the data-collection efficiency of the RBD method allows simultaneous measurements of many three-beam combinations with triplet phases close to ±90°. This data redundancy increases the reliability of the enantiomorphic phases determined in an RBD experiment. The similarity between RBD data collection and the ordinary crystallographic method may also make it possible to include the RBD Friedel-pair measurements in standard techniques such as SAD, but without the signals from anomalous atoms. It would also be interesting to include the enantiomorph information obtained by RBD in a unified direct-methods approach (Weeks et al., 1998) to extend the applicability of ab initio phasing methods into the regime of larger macromolecular crystal structures and intensity data sets of lower resolution.

VI. Discussion and Conclusions

There are several significant practical advantages of the RBD technique over the traditional ψ-scan method for multiple-beam diffraction. First, of course, the parallel data collection of RBD allows many three-beam profiles to be measured in a much shorter time than with the traditional method, which minimizes the effect of crystal radiation damage at a synchrotron source. Second, since the reference G serves as the common detour reflection, the dynamical phase shift ν_G is well defined, and no ambiguities exist as in the ψ-scan technique, where two situations, "in or out" (Chang, 1982; Chang and Tang, 1988), need to be distinguished. Third, as already mentioned in Section II, there are no polarization switching factors in the reference-beam geometry if the incident beam is purely or mostly σ-polarized for the G reflection, which is most likely the case at a synchrotron source. Finally, it is possible that a larger out-of-plane horizontal divergence of the incident beam can be tolerated in an RBD experiment compared with the ψ-scan method. This fits the natural beam divergence of a synchrotron beam very well.

In terms of data-collection procedures, the RBD method presented here is very similar to MAD experiments. Here the angular setting of the reference reflection G serves the same role as the atomic absorption-edge energy of a heavy atom.
Multiple oscillation data sets are collected around the Bragg angle θ_G, much like those around the absorption edge, with similar useful signal levels of a few percent for proteins. In fact, instead of changing the rocking angle θ, one can change the incident energy to pass through the reference-reflection rocking curve and take multiple data sets at several energies, as mentioned in the legend for Figure 4. An additional advantage is that no global scaling is necessary in RBD measurements, so that incomplete data sets from different crystals can easily be combined. This advantage may be significant, since radiation damage of biological samples is a serious concern in many crystallography experiments.

As for how the measured phase information is used to help solve a crystal structure, the algorithms for using RBD data sets are most likely different from those of the MAD technique. So far the most promising route for using the RBD-measured phases is to incorporate the phase information into traditional or dual-space direct-methods algorithms (Weeks et al., 2000). One interesting question that needs to be addressed is the following. Given the way an RBD experiment is performed, there is very little overlap among the individual structure-factor phases in a given data set involving a single reference reflection G. Therefore, how many such data sets with different Gs, either complete or partial, does one need to solve a realistic structure? Preliminary simulations by Weeks et al. (2000) using a unified shake-and-bake program (Miller et al., 1993) suggest that a structural solution is possible for small proteins with even a single measured phase data set (single G) if the data set is rather complete (though at less than atomic resolution) and accurate, with a measurement error of less than 20° or so in the measured triplet phases. The tolerance on the phase errors can be as high as 50° if three or more RBD data sets are measured. These preliminary results are encouraging, but further studies are needed to reach a general conclusion.

Another recent development in direct-methods algorithms is the use of a statistical method in the determination of a minimum or maximum function (Xu and Hauptman, 2004). Preliminary studies indicate that a much higher success rate is possible compared with the standard SnB program. This holds potential for reducing the number of G data sets needed to solve a protein crystal structure and is currently being pursued.

In addition to direct-methods–based approaches, a new alternative recursive phasing algorithm has been developed (Shen and Wang, 2003) to deduce the individual structure-factor phases from a triplet-phase data set measured using the RBD technique. A preliminary test of the algorithm on a 7360–triplet-phase data set from tetragonal lysozyme has yielded a reasonable electron density map that is in good agreement with the map based on the calculated phases. The new algorithm makes use of a triplet occurrence pattern that is unique to the reference-beam geometry and not present in conventional three-beam experiments.
The unique triplet pattern offers a substantial advantage in providing a systematic and definitive way to obtain overlaps among the individual structure-factor phases within a triplet-phase measurement data set. Future work in this area will likely focus on reducing the number of initial phases necessary to start the recursive process in the phasing algorithm, and on the proper treatment of error propagation in the recursion due to poorly or inaccurately measured triplet phases in the data set. It may be possible to include the unique triplet occurrence pattern in direct-methods– or maximum-entropy–based programs to increase the likelihood of structural solutions with a smaller number of measured triplet phases. Incorporating measured triplet phases in phase-extension strategies using low-resolution phases, and in advanced iterative algorithms, are two other ways of using the measured phases and should be studied in future investigations.

In addition to theoretical strategies, several challenging experimental issues need to be resolved before the RBD technique can be widely adopted in everyday crystallography. To speed up the initial alignment of a reference reflection for an arbitrary crystal system, we have designed and constructed a special computer-controlled kappa-goniometer that can be mounted on the φ-stage of a standard four-circle diffractometer (Figure 12). With this five-circle kappa setup, one can automatically orient any Bragg reflection along the φ axis that serves as the crystal oscillation axis. This eliminates one of the time-consuming steps in an RBD experiment. Other experimental problems include how to handle the increased crystal mosaicity upon freezing specimens, how to minimize the effects of overlapping multiple reflections for larger structures, and how to improve the intensity-integration statistics on area detectors for better data accuracy.

Even though the accuracy of the RBD-measured triplet phases may never approach that of the traditional ψ-scanning method with a point detector (Weckert and Hummer, 1997), it is entirely possible that its measurement errors will be compensated to some extent by the large number of measured phases that can be used in structure-determination routines. Because no anomalous diffraction signals (Shen et al., 2003) are necessary, this evolving method promises to provide the phase information needed to solve a biological crystal structure without the requirement of heavy atoms incorporated into a native protein structure. It is hoped that this review of work in progress will also stimulate more discussion in the future on the optimal strategies for three-beam experiments in protein crystallography as well as in other applications.

In summary, we have demonstrated that the RBD technique is a promising and practical approach to solving the phase problem in x-ray crystallography, without the need for heavy-atom derivatives.
By incorporating the principle of three-beam diffraction into the standard oscillating-crystal data collection technique, and by means of a recently developed automated data-reduction procedure, a large number of Bragg-reflection triplet phases can be measured in an RBD experiment using an area detector within a relatively short period. With further research and development, we believe that the new method will have a significant impact on crystallographic data collection and structure determination.

Acknowledgments

The author would like to thank Andy Stewart, Jun Wang, Rob Thorne, Steve Ealick, Sol Gruner, Quan Hao, Ken Finkelstein, Marian Szebenyi, Chris Heaton, Bill Miller, and others at Cornell; and Herb Hauptman, Chuck Weeks, David Langs, Bob Blessing, Jimmy Xu, and George DeTitta at the Hauptman-Woodward Institute, for their useful discussions and help during the various stages of this work. This work is supported by the NSF through CHESS under Grant DMR 0225180 and by NIH Grant EB002057 through the Hauptman-Woodward Medical Research Institute.
References

Batterman, B. W. (1964). Effect of dynamical diffraction in x-ray fluorescence scattering. Phys. Rev. 133, 759–764.
Batterman, B. W., and Cole, H. (1964). Dynamical diffraction of x-rays by perfect crystals. Rev. Mod. Phys. 36, 681–717.
Bedzyk, M. J., Materlik, G., and Kovalchuk, M. V. (1984). Phys. Rev. B 30, 2453–2461.
Blundell, T. L., and Johnson, L. N. (1976). Protein Crystallography. London: Academic Press.
Busing, W. R., and Levy, H. A. (1967). Acta Cryst. 22, 457–464.
Caticha-Ellis, S. (1975). Jpn. J. Appl. Phys. 14, 603–611.
Chang, S. L. (1982). Direct determination of x-ray reflection phases. Phys. Rev. Lett. 48, 163–166.
Chang, S. L. (1984). Multiple Diffraction of X-rays in Crystals. Berlin, Heidelberg, New York: Springer-Verlag.
Chang, S. L., King, H. E., Jr., Huang, M.-T., and Gao, Y. (1991). Direct phase determination of large macromolecular crystals using three-beam x-ray interference. Phys. Rev. Lett. 67, 3113–3116.
Chang, S. L., Chao, C. H., Huang, Y. S., Jean, Y. C., Sheu, H. S., Liang, F. J., Chien, H. C., Chen, C. K., and Yuan, H. S. (1999a). Acta Cryst. A 55, 933–938.
Chang, S. L., Stetsko, Y. P., Huang, Y. S., Chao, C. H., Liang, F. J., and Chen, C. K. (1999b). Phys. Lett. A 264, 328–333.
Chang, S. L., and Tang, M. T. (1988). Acta Cryst. A 44, 1065–1072.
Chapman, L. D., Yoder, D. R., and Colella, R. (1981). Virtual Bragg scattering: A practical solution to the phase problem. Phys. Rev. Lett. 46, 1578–1580.
Cole, H., Chambers, F. W., and Dunn, H. M. (1962). Acta Crystallogr. 15, 138–144.
Colella, R. (1974). Multiple diffraction of x-rays and the phase problem: Computational procedures and comparison with experiment. Acta Crystallogr. 30, 413–423.
Colella, R. (1995). Multiple Bragg scattering and the phase problem in x-ray diffraction: II. Mosaic crystals. Comments Cond. Mat. Phys. 17, 199–215.
Collaborative Computational Project Number 4. (1994). Acta Crystallogr. 50, 760–763.
Elser, V. (1999). Acta Cryst. 55, 489–499.
Elser, V. (2003). J. Opt. Soc. Am. 20, 40–55.
Ewald, P. P., and Heno, Y. (1968). X-ray diffraction in the case of three strong rays: I. Crystal composed of non-absorbing point atoms. Acta Crystallogr. 24, 5–15.
Fienup, J. R. (1982). Appl. Opt. 21, 2758.
Gershberg, R. W., and Saxton, W. O. (1972). Optik 25, 237.
Guo, D. Y., Blessing, R. H., Langs, D. A., and Smith, G. D. (1999). On globbicity of low-resolution protein structures. Acta Cryst. 55, 230–237.
Hao, Q., Dodd, F. E., Grossmann, J. G., and Hasnain, S. S. (1999). Ab initio phasing using molecular envelope from solution x-ray scattering. Acta Cryst. 55, 243–246.
Hart, M., and Lang, A. R. (1961). Phys. Rev. Lett. 7, 120–121.
Hauptman, H. A. (1986). Direct methods and anomalous dispersion. Nobel Lecture, December 9, 1985. Chemica Scripta 26, 277–286.
Hauptman, H. A., and Karle, J. (1953). Solution of the Phase Problem. I. The Centrosymmetric Crystal. ACA Monograph No. 3. Ann Arbor, MI: Edwards Brothers.
Hendrickson, W. (1991). Science 254, 51.
Holzer, K., Weckert, E., and Schroer, K. (2000). Acta Cryst. D 56, 322–327.
Huber (2001). Huber Diffraktionstechnik GmbH (http://www.xhuber.com).
Hummer, K., and Billy, H. (1986). Experimental determination of triplet phases and enantiomorphs of non-centrosymmetric structures. I. Theoretical considerations. Acta Crystallogr. 42, 127–133.
Hummer, K., Schwegle, W., and Weckert, E. (1991). A feasibility study of experimental triplet-phase determination in small proteins. Acta Crystallogr. 47, 60–62.
Hummer, K., Weckert, E., and Bondza, H. (1990). Direct measurements of triplet phases and enantiomorphs of non-centrosymmetric structures: Experimental results. Acta Crystallogr. 45, 182–187.
Jackson, J. D. (1974). Classical Electrodynamics, 2nd ed. New York: John Wiley.
Juretschke, H. J. (1984). Acta Crystallogr. 40, 379–389.
Juretschke, H. J. (1982). Invariant-phase information of x-ray structure factors in the two-beam Bragg intensity near a three-beam point. Phys. Rev. Lett. 48, 1487–1489.
Lee, H., and Colella, R. (1993). Phase determination of x-ray reflections in a quasicrystal. Acta Crystallogr. 49, 600–605.
Lipscomb, W. N. (1949). Relative phases of diffraction maxima by multiple reflection. Acta Crystallogr. 2, 193–194.
Mathiesen, R. H., Mo, F., Eikenes, A., Nyborg, T., and Larsen, H. B. (1998). Acta Crystallogr. 54, 338–347.
Miao, J., Charalambous, P., Kirz, J., and Sayre, D. (1999). Nature 400, 342–344.
Miller, R., DeTitta, G. T., Jones, R., Langs, D. A., Weeks, C. M., and Hauptman, H. A. (1993). On the application of the minimal principle to solve unknown structures. Science 259, 1430–1433.
Mo, F., Mathiesen, R. H., Alzari, P. M., Lescar, J., and Rasmussen, B. (2002). Physical estimation of triplet phases from two new proteins. Acta Cryst. 58, 1780–1786.
Mo, F., Mathiesen, R. H., Hauback, B. C., and Adman, E. T. (1996). Acta Cryst. D 52, 893–900.
Otwinowski, Z., and Minor, W. (1997). Methods Enzymol. 276, 307–326.
Patterson, A. L. (1934). A Fourier series method for the determination of the components of interatomic distances in crystals. Phys. Rev. 46, 372–376.
Post, B. (1977). Solution of the x-ray phase problem. Phys. Rev. Lett. 39, 760–763.
Pringle, D., and Shen, Q. (2003). New five-circle kappa diffractometer for reference-beam diffraction experiments. J. Appl. Cryst. 36, 29–33.
Renninger, M. (1937). Umweganregung, eine bisher unbeachtete Wechselwirkungserscheinung bei Raumgitterinterferenzen. Z. Phys. 106, 141–176.
Rossmann, M. G. (1972). The Molecular Replacement Method. New York: Gordon and Breach.
Sayre, D. (1980). In Imaging Processes and Coherence in Physics, edited by M. Schlenker. Berlin: Springer-Verlag.
Schmidt, M. C., and Colella, R. (1985). Phase determination of forbidden x-ray reflections in V3Si by virtual Bragg scattering. Phys. Rev. Lett. 55, 715–718.
Shen, Q. (1986). A new approach to multi-beam x-ray diffraction using perturbation theory of scattering. Acta Crystallogr. 42, 525–533.
Shen, Q. (1993). Effects of a general x-ray polarization in multiple-beam Bragg diffraction. Acta Crystallogr. 49, 605–613.
Shen, Q. (1998). Phys. Rev. Lett. 80, 3268–3271; Research News (1998). Science 280, 828.
Shen, Q. (1999a). Phys. Rev. B 59, 11109–11112.
Shen, Q. (1999b). Phys. Rev. Lett. 83, 4784–4787.
Shen, Q. (2000). Phys. Rev. B 61, 8593–8597.
Shen, Q. (2003). Improving triplet-phase accuracy by symmetry observations in reference-beam diffraction measurements. Acta Crystallogr. 59, 335–340.
Shen, Q., and Colella, R. (1987). Solution of phase problem for crystallography at a wavelength of 3.5 Å. Nature 329, 232–233.
Shen, Q., and Finkelstein, K. D. (1990). Solving the phase problem with multiple-beam diffraction and elliptically polarized x rays. Phys. Rev. Lett. 65, 3337–3340.
Shen, Q., and Finkelstein, K. D. (1992). Complete determination of x-ray polarization using multiple-beam Bragg diffraction. Phys. Rev. 45, 5075–5078.
Shen, Q., and Huang, X. R. (2001). Phys. Rev. B 63, 174102.
Shen, Q., and Wang, J. (2003). Recursive direct phasing of protein structure with reference-beam diffraction. Acta Cryst. D 59, 809–814.
Shen, Q., Bazarov, I., and Thibault, P. (2004). Diffractive imaging of nonperiodic materials with future coherent x-ray sources. J. Synch. Rad. 11, 432–438.
Shen, Q., Pringle, D., Szebenyi, M., and Wang, J. (2002). Solving the crystallographic phase problem with reference-beam diffraction. Rev. Sci. Instrum. 73, 1646–1648.
Shen, Q., Kycia, S., and Dobrianov, I. (2000a). Acta Cryst. 56, 264–267.
Shen, Q., Kycia, S., and Dobrianov, I. (2000b). Acta Cryst. 56, 268–279.
Shen, Q., Wang, J., and Ealick, S. E. (2003). Anomalous difference signal in protein crystals. Acta Cryst. 59, 371–373.
Sinha, S. K., Sirota, E. B., Garoff, S., and Stanley, H. B. (1988). X-ray and neutron scattering from rough surfaces. Phys. Rev. 38, 2297–2311.
Steller, I., Bolotovsky, R., and Rossmann, M. G. (1997). J. Appl. Cryst. 30, 1036–1040.
Thorkildsen, G. (1987). Three-beam diffraction in a finite perfect crystal. Acta Crystallogr. 43, 361–369.
Thorkildsen, G., and Larsen, H. B. (2002). Acta Cryst. 58, 252–258.
Thorkildsen, G., Mathiesen, R. H., and Larsen, H. B. (1999). J. Appl. Cryst. 32, 943–950.
Tischler, J. Z., and Batterman, B. W. (1986). Determination of phase using multiple-beam effects. Acta Crystallogr. 42, 510–514.
Tischler, J. Z., Shen, Q., and Colella, R. (1985). Phase determination of the forbidden reflection (442) in silicon and germanium using multiple Bragg scattering. Acta Crystallogr. 41, 451–453.
Vainshtein, B. K. (1981). Modern Crystallography I. Springer Series in Solid-State Sciences, vol. 15. New York: Springer-Verlag.
Vaney, M. C., Maignan, S., Ries-Kautt, M., and Ducruix, A. (1995). Protein Data Bank entry 193L.
Vineyard, G. H. (1982). Grazing-incidence diffraction and the distorted-wave approximation for the study of surfaces. Phys. Rev. 26, 4146–4159.
Wang, C.-M., Chao, C.-H., and Chang, S.-L. (2001). Acta Cryst. A 57, 420–428.
Warren, B. E. (1969). X-ray Diffraction. Reading, MA: Addison-Wesley.
Weckert, E., and Hummer, K. (1997). Multiple-beam x-ray diffraction for physical determination of reflection phases and its applications. Acta Crystallogr. 53, 108–143.
Weckert, E., Schwegle, W., and Hummer, K. (1993). Direct phasing of macromolecular structures by three-beam diffraction. Proc. R. Soc. London 442, 33–46.
Weeks, C. M., DeTitta, G. T., Miller, R., and Hauptman, H. A. (1993). Applications of the minimal principle to peptide structures. Acta Cryst. 49, 179–181.
Weeks, C. M., Miller, R., and Hauptman, H. A. (1998). In Direct Methods for Solving Macromolecular Structures, edited by S. Fortier. Dordrecht: Kluwer Academic Publishers, pp. 463–468.
Weeks, C. M., Xu, H., Hauptman, H. A., and Shen, Q. (2000). Acta Cryst. A 56, 280–283.
Williams, G. J., Pfeifer, M. A., Vartanyants, I. A., and Robinson, I. K. (2003). Phys. Rev. Lett. 90, 175501.
Xu, H., and Hauptman, H. A. (2004). Statistical approach to the phase problem. Acta Cryst. 60, 153–157.
Zachariasen, W. H. (1945). Theory of X-ray Diffraction in Crystals. New York: Dover Publications.
Zhang, Y., Colella, R., Shen, Q., and Kycia, S. W. (1998). Dynamical three-beam diffraction in a quasicrystal. Acta Cryst. 54, 411–415.
Fractal Encoding

DOMENICO VITULANO

Istituto per le Applicazioni del Calcolo, Consiglio Nazionale delle Ricerche, Viale del Policlinico 137, 00161 Rome, Italy
I. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
II. Fractals Catch Natural Information . . . . . . . . . . . . . . . . . . 115
   A. Some Mathematical Notions . . . . . . . . . . . . . . . . . . . . 115
   B. The Space of Fractals . . . . . . . . . . . . . . . . . . . . . . . . 118
   C. Fractals for Digital Images: Jacquin's Proposal . . . . . . . . . . 122
III. Improving the Classical Fractal Transform . . . . . . . . . . . . . 127
   A. Where to Look for Fractals . . . . . . . . . . . . . . . . . . . . . 127
      1. Shaping Range Blocks . . . . . . . . . . . . . . . . . . . . . . 128
      2. Split Decision Functions . . . . . . . . . . . . . . . . . . . . . 137
      3. Domain Blocks Location and Relative Problems . . . . . . . . 140
   B. How to "Fractally" Encode . . . . . . . . . . . . . . . . . . . . . 153
      1. Subsampling Domain Blocks . . . . . . . . . . . . . . . . . . . 153
      2. Are Isometries Really Useful? . . . . . . . . . . . . . . . . . . 154
      3. About Contractivity . . . . . . . . . . . . . . . . . . . . . . . 155
      4. Quantization of Scaling and Offset Parameters . . . . . . . . . 159
      5. More Blocks Improve Encoding . . . . . . . . . . . . . . . . . 161
      6. Rate-Distortion Upper Bound . . . . . . . . . . . . . . . . . . 161
      7. Decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
IV. Fractals Meet Other Coding Transforms . . . . . . . . . . . . . . . 164
   A. Vector Quantization . . . . . . . . . . . . . . . . . . . . . . . . . 165
   B. Fourier Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
   C. Wavelets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
   D. Linear Prediction Coding . . . . . . . . . . . . . . . . . . . . . . 168
   E. Other Representations . . . . . . . . . . . . . . . . . . . . . . . . 168
V. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Appendix: Another Way of Fractal Encoding: Bath Fractal Transform . 169
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
I. Introduction

In the last few decades, signal coding has received increasing interest because of the wide diffusion of multimedia applications. In fact, while hardware technology has developed quickly, the need for effective tools for managing huge quantities of information has grown even faster.
This technology race has thus made signal coding play an ever more fundamental role in various fields. Interest in compression, however, is not recent. It started at the beginning of the last century with Shannon's studies on information theory. He laid the foundations of lossless coding, i.e., of how to exploit redundancies to trap the signal information in a smaller number of parameters than the original one. In computer-science language, this can be translated as follows: image coding aims to achieve a representation that requires M < N bits, where N is the number of bits of the original image. The compression ratio is then defined as N/M. The reader can easily see that encoding performance is measured by compression ratio and computing time (or complexity). Unfortunately, lossless techniques can achieve only low compression ratios on real images, such as 2:1, 3:1, and so on. That is why lossy techniques have been introduced and are now widely used in many fields. Because they allow the encoded signal to lose information according to an a priori fixed tolerance, they offer a higher compression ratio. In this case, objective quality must also be taken into account: it is obvious that the higher the compression ratio, the lower the quality, and vice versa. Lossy techniques are usually based on a transform whose coefficients are quantized according to either statistical information, such as the coefficient distribution (Gonzalez and Woods, 2001; LoPresto et al., 1997; Mallat, 1998; Shi and Sun, 2000), or human perception laws (Gonzalez and Woods, 2001; Höntsch and Karam, 2000, 2002; Shi and Sun, 2000). The current standard is JPEG2000, which is based on sub-band coding (ISO/IEC JTC1, 2000).

Among lossy approaches, several researchers have come under the fractal coding spell. This interest stems from the fact that fractals seem to "catch" the information contained in natural objects: very simple mathematical functions are able to describe very complicated natural structures. Even though fractals have been known since the beginning of the last century, they were reanalyzed by Williams (1971) and Hutchinson (1981) in the 1970s. In 1982 Mandelbrot presented a formalization, proposing fractals as an alternative tool to Euclidean geometry (Reusens, 1994).¹ Fractal theory for signal coding and, in particular, IFS (iterated function systems) was finally proposed by Hutchinson, Barnsley, and others (see Barnsley, 1988a,b; Barnsley and Demko, 1985; Barnsley and Sloan, 1988; Barnsley et al., 1986, 1989). Very impressive examples of natural objects were encoded with this simple formulation. Nonetheless, the approach was limited by not being automatic. Arnaud Jacquin (1989, 1990a,b, 1992) had an intuition as simple as it was clever: natural objects are piecewise self-similar. In practice, an image was coded blockwise via PIFS (partitioned IFS). Since his strategy was completely automatic, he definitively made fractal encoding one of the best candidates for coding real images.
¹ Mandelbrot called these mathematical objects fractals.
This article surveys fractal encoding research and is organized as follows. The first section introduces the fractal transform to readers who are not familiar with it. After some preliminary mathematical notions about metric spaces and contractive mappings (Kolmogorov and Fomin, 1957), the space of fractals (Barnsley, 1988) and Jacquin's proposal (1989, 1990a,b, 1992) are presented. Readers already aware of this topic can skip this section and proceed to Section III, which reviews improved strategies for "pure" fractal coding. This survey includes two fundamental reviews (Jacquin, 1993; Wohlberg and de Jager, 1999) and also refers to two books (Fisher, 1995; Lu, 1997). Finally, the last section focuses on hybrid approaches, in which fractal encoding takes advantage of other coding techniques. Because of space limitations, this section is more technical and assumes that the reader already knows the techniques involved.

Before beginning this tour of the fractal universe, it should be noted that the fractal coding literature is very extensive. Some approaches may not be considered or may receive only minimal mention. The aim is to provide as clear an explanation as possible, focusing on the main properties of fractals and pointing out the key issues of fractal improvement. Moreover, the large number of papers makes it impossible to include contributions concerning denoising, parallelization, very large-scale integration (VLSI) architectures, one-dimensional (1D) signals, binary images, and color images and videos.

II. Fractals Catch Natural Information

What makes the fractal encoding philosophy so impressive is its ability to catch a very difficult scene with a few simple functions, w_i(z) = a_i z + b_i. In fact, the w_i are able to converge to the encoded image independent of their starting point. An example is shown in Figure 2, where the decoded image is Lena (see Figure 1), regardless of the starting point. This aspect leads the reader to think that this type of encoding is able to capture the information of nature. This section shows how such an encoding is reached, outlining its most important aspects. Thus, after a few mathematical concepts, Jacquin's proposal for fractal encoding is explained.

A. Some Mathematical Notions

Let us start with a short review of some basic notions concerning metric spaces. Readers who are already familiar with these concepts can easily skip this section.
Figure 1. A typical test image: the 512 × 512, 8-bit Lena image.
Figure 2. Contractive functions (w_1, . . . , w_N) encoding the Lena image always converge to it, independent of the starting image.
A metric d on a generic set S is a real and positive function fulfilling, ∀x, y, z ∈ S, the following constraints:

1. d(x, y) = 0 ⇔ x = y;
2. d(x, y) = d(y, x);
3. d(x, y) ≤ d(x, z) + d(z, y).
The couple (S, d) is called a metric space, and several definitions can be given in it, such as open set, neighborhood, and so on (Kolmogorov and Fomin, 1957). For the sake of brevity, we omit them.

Example 1. An example of a metric space is the set of real numbers with the distance d(x, y) = |x − y|; this metric space is denoted ℝ¹.

Let x_n be a sequence in a metric space (S, d). It converges to the element x ∈ S if lim_{n→∞} d(x_n, x) = 0; that is, ∀ε > 0 ∃n_ε ∈ ℕ such that d(x_n, x) < ε ∀n > n_ε. We can then write lim_{n→∞} x_n = x. If there exists an element in S to which x_n converges, then the sequence is convergent. It is also possible to define a Cauchy sequence x_n in (S, d), that is, one such that ∀ε > 0 ∃n_ε ∈ ℕ such that d(x_n, x_m) < ε ∀n, m > n_ε. Using the third property of d, it can easily be shown that each convergent sequence is a Cauchy sequence, while the converse is not true. A metric space is called complete if each Cauchy sequence in (S, d) is convergent.

At this point, it is possible to define a mapping w : S → S on this space and its classical properties such as continuity, linearity, boundedness, and so on (for a more complete study, see Kolmogorov and Fomin, 1957). From our point of view, the most important property is contractiveness, without which fractal encoding would not exist. A mapping w : S → S is contractive with contraction factor δ ∈ [0, 1) if

d(w(x), w(y)) ≤ δ d(x, y)  ∀x, y ∈ S.

Example 2. An example of a contractive mapping in ℝ¹ is w(x) = 0.2x + 1. In fact, ∀x, y ∈ ℝ¹ we have

d(w(x), w(y)) = |w(x) − w(y)| = |0.2x + 1 − 0.2y − 1| = 0.2 |x − y| = 0.2 d(x, y).

Its contraction factor is δ = 0.2 < 1.
It can easily be proved that each contractive mapping is continuous. The nth iterate w^n(x) of w is defined as

w^n(x) = w(w^{n−1}(x)) if n > 0,    w^0(x) = x,    ∀x ∈ S.

In practice, the map w is applied n times to the element x. The fixed point x* ∈ S of w is such that w(x*) = x*, and it is an invariant point for w.

Example 3. It is trivial to prove that the fixed point of the function in Example 2 is x* = 1.25.

It is now possible to present the Banach theorem:

Theorem 1 (Contraction Mapping). Let (S, d) be a complete metric space and w : S → S a contraction on it. Then there exists one and only one fixed point x* of w. Moreover, ∀x₀ ∈ S, the sequence {w^n(x₀), n = 0, 1, . . .} converges to x*, i.e., lim_{n→∞} w^n(x₀) = x*.

This is a fundamental theorem for fractal encoding, as shown later. It states that each contractive map converges to one and only one point, its fixed point, independent of the initial point. It is also fundamental to highlight that Theorem 1 does not give a constructive rule for computing the map w once its fixed point x* is known. We will see that this gap in the theory weighs heavily on fractal encoding, since it represents one of its major drawbacks. Luckily, a corollary of the theorem helps to overcome this problem.

Corollary 1. Let x* = lim_{n→∞} w^n(x₀). Then ∀x ∈ S it is possible to write

d(x*, x) ≤ (1 / (1 − δ)) d(x, w(x)).

In practice, the distance between a generic point and the fixed point of w is bounded by the distance between that same point and its image through the mapping w. The encoding phase is based on this seemingly simple procedure. Nonetheless, it becomes computationally expensive if performed many times.
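The theorem and its corollary are easy to check numerically. A minimal sketch, using the contraction of Example 2, iterates from an arbitrary starting point and prints the bound of Corollary 1 at each step.

```python
def iterate_contraction(w, x0, n, delta):
    """Iterate a contraction w from x0, printing the Corollary 1 bound."""
    x = x0
    for k in range(n):
        bound = (1.0 / (1.0 - delta)) * abs(x - w(x))  # d(x*, x) <= bound
        print(f"n={k:2d}  x={x:12.6f}  |x* - x| <= {bound:.6f}")
        x = w(x)
    return x

w = lambda x: 0.2 * x + 1.0   # contraction of Example 2, delta = 0.2
iterate_contraction(w, x0=100.0, n=8, delta=0.2)  # converges to x* = 1.25
```

For this linear map the bound is attained exactly, and eight iterations already bring the orbit within a few 10⁻⁴ of x* = 1.25, regardless of the starting point.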
generalized for this space, and an extended version of the Banach theorem, with its implications, is then given. A subset $I \subseteq S$ of a metric space (S, d) is compact if each sequence $\{x_n\} \subseteq I$ contains a subsequence convergent in I. It is then possible to define the set H(S) as follows:

$$H(S) = \{I \subseteq S \mid I \text{ is compact and } I \neq \emptyset\}.$$
We must now define a proper distance on the set H(S): the Hausdorff distance. Let (S, d) be a metric space, $x \in S$, and $I \in H(S)$. The distance between the point x and the set I can be defined as

$$d(x, I) = \min\{d(x, y) \mid y \in I\}.$$

Let us also define the distance between a subset $J \in H(S)$ and I as

$$d(J, I) = \max\{d(x, I) \mid x \in J\}.$$

In general, $d(I, J) \neq d(J, I)$; thus the second property of a metric is not satisfied. On the contrary, the function $h_d: H(S) \times H(S) \to \mathbb{R}$ such that

$$h_d(I, J) = \max\{d(I, J), d(J, I)\}$$

is positive and, above all, is a metric; it is called the Hausdorff distance. Hence $(H(S), h_d)$ is a complete metric space (the proofs are omitted and can be found in Barnsley, 1988) and is called the space of fractals. Starting from the definition of this particular space, additional effort must first be devoted to generalizing the definition of a map from the space (S, d) to $(H(S), h_d)$, and then to introducing IFS. The following theorem achieves the first goal.

Theorem 2. Let $w: S \to S$ be a contractive mapping on (S, d) with contractivity factor δ. Then the mapping $w: H(S) \to H(S)$ such that

$$w(I) = \{w(x) \mid x \in I\} \quad \forall I \in H(S)$$

is contractive in the space of fractals, with the same contractivity factor.

Hence, a simple contractive function on (S, d) generates a contractive mapping that acts on sets and is therefore useful for the space of fractals. Nonetheless, it is possible to go further and define more complicated contractive mappings on the space of fractals.

Theorem 3. Let $\{w_i,\ i = 1, \ldots, N\}$ be a finite number N of contractive mappings in the space of fractals with contractivity factors $\{\delta_i,\ i = 1, \ldots, N\}$. Then the mapping $W: H(S) \to H(S)$ defined as
$$W(I) = \bigcup_{i=1}^{N} w_i(I), \quad \forall I \in H(S)$$
is a contractive mapping on the space of fractals whose contractivity factor is $\delta = \max\{\delta_i,\ i = 1, \ldots, N\}$.

We are now able to define and better understand the concept of an IFS: an IFS is a finite set of contractive mappings $\{w_i,\ i = 1, \ldots, N\}$ defined on a complete metric space (S, d), having contractivity factors $\{\delta_i,\ i = 1, \ldots, N\}$. It is denoted by $\{S; w_1, w_2, \ldots, w_N\}$, assuming $\delta = \max\{\delta_i,\ i = 1, \ldots, N\}$ as its contractivity factor. Hence, an IFS is a collection of contractive mappings and, by the last theorem, is itself a contractive mapping on the space of fractals. Therefore the Banach theorem holds again: one and only one fixed point can be found for an IFS in the space of fractals. The proof stems from the above results, so it can be written as:

$$I^* = W(I^*) = \bigcup_{i=1}^{N} w_i(I^*) = \lim_{n\to\infty} W^n(I) \quad \forall I \in H(S).$$
Readers should note that the fixed point is now a set, since the elements of the space of fractals are sets. $I^*$ is then naturally called the attractor of the IFS.

Example 4. The effectiveness of this theory can be seen in the coding of typical fractal images. An example is shown in Figure 3 (the Sierpinski triangle). It can be proved that it is a compact set in $\mathbb{R}^2$ and, more precisely, it can
Figure 3. A famous fractal image: the triangle of Sierpinski.
be written as a characteristic function $f: \mathbb{R}^2 \to \{0, 1\}$. Moreover, it is the attractor of the following IFS:

$$w_1(x) = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad
w_2(x) = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 50 \end{pmatrix}, \quad
w_3(x) = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 50 \\ 50 \end{pmatrix}.$$
Their contraction factors are all equal to 0.5, while their union is $W: H(\mathbb{R}^2) \to H(\mathbb{R}^2)$ such that $W(I) = \bigcup_{i=1}^{3} w_i(I)$, $\forall I \in H(\mathbb{R}^2)$. Obviously, the contractivity factor of this new contraction is still 0.5. In practice, regardless of the starting point, by iterating the contraction W n times we obtain the Sierpinski triangle. It is necessary to stress a point here. Besides all the positive results achieved in the previous section, the same big inverse problem still holds: given a set $I \in H(S)$, how does one compute an IFS that converges to it? So far, no direct method exists for solving this problem. The only known algorithm consists of:

1. Taking an IFS.
2. Generating its attractor $I^*$ via $\lim_{n\to\infty} W^n(I)$, $\forall I \in H(S)$.
3. Computing the distance $h_d(I, I^*)$.
4. If $h_d(I, I^*) < \epsilon$, where ε represents the allowed error, stopping; otherwise going back to Step 1.
It is evident that the previous algorithm provides only an approximation of the solution, but it coarsely represents what is done in fractal encoding. This problem is smoothed by the collage theorem, which is the counterpart of Corollary 1:

Theorem 4 (Collage). Let $\{S; w_1, w_2, \ldots, w_N\}$ be an IFS with attractor $I^*$ and contractivity factor δ. Then, for each $I \in (H(S), h_d)$:

$$h_d(I, I^*) \le \frac{1}{1-\delta}\, h_d\!\left(I, \bigcup_{i=1}^{N} w_i(I)\right).$$
In practice, this theorem acts on Step 2 of the aforementioned algorithm, allowing us to avoid the limit operation: a single step of the iterative scheme required by the limit is enough to assess the goodness of the candidate IFS.
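As an illustration (a Python sketch, not from the original text), the attractor of the IFS of Example 4 above can be generated with the "chaos game" variant, in which a randomly chosen map is applied to a single point at each step, rather than the set-valued iteration $W^n(I)$; the offsets below are those reconstructed for Example 4.

```python
import random

# The three maps of Example 4 (x -> 0.5*x + t_i), applied with the "chaos
# game": a randomly chosen w_i is applied to a single point at each step.
# The visited points pile up on the attractor, i.e., the Sierpinski
# triangle, regardless of the starting point (Banach theorem).

OFFSETS = [(1.0, 1.0), (1.0, 50.0), (50.0, 50.0)]

def apply_map(point, offset):
    # w_i(x) = 0.5 * x + t_i, contraction factor 0.5
    return (0.5 * point[0] + offset[0], 0.5 * point[1] + offset[1])

point = (0.0, 0.0)
cloud = []
for _ in range(20000):
    point = apply_map(point, random.choice(OFFSETS))
    cloud.append(point)
# 'cloud' now approximates the Sierpinski triangle attractor.
```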
C. Fractals for Digital Images: Jacquin's Proposal

At this point, readers may wonder whether there is an effective and automatic strategy for coding real images like the one in Figure 1. In 1989 Arnaud Jacquin answered this question positively. In fact, he was the first to make fractal encoding effective and, above all, automatic.² His proposal is presented in the following text. The working space is composed of digital gray-scale (8-bit) images, that is, functions $f: \mathbb{R}^2 \to \{0, \ldots, 255\}$. Thus, our space is composed of square images that can be seen as matrices whose elements belong to the interval [0, 255]: each element represents the luminance value of the image under study. In other words, we are assuming that a real-world scene is completely defined by the luminance component of the image, discarding the chrominance components that define its colors. Thus, denoting by M the space of all square matrices of size N × N and introducing the root mean square (rms) distance³

$$d(I, J) = \sqrt{\frac{1}{N^2}\sum_{i,j=1}^{N}\bigl(I(i,j) - J(i,j)\bigr)^2}, \quad \forall I, J \in M, \qquad (1)$$

we can build the space of digital images (M, d), which is a complete metric space. A generic image to be fractal encoded, like the one in Figure 1, can be considered the attractor of an IFS. The problem then consists of solving for this IFS. The great conceptual step forward made by Jacquin (1993) was to consider real scenes as composed of "copies of either themselves or parts of themselves." Jacquin argued that it is easier to code image subparts than the image as a whole. In other words, it is easier to find subparts that are similar to each other than similarities between the whole image and its subparts. An example is shown in Figure 4. From this observation, he derived an automatic strategy for fractal encoding of real images by introducing PIFS. The problem of encoding can then be stated as follows: starting from a digital image $I \in (M, d)$, we need to build a contraction $W: (M, d) \to (M, d)$ with attractor $I^*$ such that:
2 More recently, an attempt at automating the manual algorithm of Barnsley has been proposed by Wadströmer (2003).
3 The rms distance now replaces the Hausdorff distance when images (or regions of them) have the same size. Moreover, rms is tied to the objective quality measures usually used in signal and image coding, such as the SNR, defined as $\mathrm{SNR} = 10\log_{10}\bigl(\sum_{i=1}^{N}\sum_{j=1}^{N} I(i,j)^2 / \sum_{i=1}^{N}\sum_{j=1}^{N} (I(i,j) - \hat I(i,j))^2\bigr)$, or the PSNR, defined as $\mathrm{PSNR} = 10\log_{10}\bigl(255^2 / \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}(I(i,j) - \hat I(i,j))^2\bigr)$, where $I$ is the original image and $\hat I$ the decoded one. Readers interested in a metric closer to the human visual system can also see Li et al. (2002).
Figure 4. Two similar regions in the Lena image.
- $d(I, I^*)$ is minimal, guaranteeing good quality;
- W is composed of few parameters, so that its coding is as efficient as possible, guaranteeing a good compression ratio.
Then W will contain the image information, since it is a union of contractions $w_i$ mapping image subparts into others. In other words, each contraction maps a domain block into a range block. Even though both domains and ranges may assume a generic shape, Jacquin proposed the simplest one: square blocks, as depicted in Figure 4. In practice, the input image is split into K range blocks $R_1, R_2, \ldots, R_K$ of size $b \times b$. They constitute a partition of I, i.e., $R_i \cap R_j = \emptyset$ $\forall i \neq j$, whereas $I = \bigcup_{i=1}^{K} R_i$. On the contrary, the domain pool $D = \{D_1, D_2, \ldots, D_T\}$ is composed of T square blocks of size $2b \times 2b$. The peculiarity of domain blocks is that they can lie anywhere inside the image, without any constraint. Let us now define the contractions $w_i$. Keeping in mind that a digital image can be written as $z = f(x, y)$, with x and y the spatial coordinates and z the corresponding luminance value, the $w_i$ will be defined on $\mathbb{R}^3$. Their explicit form is:

$$w_i\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} a_i & b_i & 0 \\ c_i & d_i & 0 \\ 0 & 0 & \alpha_i \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} e_i \\ f_i \\ \beta_i \end{pmatrix} \qquad (2)$$
As a matter of fact, the contraction above consists of two transformations, $w_{i,1}$ and $w_{i,2}$. The first one is called the geometric transformation. It acts on the spatial coordinates and its form is:

$$w_{i,1}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_i & b_i \\ c_i & d_i \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} e_i \\ f_i \end{pmatrix} \qquad (3)$$

The second one is called massic; it acts on the image gray levels and can be written as:

$$w_{i,2}(z) = \alpha_i z + \beta_i. \qquad (4)$$
As shown in the following, the parameters $\alpha_i$ and $\beta_i$ play a fundamental role in fractal encoding. They are called, respectively, the scaling and offset parameters. Both $w_{i,1}$ and $w_{i,2}$ are affine transformations. Starting from this basis, the fractal coding phase is simple. Given the range $R_i$, select the best domain $D_i$ among all possible ones $\{D_1, D_2, \ldots, D_T\}$, i.e., the one that gives the best approximation of $R_i$ through the contraction $w_i$ (to be estimated). Let us consider the choices made by Jacquin for managing this phase. First, domain blocks have size $2b \times 2b$, whereas range blocks have size $b \times b$. Domain blocks are therefore subsampled to $b \times b$ blocks by averaging: each element of the resized domain corresponds to the average of the corresponding four elements of the original domain. To maximize the probability of finding the best domain-range couple, the domain pool can be enlarged. This can be done through rotations and reflections of the domains by means of the following isometries (a code sketch of these operations follows the list):

- Identity
- Rotation through $-\pi/2$
- Rotation through $+\pi/2$
- Rotation through $+\pi$
- Reflection about the first diagonal
- Reflection about the second diagonal
- Reflection about the mid-vertical axis
- Reflection about the mid-horizontal axis
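A rough illustration (a Python sketch under the assumption that blocks are square numpy arrays of even size; function names are illustrative, not from the original):

```python
import numpy as np

def subsample(domain):
    """Average each 2x2 group of pixels of a 2b x 2b block into one pixel."""
    return (domain[0::2, 0::2] + domain[0::2, 1::2] +
            domain[1::2, 0::2] + domain[1::2, 1::2]) / 4.0

def isometries(block):
    """Return the eight isometries: identity, three rotations, four reflections."""
    return [block,
            np.rot90(block, 3),      # rotation through -pi/2
            np.rot90(block, 1),      # rotation through +pi/2
            np.rot90(block, 2),      # rotation through +pi
            block.T,                 # reflection about the first diagonal
            np.rot90(block, 2).T,    # reflection about the second diagonal
            np.fliplr(block),        # reflection about the mid-vertical axis
            np.flipud(block)]        # reflection about the mid-horizontal axis
```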
The isometries are performed by $w_{i,1}$ and lead to a larger domain pool $D_I$. It is now possible to show how to perform fractal encoding. For each range block $R_i$, we look for the best domain block $D_i$, i.e., the one minimizing the collage error $d(R_i, w_i(D_i))$ over all candidate domains and all admissible massic parameters [Eq. (5)].
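The core of this matching step can be sketched as follows (an illustrative Python fragment, not Jacquin's original implementation; quantization of the parameters is ignored): for a range R and a subsampled, same-size domain D, the scaling and offset minimizing the rms error are obtained by least squares.

```python
import numpy as np

def match(R, D):
    """Least-squares massic parameters and collage error for a range-domain pair."""
    r, d = R.astype(float).ravel(), D.astype(float).ravel()
    d0 = d - d.mean()
    var = np.dot(d0, d0)
    alpha = np.dot(d0, r - r.mean()) / var if var > 0 else 0.0
    alpha = float(np.clip(alpha, -1.0, 1.0))   # keep the massic map contractive
    beta = r.mean() - alpha * d.mean()
    rms = np.sqrt(np.mean((alpha * d + beta - r) ** 2))
    return alpha, beta, rms                     # rms is the collage error
```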
In practice, a square
4 These two schemes use range block sizes of 4 × 4 and 8 × 8. In particular, the first one concerns the Bath fractal transform.
5 Most of the available fractal codes use a three- or four-level quad-tree as the default partition, even though more levels are also generally available.
Figure 8. In the left image, the range shape is regular and nonadaptive to local image information: the blocks partitioning the image are square, nonoverlapping, and of the same size. The right image shows a typical quad-tree generated by a fractal encoding.
Figure 9. An alternative way of splitting the input image for generating range blocks. A block (in this case, a square) is horizontally or vertically split. Resulting blocks have a preferential direction.
range block can be split up to k times, depending on the image complexity (see Figure 8, right). From a coding-efficiency point of view, the quad-tree is very attractive: it requires one bit per node to indicate whether the node is terminal. The split decision function is usually the collage error: if it is not under a given threshold, reducing the size of the range block under study increases the probability of finding a suitable domain block for it. This is due to the correlation among spatially close pixel values. Many papers concerning fractal encoding use this standard formulation (e.g., Fisher, 1995; Lu, 1997). However, another tree-based partitioning scheme is possible: the horizontal-vertical (HV) scheme proposed by Fisher and Menlove (1995) (Figure 9). The resulting blocks lie along a preferential direction. They achieve good objective and subjective results, using edge locations as the split decision function. In addition, HV with splitting based on the DC component (i.e., the average intensity value of the block) has been used by Saupe et al. (1998).
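A minimal sketch of such a top-down quad-tree split (Python; the variance-threshold split decision and all names are illustrative stand-ins for the criteria discussed in the text):

```python
import numpy as np

def should_split(block, threshold=100.0):
    # Stand-in split decision function (collage error, variance, entropy, ...)
    return np.var(block) > threshold

def quadtree(image, x, y, size, min_size, ranges):
    """Recursively split the square [y:y+size, x:x+size] into range blocks."""
    block = image[y:y + size, x:x + size]
    if size <= min_size or not should_split(block):
        ranges.append((x, y, size))        # terminal node: one range block
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(image, x + dx, y + dy, h, min_size, ranges)

# Usage: ranges = []; quadtree(img, 0, 0, 64, 4, ranges)
```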
The above-mentioned partition schemes are usually classified as top-down methods, sometimes called coarse-to-fine methods. They start from larger regions, splitting them according to some optimality criterion. Nonetheless, various bottom-up strategies (also called fine-to-coarse) have been proposed, in which smaller blocks are merged to form more complicated shapes; these approaches differ in the way regions are merged. However, there is an unavoidable trade-off between the generality of the range block shape and the memory required to encode it; that is, the coding advantages of quad-tree partitioning can no longer be exploited. An example of a bottom-up strategy applied to the quad-tree (QD) is quad-tree recomposition (QR), as used by Jackson et al. (1997). The final shape of the range blocks is still square, as in classical QD, but the initial partition is composed of squares at the last level; that is, it starts from the smallest squares available in the adopted partition scheme. The underlying idea is to eliminate the QD inefficiency caused by the large number of range-domain comparisons that fail the collage-error test. In practice, QD works in reverse: four b × b square range blocks are recombined into the corresponding 2b × 2b square range block if the collage-error criterion is satisfied, as depicted in Figure 10. Jackson et al. (1997) prove, both mathematically and empirically, that their proposal leads to a more efficient encoding. Nonetheless, QR still hides an intrinsic inefficiency: it may happen that a large block classified as quite flat (called a "gray shade" block at a prefixed tolerance) has one or more quadrants that are non-flat blocks (called "edge blocks"). In this case, QR introduces a computational overhead. In practice, the inefficiency of QR increases as the complexity of the image decreases (see Mahmoud and Jackson, 1999, for a more in-depth investigation). That is why Mahmoud and Jackson improved their previous QR proposal, introducing QDR (quad-tree decomposition-recomposition). This is a two-step strategy. In the first step, a QD is performed; this phase eliminates the quite flat blocks that are tedious for QR. In the second phase, QR processes the remaining, non-flat blocks. QDR improves QR, which already improved QD.
Figure 10. A bottom-up strategy: quad-tree recomposition. Small square blocks are brought together to make larger (still square) blocks.
Figure 11. The criterion on which the merging of Tanimoto et al. (1996) is based. The top row shows the eight possible seed blocks. The bottom row shows the corresponding blocks in which the gray sub-block is the candidate to be merged.
performed in a two-step process. The first step accounts for the similarity of their luminance variance, as in Fisher (1995). The second relies on a classification of range blocks into shade blocks and non-shade blocks; it tries to further merge the non-shade blocks not merged in the first step. Nonetheless, not all shapes are allowed: an a priori defined criterion is followed (see Figure 11 and Tanimoto et al., 1996, for details). The range block shape can be effectively encoded by knowing its composing blocks (Elias, 1975). As a result, a slight improvement (0.25 bpp) on the Lena image with respect to a fixed-shape partition is achieved, even though some distortion appears in the decoded image. On the contrary, the second approach (Chang et al., 1997) uses a more classical and well-known split-and-merge algorithm (Gonzalez and Woods, 2001). Here, the shapes of the achieved range blocks are very general and can then be encoded by a QD segmented chain code (Ebrahimi, 1994; Kaneko and Okudaira, 1985), showing a compression ratio of contours of about 70%. This model shows better performance than various other approaches, including Fisher and Lawrence (1992), Lu and Yew (1994), and the approach described next. Thomas and Deravi (1995), again, use some smallest squares as seeds to expand.⁶ In particular, three variants are presented:

- Starting from a seed range, merging of blocks in each of the four possible directions (north, south, east, and west) is tried. The corresponding domain is extended in the same direction. The expansion is performed when the collage error using the same seed's parameters is fulfilled.
- The same process as above, except that the parameters are recomputed. Moreover, if the extension fails, the candidate domain can also take different directions. The new transformation parameters are then checked against the other range blocks already belonging to the global merged region to be enlarged.
- Each square is assigned to the winner seed in a competition. In practice, a range already coded by a given seed expansion can be better coded by another adjacent seed. Then, at the end of coding, a block will be caught by the best seed in the collage-error sense.

6 See also Ruan and Nge (1999) for a similar approach using contractive second-order polynomial functions.
An alternative strategy, in which the merging phase is no longer deterministic, has been proposed by Saupe and Ruhl (1996). Starting from a configuration (i.e., the input image split into the smallest squares, along with their fractal parameters), new configurations are generated: randomly selected range pairs are merged to give a new range. After a prefixed number of iterations, the best configurations are retained. For each population, which is composed of Np configurations having the same number of ranges, the best fractal encoding is computed. Some approaches based on the derivative chain code (DCC) (Gonzalez and Woods, 2001) are proposed to compress the range shape. This approach outperforms traditional QD partitioning schemes and HV-based variants. In particular, a proposal for speeding up this method is also presented by Saupe and Ruhl (1996); it is based on the nearest-neighbor technique, which will be discussed later. The approach of Thomas and Deravi has been improved, first by Breazu and Toderean (1998) and then by Hartenstein et al. (2000) with their region-based fractal coder (RBFC). The underlying idea is very simple and effective. The input image is classically split into a uniform initial partition of range blocks. Two range blocks can be merged if at least one of their associated 2NB domains (NB for each range), suitably extended, fulfills the minimum allowable collage error. The idea, then, is "to remember" the best NB domains for each range. This saves the considerable computational effort usually required in similar approaches, outperforming them in both compression and quality. This article can be considered the result of investigations started some years before (Hartenstein and Saupe, 1998; Ruhl et al., 1997; Saupe and Ruhl, 1996), and it proves that the path of generalizing the range shape can effectively be taken. In this approach, shapes are coded via either a chain code (for details, see Freeman, 1961; Saupe and Ruhl, 1996) or region edge maps (see Ausbeck, 1998; Tate, 1992). So far, only squares, rectangles, and shapes produced by their combination have been presented. Nevertheless, they are not the only shapes used in the partitioning phase of fractal encoding. Various approaches based on triangles have been proposed, with fair results. Usually the image is split into two main triangles, as in Figure 12. Then, at each stage, the partition is recursively refined by a one-side or a three-side split (Novak, 1993; Peitgen et al., 1992; Figure 13).
Figure 12. Triangular partition. The input image is preliminarily split into two main triangles.
Figure 13. Two possible splits of a starting triangle: left, one-side; right, three-side.
Delaunay triangulation has also been used for image splitting (Bowyer, 1981; Watson, 1981). It is the core of the approach proposed by Davoine et al. (1996). Any triangle of a Delaunay partition is characterized by the fact that its circumscribing circle does not contain any other vertex (Figure 14). This peculiarity yields triangles that maximize the minimum interior angle (with respect to other triangle-based partitions with the same number of vertices). This avoids very thin triangles and, at the same time, mitigates the blocking effect, as will be shown in the next section. The adopted strategy is as follows. Starting from a triangle-based partitioning of the input image (whose vertices work as seeds), a classical split-and-merge technique is exploited. Split: the initial partition is refined by splitting the triangles that are nonhomogeneous. Merge: adjacent homogeneous triangles are merged, eliminating useless vertices. The homogeneity criterion is based on the intensity mean and variance. A significant variant of this approach has been presented by Davoine et al. (1995), wherein a mixed triangle-quadrilateral partition is used. In this case, the
Figure 14. A triangle belonging to a Delaunay partition fulfils the property that its circumscribing circle does not contain other vertices. In other words, the shaded area cannot contain vertices.
Figure 15. Two Delaunay triangles cannot be merged if they do not form a convex quadrilateral (top right) while they can otherwise (bottom right).
merging phase consists of bringing together two Delaunay triangles if they form a convex quadrilateral, as shown in Figure 15. However, only a small rate-distortion improvement is achieved at high compression ratios, while the mitigation of blocking artifacts is inherited. Nevertheless, triangles are quite simple polygons. At this point, readers may ask: is it possible to further complicate the shapes? The answer is provided by the approach proposed by Reusens (1994), where polygonal shapes are generated in a clever way. Exploiting the well-known segmentation algorithm proposed by Wu and Yao (1991), a four-direction (0°, 45°, 90°, and 135°) splitting is allowed. In particular, starting from a polygon, it can be split into two shapes. The unknown variables of this strategy are the direction (among the four above) and the width of the strip, called the offset (Figure 16). The goal is to obtain two polygons, each with intensity as uniform as possible. The resulting shapes have a form suitable to be coded without too many additional bits
Figure 16. A generic polygonal shape (for instance, the depicted rectangle) can always be refined via a split that involves one of the four canonical directions (in this case, the direction of the gray strip) and an offset (the width of the strip).
compared with QD. The rate-distortion results are not amazing with respect to QD, but block artifacts are reduced. A hybrid partitioning scheme (i.e., involving squares, rectangles, and triangles) has been proposed by Kuo et al. (1999). The underlying idea is very insightful. Each block of the image either has one or more edges, or it is quite flat and easy to encode. Considering only edge blocks and limiting the search for the best domain to a region centered on the target range, edges passing through the range also pass through the candidate domains. Then, the frequency of the isometries (only those concerning reflections) can be used to efficiently split a block into one of the three shape types above (see Kuo et al., 1999, for details). In fact, each of the four reflection isometries is tied to an angle along which the edge lies. This strategy gains a bit-rate saving of 11% over classical fractal coding, at the same quality and without any additional computational effort.

b. Partitions Allowing Overlap: Ranges Cover the Image. An interesting generalization of how to generate ranges consists of using a covering rather than a partition. In other words, the constraint

$$R_i \cap R_j = \emptyset \quad \forall i \neq j$$
is relaxed, where $R_i$ and $R_j$ are two generic ranges of the considered image. Covering is mainly oriented to eliminating blocking artifacts (as in Figure 7), so it may be considered an alternative to the partition schemes that attempt to generalize the range block shape. The first approach that we present was proposed by Ho and Cham (1997): LPIFS (lapped PIFS). Adjacent range blocks overlap each other. Then, each pixel of the input image belongs to four range blocks, except for the pixels on the image boundary. The corresponding four affine transformation contributions are weighted by a suitable matrix of weights whose goal is to smooth blocking artifacts. Even
though this matrix is built according to previous experience with coding transforms (see Ho and Cham, 1997; Malvar and Staelin, 1989, for details), some blurring effects remain. This aliasing effect is effectively eliminated by simultaneously minimizing the global collage distance of all the involved range blocks. It is worth noting that the simplest choice of the weight matrix coefficients is the one where each contractive function contributes with 1/4. In this case, it corresponds to the approach previously proposed by Reusens (1994). In particular, here the classical QD partitioning is modified so that each square at each level of splitting (four-level QD) is enlarged by 2n pixels, where n obviously depends on the square size. It is then intuitive that the objective quality does not change from that achieved by partition-based approaches, while blocking is strongly reduced. Moreover, both aforementioned approaches prove that the corresponding transformations are still contractive, yielding a unique fixed point (i.e., the decoded image is unique). A recent approach for enlarging the search of fractal shapes in a digital image has been proposed by Belloulata and Konrad (2002). Here, coding is not based on blocks, as in classical fractal schemes, but on regions. In agreement with the MPEG4 guideline (1997), coding relies on a semantic segmentation that is performed off-line (i.e., before coding). A region of interest can lie completely in a square block (called an "interior block") or not (a "boundary block"). In the latter case, a padding operation is required; it is performed following the MPEG4 guidelines (see Belloulata and Konrad, 2002, for more details). This paper is a clever attempt⁷ to extend fractal coding to semantic structures. As already outlined, it is difficult to weigh the pros and cons of the presented strategies because of the interconnections among the various parts of fractal schemes. In general, it seems that the more complicated the partitioning scheme, the better the reached quality (see, for instance, Fisher et al., 1992). Nonetheless, more complicated shapes are usually more difficult to encode. Moreover, non-right-angled shapes introduce an intrinsic difficulty in the phase involving the match for the best range-domain couple (Wohlberg and de Jager, 1999). That is why most of the available fractal codes use square-based partitions.

2. Split Decision Functions

We have seen how to shape range blocks. However, the criterion on which range splitting is based also plays a fundamental role. The split decision function strongly influences encoder performance in terms of both rate distortion and coding time.
Experimental results are achieved on quite simple images.
The wide diffusion of QD in fractal encoding has been the platform on which to base splitting strategies more sophisticated than the classical rms. The most natural generalization of the latter is to make it adaptive (Furao and Hasegawa, 2004); in other words, the required quality decreases as the range size is reduced during splitting. If $T_n$ is the threshold adopted at the nth QD level, then $T_{n+1} = 2T_n + 1$. Adaptivity allows us to gain better quality, as does another variant proposed by Saupe and Jacob (1997), where QD is still used but the classical collage error is replaced by the $N^2$-fold variance of the block R (i.e., $N^2\,\mathrm{var}(R)$):

$$S(R) = \sum_{i,j=1}^{N} \tilde R(i,j)^2, \qquad (6)$$

where $\tilde R$ denotes the block R without its DC component. This approach is a slight variant of using the simple variance of the range block, previously proposed by Fisher (1995). Nonetheless, Saupe and Jacob (1997) empirically show that Eq. (6) achieves comparable results in terms of PSNR while speeding up both the search and the partitioning. As a matter of fact, the $N^2$-fold variance performs better than the simple variance and rms, since it is intrinsically adaptive: it accounts for the block size. It can be shown that the $N^2$-fold variance is equivalent to using the variance with an adaptive threshold. This fact has been effectively exploited by Distasi et al. (1998), where entropy has been proposed as the split decision function:

$$\hat S(R_i) = -\sum_{k=0}^{255} f_k \log(f_k), \qquad (7)$$
where $f_k$ is the relative frequency (normalized number of occurrences) of gray level k in the block $R_i$. An objective comparison, performed with the same fractal encoder, shows that:

- Any split decision function works better in its adaptive form, i.e., with an adaptive threshold (as already observed by Chang et al., 1997).
- Adaptivity improves the SNR while strongly reducing block artifacts.
- Adaptive rms shows the best rate-distortion curve.
- Apart from a different visual distortion in the encoded image (see Distasi et al., 1998, for details), both entropy and variance have the drawback that the best threshold must be known in advance.

Both split functions of Eqs. (6) and (7) are sketched in code below.
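A minimal sketch (Python; illustrative only, with $f_k$ computed as relative frequencies and names not taken from the original works):

```python
import numpy as np

def n2fold_variance(R):
    # Eq. (6): sum of squares of the DC-removed block, equal to N^2 * var(R).
    r = R.astype(float)
    return float(np.sum((r - r.mean()) ** 2))

def entropy(R):
    # Eq. (7): entropy of the gray-level distribution of an 8-bit block.
    counts = np.bincount(R.astype(np.uint8).ravel(), minlength=256)
    f = counts[counts > 0] / counts.sum()
    return float(-np.sum(f * np.log(f)))
```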
Various criteria for performing the matching between a range $R_k$ and a domain $D_l$ have been proposed. Elsherif et al. (2000) compared four of them (the three non-rms criteria are sketched in code after the numbered list below):

- The classical RMSE.
- The mean absolute difference (MAD), well known in video coding (Shi and Sun, 2000):

$$\mathrm{MAD} = \frac{1}{N^2}\sum_{i,j=1}^{N} |R_k(i,j) - D_l(i,j)|.$$

- The linear correlation coefficient (LCC):

$$\mathrm{LCC} = \frac{\sum_{i,j=1}^{N} \tilde R_k(i,j)\,\tilde D_l(i,j)}{\sqrt{\sum_{i,j=1}^{N} \tilde R_k(i,j)^2 \;\sum_{i,j=1}^{N} \tilde D_l(i,j)^2}},$$

with $\tilde R_k$ and $\tilde D_l$, respectively, $R_k$ and $D_l$ without their DC component.
- The pixel difference classification (PDC), defined as:

$$\mathrm{PDC} = \sum_{i,j=1}^{N} \mathrm{ord}\bigl(|R_k(i,j) - D_l(i,j)| \le T\bigr),$$

with T a suitable threshold, and where $\mathrm{ord}(\cdot)$ is equal to 1 if its argument is true and 0 otherwise.

The rate-distortion results seem to indicate LCC as the best solution for an effective range-domain matching (Elsherif et al., 2000). However, the aforementioned approaches do not provide direct control of both rate and complexity. Thus, Franco and Malah (2001) propose a new strategy called adaptive CC (collage error computational complexity). It is a two-step process, as follows:

1. The input image is partitioned into square blocks of the largest possible size. For each of these blocks, the corresponding gain is computed; the gain is the ratio between the reduction in collage error obtained by splitting the block into its four sub-blocks, and the relative computational effort required for the best-domain search for the resulting sub-blocks. A list ordered by decreasing block gain is then produced. This phase is recursive and stops when the desired complexity is achieved.
2. Using the gains of the nonterminal nodes (that is, blocks) of the tree achieved in the first phase, the sub-trees with the desired rate are determined. In addition, whenever a higher rate is required, some nodes can also be added to the aforementioned tree.
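For concreteness, MAD, LCC, and PDC can be sketched as follows (Python; an illustration under the assumption of same-size numpy blocks, not the authors' code):

```python
import numpy as np

def mad(R, D):
    return float(np.mean(np.abs(R.astype(float) - D.astype(float))))

def lcc(R, D):
    r = R.astype(float) - R.mean()       # DC-removed blocks
    d = D.astype(float) - D.mean()
    denom = np.sqrt(np.sum(r * r) * np.sum(d * d))
    return float(np.sum(r * d) / denom) if denom > 0 else 0.0

def pdc(R, D, T=10):
    # Number of pixel pairs whose absolute difference is within threshold T.
    return int(np.sum(np.abs(R.astype(float) - D.astype(float)) <= T))
```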
This strategy has been shown to outperform threshold-based approaches both in a rate-distortion and in a computational sense (Franco and Malah, 2001). An interesting step forward with regard to image splitting was made by Cai and Hirobe (1999). They use a variational approach for optimizing both quality and rate:

$$J(P) = \mathrm{Distortion}(P) + \lambda\,\mathrm{Rate}(P),$$

where J is the cost function to be minimized, P is the (unknown) partition, Distortion is the distortion produced by P, Rate is the cost of P in bits, and λ is a weight balancing the two energy terms. Under the assumption that each image block contributes to the collage error via at most one interesting and connected edge lying in it, J can be suitably minimized. Without adding details, which can be found in Cai and Hirobe (1999), it is interesting to note that, using QD splitting and merging, the distribution and size of the blocks differ from those of Fisher et al. (1994). Edges are "better followed," yielding a better SNR (up to 2.5 dB), with a saving of both bits and block matches. On the other hand, the HV partitioning scheme of Saupe et al. (1998), as we have already shown, also associates with each node of the produced partition tree a rate accounting for the cost of performing the split under study. It thus outperforms S(R), taking advantage of BFOS, a technique for optimally pruning generated partition trees with an associated cost functional (see Breiman et al., 1984; Saupe et al., 1998, for more details).

3. Domain Block Location and Related Problems

Let us continue our tour of the fractal coding universe by examining in more depth how to build the domain pool and how to organize it for the best range-domain search. The strategies adopted at this point play a key role, since most of the computational effort is spent during this phase, the so-called collage coding (Saupe, 1995). It represents the main cause of the computational asymmetry between the fractal encoding and decoding phases. Since the best domain block for a given range block can be located anywhere in the image, the simplest choice is to perform an exhaustive search. However, this method cannot be followed because of the cardinality of the domain pool: for instance, the number of square subregions of any size of an N × N image is of order $N^3$. The problem remains intractable even if the spatial offsets most used in the literature are adopted, i.e., the distance between two adjacent candidate domains is set equal to half or to the whole size of the domain blocks (Fisher, 1995; Fisher et al., 1992; Jacquin, 1990b; Øien et al., 1991). The speed-up achieved using a spatial offset equal to D is $D^2$, which, again, does not solve the problem (Wein and Blake,
1996). To overcome this drawback, two different schools of thought have been followed in the literature:

- Reducing the quantity of information to manage (Saupe (1996) defined this as less search).
- Organizing such information in an effective way: faster search.
a. Reducing the Domain Pool. One way to reduce the domain pool is based on exploiting locality: some studies have empirically shown that the location of the best domain is generally close to that of the target range (Barthel and Voyé, 1994; Beaumont, 1990; Hürtgen and Stiller, 1993; Jacquin, 1993; Woolley and Monro, 1995). More precisely, the probability density function of the spatial offset between the best domain and the target range, computed on some test images, has a peak at zero. While this seems intuitive for quite regular images, it does not for textured and strongly irregular ones. Hence, some authors affirm the contrary (Fisher, 1995; Fisher et al., 1992; Frigaard et al., 1994). As often happens in real-world modeling, it is difficult to side with one of the two philosophies. In fact, a rigorous mathematical proof is missing, as are conditions under which the aforementioned conclusion could be considered a principle admitting no opposite result. On the other hand, all the researchers who have studied and still study how to organize large domain pools (the focus of the next part) do not intrinsically believe in locality. Following the locality hypothesis, the simplest choice for adaptively reducing the domain pool is the one adopted in Jacquin (1990b): domains must belong to a region containing the range block. More complicated strategies have also been adopted. Beaumont (1990) found that the distribution is Gaussian-like with its peak at zero; the trajectory traced by the center of the candidate domain is a spiral, as in Figure 17 (and similarly in Barthel and Voyé, 1994). The concept of locality may also be taken in a less strict sense, as in Hürtgen and Stiller (1993): the spiral covers the whole image, but with a step that widens as the search moves away from the range block center. In particular, this strategy loses just 0.3 dB using a suitable mask that considers only 12% of the domain pool corresponding to the exhaustive search. Domain pool reduction can also be achieved by exploiting the contractivity constraint (Yisong et al., 2002). In fact, denoting by $M_R$ and $M_D$ the difference between the largest and smallest gray-level values of, respectively, a given range R and its corresponding domain D, it is possible to prove, simply using the collage error and the contractivity constraint, that good candidate domains must fulfill $M_R \le M_D$ (He et al., 2004). This allows many domains to be discarded a priori, achieving considerable time savings without losing quality; a code sketch of this kick-out condition follows.
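A minimal sketch (Python; names illustrative, blocks assumed to be numpy arrays):

```python
# Kick-out by gray-level spread: a contractive massic map cannot expand
# the dynamic range, so a domain D whose spread M_D is smaller than the
# range spread M_R can be discarded a priori.

def spread(block):
    return float(block.max()) - float(block.min())

def candidate_domains(R, domain_pool):
    m_r = spread(R)
    return [D for D in domain_pool if m_r <= spread(D)]
```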
A more subtle concept of adaptive locality has also been used (Breazu and Toderean, 1998; Hartenstein et al., 2000; Wang and Hsieh, 2001). A speed-up is achieved by exploiting the correlation among close image blocks, that is, by reducing the search space as in Figure 18. In practice, for a given range R, the four just-encoded ranges R1, R2, R3, and R4 are considered, following a predetermined path such as row scanning: left-to-right and top-to-bottom. The search for the best domain for R is then performed in the set {D1, D2, D3, D4} of their respective best domains. If the search fails (i.e., the collage error is greater than the allowable one), the scheme proposed by Lee and Lee (1998) is performed. This scheme speeds up that of Lee and Lee by a factor of two; a code sketch follows.
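A minimal sketch of this neighbor-reuse shortcut (Python; 'full_search' stands for any fallback strategy, such as the Lee and Lee scheme, and all names are illustrative):

```python
import numpy as np

def collage_rms(R, D):
    # Least-squares collage error between same-size blocks (no quantization).
    r, d = R.astype(float).ravel(), D.astype(float).ravel()
    d0 = d - d.mean()
    var = np.dot(d0, d0)
    alpha = np.dot(d0, r - r.mean()) / var if var > 0 else 0.0
    beta = r.mean() - alpha * d.mean()
    return np.sqrt(np.mean((alpha * d + beta - r) ** 2))

def encode_range(R, neighbor_domains, full_search, max_error):
    # Try only the best domains D1..D4 of the four previously encoded ranges.
    scored = [(collage_rms(R, D), D) for D in neighbor_domains]
    if scored:
        err, D = min(scored, key=lambda t: t[0])
        if err <= max_error:
            return D                 # neighbor reuse succeeded
    return full_search(R)            # otherwise fall back to a wider search
```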
Figure 17. The solid black circle indicates the center of the target range. The center of the candidate domain starts from the center of the target range (black circle) and follows a spiral-shaped path until the best match is achieved.
Figure 18. The search for the best domain for the (gray) current range is limited to the domains associated with the four just-encoded ranges. The arrow indicates the range-encoding direction.
More recently, "less search" has been conceived in a slightly less adaptive way (Wu, 2000): it consists of regularly partitioning the image. Each block is considered as an independent image, and a PIFS is found for it. At the end, the union of the found PIFSs is the PIFS of the whole image. As the block size becomes smaller, the quality decreases while the compression increases; a size of 128 × 128 has been found to be a good trade-off between these behaviors. With regard to the Bath fractal transform (BFT), it benefits from zero searching, with a low encoding effort. However, its performance in coding high frequencies decreases quickly at low bit rates. Zhang and Yu (1999) therefore propose a solution based on simply considering one range for each domain. In practice, the input image is partitioned into nonoverlapping square blocks (the ranges); for each range, the domain whose center coincides with that of the range itself is considered. This simple strategy improves the BFT scheme, leading it to achieve better objective (up to 1 dB) and subjective quality. The above schemes are designed to obtain an adaptive reduction of the domain pool: the reduction depends on the target range under consideration. A nonadaptive reduction of the domain pool is also possible: the domain pool is reduced by discarding useless blocks according to a preselected criterion. For instance, Saupe (1996) proposed an approach based on the experimental evidence that not all possible domains of a given image are important: blocks containing little information, i.e., those that are quite flat or have a low variance, are never used. As a matter of fact, Saupe further develops this observation, previously made by Jacquin (1989, 1992), who encapsulated it in a classification scheme. We will return to this point later. Regardless of the input image used, the situation is the one depicted in Figure 19. It means that
Figure 19. Schematic behavior of all domains (topmost curve) and of those actually used in encoding (bottommost curve). The dotted area represents the inefficiency of encoding schemes without reduction, both in bits spent to encode domains that are never used and in time spent on unsuccessful matches.
a part of the domains is not used, yielding both a waste of bits for encoding the really used domains and a waste of time for searching in a wider domain pool. Hence, Saupe introduces the concept of a lean domain pool. The elimination of useless domains results in a more effective encoding, since time decreases linearly with the percentage of discarded blocks. With regard to PSNR, it slightly increases, since some ranges are subjected to further splits, causing a loss in compression. The same guideline is followed by Kominek (1996) and Signes (1994), where blocks are discarded a priori if they do not fulfill some constraints. In particular, in the latter, domain pool reduction is based on an interesting geometric interpretation of the fractal minimization [Eq. (5)]. Starting from the assumption that the domain pool is composed of blocks that can be seen as vectors in a suitable space, the collection of such vectors must contain elements such that the cosine of the angle between any two of them is under a given threshold. In this way, a near-orthogonality constraint is imposed on the domain blocks to be inserted in the pool, since a precise orthogonality constraint would require a cosine equal to zero. Blocks not satisfying this weaker orthogonality constraint are discarded, as are flat (low-variance) ones. More recently, the elimination of useless blocks has been performed by considering the similar degree between two blocks (both the range R and the domain D being composed of n pixels), defined as follows:

$$S_{RD} = \sum_{i=1}^{n} \min(R(i), D(i)),$$

where R(i) and D(i) are their pixel intensities, normalized and made positive, that is, in the range [0, 2] (Figure 20; Furao and Hasegawa, 2004).
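The similar degree can be sketched as follows (Python; the exact normalization used by Furao and Hasegawa is not detailed here, so the mapping to [0, 2] below is an assumption):

```python
import numpy as np

def normalize(block):
    # Center the block and rescale it into [0, 2] (illustrative choice).
    b = block.astype(float) - block.mean()
    m = np.abs(b).max()
    return b / m + 1.0 if m > 0 else np.ones_like(b)

def similar_degree(R, D):
    # Summed pixel-wise minimum of the two normalized blocks.
    return float(np.sum(np.minimum(normalize(R), normalize(D))))
```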
where R(i) and D(i) are their normalized pixel intensities and made positive, that is, in the range [0, 2] (Figure 20; Furao and Hasegawa, 2004). b. Effectively Organizing the Domain Pool Information. Domain pool information can be better organized for avoiding elimination of some blocks. However, if on one hand this increases the probability of catching a good domain for a range leading to a best quality, on the other additional
Figure 20. The similar degree accounts for the shared luminance of blocks. Left, low similarity (gray area) between the luminance of the first block (black curve) and that of the second block (gray curve). Right, higher similarity.
Figure 21. A block is split into four quadrants to be classified.
bits are required for encoding a higher number of domains. A possible strategy used by such approaches consists of splitting the whole domain pool into a number of classes accounting for some simple features. Hence, the search for the best domain is made in the class whose blocks have features similar to those of the target range. This phase is called classification and is based on the evidence that the best collage matching usually involves a range and a domain similar to each other. So, in a famous quote adopted by Saupe (1995), "birds of a feather flock together." Before further exploring classification, it is interesting to note that a parallel can be drawn between faster search and less search. While less search reduces the number of domains to be examined by exploiting spatial closeness to the target range, faster search is usually based on domain features. In this case, the search is again restricted to the domains close to the target range, but in the feature space. Thus, the trick is the same except for a change of space. However, in the feature space additional work must be done to determine the domains close to the range, whereas in the classical spatial space closeness is immediate. We have seen that the original work of Jacquin exploited a classification based on the gradient (Ramamurthi and Gersho, 1986). It was based on splitting the domain pool into three classes: shade, edge, and midrange blocks.⁸ This idea has been further developed by Fisher et al. (1992). The method consists of splitting a square block as shown in Figure 21. First, only the average intensity of each quadrant is considered: $m_i$, $i = 1, \ldots, 4$. By performing rotations and flips on the block, these can always be brought into one of the following three canonical orderings:

$$m_1 \le m_2 \le m_3 \le m_4, \qquad m_1 \le m_2 \le m_4 \le m_3, \qquad m_1 \le m_4 \le m_2 \le m_3.$$

If the variances of the quadrants are also computed, $\sigma_i^2$, $i = 1, \ldots, 4$, then there are a further 24 orderings for each of the three aforementioned major classes. Summing up, 72 classes are finally considered. The aim is to restrict the search for the best range-domain pair to a single class. In addition, a further speed-up is achieved when performing the isometry mapping the domain onto the range: this can be done starting from the isometries that bring the domain and the range into their major
8 An alternative approach, using a classification based on the Laplacian, can be found in Xuejun et al. (1999).
class. This operation avoids searching the complete isometry set, with a speed-up of 8. On the one hand, the introduction of classification achieves a speed-up in encoding; on the other hand, it lacks a notion of neighborhood between classes. In other words, if the search fails inside a given class, it is difficult to determine which other class may lead to success. In this sense, Caso et al. (1995) proposed vectors of variances instead of variance orderings. Once suitably quantized, these produce a collection of classes in which the concept of neighborhood is gained. A similar idea was developed by Hürtgen and Stiller (1993), who compute the luminance average of the whole block under study and of its four sub-blocks, as in Figure 21. The vector $w = \{w_1, w_2, w_3, w_4\}$ can then be computed as follows:

$$w_i = \begin{cases} 1 & \text{if } m_i > m \\ 0 & \text{if } m_i \le m \end{cases} \qquad 1 \le i \le 4, \qquad (8)$$

where m is the average of the whole block and $m_i$ that of the ith sub-block. This strategy overcomes the drawback that two blocks may appear similar when looking at their global averages despite having differently distributed pixel intensities, i.e., despite being different up to isometries. Two blocks (domain and range) can then be compared using the distance between their feature vectors. Hürtgen and Stiller applied this strategy in cascade with the domain pool reduction discussed in the previous subsection. In other words, after pool reduction, the domains whose feature vector is at zero Euclidean distance from that of the target range are used for the collage distance.⁹,¹⁰ Similar strategies have been used by Novak (1993) and Frigaard et al. (1994). In the first one, Novak uses the feature vector

$$y = \{\log|y_1|, \log|y_2|, \log|y_3|, \log|y_4|\},$$

where the $y_j$, $1 \le j \le 4$, are moment invariants, which allows an invariant representation of the blocks, both range and domain. Frigaard et al. instead propose both the classical standard deviation and the number of dominant gray levels; the latter is the number of pixels whose intensity is greater than a prefixed threshold. This strategy recalls Jacquin's shade/non-shade block classification. A search inside a pool composed of $N_D$ domains requires $O(N_D)$ operations. Classification tries to reduce this effort by splitting the domain pool into a

9 In the work by Hürtgen and Stiller (1993), there is a further refinement of the domain pool that is not discussed here for brevity.
10 A variant of this strategy has been implemented in Polvere and Nappi (2000).
number of classes. Nonetheless, classification alone is still not able to make fractal encoding competitive with other coding transforms. A great step forward was made by Saupe in 1995. He mathematically proved that the collage error can be rewritten as follows:

$$E(R, D) = \langle R, \Phi(R)\rangle\, g\bigl(\Delta(R, D)\bigr), \qquad (9)$$
where $\langle\cdot,\cdot\rangle$ denotes the inner product, $g(\Delta) = \Delta\sqrt{1 - \Delta^2/4}$, $\Delta(R, D) = \min(\|\Phi(R) + \Phi(D)\|, \|\Phi(R) - \Phi(D)\|)$, and $\Phi(\cdot)$ is a normalized projection operator. The underlying idea is then to project blocks into a feature space, exploiting the fact that $0 \le \Delta \le \sqrt{2}$. Since $g(\Delta)$ is monotonically increasing in this range, the minimization of the collage error is equivalent to minimizing Δ. But this amounts to searching for the nearest neighbor of $\Phi(R)$ in the set $\{\Phi(D), -\Phi(D)\}$ with D belonging to the domain pool. Saupe proposed block subsampling to reduce the quantity of information to deal with: for instance, a block of size 8 × 8 can be reduced to 4 × 4, leaving only 16 coefficients to manage. But Saupe's key intuition was to use a tree structure for organizing the great quantity of information represented by the domain pool; in particular, a variant of the k-d-tree was used (Arya et al., 1994; Friedman et al., 1977). The 16 coefficients above are then the keys used in the feature space to reach the approximate nearest neighbor of the range.¹¹ The main point is that, instead of the $O(N_D)$ operations required by classification, only $O(N_D \log(N_D))$ operations for building the tree and $O(\log(N_D))$ per search are now required, drastically reducing the computational effort. It is noteworthy that tree structures are usually used in information retrieval with large databases; in effect, the search for the best domain for a given range is analogous: the range represents the query, and the domain pool works as the database. That is why many researchers working on information retrieval have been attracted by fractal encoding. As one may easily guess, Saupe's approach has influenced many successive fractal encoding schemes, and the k-d-tree structures on which it is based have been generalized. For instance, starting from a geometric interpretation of the k-d-tree, where the feature space is split into two parts, Cardinal (2001) proposed splitting the space with a hyperplane accounting for a preferential direction of the points in the space itself. This strategy is very close to principal component analysis (PCA) (Partridge and Calvo, 1998). The results are better than with the classical k-d-tree, with a speed-up of up to 3. A code sketch of the feature-space search follows.
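A minimal sketch (Python; scipy's cKDTree is used here as a stand-in for the approximate k-d-tree variant of Arya et al., and all names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def feature(block):
    # Subsample by 2x2 averaging, remove the DC component, and normalize:
    # an approximation of the projection Phi (even-size blocks assumed).
    b = block.astype(float)
    b = (b[0::2, 0::2] + b[0::2, 1::2] + b[1::2, 0::2] + b[1::2, 1::2]) / 4.0
    b = b.ravel() - b.mean()
    n = np.linalg.norm(b)
    return b / n if n > 0 else b

def best_candidates(R, domain_pool, k=5):
    # Index both +Phi(D) and -Phi(D), so negative scalings are covered.
    feats = [s * feature(D) for D in domain_pool for s in (1.0, -1.0)]
    tree = cKDTree(np.array(feats))
    _, idx = tree.query(feature(R), k=k)
    return [i // 2 for i in np.atleast_1d(idx)]   # indices into domain_pool
```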
11 Instead of looking for the nearest neighbor of a given query, the approximate nearest neighbor can be found. This is equivalent to finding it in a larger hypersphere of radius $\hat d = (1 + \epsilon)d$, where d is the distance between the query and the nearest neighbor, while ε is a threshold to be tuned (Arya et al., 1994).
Saupe’s approach has also been combined with other features like the center of mass (Polvere and Nappi, 2000). It is defined for a given block Bði; jÞ 1 i; j N in cartesian coordinates as: 1 X N 1 X N y¼ x¼ iBði; jÞ jBði; jÞ M i; j 2 M i; j 2 P where M ¼ i; j Bði; jÞ is the mass of the block—the origin of the reference system is located in the lower left corner of the block.12 Considering image intensity as mass, similar blocks are characterized by very close locations of their centers of mass. Using polar coordinates y and r (i.e., respectively angle and radius), after simple calculations it turns out that, while y is invariant under both scaling and oVset transformation (involved in the Collage error), r is not under oVset. Then, preserving y, block is subtracted by its DC component and the new center of mass is computed: y; r . Thus the vector {y, y} is selected as features characterizing the block. Instead of using a tree-based scheme for performing the search, after a suitable quantization of features space, a classification is used. This allows us to achieve advantages of classification, which avoids time spent for building and searching the tree, while exploiting the concept of closeness among classes. A similar behavior to the center of mass is shown by box dimension (BD) of a block. It is tied to a number of balls of radius r, which are necessary to cover a subset. BD allows us to achieve more and smaller k-dtrees than the original Saupe scheme, outperforming it as well as center of mass based approach (Song, 2002). Nonetheless, even given the great intuition on which it is based, Saupe’s approach has basically two drawbacks. The first one is the assumption that contraction parameters are continuous and unconstrained—without strict contractivity constraint on scale parameters. This may lead to nonoptimal selected domain blocks from searching phase. The second one is the high memory space required by tree storage. As a solution, Tong and Pi (2000, 2002) propose the use of a suitable parametrization (Tong and Pi, 2001), where mappings are as follows13: wi;2 ðDÞ ¼ aD þ g0 I; where D is D without its DC component and a is the scaling factor. In practice, image blocks are considered without their DC component, as already proposed by Øien and Lepsøy (1995). It is then shown that: 12 Gray-level intensity can be seen as mass as well as energy. An interesting approach based on the latter is in Caso et al. (1995). 13 Readers must not confuse I, that usually indicates image, with I, which is identity matrix.
- The scale parameter α is the same as in the classical collage error.
- The new offset parameter γ′ is different from the old γ, but it is also independent of the domain block and of α.
- The reduced admissible range for γ′ leads to better results after its quantization.

A code sketch of this DC-removed parametrization follows.
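A minimal sketch (Python; illustrative, without the quantization step):

```python
import numpy as np

def dc_removed_match(R, D):
    # w(D) = alpha * D~ + gamma' * 1, with D~ the DC-removed domain.
    r, d = R.astype(float).ravel(), D.astype(float).ravel()
    r0, d0 = r - r.mean(), d - d.mean()
    var = np.dot(d0, d0)
    alpha = np.dot(d0, r0) / var if var > 0 else 0.0  # same as classical alpha
    gamma = r.mean()        # new offset: independent of the domain and of alpha
    err = np.sqrt(np.mean((alpha * d0 + gamma - r) ** 2))
    return alpha, gamma, err
```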
When encapsulated in a nearest-neighbor scheme, this result gains better quality and compression ratio, with an average speed-up of two with respect to Saupe's approach. Moreover, a further improvement can be achieved by exploiting the strong correlation between the mean square error (MSE) and the variance of image blocks, as depicted in Figure 22: a clever variational approach ties ε (the tolerance on the approximate nearest-neighbor distance) to the variance. The link between MSE and variance has been rigorously investigated by various other authors. For instance, He et al. (2004) start from the empirical evidence that two blocks are close in the sense of the collage error if their variances are close (Lee and Lee, 1998). They further stress this point by showing that

$$d^2(R, D) = \mathrm{var}(R) - p^2\,\mathrm{var}(D), \qquad p = \frac{\langle \bar R, \bar D\rangle}{n\,\mathrm{var}(D)},$$

where $\bar X$ indicates the block X without its average. It can easily be seen that a small variance of the range block entails a small value of $d^2(R, D)$. Then, after a classification separating very flat blocks from
Figure 22. Schematic plot showing the correlation between the root mean square error and the standard deviation. All points lie inside a cone (where color represents their density) and are mainly concentrated along the diagonal.
high-variance ones, a search is performed among the domains that have a variance close to that of the target range. This strategy strongly improves similar previously proposed approaches, such as those of Lee and Lee (1998) (a speed-up of 28 times without loss of quality), Lee (1999), and Lai et al. (2002). The latter uses a result due to Truong et al. (2000), who rewrite the collage error as follows:

$$E(R, D) = \|\bar R\|^2 - \frac{\langle \bar R, \bar D\rangle^2}{\|\bar D\|^2},$$

where R and D, respectively, are a range and a domain of size k, and $\bar R$ and $\bar D$ are the range and the domain without their means. After further computation (Lai et al., 2003), it can be rewritten as

$$E(R, D) = A - \alpha^2 B,$$

where $A = \|R\|^2 - \frac{1}{k}\langle R, I\rangle^2$ and $B = \|D\|^2 - \frac{1}{k}\langle D, I\rangle^2$. The minimum collage error is achieved when α approaches 1, so that the kick-out condition for rejecting domains is

$$A - B \ge d_{\min},$$

with $d_{\min}$ to be suitably tuned. This strategy, along with a suitable quantization of α and the reduced effort for the isometries proposed by Truong et al. (2000), achieves the same quality as the exhaustive search with a saving of approximately 75% of the computational effort. It improves on the performance of Truong et al. (2000) and Bani-Eqbal (1995). The link between block MSE and variance is also used by Wu et al. (2003) and Ponomarenko et al. (2001). In the first approach, a classification using mean and variance is adopted: as regards the mean, the block is classically split into four sub-blocks, whereas for the variance the constraint $|\mathrm{var}(R) - \mathrm{var}(D)| \le T$ is used, where T is a suitable threshold. The strategy is then very close to the ones proposed by He et al. (2004) and Yisong et al. (2002). In the second approach, the strategy is based on the theoretical result that $E(R, D) \ge N(\mathrm{var}(R) - \mathrm{var}(D))^2$, where R is the target range, D a candidate domain, var(R) and var(D) their variances, and N the number of pixels of a block (Ponomarenko et al., 2001). In other words, the collage error cannot be less than the difference of the variances multiplied by the number of pixels. This allows blocks to be discriminated simply by looking at the candidate block variance. An interesting clustering approach has also been proposed (Davoine et al., 1996). The purpose is always the same: to perform the search inside a subset of the domain pool. Here, the idea is to apply the LBG algorithm (Linde et al., 1980), already used for vector quantization. (For a similar approach, see also Lepsøy and Øien, 1994.) Having chosen the number M of domains populating
An interesting approach has been proposed for clustering (Davoine et al., 1996). The purpose is always the same: to perform a search inside a subset of the domain pool. Here, the idea is to apply the LBG algorithm (Linde et al., 1980), already used for vector quantization. (For a similar approach, see also Lepsøy and Øien, 1994.) Once the number M of domains populating the codebook and the feature ρ by which to compute the distance between blocks are chosen, LBG yields the optimal codebook. Davoine et al. (1996) insert this strategy into a codebook composed of Delaunay triangles. As a feature, they use the histogram of the triangles instead of the direct gray-level values, showing cases where the latter may fail. Then, the distance between domains $D_i$ and $D_j$ is defined in terms of the MSE between their normalized histograms $h_{D_i}$, $h_{D_j}$:

$$\rho(D_i, D_j) = \sum_k \big[ h_{D_i}(k) - h_{D_j}(k) \big]^2. \qquad (10)$$
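Equation (10) is straightforward to compute; a small illustrative sketch (assuming 8-bit gray levels; the function name is hypothetical):

```python
import numpy as np

def histogram_distance(block_i, block_j, bins=256):
    """Eq. (10): squared distance between normalized gray-level histograms."""
    h_i, _ = np.histogram(block_i, bins=bins, range=(0, 255))
    h_j, _ = np.histogram(block_j, bins=bins, range=(0, 255))
    h_i = h_i / h_i.sum()
    h_j = h_j / h_j.sum()
    return float(np.sum((h_i - h_j) ** 2))
```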
Finally, notice that each centroid is substituted by its nearest vector. Wein and Blake (1996) proposed a clustering-based technique using the well-known k-d tree structure (Friedman et al., 1977). In practice, feature vectors have K components (called keys) organized in a tree structure. They use an algorithm as simple as it is clever:

1. Split the domain pool into three classes using quadrant brightness as a feature, as in Jacquin (1992). Each domain has to be made zero-mean.
2. If $N_i$ is the cardinality of the ith class, generate $\sqrt{N_i}$ clusters.
3. In the class where the range block falls, determine the cluster producing the least RMSE and the contractive map coefficients.
4. Inside the above best cluster, look for the best domain; here all domains are non-normalized.

To this aim, they use the pairwise nearest-neighbor (PNN) algorithm of Equitz (1989). The underlying idea is to merge two clusters at a time, starting from a situation where each cluster contains one element, until the desired number is reached. Two clusters are merged if the distance between their centroids is the smallest, that is, if they minimize

$$\frac{n_i n_j}{n_i + n_j}\, |x_i - x_j|^2 \qquad (11)$$

where $n_i$ is the number of domain blocks in the cluster $C_i$ and $x_i$ is its centroid. Then, using this strategy within a k-d tree, clusters are progressively merged until the desired number of clusters is reached.14 Once achieved, the final tree must obviously be balanced.

14 A fast PNN algorithm is used by Wein and Blake (1996): by merging only the best half of the pairs satisfying Eq. (11), the method is faster than the O(N log N) of the original PNN, but achieves a suboptimal solution.
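The merging cost of Eq. (11) can be sketched as follows (illustrative; the function name is hypothetical):

```python
import numpy as np

def pnn_merge_cost(n_i, x_i, n_j, x_j):
    """Eq. (11): cost of merging clusters i and j with sizes n_i, n_j
    and centroids x_i, x_j; PNN fuses the pair with the smallest cost."""
    d = np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float)
    return n_i * n_j / (n_i + n_j) * float(d @ d)
```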
Both SOM (self-organizing maps) (Kohonen, 1995) and ACN (attribute cluster networks) (Cheng, 1996) have been proposed for speeding up the fractal search (Hamzaoui, 1997c; Wang and Cheng, 2000). For instance, Hamzaoui uses the original Fisher classification scheme for building a SOM, starting from the results of Saupe (1995). In practice, the 72 classes of Fisher are replaced by 72 cluster centers, using the same domain pool. By means of a linear initialization and using the set $F = \{f(I_{D_i}(D_i)),\ i = 1, \dots, N_D\}$, the training phase is performed within 10,000 steps. Once done, a vector $f(I_{D_i}(D_i))$ is mapped to the nearest cluster center. Various strategies are investigated for encoding a given range block R. Without giving many details, the search involving the vector $f(I_R(R))$ can be made within one or more clusters, considering only positive or also negative scaling factors. This approach has many positive aspects. First, it achieves very interesting results, proving that using a precomputed set of cluster centers trained on a set of several images does not critically affect the final result; this way it is possible to avoid the preprocessing computation previously suggested by Øien (1993, 1994). Second, this strategy can outperform previous clustering-based approaches (Bogdan and Meadows, 1992; Jacobs et al., 1992; Lepsøy, 1993; Lepsøy and Øien, 1994). The huge number of attempts oriented to reducing the searching phase makes the reader understand how important and delicate this step is in fractal encoding. To avoid confusing the reader, only a few additional approaches are presented. Some authors proposed the product between blocks and vectors; an example is in Bedford et al. (1992), where this product leads to discarding some useless domains. More recently, Riccio and Nappi (2003) proposed to project both domains and ranges on a ''preset block,'' selected as the mean of the range blocks. The error blocks coming from the projection of both ranges and domains are first uniformly quantized and rounded using 5 bits, clipping values falling outside this range; such information is then suitably organized using binary trees (domain tree). The peculiarity of this method is reaching a good quality more quickly than other similar approaches, such as Cardinal (2001). A very close approach (Franco and Malah, 2001) exploits a modified matching pursuit (see Chapter 10 in Mallat, 1998). The collage distance is computed in two steps. The first step is the classical collage error $E_1$, leading to the best domain $D_1$ and the relative scale parameter $a_1$ (b is not considered, since blocks are taken without their DC component). The second step is then

$$\min_{a_2, b_2 \,|\, a_1, b_1} E_2 = \| R - a_1 D_1 - a_2 (D_2 - a_1 D_1) \|_2^2 \qquad (12)$$

introducing another domain $D_2$ with the relative scale parameter $a_2$. It is important to note that the difference with classical matching pursuit is justified by the fact that the collage error (in the first step) is basically composed
of high frequencies. This makes the above strategy more effective than a direct approximation of $E_1$ by a second domain. At this point, it is interesting to notice that, despite their structural differences, the weakest point of fractal coding is the same as that of transform-based coding. The latter, in fact, attempts to trap image information in a few coefficients (Mallat, 1998)15 to improve the rate; a fractal encoder achieving the same goal would solve the problems tied to encoding time. Hybrid approaches with other coding transforms will be considered later. Another way to reduce complexity in encoding is to exploit the fractal dimension of the input image. Conci and Aquino (1999) use the local fractal dimension (LFD) for determining the complexity (in terms of information) of a range block: the higher the LFD, the higher the complexity. This method, combined with a variable contraction factor (considering that smaller contraction factors reduce the distance between image and attractor in the collage theorem), allows an effective reduction of the domain pool in which to perform the search. Given such a quantity of proposed approaches, it is difficult to draw absolute comparisons. Some papers attempt to do so (such as Loganathan et al., 2003; Polvere and Nappi, 2000), but they only consider subsets of them. The trend in the literature seems to follow Saupe's guideline: the use of a tree structure based on a suitable set of features for describing blocks.

B. How to ''Fractally'' Encode

After choices are made concerning how to split the image for generating ranges and how to build and organize a domain pool for a fast and effective search, it is now possible to investigate the core of fractal encoding: the minimization of the collage error (Bani-Eqbal, 1995). This latter involves further subtle mathematical aspects that are the topic of this section.16

1. Subsampling Domain Blocks

As seen in the previous section, domains must be subsampled to have the same size as ranges. However, this phase is only practical, since it does not influence the contractivity of the final mapping (Fisher et al., 1992). Subsampling basically involves two main questions: how much and, obviously, how to subsample.
15 This problem is shared with several other image processing problems, such as denoising, retrieval, and so on.
16 In the following, probabilistic fractal approaches using contractive maps with probabilities, as in Mitra et al. (2001), will not be considered.
With regard to how much, Jacquin proposed a spatial contraction ratio equal to 2 in his original scheme: from an 8 × 8 domain, he obtained the equivalent 4 × 4 one. Again, some attempts have been made to change this parameter, taking smaller or larger values of spatial contraction, as in Beaumont (1990) and Kuroda et al. (1995). In the former, Beaumont used a factor equal to 3 (i.e., achieving a contracted domain of 4 × 4 from a 12 × 12 one). This choice, along with the selection of a values in the interval [0, 0.75], yields a faster convergence in decoding; in fact, it can be trivially proved that the distance reduction for a linear affine function is regulated by the contraction factor a. More recently, Selvi and Makur (2003) have investigated the possibility of using a variable size for ranges as well as for domains. In particular, they have observed that a fractional ratio between domain and range sizes can yield better results; this obviously depends on the adopted partitioning scheme. Using a variational rate-distortion cost, it turns out that a variable domain-variable range scheme achieves better results (up to 1.76 dB on the Barbara image) than a fixed one. However, as expected, the computational cost quickly increases, depending on how many sizes are available. Once the spatial contraction ratio is determined, the best strategy for eliminating information (i.e., how to subsample) has to be selected. Unlike Jacquin's original scheme, subsampling based on the brute elimination of pixels has been proposed in various papers (Lu, 1997), but it seems to yield worse results than that achieved by averaging (Fisher et al., 1992). A more sophisticated solution has been proposed by Barthel and Voyé (1994), based on a combination of averaging adjacent pixels followed by an anti-aliasing filter whose cutoff frequency is below π/2. A few comments are necessary about how to subsample irregular blocks. In fact, we have seen that partitioning schemes can also produce ranges with complicated shapes. Without providing burdensome details, classical subsampling is effective for square and rectangular blocks. For triangles, matching vertices offers a way to avoid more complicated, ad hoc range-domain projections. On the contrary, these latter are required for quadrilaterals and more general shapes (Wohlberg and de Jager, 1999).
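As an illustration of the standard choice, a contraction ratio of 2 obtained by averaging can be written as follows (a sketch; the function name is hypothetical):

```python
import numpy as np

def contract_by_averaging(domain):
    """Spatially contract a 2N x 2N domain to N x N by averaging each
    2 x 2 neighborhood (keeping one pixel out of four instead tends to
    give worse results)."""
    d = domain.astype(float)
    return 0.25 * (d[0::2, 0::2] + d[0::2, 1::2] + d[1::2, 0::2] + d[1::2, 1::2])
```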
2. Are Isometries Really Useful?

Jacquin's original approach used isometries for enlarging the domain pool. Nonetheless, it is worth wondering whether they really contribute to improving performances in terms of rate-distortion results. Empirical studies have shown that a slight improvement in quality is undone by the additional bits required to code the isometries. This seems true both for the classical fractal transform (see Bedford et al., 1994; Kaouri, 1991; Saupe, 1996) and for BFT (see Monro and Woolley, 1994b; Woolley and Monro, 1994). Nonetheless, almost all authors include this variant.17 With regard to their usage, there is no agreement in the literature on whether all isometries are used with the same frequency during encoding (for a more in-depth discussion on this topic, see Frigaard et al., 1994; Jacquin, 1993; Lu, 1997; Monro and Woolley, 1994a). This fact is quite interesting, since all available fractal encoders support the possibility (the user is often obliged to employ them) of managing isometries, following the traditional scheme. Apart from the above disagreement, some researchers investigate this part of fractal encoding to improve performances as well as to reduce the computational effort required to use them. The first thoughtful generalization concerning isometries consists of enlarging the set of allowable angles beyond the multiples of π/2 proposed by Jacquin. This goal was reached by Popescu et al. in 1997, exploiting the Riemann mapping theorem. The strategy consists of mapping a square block onto a disk, performing the required rotation, and applying the inverse mapping to come back to the square again. Rotations can assume arbitrary angles, so that the eight classical isometries are particular cases of this new formulation. At the same quality, the compression ratio is increased by slightly less than 10% over the classical scheme. Many approaches have been proposed to reduce the computational time required by isometries. In Kapoor et al. (2003), the underlying idea is to find the most likely isometry within the first, say, 100 managed blocks; it is then applied to the remaining ones. Results achieved with Fisher's scheme show that a lot of computation can be avoided with almost the same quality. The effort introduced by isometries can also be avoided by preclassifying blocks (ranges and domains) (Pfefferman et al., 1999), splitting them into four sub-blocks and considering their averages, as in Fisher et al. (1992) but without the variance. Substantial computational effort can be avoided, with an acceptable loss of quality (1 dB), by means of a function that transforms two blocks so that they belong to the same class.
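For reference, the eight classical isometries (the four rotations by multiples of π/2 and their mirrored versions) can be enumerated as follows (a sketch; the function name is hypothetical):

```python
import numpy as np

def eight_isometries(block):
    """The eight classical isometries of a square block: identity,
    three rotations, and the four mirrored counterparts."""
    variants = []
    for flipped in (block, np.fliplr(block)):
        for k in range(4):
            variants.append(np.rot90(flipped, k))
    return variants
```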
17 The term variant is used since isometries are not intrinsic to fractal image compression, as outlined by Saupe (1996).
18 PIFS are a special case of RIFS (recurrent IFS); see Barnsley and Jacquin (1988) for details.
3. About Contractivity

We have seen that the fractal transform is based on both the Banach and the collage theorems. In particular, the latter guarantees and, above all, provides a deterministic way of estimating a PIFS for a given image to encode.18 PIFS are strongly based on the notion of contractivity. But what happens if the contractivity constraint is weakened? To answer this question, the notion of eventual contractivity is introduced. This concept developed from some empirical studies performed by Fisher, Jacobs, and Boss (see, for instance, Fisher et al., 1992; Jacobs et al., 1992, for an insightful lecture), where better results were achieved by allowing the transformation to be not strictly contractive. Eventual contractivity exploits the fact that not all maps belonging to a composition W of them need be strictly contractive; it is just required that the contractive ones have a good influence on the whole W, leading its behavior to be contractive. More precisely, without strong formalism, we may say that W is eventually contractive if there exists an integer m such that its mth iterate $W^m$ is contractive; m is called the exponent of eventual contractivity. Then, $W^m$ being the union of compositions $w_{i_1} \circ w_{i_2} \circ \cdots \circ w_{i_m}$, it is required that this mix has a contractive behavior (Fisher et al., 1992; Jacobs et al., 1992). This stems from the fact that the product of contractivities bounds the contractivity of a composition of mappings. As noted above, even though a generalized collage theorem can be stated for eventually contractive (affine) mappings (Fisher et al., 1992; Jacobs et al., 1992), Fisher et al. achieved better results by letting the scaling parameter a assume values greater than 1 (usually 0 ≤ a ≤ 1.5). The empirically guaranteed convergence involves, for instance, 10 iterations with a = 1.2. This stems from the fact that the contractivity constraint is only sufficient in the collage theorem. However, Øien et al. (1994) noticed that it may be a bit dangerous to relax the contractivity constraint: even though experience shows the contrary, there is always a possibility that the decoding process does not converge. That is why many authors prefer to achieve worse results using strictly contractive mappings. We have seen that there is no direct procedure for estimating a contractive mapping converging to a given attractor. In this sense, the collage theorem is beneficial: it provides an upper bound to the collage error, simply checking the distance between a generic starting point and the first iterate of the contractive mapping on it. Reducing this bound would not solve the Banach inverse problem but would lead to better results. Can this bound be reduced? From this point of view, an interesting contribution has been given by Øien et al. (1994). It holds for affine blockwise averaging (ABA) maps, originally introduced by Jacquin (1992). (See also Fisher, 1995; Lundheim, 1992; Øien, 1993.) ABA maps are linear, affine, and act on blocks where the DC component has been removed; domain size reduction is achieved by averaging. The following theorem then holds:
Theorem 5 (ABA mappings collage). If $w : \mathbb{R}^N \to \mathbb{R}^N$ is an ABA map from domains of size 2D to ranges of size 2R, then

$$d(x, x^*) \leq \sum_{k=0}^{K-2} a_k \, d\big(x^{(k)}, w(x^{(k)})\big) \qquad \forall x \in \mathbb{R}^N,$$

where $K = \lceil D/R \rceil$, $x^*$ is the attractor of w, $a_k$ is the w Lipschitz factor with $k = 0, 1, \dots$, while $x^{(k)}$ denotes x after decimation (via averaging) and sample duplication.
This theorem exploits the fact that the collage error signal contains many high frequencies that are smoothed away by decimation and sample duplication. Such an energy reduction improves the collage bound. On simulated signals, the improvement is impressive. On real images, experiments show that a slight improvement in terms of PSNR is achieved, but with increased quality in difficult parts such as Lena's eyes, hat, and so on. Another means of weakening contractive constraints has been proposed by Hürtgen and Hain (1994), starting from Lundheim's Ph.D. work (1992). Lundheim was a pioneer in studying the problem of fractal encoding in a finite space using functional analysis on discrete signals. In practice, any domain and range can always be considered as N-dimensional vectors (stretching square blocks into 1D signals in some way). Then, if the functions used in fractal encoding are affine, they can be written $W = Ax + b$, where A is the linear part containing possible isometries, while b is the additive part. Starting from the definition of contractivity, it is easy to show that it is equivalent to the condition

$$\|A\| = \sup_{\lambda \in \sigma(A^T A)} \sqrt{|\lambda|} \leq a < 1,$$

where a is the contraction factor of W and σ(.) indicates the spectrum. The above relation holds for any norm and is just sufficient. It is also possible to derive a necessary condition on the convergence of W in terms of the spectral radius $\rho_s$. In fact, keeping in mind that for a given k

$$x_k = W^{\circ k}(x_0) = A^k x_0 + \sum_{i=0}^{k-1} A^i b$$

for any arbitrary $x_0 \in \mathbb{R}^N$, we have that $\rho_s(A) < 1$ if and only if

$$x^* = \lim_{k \to \infty} x_k = \lim_{k \to \infty} \left( A^k x_0 + \sum_{i=0}^{k-1} A^i b \right) = (I - A)^{-1} b.$$
Hence, the eventual contractivity constraint becomes

$$\rho_s(A) < 1 < \|A\|.$$

It is worth highlighting that Lundheim showed that eventual contractivity is independent of the norm used: if a transformation is eventually contractive under a given norm, it will be under any other norm. Two interesting examples are given in Hürtgen and Hain (1994), where the computation of the spectral radius (Hürtgen, 1993) is performed on the scheme proposed by Monro and Dudbridge (1992a) and on a special case of Jacquin's scheme. In the second case, which is more general, the sufficient and necessary constraint for contractivity is equivalent to saying that all eigenvalues of the linear part A must lie in the unit circle (Hürtgen and Simon, 1994).19 In this sense, an effective tool for checking convergence for both contractive and especially eventually contractive maps has been detailed by Tan and Yan (2000), who prove that it is possible to monitor convergence at each decoding step via a simple formulation (see their original article for details).
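These conditions suggest a simple numerical check, comparing the operator norm (sufficient for strict contractivity) with the spectral radius (necessary and sufficient for convergence of the affine iteration). The sketch below assumes a small, explicitly stored linear part A; for real images A is huge and sparse, so this is purely illustrative, and the function name is hypothetical:

```python
import numpy as np

def contractivity_report(A):
    """Return (||A||_2, spectral radius).

    ||A||_2 < 1 means strictly contractive; rho(A) < 1 <= ||A||_2 means
    eventually contractive: the iteration x <- A x + b still converges
    to (I - A)^{-1} b.
    """
    op_norm = np.linalg.norm(A, 2)                  # largest singular value
    spec_radius = np.max(np.abs(np.linalg.eigvals(A)))
    return op_norm, spec_radius
```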
19 Readers interested in a more in-depth study of this topic, and in particular in how to determine convergence starting from either the statistical distribution of eigenvalues or mapping cycles, can also see Gharavi-Alkhansari and Huang (1997), Hürtgen (1995), and Mukherjee et al. (2000) for its application to a more effective decoding.
20 Even though BFT allows us to use polynomial functions of degree greater than 1, here Jacquin's original scheme is considered.
Readers interested in more in-depth study of this topic, and, in particular, in how to determine convergence starting from either the statistical distribution of eigenvalues or mapping cycles can also see Gharavi-Alkhansari and Huang (1997), Hu¨ rtgen (1995), and Mukherjee et al. (2000) for its application to a more effective decoding. 20 Even though BFT allows us to use polynomial functions of degree greater than 1, here Jacquin’s original scheme is considered.
In terms of constraints on the g(z) parameters, the two aforementioned requests correspond to various mathematical conditions. The corresponding heavy computational cost can be reduced by considering CVPs (centered vertex parabolas), where the considered functions are $g(z) = az^2 + c$. In this case, there are only two constraints:

$$0 \leq c \leq L, \qquad 0 < aL^2 + c \leq L, \qquad (13)$$
with just two parameters to store, as in the linear case. Once the best scaling and offset parameters are found, they must be stored. While the information concerning range partitioning, domain location, and isometries is intrinsically discrete, the information relative to contractive mappings is not. Hence, the aforementioned parameters must be quantized with a suitable number of bits. This is the topic of the next section.

4. Quantization of Scaling and Offset Parameters

Quantization of the scaling and offset parameters naturally leads to the problem of looking for the best bit allocation. In other words, how many bits have to be devoted to them? In addition, which quantizer has to be adopted? Before reviewing the proposals in the literature, it is worth stressing that quantization of a and b may have dangerous side effects on the whole coding scheme. In fact, as outlined by Øien (1994), the distance between a range R and a domain D becomes

$$d^2(R, Q(D)) = d^2(R, D) + d^2(D, Q(D)),$$

where d(.) is the distance in $l^2$ and Q(D) is the domain after quantization of the parameters. It is clear that there is an additional distance due to quantization. It is also evident that quantization may influence the following:
• The best range-domain pair search. In fact, it is possible that if a domain D is the best for a given range R, Q(D) no longer is.
• Convergence in decoding. In fact, both the speed of convergence and the final quality may deviate from the expected ones. As a matter of fact, this is very unlikely, as empirically shown.
Moreover, some authors wondered whether it is really useful to allow the scaling parameter to assume negative values. In fact, allowing only positive values would save the bits encoding the sign, and such bits may be used to enlarge the domain pool (Saupe, 1996). The question is then similar to that of the real usefulness of isometries. Coming back to quantization and bit allocation for parameters, apart from some attempts to use genetic algorithms (Takezawa et al., 1999) or fuzzy logic (Berthe et al., 2002), uniform quantization has been almost
Figure 23. Approximated shape of the histogram of the scaling parameter a for 8 × 8 blocks (Øien, 1994).
universally adopted (Fisher, 1995), using various bit allocations for the parameters. Some examples are reported by Wohlberg and de Jager (1999), even though a comparison has shown that better performances can be achieved with 5 bits for the scaling and 7 for the offset (Fisher, 1995). Some authors have also proposed VQ quantization to exploit the correlation between a and b, as in Barthel et al. (1993) and Hartenstein et al. (1997). On the contrary, Øien showed (first in his Ph.D. thesis (1993) and then in his 1994 paper) that the parameters' correlation is lost if the DC component of the decimated domains is removed. In particular, he proved it whenever the signals to code are stationary stochastic processes; nonetheless, uncorrelatedness can be a good approximation for real signals, too. Hence, an independent quantization can be performed for a and b in fractal schemes with orthogonalization. While the pdf (probability density function) of b has no peaks, so that uniform quantization seems the most suitable,21 this is not the case for a. Figure 23 shows the approximated shape of the histogram over four test images: Lena, Kiel Harbour, F16, and Boat. It shows two peaks but, above all, a small variance. Supposing the pdf symmetric, Øien proposed a Lloyd-Max quantizer, which minimizes the error variance. As a result, 5 bits for a and 8 bits for b achieve both a good PSNR and a discrete visual quality. Considering the parameters' distribution, quantization is still uniform in the unconventional parameterization used in the approach of Tong et al. (2000, 2001, 2002); readers should note that here the luminance offset in the affine transformation is replaced by the range mean.
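A uniform quantizer with the commonly cited allocation (5 bits for the scaling, 7 bits for the offset) can be sketched as follows (an illustration; the parameter ranges and names are assumptions, not a prescribed standard):

```python
def quantize_uniform(x, lo, hi, bits):
    """Uniformly quantize x in [lo, hi] onto 2**bits levels;
    return the index to store and the reconstructed value."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    idx = int(round((min(max(x, lo), hi) - lo) / step))
    return idx, lo + idx * step

# e.g., scaling a in [-1, 1] with 5 bits, offset b in [0, 255] with 7 bits
a_idx, a_hat = quantize_uniform(0.62, -1.0, 1.0, 5)
b_idx, b_hat = quantize_uniform(113.4, 0.0, 255.0, 7)
```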
21 Øien (1994) noticed a dependence between the DC components of close blocks, without giving an explicit solution for an effective DPCM on a typical fractal partition.
Things change slightly when using Jacquin's scheme with nonlinear contractive functions (see Vitulano (2001) and Zhao and Yuan (1998) for details) or BFT (Monro and Woolley, 1994a,b). In both cases, more complicated quantization functions accounting for the parameters' distributions have been adopted.

5. More Blocks Improve Encoding

We have seen that the fractal transform is based on the search for the best domain-range pair. But what happens if more blocks are used for approximating a given range? Franco and Malah (2001) have already proposed an attempt in this sense, where two domains were used for collage minimization. However, this is a special case of a more general procedure, matching pursuit (Mallat, 1998), where the approximation is achieved via a dictionary of selected signals; the best approximation with the least number of blocks has to be achieved. This strategy has been investigated by Fisher (1995) using more domains as well as more fixed blocks. An interesting variant has been proposed by Gharavi-Alkhansari and Huang (1994, 1996). In their approach, the pool consists of two kinds of blocks. The first one is composed of adaptive-basis blocks (ABB), which can be split into two subsets: higher-scale basis blocks (HSBB) and same-scale basis blocks (SSBB). HSBBs have to be spatially contracted and are constrained to lie in regions of the image already coded and close to the range to be coded. SSBBs have the same properties as HSBBs except for the spatial contraction; that is, their size coincides with that of the range. Both subsets admit rotated and reflected versions of the original blocks. In addition to ABBs, another set of blocks is considered: fixed-basis blocks (FBB). The latter are the same for each range, so they can be selected at encoder-decoder design time. At this point, the aim is to use the minimum number of blocks while trying to achieve the best RMSE. Obviously, only suboptimal solutions can be achieved. Without providing many details, the search may use either an orthogonal or a nonorthogonal matching pursuit (see Gharavi-Alkhansari and Huang, 1996; Mallat, 1998, for a deeper discussion). It is interesting to notice that this approach reduces to vector quantization, block coding, or the classical fractal transform simply by making three different, suitable choices of the blocks to be used.
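The spirit of these multi-block schemes can be conveyed by a small greedy sketch in the style of matching pursuit. It is illustrative only: it ignores spatial contraction, isometries, and quantization, and the names are hypothetical.

```python
import numpy as np

def greedy_block_approximation(range_block, pool, n_terms=2):
    """Approximate a range by a few pool blocks: at each step pick the
    block best correlated with the residual, subtract its projection."""
    residual = range_block.astype(float).ravel()
    terms = []
    for _ in range(n_terms):
        scores = [abs(np.dot(residual, p.ravel()))
                  / (np.linalg.norm(p.ravel()) + 1e-12) for p in pool]
        best = int(np.argmax(scores))
        p = pool[best].ravel()
        a = np.dot(residual, p) / np.dot(p, p)
        terms.append((best, a))
        residual = residual - a * p
    return terms, residual.reshape(range_block.shape)
```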
6. Rate-Distortion Upper Bound

Some authors have investigated strategies aimed at understanding and approaching the rate-distortion upper bound, or, equivalently, the lower bound of the collage error. Such interest started in 1993, when Domaszewicz, Vaishampayan, and Hürtgen tried to understand the potential of fractal encoding, even though via computationally expensive schemes (see Domaszewicz and Vaishampayan, 1994; Hürtgen, 1994; and related papers). Briefly, their schemes consist of finding a solution via the classical scheme and then iteratively refining the coding parameters. This way, an attractor closer to the original image is found, minimizing further distortions coming from, for instance, parameter quantization. Looking at the minimization in Bani-Eqbal (1995) as a search for the best set of parameters characterizing the fractal code of a given image, fractal encoding can also be seen as a combinatorial optimization problem. Intuitively, the number of required tuples for fractal coding is proportional to the number of ranges partitioning the image. Ruhl and Hartenstein (1997) proved that finding the optimal solution is NP-hard. In other words, all the schemes we have seen yield suboptimal solutions, which obviously causes worse performances. To address this problem, Hamzaoui et al. (2000a,b, 2001) propose a local search algorithm. This strategy consists of iteratively improving the fractal encoding solution achieved by classical collage coding. In practice, once this preliminary solution is found for the image, the algorithm starts: for each range, the fixed point of the corresponding contraction is computed; then another contraction improving the collage error is sought. When the number of domains is infinite, the optimal solution corresponding to the absolute minimum is found. This technique is associated with the merging partitioning proposed by Ruhl et al. (1997). PSNR improves by a bit less than 1 dB.

7. Decoding

As already outlined, fractal coding is unbalanced, or nonsymmetric: a great computational effort in encoding and a small one in decoding. Similarly, despite the extensive literature concerning encoding problems, only a few contributions have been devoted to fractal decoding by the scientific community. They mainly deal with three problems:
• blocking artifacts reduction,
• convergence improvement, and
• zooming.
With regard to the first problem, classical smoothing techniques applied at block frontiers are used. They usually provide a more pleasant visual quality (Chang et al., 2000; Fisher, 1995; Lu, 1997). More recently, Giang and Saupe (2000) have proposed the application of a smoothing filter before each decoding iteration. It smoothes artifacts caused by domain boundaries inside range blocks. This strategy attenuates spurious edges in
agreement with the action of the scaling parameter of the contractive mapping, and thus preserves image discontinuities. Finally, spurious edges at range boundaries are eliminated by another adaptive filter after the whole decoding process. This provides an improvement of about 0.5 dB, against the 0.3-0.4 dB of classical techniques. With regard to convergence improvement, efforts have been oriented to optimizing the convergence of the PIFS to its attractor: the recovered image. As a matter of fact, convergence usually takes a few seconds, since fewer than 10 iterations are enough to reach a solution close (in the mean square error sense) to the attractor. Each iteration produces a temporary image that is the starting point for the next one. The possibility of speeding up convergence was argued as early as 1991 by Kaouri (1991) and then further developed by Hamzaoui (1996). The problem can be stated as follows. If the contractive transformation used is affine, it can be rewritten as $W(I) = AI + b$, where $I \in \mathbb{R}^M$ is our image (here represented as a vector of $M = N \times N$ components), A is an $M \times M$ matrix, and b is an M-dimensional vector. Then, standard decoding corresponds to

$$I_i^{(k+1)} = \sum_{j=1}^{M} A_{i,j} I_j^{(k)} + b_i, \qquad i = 1, \dots, M.$$
Such a process can be improved if the already-computed pixels of $I^{(k+1)}$ are exploited for generating the remaining ones, instead of using only the pixels of $I^{(k)}$; that is,

$$I_i^{(k+1)} = \sum_{j=1}^{i-1} A_{i,j} I_j^{(k+1)} + \sum_{j=i}^{M} A_{i,j} I_j^{(k)} + b_i, \qquad i = 1, \dots, M. \qquad (14)$$
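In matrix form, the two iterations can be compared directly. The sketch below is purely illustrative (for real images A is never stored densely, and the names are hypothetical); it implements the standard update and the in-place variant of Eq. (14):

```python
import numpy as np

def fractal_decode(A, b, n_iter=10, in_place=True):
    """Iterate I <- A I + b from an arbitrary starting image.

    in_place=False: standard decoding, each sweep uses only I^(k).
    in_place=True:  Eq. (14), already-updated components I^(k+1) are
                    reused within the sweep, which typically converges
                    in fewer iterations.
    """
    m = len(b)
    image = np.zeros(m)
    for _ in range(n_iter):
        if in_place:
            for i in range(m):
                image[i] = A[i] @ image + b[i]   # row i sees fresh j < i
        else:
            image = A @ image + b
    return image
```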
For implementation details and a convergence discussion, see Hamzaoui (1996, 1997a). A further gain in terms of convergence speed can be achieved by introducing the frequency of usage of image parts (blocks or, as a variant, pixels) (Hamzaoui, 1997b). For instance, the frequency of a given range accounts for how many domains intersect it, as well as how many pixels belong to such intersections. Thus, the order in which ranges (or pixels) are decoded is not fixed a priori, as in the classical scheme, but follows this frequency: parts with the highest frequency are decoded first, and so on, down to parts with frequency zero, which can be decoded only at the last iteration.22 The additional cost of computing the frequencies is low. Such a strategy outperforms the two previous ones, showing a slight improvement in terms of MSE over Eq. (14), with a significant speed-up. It is worth mentioning the important result on decoding achieved first by Baharav et al. (1995) and then generalized to QD partition-based schemes by
22 For a simpler approach, see also Chu and Chen (2000).
Sutskover and Malah (1999). It is based on a theorem that links versions of the attractor at different scales. In practice, if I is the attractor of a composition of contractive mappings, its subsampled copies will be the attractors of suitable contractive mappings at the corresponding resolutions. Subsampling can be performed via either neighbor averaging or alternative operators. This strategy is important since it allows a fast decoding algorithm (Sutskover and Malah, 1999). In addition, unlike the iterative decoding procedure, it guarantees that the fixed point is reached in a finite number of operations. Fractal encoding also allows us to achieve zooming. The underlying idea is that the similarities of a given image are scale independent: if a domain is similar to a range, they will be similar at any resolution. Hence, decoding may recover the image at any degree of resolution. Unfortunately, the desire of capturing natural information is, again, not satisfied. In fact, fractal encoding usually produces blocking artifacts, and they are zoomed, too. Solutions such as range lapping partially solve this problem but, at the same time, reduce coding performances (Polidori and Dugelay, 1997). In practice, apart from a better visual quality tied to a slightly better recovery of edges, fractal zoom generally works like other existing techniques (Gharavi-Alkhansari et al., 1997; Polidori and Dugelay, 1997). Before concluding this part, it is worth mentioning an effort to make the fractal code progressive, accomplished by Kopilović et al. (2001). This is usually an important feature for a coding scheme, since it allows the decoder to truncate the code according to the required quality. The proposal is based on a Lagrange optimization giving an optimal ordering of the fractal code bits.

IV. Fractals Meet Other Coding Transforms

Many attempts have been oriented to combining the fractal coder (in the following, FC) with other coding approaches. They usually follow two main strategies:
• To combine two coding techniques into an improved hybrid one; in this case, the complementarity between FC features, such as long-range correlation (Hürtgen and Simon, 1994), and other coding techniques is exploited.
• To improve some specific parts, by either inserting an alternative coding technique into the fractal scheme or using a different representation such as Fourier, wavelets, and so on.
With regard to the first strategy, a step forward in mathematically justifying hybrid schemes has been achieved by Wang et al. (2000). Their
elegant formulation allows us to design a fractal scheme hybridized with any other coding scheme. Regardless, many ad hoc and effective approaches have been and currently are under study for both of the aforementioned strategies. This makes it very difficult to follow all the guidelines in this jungle of papers. Thus, only the main trends followed in the literature are presented, with hints and links provided for more in-depth study. This section is more technical and mainly oriented to readers already familiar with other coding techniques; hence, many details are omitted, since they are considered known.

A. Vector Quantization

Fractal encoding has many similarities with vector quantization. Jacquin highlighted this point in his original paper (1992) as well as in the subsequent review (1993). The main differences between these two coding approaches are the following:
• The fractal codebook is generated from the image to be encoded instead of being universal.23 Hence, the fractal decoder needs no information about it.
• The fractal parameters rely on contractivity: the PIFS converges to the attractor through iteration, whereas vector quantization requires no iteration.
All approaches combining vector quantization and fractal encoding are then oriented to drastically reducing the redundancies in the pure fractal codebook (Chang and Kuo, 2000; Kim, 1996; Kim and Park, 1994; Kim et al., 1998). The strategy consists of building a low-resolution image, known to both encoder and decoder, so that they share the same codebook. Obviously, the decoded image is achieved without iterating to convergence, since no restrictions are required on the scaling parameter. Hamzaoui and Saupe (2000) and Hamzaoui et al. (1996) have also investigated how to exploit vector quantization features for building a hybrid codebook. Starting from the clustering approach used in Hamzaoui (1997), a set of VQ blocks is used as an alternative to classical domains. VQ blocks are used if they fulfill a given tolerance; otherwise, the search is performed in the classical fractal codebook. This way, ranges coded by VQ blocks require no iteration in decoding, unlike the others.
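A minimal sketch of this hybrid search follows (illustrative; `mse`, the tolerance handling, and the fallback interface are assumptions, not the cited authors' code):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def hybrid_encode_range(range_block, vq_blocks, fractal_search, tol):
    """Try the VQ part of the codebook first; ranges coded this way need
    no iteration at decoding time. Otherwise fall back to fractal search."""
    best = min(vq_blocks, key=lambda c: mse(range_block, c))
    if mse(range_block, best) <= tol:
        return ('vq', best)
    return ('fractal', fractal_search(range_block))
```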
23 The domain pool was also called a virtual codebook by Jacquin (1993), to outline the differences between fractal coding and vector quantization.
B. Fourier Domain

After vector quantization, the most natural way to improve fractal transform performances is to explore the Fourier domain. It is well known that the Fourier representation is able to compact information into a few coefficients; hence, all approaches try to exploit this aspect for combining these different philosophies. Some authors propose applying the fractal transform to a DCT image representation (Barthel, 1997; Barthel and Voyé, 1994; Barthel et al., 1994; Farhadi, 2003). In this case, the massic component of the affine mappings is changed so that the scaling parameter is replaced by a diagonal matrix, whose elements weight the spectral content of the block, and the offset parameter is replaced by a column vector. Contractivity of this new massic component is guaranteed, via the Parseval theorem, if all diagonal elements are less than unity. This representation corresponds to classical fractal coding (FC) if the diagonal elements are equal, and to the discrete cosine transform (DCT) if they are zero. In this latter case, some authors stressed the fact that the expansion achieved via fixed blocks takes advantage of orthogonalization (Melnikov and Katsaggelos, 1998, 2002; Wakefield et al., 1997). A different approach consists of fractal coding of the image followed by DCT of the error between the original image and the achieved fractal attractor (Thao, 1996; Zhao and Yuan, 1994). This combination attempts to overcome the high computational effort of the fractal transform in encoding some image details, notwithstanding FC's ability in encoding edges as well as smooth image areas. Such details can then be quickly and effectively encoded by DCT, which acts as a complementary encoder. Such a role for DCT has also been exploited in other proposals (Curtis et al., 2002), where an image block is encoded via pruned DCT if this is within a given tolerance; otherwise, it is left to FC. If FC also gives no satisfactory result, DCT is definitively adopted. An alternative strategy is proposed by Yuxuan and Nge (1999), where the block size starts from 32 × 32, with blocks split to fulfill quality requirements; FC is used down to 8 × 8, and DCT is adopted for smaller blocks. The compaction property has also been adopted for solving specific problems within the fractal encoder. For instance, Saupe (1995) noticed that 10 DCT coefficients performed as well as 16 spatial coefficients in his nearest-neighbor search. This approach was further developed by Hartenstein and Saupe (2000), where a lossless speed-up is gained by exploiting cross-correlation in the Fourier transform, which simplifies some operations. On the other hand, DCT encoding followed by DC subtraction and normalization has been adopted by a number of authors for a faster search in agreement with human visual criteria (Barthel et al., 1994; Beaumont, 1990; Lu, 1997; Saupe, 1995; Wohlberg and de Jager, 1995, 1999). A speed-up can also
be achieved (Valantinas et al., 2002) by exploiting the fact that the spectrum of a block of a given image can always be approximated by a 2D hyperbolic surface:

$$z(x_1, x_2) = \frac{C}{(x_1 x_2)^{\gamma_x}}, \qquad C,\ \gamma_x \geq 0.$$
Invariance to isometries, along with the direct control of $\gamma_x$ on the surface smoothness, allows us to estimate the closeness between the $\gamma_x$ of a domain and that of a range. This leads to considerable effort saving. Finally, the isometries phase also takes advantage of the Fourier domain: a reduction of computational effort can be achieved by exploiting the invariance of some operations in an alternative domain like the Fourier one. For instance, the speed-up using DCT achieved by Jeng and Shyu (2000) and Truong et al. (2000) is up to about 3.5 times that of the classical scheme.

C. Wavelets

The simplest way to combine wavelet bases and fractal encoding is to independently encode each sub-band. This way, ranges and domains belong to the same sub-band, exploiting its redundancy. Such an approach has been explored (e.g., see Belloulata et al., 1998). To further stress the intrinsic correlation of wavelet coefficients, the block shape is made adaptive by allowing more or less elongation along the band direction, according to the content of the band itself. However, some authors have also investigated the intrinsic link between sub-bands at different scale levels. Rinaldo and Calvagno (1995) proposed fractal encoding of a given sub-band taking domains from the next lower-resolution sub-band. This strategy is usually known as sub-band prediction. It exploits the fact that domain and range already have the same size in a multiresolution analysis. After encoding, both the lower-level details and the difference between the original image and the decoded one are suitably quantized. Sub-band prediction can be further generalized by subtree prediction. Several authors followed this approach, based on matching wavelet subtrees instead of blocks. In other words, a mapping between the subtree associated with the target range and that of the best domain is searched. This can be done via the Haar basis as in Davies (1998), with further improvements in terms of lower complexity (Zhang and Po, 1999), better visual quality for image components like edges and textures (Kim et al., 2002), and so on (Li and Kuo, 1999; Peng, 2002; Sawada et al., 2001; Simon, 1995; Wu et al., 2000). If different bases are used, that is, with more vanishing moments or
biorthogonal, a more pleasant visual effect is achieved: a better objective quality comes together with blocking artifacts reduction (Caso and Kuo, 1996; Krupnik et al., 1995; Li and Kuo, 1996; van de Walle, 1997). A comparison among the performances of some bases can be found in Qiu and Dervai (2000). The wavelet representation can also help to speed up and simplify specific parts of a fractal scheme. For instance, classification can take advantage of the directionality of wavelet details whenever clustering is adopted, as in Belloulata et al. (1997), as can the implementation of the geometric (Andreopulos et al., 2000; Davies, 1995) and massic components of the contractive mappings (Andreopulos et al., 2000).

D. Linear Prediction Coding

The first approach combining linear prediction coding (LPC) and the fractal transform was proposed by Nappi and Vitulano (1999). The underlying idea consists of replacing the classical quantizers for coding the prediction error with the fractal transform. Hence, the search for fractals is performed within the image high frequencies that characterize the prediction error. This strategy strongly outperforms classical LPC techniques. Another way of combining these two approaches has been investigated by Tong and Pi (2003). They start from their proposal of unconventional affine parameters (Tong and Pi, 2003; Tong and Wong, 2002), already discussed, where the luminance offset is replaced by the range mean. This strategy yields an optimal bit allocation as well as an effective LPC of the DC component of the ranges. Under suitable statistical hypotheses on the prediction error, LPC followed by arithmetic coding (Shi and Sun, 2000) gains better results than the conventional fractal scheme.

E. Other Representations

Other representations have also been investigated for speeding up fractal encoding. In Lee and Ra's work (1999), the well-known Walsh-Hadamard transform (WHT) is exploited (Gonzalez and Woods, 2001). It is easy to argue that most of the information lies in the low-frequency WHT coefficients. So, considering the second and third WHT coefficients, it is trivial to show (via the Parseval theorem) that their distance between range and domain is less than or equal to the usual collage error. This strategy achieves a speed-up of about 18% over full search. The Hadamard transform has also been exploited by Jeng et al. (2000), who found that some encoding operations, like isometries, are more effective, with a speed-up of about 2.3. Finally, Breazu et al. (1999) used
the Karhunen-Loève transform (Gonzalez and Woods, 2001) through neural networks to make the search more effective via suitable feature vectors, gaining encouraging results.

V. Conclusions

This article has presented the main trends followed in the literature on fractal coding. The huge quantity of research on this topic limited this review to a description of only the main ideas, making the reader aware of the approaches already followed, along with a short explanation of them. However, each paper usually hides, more or less explicitly, some other aspects that may sometimes represent the missing component for solving some open problems. Hence, the guidelines, comments, and hints given in this contribution represent just a starting point for researchers interested in orienting their efforts toward this topic.

Appendix: Another Way of Fractal Encoding: Bath Fractal Transform

Similar to Jacquin's fractal transform, the Bath fractal transform (Monro, 1993; Monro and Dudbridge, 1992a,b; Monro and Woolley, 1994a,b; Woolley and Monro, 1994, 1995) relies on an IFS, $W = \{w_k,\ k = 1, \dots, N\}$, for coding a given image I. But in BFT the IFS is combined with a fractal function f(x, y), with $x, y \in I$ (called the function part), taking the form

$$f(w_k(x, y)) = v_k(x, y, f(x, y)).$$

In practice, the image I is split into a partition whose elements are called tiles; these blocks are equivalent to the range blocks of Jacquin's proposal. The task of the function f is to approximate the gray levels $g_k$ of the kth tile via the gray levels of a portion of the image I containing it (called the parent block, equivalent to the domain block of Jacquin's scheme). In practice, in BFT each tile is contained in the parent block used for its approximation. If the $v_k$ have M parameters $a_i^{(k)}$, $i = 1, \dots, M$, to be estimated, these can be computed as follows:

$$\frac{\partial d(g_k, \tilde{g}_k)}{\partial a_i^{(k)}} = 0 \qquad \forall i,$$

where $\tilde{g}(x) = v_k\big(w_k^{-1}(x),\ g[w_k^{-1}(x)]\big)$ represents the approximation of the kth tile via BFT. In practice, the difference with Jacquin's model stems from the
massic component, while the geometric one is similar, apart from the fact that only contraction is allowed (i.e., without rotations and so on) and that domain blocks spatially include range blocks. The most interesting aspects of this approach are the following (see also the sketch after this list):

• The great generality of the function f. In fact, $v_k(x, y, f) = a + b_x x + b_y y + c_x x^2 + c_y y^2 + d_x x^3 + d_y y^3 + e f(x, y)$. In a sense, BFT may be considered a sort of generalization of Jacquin's gray-level management.
• BFT approximates the gray levels of a tile with those of a block containing the tile itself. Hence, the search effort is drastically reduced, thanks to the limited number of allowable parent blocks.
• The spatial closeness of a parent to its tile; but this aspect touches an open problem, too.
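Since $v_k$ is linear in its parameters, setting the partial derivatives of the squared error to zero reduces to an ordinary linear least-squares problem. A sketch for the polynomial function part quoted above (illustrative only; it assumes x, y, parent_gray, and tile_gray are flattened float arrays over one tile, and the names are hypothetical):

```python
import numpy as np

def fit_bft_parameters(x, y, parent_gray, tile_gray):
    """Least-squares fit of v(x, y, f) = a + bx*x + by*y + cx*x^2
    + cy*y^2 + dx*x^3 + dy*y^3 + e*f over the pixels of one tile."""
    X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x**3, y**3,
                         parent_gray])
    params, *_ = np.linalg.lstsq(X, tile_gray, rcond=None)
    return params  # (a, bx, by, cx, cy, dx, dy, e)
```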
The last two points (discussed in depth in Section III.A.3) and other details of this scheme are treated in the relative sections.

References

Andreopulos, I., Karayiannis, Y. A., and Stouraitis, T. (2000). A hybrid image compression algorithm based on fractal coding and wavelet transform. Proc. of ISCAS '00, Geneva, Switzerland, May, pp. 28–31.
Arya, S., Mount, D., Netanyahu, N. S., Silverman, R., and Wu, A. Y. (1994). An optimal algorithm for approximate nearest-neighbor searching in fixed dimension. Proc. 5th Annual ACM-SIAM Symp.: Discrete Algorithms, 573–582.
Ausbeck, P. J. (1998). Context models for palette images. Proc. IEEE Data Compression Conference, edited by J. Storer and M. Cohn, Snowbird, UT, pp. 309–318.
Baharav, Z., Malah, D., and Karnin, E. (1995). Hierarchical interpretation of fractal image coding and its applications, in Fisher (1995).
Bani-Eqbal, B. (1995). Enhancing the speed of fractal image compression. Opt. Eng. 34(6), 1705–1710.
Barnsley, M. F. (1988a). Fractals Everywhere. New York: Academic Press.
Barnsley, M. F. (1988b). Fractal modeling of real world images, in The Science of Fractal Images. New York: Springer-Verlag.
Barnsley, M. F., and Demko, S. (1985). Iterated function systems and the global construction of fractals. Proc. of Royal Society London Series A 399, 243–275.
Barnsley, M. F., and Jacquin, A. (1988). Application of recurrent iterated function systems to images. SPIE Visual Comm. and Image Proc. 1001, 122–131.
Barnsley, M. F., and Sloan, A. D. (1988). A better way to compress images. Byte Magazine, Jan.
Barnsley, M. F., Elton, J. H., and Hardin, D. P. (1989). Recurrent iterated function systems, in Constructive Approximation. Berlin, Germany: Springer-Verlag, pp. 3–31.
Barnsley, M. F., Ervin, V., Hardin, D., and Lancaster, J. (1986). Proc. Nat. Acad. Science USA 83.
Barthel, K. U., Voyé, T., and Noll, P. (1993). Improved fractal image coding. Proc. Int. Picture Coding Symp., Lausanne, Switzerland, March, p. 1.5.
Barthel, K. U. (1997). Entropy constrained fractal image coding. Fractals 5(Suppl), 17–26.
Barthel, K. U., and Voyé, T. (1994). Adaptive fractal image coding in the frequency domain. Proc. Int. Work. Image Proc., Budapest, Hungary, June, pp. 33–38.
Barthel, K. U., Schüttemeyer, J., Voyé, T., and Noll, P. (1994). A new image coding technique unifying fractal and transform coding. Proc. of ICIP '94, Austin, TX, Vol. III, pp. 112–116.
Beaumont, J. M. (1990). Advances in block based fractal coding of still pictures. Proc. IEE Colloq.: The application of fractal techniques in image processing, pp. 3.1–3.6.
Bedford, T., Dekking, F. M., and Keane, M. S. (1992). Fractal image coding techniques and contraction operators. Nieuw Archief Voor Wiskunde 10, 185–217.
Bedford, T., Dekking, F. M., Breewer, M., Keane, M. S., and van Schooneveld, D. (1994). Fractal coding of monochrome images. Signal Processing 6, 405–419.
Belloulata, K., and Konrad, J. (2002). Fractal image compression with region-based functionality. IEEE Trans. on Image Proc. 11(4), 351–362.
Belloulata, K., Baskurt, A., and Prost, R. (1997). Fast directional fractal coding of subbands using decision directed clustering for block classification. Proc. ICASSP '97 4, 3121–3124.
Belloulata, K., Baskurt, A., Benoit-Cattin, H., and Prost, R. (1998). Fractal coding of subbands with an oriented partition. Signal Processing: Image Communication 12, 243–252.
Berthe, K., Hua, J. Y., and Yang, Y. (2002). Efficient image compression based on combination of fuzzy fractal theory. Proc. of TENCON '02 1, 573–577.
Bogdan, A., and Meadows, H. E. (1992). Kohonen neural network for image coding based on iteration transformation theory. Proc. of SPIE Neural and Stochastic Methods in Image and Signal Proc. 1766, 425–436.
Bowyer, A. (1981). Computing Dirichlet tessellations. Computer J. 24(2), 162–166.
Breazu, M., and Toderean, G. (1998). Region-based fractal image compression using deterministic search. Proc. of ICIP '98 8, 742–746.
Breazu, M., Toderean, G., Volovici, D., and Iridon, M. (1999). Speeding up fractal image compression by working in Karhunen-Loève transform space. Proc. of IJCNN '99 4(6), 2694–2697.
Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. Belmont, CA: Wadsworth.
Cai, D. S., and Hirobe, M. (1999). Optimal fractal image coding. Proc. IEEE of TENCON '99 1, 650–653.
Cardinal, J. (2001). Fast fractal compression of greyscale images. IEEE Trans. on Image Processing 10(1), 159–164.
Caso, G., and Kuo, C. C. J. (1996). New results for fractal/wavelet image compression. Proc. SPIE Visual Communication and Image Processing, edited by R. Ansari and M. J. Smith, 2727, 536–547.
Caso, G., Obrador, P., and Kuo, C. C. (1995). Fast methods for fractal image encoding. Proc. SPIE Visual Communication and Image Processing '95 2501, 583–594.
Chang, H. T., and Kuo, C. J. (2000). Iteration-free fractal image coding based on efficient domain pool design. IEEE Trans. on Image Proc. 9(3), 329–339.
Chang, Y. C., Shyu, B. K., and Wang, J. S. (1997). Region-based fractal image compression with quadtree segmentation. Proc. ICASSP '97, Munich, Germany.
Chang, Y. C., Shyu, B. K., Cho, C. Y., and Wang, J. S. (2000). Adaptive post-processing for region-based fractal image compression. Proc. Data Compression 549.
Cheng, Q. (1996). Attribute pattern recognition and its application. CSI-AM '96, Shanghai, China, pp. 27–32.
Chu, H. T., and Chen, C. C. (2000). An efficient decoding scheme for fractal image compression. Proc. ICIP '00 2, 164–166.
Conci, A., and Aquino, F. R. (1999). Using adaptive contraction for fractal image coding based on local fractal dimension. Proc. IEEE of Computer Graphics and Image Processing, 231–239.
Curtis, K. M., Neil, G., and Fotopoulos, V. (2002). A hybrid fractal/DCT image compression method. Proc. of Int. Conf. on DSP 2, 1337–1340.
Davies, G. (1995). Self-quantized wavelet subtrees: A wavelet-based theory of fractal image compression. Proc. SPIE: Wavelet Applic. II, edited by H. H. Szu, 2491, pp. 141–152.
Davies, G. M. (1998). A wavelet-based analysis of fractal image compression. IEEE Trans. on Image Proc. 7(2), 141–154.
Davoine, F., Svensson, J., and Chassery, J. M. (1995). A mixed triangular and quadrilateral partition for fractal image coding. Proc. ICIP '95, Washington, DC, 3, 284–287.
Davoine, F., Antonini, M., Chassery, J. M., and Barlaud, M. (1996). Fractal image compression based on Delaunay triangulation and vector quantization. IEEE Trans. on Image Processing 5, 338–346.
Distasi, R., Polvere, M., and Nappi, M. (1998). Split decision functions in fractal image coding. Electronics Letters 34(8), 751–753.
Domaszewicz, J., and Vaishampayan, V. A. (1994). Iterative collage coding for fractal compression. Proc. of ICIP '94, Austin, TX, 3, 127–131.
Ebrahimi, T. (1994). A new technique for motion field segmentation and coding for very low bitrate coding applications. Proc. of ICIP 2, 433–437.
Elias, P. (1975). Universal codeword sets and representations of integers. IEEE Trans. on Information Theory 21(2), 194–203.
Elsherif, M., Rashwan, M., and Elsayad, A. (2000). Matching criteria in fractal image compression. Proc. of 43rd Symp. on Circuits and Systems, Lansing, MI, 2, 612–615.
Equitz, W. H. (1989). A new vector quantization clustering algorithm. IEEE Trans. on Acoustic, Speech, Signal Proc. 37, 1568–1575.
Farhadi, G. (2003). A hybrid image compression scheme using block-based fractal coding and DCT. Proc. of 4th EURASIP '03, Video/Image Proc. and Multimedia Comm. 1, 89–94.
Fisher, Y. (Ed.) (1995). Fractal Image Compression: Theory and Application. Berlin, Germany: Springer-Verlag.
Fisher, Y., and Lawrence, A. F. (1992). Fractal image compression for mass storage applications. Proc. of SPIE '92 1662, 244–255.
Fisher, Y., Jacobs, E. W., and Boss, R. D. (1992). Fractal image compression using iterated transforms, in Image and Text Compression, edited by J. A. Storer. Boston, MA: Kluwer, pp. 35–61.
Fisher, Y., Shen, T. P., and Rogovin, D. (1994). Comparison of fractal methods with discrete cosine transform (DCT) and wavelets. Proc. SPIE: Neural Stochastic Methods Image Signal Processing III, edited by S.-S. Chen, 2304, 132–143.
Franco, R., and Malah, D. (2001). Adaptive image partitioning for fractal coding achieving designated rates under a complexity constraint. Proc. of ICIP '01 2, 435–438.
Freeman, H. (1961). On the coding of arbitrary geometric configurations. IRE Trans. Electron. Comput. EC-10, 260–268.
Friedman, J. H., Bentley, J. L., and Finkel, R. A. (1977). An algorithm for finding best matches in logarithmic expected time. ACM Trans. Math. Softw. 3(3), 209–226.
Frigaard, C., Gade, J., Hemmingsen, T., and Sand, T. (1994). Image compression based on a fractal theory. Intern. Report S701, Inst. Elec. Syst., Aalborg University, Denmark.
Furao, S., and Hasegawa, O. (2004). A fast no search fractal image coding method. Signal Processing: Image Communication 19, 393–404.
Gharavi-Alkhansari, M. (1994). Fractal-based techniques for a generalized image coding method. Proc. ICIP '94, Austin, TX, 3, 122–126.
Gharavi-Alkhansari, M., and Huang, T. S. (1996). Fractal image coding using rate-distortion optimized matching pursuit. Proc. SPIE: Vis. Comm. Image Proc., edited by R. Ansari and M. J. Smith, 2727, 1386–1393.
FRACTAL ENCODING
173
Gharavi-Alkhansari, M., and Huang, T. S. (1997). A system/graph theoretical analysis of attractor coders. Proc. ICASSP ’97. Munich, Germany, 4, 2705–2708. Gharavi-Alkhansari, M., DeNardo, R., Tenda, Y., and Huang, T. S. (1997). Resolution enhancement of images using fractal coding. Proc. SPIE: Vis. Comm. Image Proc., edited by J. Biemond and E. J. Delp, 3024, 1089–1100. Giang, N. K., and Saupe, D. (2000). Adaptive post processing for fractal image compression. Proc. of ICIP ’00 2, 183–186. Gonzalez, R. C., and Woods, R. E. (2001). Digital image processing (2nd edn.). New Jersey: Prentice Hall, Upper Saddle River. Hamzaoui, R. (1996). Decoding algorithm for fractal image compression. Electronics Lett. 32, 1273–1274. Hamzaoui, R. (1997a). Fast decoding algorithms for fractal image compression. Proc. of Fractals in Engineering, pp. 19–23. Arcachon. Hamzaoui, R. (1997b). Ordered decoding algorithm for fractal image compression. Proc. Int. Picture Coding Symp. Berlin, Germany. 91–95. Hamzaoui, R. (1997c). Codebook clustering by self-organizing maps for fractal image compression. Fractals 5, 27–38. Hamzaoui, R., and Saupe, D. (2000). Combining fractal image compression and vector quantization. IEEE Trans. on Image Processing 9(2), 197–208. Hamzaoui, R., Mu¨ ller, M., and Saupe, D. (1996). VQ-enhanced fractal image compression. Proc. ICIP ’96. Lausanne, Switzerland, 1, 153–156. Hamzaoui, R., Hartenstein, H., and Saupe, D. (2000a). Local iterative improvement of fractal image codes. Image and Vision Computing 18, 565–568. Hamzaoui, R., Saupe, D., and Hiller, M. (2000b). Fast code enhancement with local search for fractal image compression. Proc. of ICIP ’00 2, 156–159. Hamzaoui, R., Saupe, D., and Hiller, M. (2001). Distortion minimization with fast local search for fractal image compression. J. Visual Communication and Image Representation 12, 450–468. Hartenstein, H., and Saupe, D. (1998). Cost-based region growing for fractal image compression. Proc. IX EUSIPCO, pp. 2313–2316. Hartenstein, H., Ruhl, M., and Saupe, D. (2000). Region-based fractal image compression. IEEE Trans. on Image Proc. 9(7), 1171–1184. Hartenstein, H., and Saupe, D. (2000). Lossless acceleration of fractal image encoding via the fast Fourier transform. Signal Processing: Image Communication 16, 383–394. Hartenstein, H., Saupe, D., and Barthel, K. U. (1997). VQ-encoding of luminance parameters in fractal coding scheme. Proc. of ICASSP ’97. Munich, Germany, 4, 2701–2704. He, C., Yang, S. Y., and Huang, X. (2004). Variance-based accelerating scheme for fractal image encoding. Electronics Lett. 40(2), 362–363. Ho, H. L., and Cham, W. K. (1997). Attractor image coding using lapped partitioned iterated function systems. Proc. ICASSP ’97. Munich, Germany, 4, 2917–2920. Ho¨ ntsch, I., and Karam, L. J. (2000). Locally adaptive perceptual image coding. IEEE Transactions on Image Processing 9(9), 1472–1483. Ho¨ ntsch, I., and Karam, L. J. (2002). Adaptive image coding with perceptual distortion control. IEEE Transactions on Image Processing 11(3), 213–222. Hu¨ rtgen, B. (1993). Contractivity of fractal transforms for image coding. Electronics Lett. 29, 1749–1750. Hu¨ rtgen, B. (1994). Performance bounds for fractal coding. Proc. of ICASSP ’94. Detroit, 3, 2563–2566. Hu¨ rtgen, B. (1995). Statistical evaluation of fractal coding schemes. Proc. of ICIP ’95. Washington, DC, 3, 280–283.
174
VITULANO
Hu¨ rtgen, B., and Hain, T. (1994). On the convergence of fractal transform. Proc. of ICASSP ’94. Australia: Adelaide, 5, 561–564. Hu¨ rtgen, B., and Simon, S. F. (1994). On the problem of convergence in fractal coding schemes. Proc. of ICIP ’94. Texas: Austin, 3, 103–106. Hu¨ rtgen, B., and Stiller, C. (1993). Fast hierarchical codebook search for fractal coding of still images. SPIE Proc.: Video Commun. PACS Med. Applic., edited by R. A. Mattheus, A. J. Duerinckx, and P. J. van Otterloo, 1977, 397–408. Hutchinson, J. E. (1981). Fractals and self-similarity. Indiana Univ. Math. J. 35, 713–747. Jacobs, E. W., Fisher, Y., and Boss, R. D. (1992). Image compression: A study of iterated transform method. Signal Processing 29, 251–263. Jackson, D. J., Mahmoud, W., Stapleton, W. A., and Gaughan, P. T. (1997). Faster fractal image compression using quadtree recomposition. Image and Vision Computing 15, 759–767. Jacquin, A. E. (1989). Image coding based on a fractal theory of iterated contractive Markov operators, Part II: Construction of fractal codes for digital images. Tech. Rep. Math. 91389–17. Georgia Institute of Technology. Jacquin, A. E. (1990a). A novel fractal block-coding technique for digital images. Proc. ICASSP ’90. Albuquerque, NM, 4, 2225–2228. Jacquin, A. E. (1990b). Fractal image coding based on a theory of iterated contractive image transformations. Proc. SPIE: Vis. Commun. Image Process. Switzerland: Lausanne, edited by M. Kunt, 1360, 227–239. Jacquin, A. E. (1992). Image coding based on a fractal theory of iterated contractive image transformations. IEEE Trans. on Image Processing 1, 18–30. Jacquin, A. E. (1993). Fractal image coding: A review. Proc. IEEE 81, 1451–1465. Jeng, J. H., and Shyu, J. R. (2000). Fractal image compression with simple classification scheme in frequency domain. Electronics Lett. 36(8), 716–717. Jeng, J. H., Truong, T. K., and Sheu, J. R. (2000). Fast fractal image compression using Hadamard transform. Proc. of Vis., Image and Signal Proc. 147(6), 571–574. ISO/IEC JTC 1/SC 29/WG 1 (2000). ISO/IEC FCD 15444-1:. Information technology-JPEG 2000 image coding system: Core coding system. Kaouri, H. A. (1991). Fractal coding of still images. Proc. IEE 6th Int. Conf. Digital Proc. of Signals in Comm. 235–239. Kaneko, T., and Okudaira, M. (1985). Encoding of arbitrary curves based on the chain code representation. IEEE Trans. on Communications COM-33, 697–707. Kapoor, A., Arora, K., Jain, A., and Kapoor, G. P. (2003). Stochastic image compression using fractals. Proc. of ITCC ’03, 574–579. Kim, C. S., Kim, R. C., and Lee, S. U. (1998). Fractal vector quantizer for image coding. IEEE Trans. on Image Proc 7, 1598–1602. Kim, K. (1996). Still image coding based on vector quantization and fractal approximation. IEEE Trans. on Image Proc. 5, 587–597. Kim, K., and Park, R. K. (1994). Image coding based on fractal approximation and vector quantization. Proc. of ICIP ’94. Austin, TX, 3, 132–136. Kim, T., Van Dick, R. E., and Miller, D. J. (2002). Hybrid fractal zerotree wavelet image coding. Signal Processing: Image Communication 17, 347–360. Kohonen, T. (1995). Self-Organizing Maps. Berlin: Springer-Verlag. Kolmogorov, A. N., and Fomin, S. V. (1957). Elements of Theory of Functions and Functional Analysis. New York: Graylock Press. Kominek, J. (1996). Codebook reduction in fractal image compression. Proc. IS & T/SPIE ’96 Symp. on Electr. Imaging: Science & Technology—Still Image Compr. II 2669. Kopilovic´ , I., Saupe, D., and Hamzaoui, R. (2001). 
Progressive fractal coding. Proc. of ICIP’01 1, 86–89.
FRACTAL ENCODING
175
Krupnik, H., Malah, D., and Karnin, E. (1995). Fractal representation of images via the discrete wavelet transform. Proc. 18th IEEE Conv. Electrical and Electronics Engineers in Israel, Tel-Aviv, pp. 2.2.2/1.5. Kuo, C. J., Huang, W. J., and Lin, T. G. (1999). Isometry-based shape-adaptive fractal coding for images. J. Visual Communication and Image Representation 10, 307–319. Kuroda, H., Popescu, D. C., and Yan, H. (1995). Fast block matching method for image data compression basedon fractal models. Proc. SPIE Vis. Comm. Image Proc., edited by L. T. Wu, 2501, 1257–1266. Lai, C., Lam, K., and Siu, W. (2002). Improved searching scheme for fractal image coding. Electronics Lett. 38, 1653–1654. Lai, C. M., Lam, K. M., and Siu, W. C. (2003). A fast fractal image coding based on kick-out and zero contrast conditions. IEEE Trans. on Image Processing 12(11), 1398–1403. Lee, C. K., and Lee, W. K. (1998). Fast fractal image block coding based on local variances. IEEE Trans. on Image Processing 7(6), 888–891. Lee, S. M. (1999). A fast variance ordered domain block search algorithm for fractal encoding. IEEE Trans. on Consumer Electronics 45(2), 275–277. Lee, S. M., and Ra, S. W. (1999). An analysis of isometry transforms in frequency domain for fast fractal encoding. IEEE Signal Processing Lett. 6(5), 100–102. Lepsøy, S. (1993). Attractor image compression—Fast algorithms and comparisons to related techniques, Trondheim, Norway: Norvegian Institute of Technology, Ph.D. Dissertation. Lepsøy, S., and Øien, G. E. (1994). Fast attractor image encoding by adaptive codebook clustering, in Fractal and Image Compression—Theory and Application, edited by Y. Fisher. New-York: Springer-Verlag. Li, J., and Kuo, C. C. J. (1996). Fractal wavelet coding using a ratedistortion constraint. Proc. of ICIP ’96. Lausanne, Switzerland, II, 81–84. Li, J., and Kuo, C. C. J. (1999). Image compression with a hybrid wavelet-fractal coder. IEEE Trans. on Image Proc. 8(6), 868–873. Li, J., Chen, G., and Chi, Z. (2002). A fuzzy image metric with application to fractal coding. IEEE Trans. on Image Processing 11(6), 636–643. Linde, Y., Buzo, A., and Gray, R. M. (1980). An algorithm for vector quantizer disign. IEEE Trans. Commun COM-28(1), 84–95. Loganathan, D., Amudha, J., and Mehata, K. M. (2003). Classification and feature vector techniques to improve fractal image coding. Proc. IEEE of TENCON 4, 1503–1507. LoPresto, S. M., Ramchandran, K., and Orchard, M. T. (1997). Image coding based on mixture modeling of wavelet coeYcients and a fast estimation-quantization framework. Proc. IEEE of Conference on Data Compression (DCC), 221–230. Lu, G., and Yew, T. L. (1994). Image compression using partitioned iterated function systems. Proc. of SPIE: Image and Video Compression, edited by M. Rabbani and R. J. Safranek, 2186, pp. 122–133. Lu, N. (1997). Fractal Imaging. New York: Academic. Lundheim, L. M. (1992). Fractal signal modelling for source coding (Ph.D. thesis). The Norvegian Institute of Technology. Mahmoud, W. H., and Jackson, D. J. (1999). Improved quadtree decomposition/recomposition algorithm for fractal image compression. Proc. IEEE Southeastcon ’99, 258–263. Mallat, S. (1998). A Wavelet Tour of Signal Processing. New York: Academic Press. Malvar, H. S., and Staelin, D. H. (1989). The LOT: Transform coding without blocking eVects. IEEE Trans. on ASSP 37(4), 553–559. Mandelbrot, B. B. (1982). The Fractal Geometry of Nature. New York: Freeman. Melnikov, G., and Katsaggelos, A. K. (1998). 
Non uniform segmentation optimal hybrid fractal/DCT image compression algorithm. Proc. of ICASSP. Seattle, WA, 5, 2573–2576.
176
VITULANO
Melnikov, G., and Katsaggelos, A. K. (2002). A jointly optimal fractal/DCT compression scheme. IEEE Trans. on Multimedia 4(4), 413–422. Mitra, S. K., Murthy, C. A., Kundu, M. K., and Bhattacharya, B. B. (2001). Fractal image compression using iterated function system with probabilities. Proc. of Int. Conf. on Inf. Techn.: Coding and Comp., 191–195. Special issue on MPEG-4. (1997). IEEE Trans. on Circuits Syst. Video Techn. 7 (entire issue). Monro, D. M. (1993). Class of fractal transforms. Electronics Lett. 29, 362–363. Monro, D. M., and Dudbridge, F. (1992a). Fractal approximation of image blocks. Proc. of ICASSP ’92 3, 485–488. Monro, D. M., and Dudbridge, F. (1992b). Fractal block coding of images. Electronics Letters 28, 1053–1055. Monro, D. M., and Woolley, S. J. (1994a). Rate/distortion in fractal compression: Order of transform and block symmetries. Proc. Int. Symp. Speech. Image Proc. and Neural Networks. Hong Kong, 1, 168–171. Monro, D. M., and Woolley, S. J. (1994b). Fractal image compression without searching. Proc. ICASSP ’94. Adelaide, Australia, 5, 557–560. Mukherjee, J., Kumar, P., and Ghosh, S. K. (2000). A graph-theoretic approach for studying the convergence of fractal encoding algorithm. IEEE Trans. on Image Processing 9(3), 366–377. Nappi, M., and Vitulano, D. (1999). Linear prediction image coding using iterated function systems. Image and Vision Comp 17, 771–776. Novak, M. (1993). Attractor coding of images. Proc. Int. Picture Coding Symp. Lausanne: Switzerland. Øien, G. E. (1993). L2-Optimal attractor image coding with fast decoder convergence (Ph.D. thesis). Norwegian Institute of Technology. Øien, G. E. (1994). Parameter quantization in fractal image coding. Proc. ICIP ’94. Austin, Texas, 3, 142–146. Øien, G. E., and Lepsøy, S. (1995). A class of fractal image coders with fast decoder convergence, in Fractal Image Compression: Theory and Applications, edited by Y. Fisher. Berlin, Germany: Springer-Verlag, pp. 153–174. Øien, G. E., Baharav, Z., Lepsøy, S., and Karnin, E. (1994). A new improved collage theorem with applications to multiresolution fractal image coding. Proc. ICASSP ’94. Adelaide, Australia, 5, 565–568. Øien, G. E., Lepsøy, S., and Ramstad, T. A. (1991). An inner product space approach to image coding by contractive transformations. Proc. ICASSP ’91. Toronto, Ontario, Canada, 4, 2773–2776. Partridge, M., and Calvo, R. A. (1998). Fast dimensionality reduction and simple PCA. Intell. Data Analysis 2, 203–214. Peitgen, H. O., Jurgens, H., and Saupe, D (1992). Chaos and Fractals: New Frontiers of Science. Berlin, Germany: Springer-Verlag. Peng, H. (2002). Wavelet transform and fractal predict for image compression. Proc. of 1st Int. Conf. on Machine Learning and Cyb 3, 1673–1675. PfeVerman, J. D., Cingolani, P. E., and Cernuschi-Frias, B. (1999). An improved search algorithm for fractal image compression. Proc. IEEE of ICECS ’99 2, 693–696. Polidori, E., and Dugelay, J. L. (1997). Zooming using iterated function systems. Fractals 5, 111–123. Polvere, M., and Nappi, M. (2000). Speed-up in fractal image coding: Comparison of methods. IEEE Trans. on Image Processing 9(6), 1002–1009. Ponomarenko, N. N., Egizarian, K., Lukin, V. V., and Astola, J. T. (2001). Lossless acceleration of fractal compression using domain and range block local variance analysis. Proc. IEEE of Image Process 2, 419–422.
FRACTAL ENCODING
177
Popescu, D. C., Dimca, A., and Yan, H. (1997). A nonlinear model for fractal image coding. IEEE Trans. on Image Processing 6(3), 373–382. Qiu, Z., and Dervai, F. (2000). A new wavelet feature for wavelet basis selection in waveletfractal hybrid image coding. Proc. WCCC-ICSP’00 2, 1054–1057. Ramamurthi, B., and Gersho, A. (1986). Classified vector quantization of images. IEEE Trans. on Communications 34, 1105–1115. Reusens, E. (1994). Partitioning complexity issue for iterated functions systems based image coding. Proc. Eur. Signal Proc. Conf. 1. Edinburgh, U.K, 171–174. Reusens, E. (1994). Overlapped adaptive partitioning for image coding based on the theory of iterated function systems. Proc. ICASSP ’94. Adelaide, Australia, 5, 569–572. Riccio, D., and Nappi, M. (2003). Defering range/domain comparisons in fractal image compression. Proc. of ICIAP’03 412–417. Rinaldo, R., and Calvagno, G. (1995). Image coding by block prediction of multiresolution subimages. IEEE Trans. on Image Proc. 4(7), 909–920. Ruan, Y., and Nge, T. G. (1999). Region evolution with non-linear block transformations for fractal image coding. Proc. of ISSPA ’99 1, 87–90. Ruhl, M., and Hartenstein, H. (1997). Optimal fractal coding is NP-hard, in Proc. DCC ’97, edited by J. A. Storer and M. Cohn, IEEE Comp. Soc. Press, pp. 261–270. Utah: Snowbird. Ruhl, M., Hartenstein, H., and Saupe, D. (1997). Adaptive partitionings for fractal image compression. Proc. ICIP ’97 2, 310–313. Saupe, D. (1995). Accelerating fractal image compression by multidimensional nearest-neighbor search. Proc. IEEE Data Compression Conf., edited by J. A. Storer and M. Cohn, Snowbird, UT, pp. 222–231. Saupe, D. (1995). Fractal image compression via nearest-neighbor search, in Proc. NATO ASI on Fractal Image Encoding and Analysis, pp. 95–114. Norway: Trondheim. Saupe, D. (1996). Lean domain pools for fractal image compression. Proc. SPIE Elec. Imaging, Still Image Compression II, edited by R. L. Stevenson, A. I. Drukarev, and T. R. Gardos, 2669, 150–157. Saupe, D. (1996). The futility of square isometries in fractal image compression. Proc. ICIP ’96. Lausanne, Switzerland, 161–164. Saupe, D., and Jacob, S. (1997). Variance-based quadtrees in fractal image compression. Electronics Letters 33, 46–48. Saupe, D., and Ruhl, M. (1996). Evolutionary fractal image compression. Proc. ICIP ’96. . Switzerland: Lausanne, 1, 129–132. Saupe, D., Ruhl, M., Hamzaoui, R., Grandi, L., and Marini, D. (1998). Optimal hierarchical partitions for fractal image compression. Proc. ICIP ’98. Chicago, IL, 1, 737–741. Sawada, K., Nagai, S. Y., and Nakamura, E. (2001). Fractal image coding combined with subband decomposition. Proc. of ICECS ’01 3, 1347–1350. Selvi, S. S., and Makur, A. (2003). Variable dimension range and domain block-based fractal image coding. IEEE Trans. on Circuits and Systems for Video Technology 13(4), 343–347. Shi, Y. Q., and Sun, H. (2000). Image and Video Compression for Multimedia Engineering. New York: CRC Press. Signes, J. (1994). Geometrical interpretation of IFS based image coding. Fractals 5(Suppl), 133–143. July. Simon, B. (1995). Explicit link between local fractal transform and multiresolution transform. Proc. of ICIP. Washington, DC, 1, 278–281. Song, C. (2002). The box dimension for researching similarity in fractal image coding. Proc. IEEE of ICSP ’02 1, 889–891.
178
VITULANO
Sutskover, I., and Malah, D. (1999). Hierarchical fast decoding of fractal image representation using quadtree partitioning. Proc. of Int. Conf. of Image Proc. and Its Appl 2, 581–585. Takezawa, M., Honda, H., Miura, J., Haseyama, M., and Kitajima, H. (1999). A geneticalgorithm based quantization method for fractal image coding. Proc. of ICIP ’99 1, 458–461. Tan, T., and Yan, H. (2000). Determining and controlling convergence in fractal image coding. Proc. of ICIP ’00 2, 187–190. Tanimoto, M., Ohyama, H., and Kimoto, T. (1996). A new fractal image coding scheme employing blocks of variable shapes. Proc. ICIP ’96, Lausanne, Switzerland, 1, 137–140. Tate, S. R. (1992). Lossless compression of region edge maps. Dept. of Computer Science, Duke University, Durham, NC, Tech. Rep. CS-1992-9. Thao, N. T. (1996). A hybrid fractal-DCT coding scheme for image compression. Proc. of ICIP ’96. Lausanne, Switzerland, 1, 169–172. Thomas, L., and Deravi, F. (1995). Region-based fractal image compression using heuristic search. IEEE Trans. on Image Processing 4, 832–838. Tong, C. S., and Pi, M. (2001). Fast fractal image encoding based on adaptive search. IEEE Trans. on Image Processing 10(9), 1269–1277. Tong, C. S., and Pi, M. (2003). Analysis of a hybrid fractal-predictive-coding compression scheme. Signal Processing: Image Communication 18, 483–495. Tong, C. S., and Wong, M. (2000). Approximate nearest-neighbour search for fractal image compression based on a new aYne transform parametrization. Proc. IEEE of 15th Conf. on Pattern Recognition 3, 219–223. Tong, C. S., and Wong, M. (2002). Adaptive approximate nearest-neighbour search for fractal image compression. IEEE Trans. on Image Processing 11(6), 605–615. Truong, T. K., Jeng, J. H., Reed, I. S., Lee, P. C., and Li, A. Q. (2000). A fast encoding algorithm for fractal image compression using the DCT inner product. IEEE Trans. on Image Processing 9(4), 529–535. ˇ umbakis, T. (2002). Accelerating compression times in Valantinas, J., Morkevicˇ ius, N., and Z block based fractal image coding procedures. Proc. IEEE of EGUK ’02 83–88. van de Walle, A. (1997). Merging fractal image compression and wavelet transform methods. Fractals 5(Suppl.), 3–15. Vitulano, D. (2001). Fractal image coding schemes using nonlinear grey scale functions. Signal Processing 81, 1095–1099. Xuejun, W., Lianyu, C., and Hexin, C. (1999). A quadtree classified-based fractal image coding approach. Proc. IEEE of APCC/OECC ’99 2, 912–915. Yisong, C., Guoping, W., and Shihai, D. (2002). Feature diVerence classification method in fractal image coding. Proc. IEEE of ICSP ’02 1, 648–651. Yuxuan, R., and Nge, T. G. (1999). An improved fractal image compression scheme embedding DCT encoder. Proc. of Int. Conf. on Image Proc. and Its Appl. 2, 610–614. Wadstro¨ mer, N. (2003). An automatization of Barsnley’s algorithm for inverse problem of iterated function systems. IEEE Trans. on Image Processing 12(11), 1388–1397. Wakefield, P. D., Bethel, D. M., and Monro, D. M. (1997). Hybrid image compression with implicit fractal terms. Proc. of ICASSP ’97. Munich, Germany, 4, 2933–2936. Wang, C., and Cheng, Q. (2000). Attribute cluster network and fractal image compression. Proc. of WCCC-ICSP ’00 3, 1613–1616. Wang, C. C., and Hsieh, C. H. (2001). An eYcient fractal image-coding method using interblock correlation search. IEEE Trans. on Image Processing 11(1), 257–261. Wang, Z., Zhang, D., and Yu, Y. (2000). Hybrid image coding based on partial fractal mapping. 
Signal Processing: Image Communication 15, 767–779.
FRACTAL ENCODING
179
Watson, D. F. (1981). Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes. Computer J. 24(2), 167–172. Wein, C. J., and Blake, I. F. (1996). On the performance of fractal compression with clustering. IEEE Trans. on Image Processing 5, 522–526. Williams, R. F. (1971). Compositions of Contractions, Bol. Soc. Brasilian Math 2, 55–59. Wohlberg, B., and de Jager, G. (1995). Fast image domain fractal compression by DCT domain block matching. Electronics Lett. 31, 869–870. Wohlberg, B., and de Jager, G. (1999). A review of the fractal image coding literature. IEEE Trans. on Image Processing 8(12), 1716–1729. Woolley, S. J., and Monro, D. M. (1994). Rate-distortion performance of fractal transforms for image compression. Fractals 2, 395–398. Woolley, S. J., and Monro, D. M. (1995). Optimum parameters for hybrid fractal image coding. Proc. of ICASSP ’95. Detroit, MI, 4, 2571–2574. Wu, M., Ahmad, M. O., and Swamy, M. N. S. (2000). A new fractal zerotree coding from wavelet image. Proc. of ISCAS ’00. Geneva, Switzerland, 3, 21–24, May. Wu, P. Y. (2000). Fast fractal image compression. Proc. IEEE on Information Technology: Coding and Comp. 54–59. Wu, X., and Yao, C. (1991). Image coding by adaptive tree-structured segmentation. In Data Compression Conference. Snowbird, Utah, 73–82. Wu, Y. G., Huang, M. Z., and Wen, Y. L. (2003). Fractal image compression with variance and mean. Proc. IEEE of ICME ’03 1, 353–356. Zhang, Y., and Po, L. M. (1999). Variable tree size fractal compression for wavelet pyramid image coding. Signal Processing: Image Communication 14, 195–208. Zhao, Y., and Yuan, B. (1994). Image compression using fractals and discrete cosine transform. Electronics Lett. 30, 474–475. Zhao, Y., and Yuan, B. (1998). A new aYne transformation: Its theory and application to image coding. IEEE Trans. on Circuits and Systems for Video Technology 8(3), 269–274. Zhang, Z. M., and Yu, S. L. (1999). An improved zero-searching fractal image coding method. IEEE Trans. on Consumer Electronics 45(1), 91–96.
Morphologically Debiased Classifier Fusion: A Tomography-Theoretic Approach

DAVID WINDRIDGE

School of Electronics and Physical Sciences, University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom
I. Introduction
   A. Construction of a Generalized Theory of Classifier Fusion
      1. Issue of Feature Selection
   B. Existing Approaches to Classifier Combination
   C. Tomography Theory and Its Applications
   D. Article Structure
      1. Philosophical Development
II. Methodology of Tomographic Classifier Fusion
   A. Outline of the Methodology
   B. Generalization of the Radon Transform to Arbitrary Dimensionality
   C. Sampling Concerns
   D. Formal Explication of the Parallel Between Radon Transform Theory and Classifier Combination
   E. Estimation Error
   F. Strategy for the Combination of Overlapping Feature Sets
   G. Fully General Solution to the Combination Problem: Unity of Combination and Feature Selection Processes
   H. Summary of Methodological Approach
III. Postcombination Tomographic Filtration
   A. Introduction
   B. An Economic Approach to Postcombination Tomographic Filtration
   C. Nature of Högbom Deconvolution in the Sum Rule Domain
      1. Finite Sampling Issues
   D. Efficient Implementation of Högbom Deconvolution in the PDF Domain
      1. PDF-Centered Approach
      2. Algorithmic Implementation
      3. Step-by-Step Approach to Procedurally Implementing Performance-Optimized "Filtered Back-Projection"
   E. Final Considerations on the Postcombination Approach
IV. An Example Application
   A. Test Data Characteristics
   B. Results of Application
V. Dimensionality Issues: Empirical and Theoretical Constraints on the Relative Performance of Tomographic Classifier Fusion
   A. Relative Performance of Tomographic Classifier Fusion in Empirical Tests
      1. Response to Estimation Error
   B. Relative Performance of Tomographic Classifier Fusion on Model Data
   C. Tomographic Model Solution
   D. Sum Rule Model Solution
   E. Product Rule Model Solution
   F. Findings of Dimensionality Tests
   G. Conclusions to Dimensionality Tests
VI. Morphology-Centered Classifier Combination: Retrospect, Prospect
   A. Outlook
References
I. Introduction

A. Construction of a Generalized Theory of Classifier Fusion

The limitation on the performance gains that can be derived from a single classifier is a reflection of the fact that any individual design must, in the absence of an exhaustive training set, necessarily encode an a priori, metastatistical assumption as to the type of morphology applicable to the decision boundary. The potentially infinite variety of real-world data would seem to indicate, however, that no such singular assumption could ever be fully justified:

    No single model exists for all pattern recognition problems and no single technique is applicable to all problems. Rather what we have is a bag of tools and a bag of problems. (Kanal, 1974)
This observation has engendered considerable recent interest in multiple classifier systems (Jacobs, 1991; Kittler et al., 1998; Lam and Suen, 1995; Rahman and Fairhurst, 1997, 1998; Woods et al., 1997; Xu et al., 1994), which seek to make use of the divergence in design methodologies to limit such a priori impositions and obtain a correspondingly better estimate of the decision boundary to boost classification performance. In seeking to establish a general theoretical framework for such approaches, we will attempt to demonstrate that classifier combination, in virtually all of its variant forms, has an aspect that may be regarded as an approximate attempt at the reconstruction of the combined pattern space by tomographic means. The feature selection process in this scenario constitutes an implicit Radon integration along the lines of the physical process involved in computerized axial tomography (CAT) scanning, and so forth, albeit of a multidimensional nature. (An indication of precisely what we envisage by this equivalence between Radon integration and feature selection is given in Figure 1 for the two-dimensional [2D] case.)
Figure 1. Radon integration.
Figure 2. Back-projection.
It will thereby be ascertained that a morphologically optimal strategy for classifier combination can be achieved by appropriately restructuring the feature selection algorithm such that a fully constituted tomographic combination (rather than the approximation) acts in its most appropriate domain: that is, when the combination is composed of classifiers with distinct feature sets. As in medical imaging, this fully constituted tomographic combination necessarily involves the application of a deconvolution procedure to a back-projected space (Figure 2), which, in the context of pattern recognition, we will demonstrate amounts to the composite probability density function (PDF) constructed implicitly by the sum-rule decision scheme (for example, see Kittler et al., 1998). In conventional implementations of tomography (Natterer, 1997), such deconvolution is most usually accomplished via a collective prior filtering of the Radon integrals to remove reconstruction artifacts (as illustrated in Figures 3 and 4). This would typically take the form of a differentiation operator that acts to remove what, in the reconstructive space, would (for the case of perfect angular sample coverage) amount to convolution by an |1/r| function.
Figure 3. Reconstruction artifacts in the back-projected space.
Figure 4. Back-projection with prior filtration of Radon integrals.
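As a concrete counterpart to Figures 3 and 4, the following minimal Python sketch contrasts unfiltered back-projection with back-projection preceded by ramp filtering of the Radon integrals. It is an illustration only: the square phantom and angle count are assumptions of the sketch, and it presumes a recent scikit-image installation.

```python
import numpy as np
from skimage.transform import radon, iradon

# Illustrative phantom (an assumption of this sketch, not the text's data).
image = np.zeros((128, 128))
image[48:80, 48:80] = 1.0

theta = np.linspace(0.0, 180.0, 60, endpoint=False)  # projection angles (degrees)
sinogram = radon(image, theta=theta)  # the Radon integrals g(s, theta)

# Unfiltered back-projection: blurred, with reconstruction artifacts (Figure 3).
blurred = iradon(sinogram, theta=theta, filter_name=None)

# Prior (ramp) filtration of the Radon integrals before back-projection,
# removing the ~|1/r| convolution described above (Figure 4).
sharp = iradon(sinogram, theta=theta, filter_name='ramp')
```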
The very low angular sampling implied by feature selection, however, as well as the dimensionality of the spaces involved, means that an appropriate form of tomography theory needs to be developed from first principles. This, and the testing of the methodology so derived, is the subject of this article. As such, it constitutes a review of and an expansion on the author's existing work in the field [in particular Windridge and Kittler, 2003a, in respect to which the publishers retain copyright over certain diagrams and derivations used throughout the following, to be indicated at the appropriate points].

1. Issue of Feature Selection

Classifier combination almost always takes place in the context of feature selection, either implicitly or explicitly. That is, features are either precorrelated with classifiers on an a priori (possibly physically motivated) basis, or else allocated by a specific feature selection process (such as forward searching) acting via a performance-based criterion. The actual number of features allocated to each classifier depends on the outcome of a tension between two opposing constraints: first, more accurate parameter determination through increased sampling rates; and second, maximized retention of morphological information through increased dimensionality. These contrary considerations are summarized in Figure 5. We shall argue later in the article that any selected mechanism for decision fusion acts fundamentally differently for classifiers that act on common feature spaces in relation to those that act on discrete sets of feature spaces, and that this issue needs to be addressed at the feature selection level to avoid the systemic ambiguities that have hitherto been involved (if generally ignored). Ultimately, we shall discover that optimality requires that we abandon altogether the notion of a feature selection process distinct from classifier combination, and consider instead a hybrid process.
Figure 5. Schematic illustration of the central dilemma of feature selection.
B. Existing Approaches to Classifier Combination

Our investigation commences with a brief overview of the various preexisting methods of classifier fusion. We have indicated that the nonoverlapping of the misclassification errors of very distinct methods of classification has led to the realization that, in general, no one method of classification can circumscribe all aspects of a typical real-world classification problem, prompting, in consequence, the investigation of a variety of combinatorial methods in a bid to improve classification performance. Generally, at least in regard to the final combination, these methods have in common that they are based on intuitive techniques for the combination of disparate decision schemes (e.g., majority vote, weighted mean) and not on any underlying theoretical schematics. In particular, there has not as yet been any attempt to obtain a generally optimal mathematical solution to the problem on meta-statistical grounds. However, this fact has not prevented the incorporation of a large body of effective, heuristically conceived techniques into the machine learning tool kit. The most notably successful such approaches can be divided into two principal areas of concentration: decision fusion and ensemble creation. In the former, we are concerned only with obtaining an overall class decision from the various classifier outputs, working either with hard decisions (e.g., maximum vote) or probabilistic outputs (sum rule, product rule, etc.). In the latter, however, we are principally concerned with mechanisms for either diversifying or increasing the representative capacity of the classifiers constituting the combination. This approach further divides into two subcategories: subspace methods (Ho, 1998) and training set perturbation techniques: boosting (Freund and Schapire, 1996) and bootstrap aggregation, or 'bagging' (Breiman, 1996). The latter two techniques have, in particular, achieved widespread success and can be seen as methods for tuning a given base classifier's bias or variance, respectively (Melville et al., 2004). Our concern, however, is ostensibly only with decision fusion and its optimization (as opposed to ensemble creation), although the later theoretical necessity of delineating combination types on the basis of whether the constituent classifiers share a common feature space means that ensemble methods are implicitly considered in our proposed framework for classifier fusion. Given that we may also diversify classifier morphology via existing combination methodologies, we are hence, in this article, effectively proposing a meta-combination strategy capable of incorporating all existing work in the field within a common framework, albeit at an implicit level. Section II.G addresses this point in more detail.

C. Tomography Theory and Its Applications

Tomography theory, on which our work is constructed, is a well-developed branch of applied mathematics that finds its chief application in the area of image reconstruction. Typically, an imaging modality measures a series of intersecting slices (from the Greek tomos) of a 2D or three-dimensional (3D) manifold containing volumetric (or areal) information. Constraint information for an individual point value in this space is thus spread over a number of individual slices, and it is the task of tomographic reconstruction to invert this process and determine, as far as possible, the aggregate point values of the measured space. Very often this problem is ill-posed, and an a priori supposition as to the nature of the underlying morphology must be made to arrive at a single well-defined solution. Generally, these constraints are based on reasonable assumptions as to the underlying material disposition of the imaged object. Tomographic reconstruction is thus principally used where direct, invasive imaging of the object under consideration is either hazardous (medical imaging), unfeasible (seismological earth imaging, stellar imaging), or empirically compromising (plasma imaging). The examples listed provide an idea of the scope and extent of the use of tomographic methods in the research arena, with the former, medical imaging, constituting perhaps the single most common practical use of the technique. The imaging modalities themselves exhibit a similar breadth of scope; within the subject of medical imaging, a number of different subdivisions may be made on the basis of the specific data-capturing modality, for instance: 3D ultrasound, CAT scanning, positron emission tomography (PET), and electrical impedance tomography. As long as the constraint of noninvasiveness is an experimental requirement, tomographic methods will continue to enjoy a wide and potentially ever-expanding range of uses and implementations. Very often such new methods will require new a priori constraints, and even novel mathematics, to solve the inverse problem presented by the reconstruction of individual points from tomographically obtained data (see, for instance, Natterer, 1997, for a summary). Our particular use of tomographic reconstruction is certainly consistent with this trend, taking place, as it does, in an entirely abstract space (and,
moreover, one of arbitrary dimensionality). Consequently, it presents a number of problematic aspects entirely unique in tomography theory and requires the use of a number of novel mathematical devices to set unique constraints on the reconstructed probability space.

D. Article Structure

Given the scope of our inquiry, the format of this article divides naturally into a number of subinvestigations. The initial sections are concerned with outlining tomographic reconstruction theory and its generalization to the higher-dimensionality, low-angular sample-rate pattern spaces appropriate to pattern recognition theory. The following sections focus on making the parallels with probability theory mathematically rigorous. Finally, we consider the generalization of the technique to the combination of classifiers with nondistinct feature sets, and hence the universal application of the method. With the theoretical aspects of the method thus elucidated, we shall instigate a program of investigation into the practical utility of the method, first setting out an economized approach to practical implementation of the methodology and introducing an intuitive 'iterative graphical correlation' explication of the tomographic method, laying bare the method by which it interrelates constituent classifier morphologies in the combination. We then proceed with a 2D example. Next, we set out on a more rigorous program of performance investigation, focusing in particular on the performance trends over increasing classifier dimensionalities, and hence, in conclusion, confirming the method's general applicability to the field of pattern recognition.

1. Philosophical Development

We have argued that the elucidation of this methodology over the course of this article will strongly suggest a very much more unified approach to feature selection in the context of classifier combination, one in which the two apparently distinct processes become inseparable on attempting to obtain the optimal classification performance from a given set of classifiers. This unification comes about quite naturally, through having necessarily made an explicit distinction between the two separate aspects of classifier combination that become apparent: namely, classifier combination as a method of implicitly refining the individual class PDFs, and classifier combination as implicit n-dimensional tomographic reconstruction. The former aspect, the refining of the PDF morphology through combination, may, within the wider theoretical context imposed on us by our methodology, then be treated as a separate classification scheme in its own right. Hence, our investigation gives rise to the notion that classifier combination, in its most rigorous sense, can only apply to those feature sets explicitly selected
by the feature selection algorithm to be distinct; that is, distributed over the range of possible classifiers in a nonoverlapping manner, and thus combining in an entirely tomographically reconstructive fashion. The modification of the feature selection algorithm implied by the method therefore involves treating combinations of the classifiers (via any of the preexisting nontomographic schemes for combination) on exactly the same footing as their constituent classifiers within the selection algorithm. The wider perspective that evolves in this article would then account for the previously qualitative but widely observed property of classifier combination, that such methods are especially effective if individual classifiers use distinct features (e.g., Ali and Pazzani, 1995; Ho et al., 1994; Windridge and Kittler, 2000b), since we may now regard classical combination methods as, to a large degree, implicit, if only partial, tomographic reconstruction algorithms. They are therefore, without prior modification of the feature selection algorithm, attempting to inappropriately conflate the two distinct aspects of combination within the same procedure, unless either a mutual exclusivity or complete identity exists among the feature sets (Figure 6).
Figure 6. The two distinct aspects of classifier combination.
The first step in elucidating this generalized view of decision fusion and feature selection is thus the derivation of tomography theory as it applies within the context of classifier fusion. As such, this constitutes the exclusive concern of the following section. Much of the methodological development of Section II.A is quoted directly from Windridge and Kittler (2003a; with permission) and is hence the copyright of The Institute of Electrical and Electronics Engineers (IEEE) (as are, similarly, the example and later diagrams of Section IV).
II. Methodology of Tomographic Classifier Fusion

A. Outline of the Methodology

In formalizing the framework of this analysis,¹ we begin by specifying our prior assumptions as follows (generalizing later to a less constricting set of assumptions):

1. We shall assume, at least initially, that the selection of features is decided through classifier preference, and that this is accomplished via the straightforward omission of superfluous dimensions as appropriate. (Note that this will in general differ from the more usual techniques of feature selection, in that we are feature selecting on a class-by-class basis, the separate feature sets for each class only combining at the stage of Bayesian decision making.)

2. For simplicity, it shall (at least at the outset) be assumed that the set of classifiers operates on only one feature individually, and that these are distinct (though note that the former is not a prerequisite of the method). Evidence that the stronger of these two assumptions, the latter, is reasonably representative of the usual situation comes from Windridge and Kittler (2000b), wherein features selected within a combinatorial context are consistently shown to favor the allocation of distinct feature sets among the constituent classifiers, presumably due to their divergent design philosophies. The wider implications of the alternative to this assumption are considered in Section II.F.

3. We shall consider that the construction of a classifier is the equivalent of estimating the PDFs $p(x_{N(1,i)}, x_{N(2,i)}, \ldots, x_{N(k_i,i)} \mid \omega_i)\ \forall i$, where $N(x, y)$ is the final set of feature dimensions passed from the feature selection algorithm for class y (the cardinality of which, $k_i$, we will initially set to unity for every class identified by the feature selector: i.e., $k_i = 1\ \forall i$).
¹ © 2003 IEEE. Reprinted, with permission, from IEEE PAMI, Vol. 25, No. 3, March 2003.
4. It is assumed (prior to setting out the feature selection algorithm most appropriate to our technique) that, in any reasonable feature selection regimen, the total set of features employed by the various classifiers exhausts the classification information available in the pattern space (i.e., the remaining dimensions contribute only a stochastic noise component to the individual clusters).

Given assumption 3 above (that individual classifiers may be regarded as PDFs) and, further, that pattern vectors corresponding to a particular class may be regarded as deriving from an n-dimensional probability distribution, the process of feature selection may be envisaged as an integration over the dimensions redundant to that particular classification scheme (the discarding of superfluous dimensions being, in effect, the linear projection of a higher-dimensional space onto a lower one, ultimately a one-dimensional [1D] space in the above framework). That is, for n-dimensional pattern data of class i:

$$p(x_k \mid \omega_i)\, dx_k = \underbrace{\int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty}}_{n-1} p(\vec{X} \mid \omega_i)\, dx_1 \ldots dx_{k-1}\, dx_{k+1} \ldots dx_n \cdot dx_k \tag{1}$$

with $\vec{X} = (x_1, x_2, \ldots, x_n)$.
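A minimal numerical sketch of Eq. (1) may make the projection reading of feature selection concrete. The two-feature Gaussian below is an assumed stand-in for a class-conditional PDF; 'selecting' one feature is simply a summation (a discretized integration) over the superfluous axis.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 200)
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Assumed joint class-conditional PDF p(x1, x2 | w_i): an anisotropic Gaussian.
pdf = np.exp(-(X1**2 / 0.5 + X2**2 / 2.0))
pdf /= pdf.sum()  # normalize to unit mass on the discrete grid

# Selecting feature x1 integrates out x2, per Eq. (1); likewise for x2. Each
# marginal is the projection of the joint PDF onto the retained feature axis.
p_x1 = pdf.sum(axis=1)
p_x2 = pdf.sum(axis=0)

assert np.isclose(p_x1.sum(), 1.0) and np.isclose(p_x2.sum(), 1.0)
```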
Because of condition 4 above (a good approximation when a range of classifiers is assumed), we shall consider that the pattern vector effectively terminates at index j, where $j \le n$ is the total number of features (and also classifiers, given condition 3). That is, $\vec{X} = (x_1, x_2, \ldots, x_j)$ now represents the extent of the pattern vector dimensionality. In the integral analogy, the remaining dimensions that are integrated over in Eq. (1) serve to reduce the stochastic component of the joint PDF by virtue of the increased bin count attributable to each of the pattern vector indices.

Now, it is the basis of our thesis that we may regard Eq. (1) as the j-dimensional analogue of the Radon transform (essentially the mathematical equivalent of the physical measurements taken within a tomographic imaging regimen), an assertion that we shall make explicit in Section II.D after having found a method for extending the inverse Radon transform to an arbitrarily large dimensionality. The conventional Radon transform, however, is defined in terms of the 2D function f(x, y) as follows:

$$R(s, \theta)[f(x, y)] = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(x, y)\, \delta(s - x\cos\theta - y\sin\theta)\, dx\, dy \quad (= g(s, \theta)) \tag{2}$$
where s may be regarded as a perpendicular distance to a line in (x, y) space, and $\theta$ the angle that that line subtends in relation to the x axis. $R(s, \theta)$ is then an integral over f(x, y) along the line specified. As a first approximation to inverting the Radon transform and reconstructing the original data f(x, y), we might apply the Hilbert space adjoint operator of $R(s, \theta)$, the so-called back-projection operator:
$$R^*[R(s, \theta)](\vec{x}) = \int_S R(\vec{\theta}, \vec{\theta} \cdot \vec{x})\, d\theta \tag{3}$$

with $\vec{x} = (x, y)$, $\vec{\theta} = (\cos\theta, \sin\theta)$, and S the angular extent of the plane of rotation of $\theta$. To appreciate how this operates, consider first the following identity written in terms of the arbitrary function v, where $V = R^* v$:

$$\begin{aligned}
\int_S \int_s & v(\theta, \vec{x} \cdot \vec{\theta} - s)\, g(\theta, s)\, ds\, d\theta \\
&= \int_S \int_s v(\theta, \vec{x} \cdot \vec{\theta} - s) \int_{R'^2} f(\vec{x}')\, \delta(s - \vec{x}' \cdot \vec{\theta})\, d^2x'\, ds\, d\theta \\
&= \int_S \int_{R'^2} v(\theta, \vec{x} \cdot \vec{\theta} - \vec{x}' \cdot \vec{\theta})\, f(\vec{x}')\, d^2x'\, d\theta \quad \text{(eliminating } s) \\
&= \int_{R'^2} \left[ \int_S v(\theta, (\vec{x} - \vec{x}') \cdot \vec{\theta})\, d\theta \right] f(\vec{x}')\, d^2x' \\
&= \int_{R'^2} V(\vec{x} - \vec{x}')\, f(\vec{x}')\, d^2x' \quad \text{(via the definition of } V\, [= R^* v]) \\
&= V \star f
\end{aligned} \tag{4}$$

The first term in the above may be symbolically written $R^*(v \star g)$, where it is understood that the convolution is with respect to the length variable and not the angular term in g. Hence, we have that $V \star f = R^*(v \star g)$. We may also describe the relationship between V and v in terms of their Fourier transforms. Consider first the 2D transform of V:

$$\begin{aligned}
F(\vec{k})[V(\vec{x})] &= (2\pi)^{-1} \int_{R^2} e^{i\vec{x} \cdot \vec{k}}\, V(\vec{x})\, d^2\vec{x} \\
&= (2\pi)^{-1} \int_{R^2} \int_S e^{i\vec{x} \cdot \vec{k}}\, v(\theta, \vec{x} \cdot \vec{\theta})\, d\theta\, d^2x \quad \text{(by substitution)} \\
&= (2\pi)^{-1} \int_S \int_{R^2} e^{i\vec{x} \cdot \vec{k}}\, v(\theta, \vec{x} \cdot \vec{\theta})\, d^2x\, d\theta
\end{aligned} \tag{5}$$
We now consider a slice through this transform along the direction $\theta$. This may be accomplished in the above by substituting in the delta function $\delta(\vec{k} - s\vec{\theta})$ within the $\theta$ integral (i.e., coupling the variables $\vec{k}$ and $\vec{\theta}$) and transforming it to a k-space integral via the corresponding transformation $d\vec{k} \rightarrow s\, d\vec{\theta}$ (s is a positive real number):

$$\begin{aligned}
F(s\vec{\theta})[V(\vec{x})] &= (2\pi)^{-1} \int_S \int_{R^2} e^{i\vec{x} \cdot \vec{k}}\, v(\theta, \vec{x} \cdot \vec{\theta})\, d^2x\, \delta(\vec{k} - s\vec{\theta})\, d\theta \\
&= (2\pi)^{-1} \int_S \int_{R^2} e^{i\vec{x} \cdot \vec{k}}\, v(\theta, \vec{x} \cdot \vec{\theta})\, d^2x\, \delta(\vec{k} - s\vec{\theta})\, d\vec{k}\, s^{-1} \\
&= (2\pi)^{-1} \int_{R^2} e^{is\vec{x} \cdot \vec{\theta}}\, v(\theta, \vec{x} \cdot \vec{\theta})\, d^2x\, s^{-1}
\end{aligned}$$
We have also that $d(\vec{x} \cdot \vec{\theta}) = d\vec{x} \cdot \vec{\theta}$ for constant $\vec{\theta}$. Thus:

$$\begin{aligned}
F(s\vec{\theta})[V(\vec{x})] &= (2\pi)^{-1} \int_{R^2} e^{is\vec{x} \cdot \vec{\theta}}\, v(\theta, \vec{x} \cdot \vec{\theta})\, d(\vec{x} \cdot \vec{\theta})\, (s|\vec{\theta}|)^{-1} \\
&= (2\pi)^{-1} \int_{R^2} e^{isz}\, v(\theta, z)\, dz\, (s|\vec{\theta}|)^{-1} \quad \text{(where } z = \vec{x} \cdot \vec{\theta})
\end{aligned}$$
The z-dependent terms now form a Fourier transform with respect to the second variable in v. Hence, we may write the above in the following form to elucidate the precise relation between V and v in Fourier terms:

$$F(s\vec{\theta})[V(\vec{x})] = (2\pi)^{-1} F_z(s)[v(\theta, z)]\, (s|\vec{\theta}|)^{-1} \tag{6}$$
The effect of the back-projection operator on the Radon transform of f may then be appreciated, via a consideration of Eq. (4), by setting v to be a Dirac delta function in s (corresponding to an identity operation within the convolution). The V corresponding to this v may then be deduced by inserting the Fourier transform of the delta function (unity throughout f-space) into the above equation. Hence, we see that the effect of applying the back-projection operator to the Radon-transformed f function is the equivalent of convolving f with the inverse Fourier-transformed remainder:

$$f_{\text{recovered}}(x, y) = f_{\text{original}} \star F^{-1}(s^{-1}) \tag{7}$$
In terms of the tomographic analogy, we retrieve a 'blurred' version of the original data. In fact, the object of tomography is exactly the reverse of this process: we seek to obtain a v function such that it is V that approaches the form of the delta function: that is, transforming the RHS of Eq. (4) into f alone. In this instance, we may regard the v function as a 'filtering operator' that serves to remove morphology attributable to the sampling geometry rather than the original data, and which is hence applied to the Radon data at a stage prior to inversion via the back-projection operator. We shall in Section II.D set out to show that the summation method of classifier combination (which is representative of many more generalized combination approaches under certain conditions, such as very limited class information within the individual classifiers) is, in effect, the equivalent of applying the back-projection operator immediately to the classifier PDFs (which in our analogy are to be considered Radon transforms), without any attempt to apply prior filtering (i.e., setting v to the delta function in Eq. [4]). It is then via this observation that we hope to improve the combination process, presenting an optimal or near-optimal solution to the inversion problem by finding an appropriate filter, v, albeit in the context of probability theory.
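This equivalence can be checked directly in a toy setting (all distributions below are assumed purely for illustration): back-projecting each single-feature classifier PDF across the composite plane and adding the results reproduces the sum-rule score at every point.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 100)

# Two single-feature classifier PDFs for one class (assumed Gaussians).
p1 = np.exp(-0.5 * (x / 0.7) ** 2)
p2 = np.exp(-0.5 * ((x - 1.0) / 1.2) ** 2)
p1 /= p1.sum()
p2 /= p2.sum()

# Back-projection smears each 1D "Radon transform" along its unseen axis;
# summing the two smeared fields applies the (unfiltered) adjoint operator.
composite = p1[:, None] + p2[None, :]

# At any point (x1, x2) this equals the sum-rule score p(x1|w) + p(x2|w).
i, j = 40, 60
assert np.isclose(composite[i, j], p1[i] + p2[j])
```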
Prior to setting out this correspondence, we shall first extend the method to the j dimensions required of our pattern vector and illustrate how the mechanics of the Radon reconstruction might be applied within the current context.
B. Generalization of the Radon Transform to Arbitrary Dimensionality

It would seem on intuitive grounds that there should exist a relatively straightforward generalization of the inverse Radon transform to an arbitrary number of dimensions, one that would, in theory, involve only a simple multidimensional extrapolation of the 'deblurring' mechanism (here presumed to be a convolving filter) to generate a complete reconstruction of the original data. However, this would rely on having previously obtained a complete set of hyper-'facet' data, collectively defining the hypervolume of pattern data via inverse Radon transformation (by way of 3D illustration, we should have had to obtain three separate data sets consisting of perpendicular line-integral bundles over a cubic pattern space, the line integrals being parallel to the three feature axes: that is, if we labeled the feature axes a, b, and c, then we could compute a 3D Radon reconstruction only if we had separately obtained all integrals perpendicular to the facets ab, bc, and ca). This is of no immediate use for our methodology, however; the nature of the integral set out in Eq. (1) means that for all dimensionality in excess of two, we are no longer implicitly dealing with line integrals, but rather area integrals, volume integrals, and so on. The dimensionality attributable to each classifier, however, remains at the specified value of unity. If we are then to address this shortfall and reconstruct the complete pattern space via Radon methods, we shall require a series of intermediate stages in which the dimensionality of the pattern space is progressively increased prior to the final application of the n-dimensional inverse Radon transform alluded to above. In fact, these intermediate stages are themselves inverse multidimensional Radon transforms, providing an elegant continuity of mechanism across the dimensional range. Indeed, it further transpires that we might, rather than speculating on the nature of the dimensionally generalized inverse Radon transform, instead further break the problem into a series of standard 2D Radon transforms. Before we outline the methodology, we note that it may be objected at this stage, given that we have obtained the line integrals for every hyperfacet of the pattern space from data sets that were originally of one dimension, that the data contained within these hyperfacets must be of a highly correlated nature, and that the consequent pattern space reconstructed from them
cannot conceivably then consist of independent data points. While this is certainly true, there is, however, a still more fundamental reason for which we must abandon the idea of independent points within the pattern space, and one that ultimately subsumes this point; namely, the mismatch between the angular and spatial sampling densities. Hence, even for data of two dimensions, the angular resolution of the Radon line integrals ($\frac{\pi}{2}$-radian intervals) is such that the pattern space that we are trying to reconstruct almost invariably contains more independent information than the Radon transforms can possibly supply (respectively, $n^2$ as opposed to 2n independent variables for line integrals consisting of n parallel bundles). This, it must be understood, does not in any way represent a shortcoming of the method. The pattern-space PDF that we are trying to reconstruct does not in any accessible sense have a prior existence in relation to which our method might be considered to be producing an approximation: the space is, in fact, constructed from discrete classifiers that have had their features chosen via individual preference, the implication being that feature dimensions not included have been so excluded because they fail to correctly represent the various class PDF morphologies. Therefore, the only non-a priori sense in which an n-dimensional class PDF morphology could be said to exist is as a perfectly executed inverse Radon transform of the features' class morphologies as represented by the individual classifiers. We should add that, in the case of more than one feature belonging to a particular classifier (which we have excluded in the preceding discussion for simplicity), clearly, the class morphology will be well represented at the appropriate dimensionality without the need for any reconstructive method and, moreover, with truly independent data points: the reconstructive methods are only required for features contained in separate and therefore mathematically independent classification schemes. Any such classifiers containing multiple features will then in fact enhance the reconstructive process, containing, as they do, entirely uncorrelated multidimensional information. In general, however, we shall have to consider that the data in our pattern space are inherently correlated in some sense: following the immediately succeeding outline of the proposed technique for multidimensional Radon inversion, we shall show that the pattern space may be considered via Fourier analogy as inherently 'bandwidth limited.'
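To make the foregoing counting argument concrete (the grid size is chosen purely for illustration): for an $n \times n$ reconstruction grid sampled by two orthogonal line-integral bundles,

$$\underbrace{n^2}_{\text{independent points in the space}} \;\gg\; \underbrace{2n}_{\text{values supplied by the Radon integrals}}, \qquad \text{e.g., } n = 100:\; 10{,}000 \text{ versus } 200.$$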
$$ R^{\#}(v \star g) \;\approx\; \frac{2\pi r}{pq} \overbrace{\sum_{j=0}^{p-1}}^{p=2} \sum_{l=-q}^{q} v_{\Omega}\big(\vec{x}\cdot\vec{\theta}_j - s_l\big)\, R\big(\vec{\theta}_j, s_l\big) \tag{8} $$
(the subscript Ω is appended here to indicate some as yet unspecified bandwidth limitation). We should also understand at this point that the principle of Radon transformation will not be modified by the fact that the data points of the reconstructed space now represent integrals over the remaining n − 2 dimensions of the pattern space, rather than discrete points within a Euclidean space, the distinction being at this stage purely semantic. We shall assume here that the function v_Ω is such that its adjoint V has no effect on the real-space data, f, under convolution. Note that with the sampling function of the real data now explicitly included (that is, convolution with the ‘‘Dirac forest’’ or shah function), this condition is no longer so restrictive as to imply that V is solely a delta function. Indeed, any function with an origin value of unity and zeros at intervals corresponding to the peaks of the sampling function will have this property, perhaps the prime example being the (appropriately scaled) sinc function: any function that has a Fourier transform unity-valued up to the bandwidth limitation (corresponding to the reciprocal of the sampling rate) will, however, give rise to a function of the desired properties. The first stage in the construction of the n-dimensional pattern space (for n > 3) is the generation of every possible subset of 2D pattern spaces from the total feature set of n possibilities. Note that, as we have indicated, the reconstructed space will not strictly be an independent pattern space, every data point denoting an integral over the remaining n − 2 dimensions. There will then be n!/((n − 2)!2!) = nC2 such spaces, which we will henceforth label compositely in terms of their constituent feature sets, features being denoted in the following discussion by lowercase Greek letters. Hence, we require:

$$ \frac{\pi r_{\alpha\beta}}{q_{\alpha\beta}} \sum_{j_{\alpha\beta}=0}^{1} \sum_{l_{\alpha\beta}=-q_{\alpha\beta}}^{q_{\alpha\beta}} v_{\Omega}\big(\vec{x}_{\alpha\beta}\cdot\vec{\theta}_{\alpha\beta j} - s_{\alpha\beta l}\big)\, R\big(\vec{\theta}_{\alpha\beta j}, s_{\alpha\beta l}\big) \tag{9} $$
for:

$$ \forall\, \alpha, \beta : \;\alpha, \beta \in I,\; \alpha \neq \beta,\; 0 < \{\alpha, \beta\} < n \tag{10} $$
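By way of a minimal sketch (not from the original text), the enumeration of these nC2 subspaces is mechanical; the `reconstruct_2d` callback below is a hypothetical stand-in for the filtered 2D inversion of Eq. (9), which the surrounding text does not reduce to code:

```python
from itertools import combinations

# Illustrative sketch only: enumerate the n!/((n-2)!2!) candidate 2D
# subspaces of Eq. (9) for a feature index set I = {0, ..., n-1}.
# reconstruct_2d(alpha, beta) is an assumed helper supplied by the caller.
def enumerate_2d_subspaces(n, reconstruct_2d):
    planes = {}
    for alpha, beta in combinations(range(n), 2):
        planes[(alpha, beta)] = reconstruct_2d(alpha, beta)
    return planes
```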
On the assumption that the prior geometry-filtering process is executed perfectly, we will then have obtained the optimal representation of the series of 2D arrays of integrals over the remaining n − 2 dimensions that have been excluded by each of the feature-pair combinations. We must now construct
every possible 3D pattern space from these 2D data sets. If we had a generalized 3D inverse Radon transform (as well as access to a 3D deblurring function), we might then construct each 3D pattern space from the totality of its ‘‘facets’’ (that is, for features labeled α, β, and γ, we would require the facets αβ, γβ, and γα), there being in general jC(j−1) = j such entities required for construction of a j-dimensional object (as we shall demonstrate later). However, in the interests of maintaining continuity of mechanism over every dimensional iteration of the process, we shall instead address this as a series of 2D inverse Radon transforms, whose plane of operation is in each case perpendicular to every set of line-integral bundles. In this way, every point of the 3D space (not yet a pattern space until dimension n is reached) is subject to an inverse Radon computation. Moreover, since there are three possible perpendicular planes through every given point of the space, we in fact have a surplus of two inverse Radon computations per data point. We shall not, at this stage, make any assumptions as to the precise interrelation of the separate point computations, but rather seek to unify them in a manner congruent with the overall Radon analysis. Since the essential nature of the Hilbert-space adjoint to the Radon transformation (i.e., the unfiltered inverse Radon transform or back-projection operator) is, broadly speaking, one of addition followed by normalization, the appropriate method of combination for the three points would then seem to be the intuitive one of taking their collective mean (recall that the geometry-filtering process has already been applied in the derivation of each of the individual points). This composite 3D inverse Radon transform would therefore appear in terms of the format of Eq. (9) as follows:

$$ \frac{1}{3}\frac{\pi r}{q_{\alpha\beta}} \sum_{j_{\alpha\beta}=0}^{1} \sum_{l_{\alpha\beta}=-q_{\alpha\beta}}^{q_{\alpha\beta}} v_{\Omega}\big(\vec{x}_{\alpha\beta}\cdot\vec{\theta}_{\alpha\beta j} - s_{\alpha\beta l}\big) R\big(\vec{\theta}_{\alpha\beta j}, s_{\alpha\beta l}\big) \;+\; \frac{1}{3}\frac{\pi r'}{q_{\gamma\beta}} \sum_{j'_{\gamma\beta}=0}^{1} \sum_{l'_{\gamma\beta}=-q_{\gamma\beta}}^{q_{\gamma\beta}} v'_{\Omega}\big(\vec{x}_{\gamma\beta}\cdot\vec{\theta}_{\gamma\beta j'} - s_{\gamma\beta l'}\big) R\big(\vec{\theta}_{\gamma\beta j'}, s_{\gamma\beta l'}\big) \;+\; \frac{1}{3}\frac{\pi r''}{q_{\gamma\alpha}} \sum_{j''_{\gamma\alpha}=0}^{1} \sum_{l''_{\gamma\alpha}=-q_{\gamma\alpha}}^{q_{\gamma\alpha}} v''_{\Omega}\big(\vec{x}_{\gamma\alpha}\cdot\vec{\theta}_{\gamma\alpha j''} - s_{\gamma\alpha l''}\big) R\big(\vec{\theta}_{\gamma\alpha j''}, s_{\gamma\alpha l''}\big) \tag{11} $$
$$ \forall\, \alpha, \beta : \;\alpha, \beta \in I,\; \alpha \neq \beta,\; 0 < \{\alpha, \beta\} < n $$

However, we should recall that the angular ordinate j may take only two values, dividing the plane into two perpendicular axes. If we then permit a further simplifying assumption to serve merely as a vehicle for elucidating the various redundancies and symmetries inherent in the above
mathematical structure, namely, that there are equal numbers of data points in each of the Radon transforms (i.e., that the various q’s are all equal in the above), then we see that the planes become pairwise degenerate. That is, we find the relations:

$$ \vec{\theta}_{\gamma\alpha 1} \equiv \vec{\theta}_{\alpha\beta 1}; \quad \vec{\theta}_{\gamma\beta 1} \equiv \vec{\theta}_{\alpha\beta 0}; \quad \vec{\theta}_{\gamma\beta 0} \equiv \vec{\theta}_{\gamma\alpha 0} \tag{12} $$

and

$$ s_{\gamma\alpha 1} \equiv s_{\alpha\beta 1}; \quad s_{\gamma\beta 1} \equiv s_{\alpha\beta 0}; \quad s_{\gamma\beta 0} \equiv s_{\gamma\alpha 0}. \tag{13} $$
If we make the related assumption that the deblurring functions are symmetric under rotation (the rotational symmetry of the geometry should guarantee this for equivalent linear axes), then we should, in conjunction with the previous assertion, have a further degeneracy in the form of an equality of action among the various deblurring functions (that is, v_Ω = v′_Ω = v″_Ω). We may yet make a further simplification to Eq. (9) in that, having labeled the plane-bound vectors x and θ by the double Greek subscripts that denote their constitutive features, a redundancy in the dot product x · θ makes itself apparent, the plane-specific nature of the one quantity invariably implying the plane-specific nature of their product. We are then free to generalize either one of the vectors to a unit-higher dimensionality without loss of veracity in Eq. (10). We shall opt, then, to omit the subscripts of x entirely, allowing it instead to represent the same arbitrary 3D vector within each of the three summations of Eq. (9). That is, the original subscripted vectors will each become projections of a common 3D vector onto the host planes that originally contained them. Along with the further trivial normalizing assumption that q_αβ = q_γβ = q_γα ≡ q, we now have the capacity to simplify Eq. (9) to approximately half of its complexity, giving us the equivalent form:

$$ \frac{\pi (r + r'')}{3q} \sum_{l_{\alpha\beta}=-q}^{q} v_{\Omega}\big(\vec{x}\cdot\vec{\theta}_{\alpha\beta 0} - s_{\alpha\beta l}\big) R\big(\vec{\theta}_{\alpha\beta 0}, s_{\alpha\beta l}\big) \;+\; \frac{\pi (r + r')}{3q} \sum_{l'_{\alpha\beta}=-q}^{q} v_{\Omega}\big(\vec{x}\cdot\vec{\theta}_{\alpha\beta 1} - s_{\alpha\beta l'}\big) R\big(\vec{\theta}_{\alpha\beta 1}, s_{\alpha\beta l'}\big) \;+\; \frac{\pi (r' + r'')}{3q} \sum_{l''_{\gamma\beta}=-q}^{q} v_{\Omega}\big(\vec{x}\cdot\vec{\theta}_{\gamma\beta 1} - s_{\gamma\beta l''}\big) R\big(\vec{\theta}_{\gamma\beta 1}, s_{\gamma\beta l''}\big) \tag{14} $$

$$ \forall\, \alpha, \beta : \;\alpha, \beta \in I,\; \alpha \neq \beta,\; 0 < \{\alpha, \beta\} < n, $$

which will permit us later to make very general statements about the higher-dimensional equivalents of Eq. (9).
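As a concrete numerical sketch of the mean combination in Eq. (14) (a simplified rendering under the assumptions just made, not the author's code), suppose the three 2D facet reconstructions have already been obtained on a common m × m grid; the composite 3D value at each point is then the mean of the three perpendicular-plane contributions:

```python
import numpy as np

# Minimal sketch of the composite 3D combination of Eq. (14), assuming
# equal q's, a shared deblurring kernel, and unit axial extents. R_ab,
# R_gb, R_ga are the (already filtered) 2D facet reconstructions for the
# planes alpha-beta, gamma-beta, gamma-alpha; names are illustrative.
def composite_3d(R_ab, R_gb, R_ga):
    m = R_ab.shape[0]
    vol = np.zeros((m, m, m))
    for a in range(m):
        for b in range(m):
            for g in range(m):
                # Mean of the three perpendicular-plane values through
                # the point (a, b, g), per the 1/3 weighting of Eq. (14).
                vol[a, b, g] = (R_ab[a, b] + R_gb[g, b] + R_ga[g, a]) / 3.0
    return vol
```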
Even if we do not make any of the above assumptions, the only way in which the actual situation may differ from the imposed one is in terms of the extent of, and the number of, the data points within each particular feature. It is fairly evident from the preceding discussion that this may be accommodated very simply within Eq. (14) via an appropriate rebinning and normalization of each axis, without compromising any of the symmetry-related simplifications that we have discussed. We have then, in consequence of this elimination of redundancy, the basis for constructing a method of n-dimensional inverse Radon transformation from (n − 1)-dimensional pattern data, the only difference in the more general mechanism being the number of constitutive (n − 1)-dimensional entities required to fully describe the reconstituted n-dimensional object. The precise number will simply be the number of nonpermuted subsets of n − 1 objects that we can draw from a total of n distinct objects, this being nC(n−1), corresponding to the number of (n − 1)-dimensional hyperfacets required to uniquely specify the points of an n-dimensional hypercube, and is in fact the same as the number of feature dimensions in the initial data (since nC(n−1) = n!/((n − 1)!1!) = n). Thus, the number of summations in the n-dimensional generalization of Eq. (14) will be n, and the number of Greek subscripts that will need to be associated with any given θ or s is n − 1. Therefore, rather than writing out a very general composite equation embodying this principle, we have instead a series of progressive stages of association of the n separate feature dimensions, the dimensionality of the entities constructed at each step increasing by one, until the culminating n-dimensional object is finally created. Needless to say, notwithstanding our simplification of the procedure by insisting on the allocation of only one feature per classifier (condition 2), the method can accommodate (and is in fact very much improved by) having to subsume within it classifiers with more than one feature. In this case there is, of course, no need for reconstruction of the corresponding pattern space via Radon methods; the space is already fully and accurately defined. In such a regime we may then simply skip the Radon reconstruction procedure corresponding to those particular dimensions in the above construction, without loss of methodical consistency. Aside from its complexity, this latter point is then the predominating reason for not having explicitly compiled a general equation for the execution of the n-dimensional inverse Radon transformation from its 1D constituents, favoring, rather, its retention as a recursive computational procedure. This point shall be fully realized in Section III.D.3, where we set out a fully general economized tomographic classifier fusion procedure centered around the principle of graphical intercorrelation.
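A schematic rendering of this recursive procedure (again a sketch under stated assumptions, not the author's implementation) follows; here each k-dimensional space is assembled from its k hyperfacets by unfiltered back-projection and averaging, with directly supplied (multi-feature) facets short-circuiting the recursion exactly as described above. Deblurring is omitted, and all names are illustrative:

```python
import numpy as np
from itertools import combinations

def reconstruct(features, facets):
    """facets: dict mapping frozenset of feature indices -> ndarray
    sampled on m bins per axis (1D marginals at minimum; any directly
    supplied multi-feature PDF is used as-is, skipping reconstruction)."""
    key = frozenset(features)
    if key in facets:                      # space already fully defined
        return facets[key]
    feats = sorted(features)
    k = len(feats)
    m = next(iter(facets.values())).shape[0]
    out = np.zeros((m,) * k)
    for sub in combinations(feats, k - 1): # the nC(n-1) = n hyperfacets
        facet = reconstruct(set(sub), facets)
        missing = (set(feats) - set(sub)).pop()
        axis = feats.index(missing)
        # Broadcasting back-projects the facet along the missing axis.
        out += np.expand_dims(facet, axis=axis)
    return out / k                         # mean combination, as in Eq. (14)
```

For two features this reduces to P1(x) + P2(y) up to scale, in agreement with the unfiltered 2D case discussed in Section III below.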
C. Sampling Concerns

We now return to the question of what conditions, if any, the very low number of angular Radon samples inherent in our class probability density functions will impose on the reconstructed pattern space, or even on the very utility of the method itself, given that we shall have to consider our reconstructed data exceptionally bandwidth limited in some sense, within the terms of the Fourier analogy implied by Eq. (6). In the interests of clarity, and given our recursive approach, we shall consider this bandwidth limitation only in terms of the individual 2D inverse Radon transforms from which the multidimensional inverse transform is progressively constructed, the n-dimensional corollary of these findings being at least qualitatively self-evident, and the predictions for the utility of the technique remaining valid by extension. There are then two distinct aspects to the sampling issue as it relates to Radon transformation, namely, the linear and the rotational integrations within the discretized form of the inverse Radon transform [Eq. (8)]. The first of these we can address in terms of the Nyquist criterion for sufficient sampling within the Fourier domain; following Natterer (1997) we shall consider that the reconstructed pattern space is bandwidth limited (in the Fourier sense) to frequencies within a value Ω. The Nyquist criterion states that this space may then be fully determined by linear sampling with a step-size of π/Ω. In the nomenclature of Eq. (8), this step-size may be derived from the width of the reconstructed space, r, and the total number of parallel Radon transforms, q, via the ratio r/q. The fact that the Radon transform and the pattern space have identical bandwidth limitations, as is implicitly considered to be the case in the above argument, may be verified by an inspection of Eq. (6), which linearly relates the Fourier transforms of the two quantities. The overall imposition on the bandwidth arising from these arguments may then be derived from the two step-size–related terms to be:

$$ q \;\geq\; \frac{1}{\pi}\, r\,\Omega. \tag{15} $$
However, we have also to consider what possible bandwidth limitations are imposed by the rotational sampling rate, which, given that Radon samples are obtained for only two angles per plane of reconstruction (the feature axes), would then appear, on intuitive grounds, to be the dominating factor of the two. This calculation is less straightforward, and we adopt Natterer’s (1997) argument in terms of Bessel functions by way of approximation. Using Debye’s representation of the asymptotic form of Hankel functions of the
first kind as a method of relating angular integration to wavelength (or its nearest equivalent in Bessel terms), it can be shown that the bandwidth of the Radon transform in terms of θ is essentially Ωr. The step-size relevant to the angular version of the Nyquist criterion is then simply π/p, the criterion itself consequently imposing the restriction π/p ≤ π/(rΩ), or:

$$ p \;\geq\; r\,\Omega. \tag{16} $$
Now p, as we have stated, is equal to 2. The bandwidth criterion owing to the angular sampling rate is then Ω ≤ 2/r. From Eq. (15), however, the corresponding bandwidth criterion deriving from the linear sampling rate was simply Ω ≤ πq/r. Now, the number of points in a typical classifier-derived PDF will generally be in excess of the cardinality of the test data set from which it derived, being typically of the order of 1000. This, and the corresponding bandwidth limitation, will clearly be so far in excess of the angular sampling limitations that we are justified in entirely disregarding the number of sample points as being of any consequence at all to the recovered pattern-space morphology, which is thus dominated almost completely by correlations imposed by the angular sampling rate. Hence, this is a pattern space of inherently few degrees of freedom in relation to the multidimensional feature spaces that exist within single classifiers, and thus no method of classifier combination, no matter how perfect (corresponding to having obtained a perfect ‘‘deblurring’’ filter, v_Ω), can possibly reconstruct an n-dimensional pattern space consisting of entirely independent points.

D. Formal Explication of the Parallel Between Radon Transform Theory and Classifier Combination

Having obtained a mathematical form (or rather, a method) for n-dimensional inverse Radon transformation, we are now in a position to make the correspondence of the outlined method to classifier combination theory more formally explicit. That is, we shall seek to encompass the various extant combinatorial decision theories within the tomographic framework that we have developed over the preceding sections and show that they represent, within certain probabilistic bounds, an imperfect approximation to the unfiltered inverse Radon transformation. We will first, however, demonstrate how we might explicitly substitute probabilistic terms into Eq. (14), and therefore, by extension, the complete n-dimensional inverse Radon transformation. We have initially then to establish exactly what is meant in geometrical terms by the Radon forms on which Eq. (14) is constructed. It is helpful in this endeavor to, at least initially, eliminate the complication of the pre-filtering convolution
represented by v. We do this by setting v to a discretized form of the Dirac δ function throughout the summation Σ_{l_xy = −q}^{q}; that is:

$$ v_{\Omega}(s_{xyl}) = \frac{1}{2q} \;\;\text{when } s_{xyl} = 0; \qquad v_{\Omega}(s_{xyl}) = 0 \;\;\text{otherwise.} \tag{17} $$

Hence, the various summations produce non-zero terms only when

$$ \vec{x}\cdot\vec{\theta}_{\alpha\beta 0} - s_{\alpha\beta l} = 0, \quad \text{that is, when} \quad \vec{x}\cdot\vec{\theta}_{\alpha\beta 0} = s_{\alpha\beta l} $$

(and correspondingly for the other two terms). Thus, without filtering, Eq. (14) commutes to the form:

$$ \frac{\pi (r + r'')}{3q}\, R\big(\vec{\theta}_{\alpha\beta 0},\, \vec{x}\cdot\vec{\theta}_{\alpha\beta 0}\big) + \frac{\pi (r + r')}{3q}\, R\big(\vec{\theta}_{\alpha\beta 1},\, \vec{x}\cdot\vec{\theta}_{\alpha\beta 1}\big) + \frac{\pi (r' + r'')}{3q}\, R\big(\vec{\theta}_{\gamma\beta 0},\, \vec{x}\cdot\vec{\theta}_{\gamma\beta 0}\big) \tag{18} $$

$$ \forall\, \alpha, \beta : \;\alpha, \beta \in I,\; \alpha \neq \beta,\; 0 < \{\alpha, \beta\} < n $$
For convenience we shall normalize the axial extent parameters, r = r′ = r″, throughout the following discussion, such that we may introduce a constant multiplying factor, A, into each summation component. Now, because we are free to set the coordinate system as we choose, and, in having restricted the angular ordinate j to two values in Eq. (14), consequently obtaining a perpendicularity between the Radon integral vectors, we shall find it convenient to express our geometry in terms of an orthogonal coordinate system, with axial direction vectors set parallel to the perpendicular Radon integrals. Thus, we may legitimately make the equations:

$$ \alpha = x_1; \quad \beta = x_2; \quad \gamma = x_3. \tag{19} $$
Also, in having imposed this parallelism between the Radon integrals and coordinate axes, we find that the subscript xyl comes to exhibit a redundancy of two variables, such that we may state the further consequent equivalences:

$$ \alpha\beta 0 \equiv \alpha; \quad \alpha\beta 1 \equiv \beta; \quad \gamma\beta 0 \equiv \gamma. \tag{20} $$
Thus Eq. (14) now adopts the form:

$$ A\big[R(\vec{\theta}_{x_1}, x_1) + R(\vec{\theta}_{x_2}, x_2) + R(\vec{\theta}_{x_3}, x_3)\big]. \tag{21} $$

However, recall from Eq. (2) that:
$$ R(\theta, s)\big[\,f(x_1', x_2')\,\big] = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} f(x_1', x_2')\,\delta\big(s - x_1'\cos\theta - x_2'\sin\theta\big)\,dx_1'\,dx_2' \;\;\big({=}\; g(s, \theta)\big) \tag{22} $$
Now, we also have that

$$ \cos\vec{\theta}_{x_2} = \sin\vec{\theta}_{x_1} = 0 \quad \text{and} \quad \cos\vec{\theta}_{x_1} = \sin\vec{\theta}_{x_2} = 1 \tag{23} $$
(θ being measured in relation to the x₁ axis). Thus, for example, picking an ordinate at random:

$$ R(\vec{\theta}_{x_1}, x_1) = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} f(x_1', x_2')\,\delta(x_1 - x_1')\,dx_1'\,dx_2' = \int_{-\infty}^{+\infty} f(x_1, x_2')\,dx_2' = \int_{-\infty}^{+\infty} f(x_1, x_2)\,dx_2 \tag{24} $$
and similarly for x₂, x₃. Now, a rational extension of the nomenclature of Eq. (1) would allow us to write:

$$ p(x_1, x_2\,|\,\omega_i) = \underbrace{\int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty}}_{n-2}\; p(\bar{X}\,|\,\omega_i)\,dx_3 \ldots dx_R \tag{25} $$
(and similarly for the remaining pairs of basis vector combinations). We, of course, still have that:

$$ p(x_1\,|\,\omega_i) = \underbrace{\int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty}}_{n-1}\; p(\bar{X}\,|\,\omega_i)\,dx_2 \ldots dx_R = \int_{-\infty}^{+\infty} p(x_1, x_2\,|\,\omega_i)\,dx_2 \tag{26} $$
Thus, by setting the equivalence f(x₁, x₂) ≡ p(x₁, x₂|ω_i), we find by direct substitution into Eq. (24) that we can state that:

$$ R(\vec{\theta}_{x_1}, x_1) = \int_{-\infty}^{+\infty} f(x_1, x_2)\,dx_2 = p(x_1\,|\,\omega_i) \tag{27} $$
and similarly for the remaining numeric subscripts. Hence, in consequence, we may simply restate the unfiltered 2D to 3D inverse Radon transformation in the more transparent form:
$$ A\big[\,p(x_1\,|\,\omega_i) + p(x_2\,|\,\omega_i) + p(x_3\,|\,\omega_i)\,\big]. \tag{28} $$
Moreover, we can go further and extend this approach to the recursive methodology of the n-dimensional inverse Radon transformation, in which case we find, in the most general terms, that the unfiltered n-dimensional inverse Radon transformation will have the form (declining explicit calculation of the various normalizing constants corresponding to A in the above, this being a relatively complex undertaking, and not in any case required in the context of the decision-making schemes within which the method will ultimately be applied [see later]):

$$ A'\Big[\sum_{\text{all } k} p(x_k\,|\,\omega_i)\Big], \tag{29} $$
which clearly comes to resemble the sum rule decision-making scheme (a correspondence we shall make formal later). The substitution of probabilistic terms into the generalized inverse Radon transformation having thus been rendered explicit, it is now an elementary matter to substitute the previously omitted filtering function v_Ω back into Eq. (29) (the various subscript redundancies induced by an appropriate selection of the coordinate system above applying equally to the variable s in Eq. [14]), most particularly since the set of filtering convolutions will remain additive in relation to their correspondent p(x_k|ω_i) functions throughout the recursive increment in dimensionality, and will therefore readily generalize to a composite n-dimensional filtering function. (We omit a discussion of its specific form since this is entirely dependent on the choice of v_Ω.) Having transcribed the inverse Radon transform into purely probabilistic terms and eliminated any residual geometric aspects of the problem, we may now turn to an investigation of how the n-dimensional reconstruction relates to the decision-making process implicit within every regimen of classifier combination. As a preliminary to this endeavor, we must first ensure that there exist comparable pattern vectors for each class PDF (such not necessarily being the case for feature sets constructed on a class-by-class basis, as within our approach). That is, we shall need to ensure that:

$$ p\big(x_{R_i(1)}, \ldots, x_{R_i(j_{k,i})}\,\big|\,\omega_k\big) = p\big(x_{l_k}, \ldots, x_{u_k}\,\big|\,\omega_k\big) \quad \forall\, i, k \tag{30} $$
where uk and lk are, respectively, the highest and lowest feature indices of the various feature sets involved in the combination, and jk,i is the cardinality of the feature set corresponding to the kth class and ith classifier: Ri(nk,i) is then the nth highest feature index in the feature set presented to the ith classifier for computation of class PDF number k.
This may be straightforwardly accomplished by the inclusion of null vector components, such that:

$$ p\big(x_{t \notin R_i(j_{k,i})}\,\big|\,\omega_k\big) = \frac{1}{\int dx_t} \quad \forall\, t, i, k, \tag{31} $$
implicitly setting l_k to 1 and u_k to N, thereby allowing a universal approach for each class index, k. Now, we have via the Bayes decision rule (i.e., that:

$$ \text{assign } \vec{X} \to \omega_j \;\text{ if }\; p(\omega_j\,|\,x_1, \ldots, x_N) = \max_k\, p(\omega_k\,|\,x_1, \ldots, x_N), \tag{32} $$

given

$$ p(\omega_k\,|\,x_1, \ldots, x_N) = \frac{p(x_1, \ldots, x_N\,|\,\omega_k)\,p(\omega_k)}{p(x_1, \ldots, x_N)} \tag{33} $$
), that our decision rule for unfiltered N-dimensional inverse Radon PDF reconstruction is:

$$ \text{assign } \vec{X} \to \omega_j \;\text{ if }\; p(\omega_j\,|\,x_1, \ldots, x_N) = \max_k \left[\frac{\sum_{i=1}^{N} p(x_i\,|\,\omega_k)\,p(\omega_k)}{p(x_1, \ldots, x_N)}\right] \tag{34} $$

from Eq. (29). The more familiar decision rules, however, may be derived solely via probabilistic constraints on the Bayes decision rule. For instance, suppose that we impose the condition that x₁, …, x_N are independent random variables (such that:

$$ p(x_1, \ldots, x_N\,|\,\omega_k) = \prod_{i=1}^{N} p(x_i\,|\,\omega_k) \tag{35} $$

), then we obtain the decision rule:

$$ \text{assign } \vec{X} \to \omega_j \;\text{ if }\; p(\omega_j\,|\,x_1, \ldots, x_N) = \max_k \left[\frac{\prod_{i=1}^{R} p(x_i\,|\,\omega_k)\,p(\omega_k)}{p(x_1, \ldots, x_R)}\right]. \tag{36} $$
That is, we obtain the classical ‘‘product rule.’’ If we impose the further constraint that:

$$ p(\omega_k\,|\,x_i) = p(\omega_k)\,\big[1 + \delta f(\omega_k, x_i)\big], \tag{37} $$
with δf(ω_k, x_i) an infinitesimal function (in effect, imposing a high degree of ‘‘overlap’’ among the total set of class PDFs, or, equivalently, a ubiquitous
class membership ambiguity), and apply this directly to the Bayes theorem for single vectors:

$$ p(\omega_k\,|\,x_i) = \frac{p(x_i\,|\,\omega_k)\,p(\omega_k)}{p(x_i)} \tag{38} $$

then we obtain:

$$ p(\omega_k\,|\,x_i) = \frac{p(x_i\,|\,\omega_k)\,p(\omega_k)}{p(x_i)} = \frac{p(\omega_k)\,[1 + \delta f(\omega_k, x_i)]}{p(x_i)}. \tag{39} $$

Or, more succinctly:

$$ \big[1 + \delta f(\omega_k, x_i)\big] = p(x_i\,|\,\omega_k). \tag{40} $$
Substituting back into Eq. (36), the product rule decision scheme, obtained, we recall, via the imposition of statistical independence among a given class’s pattern vector ordinates, we then have the resultant decision rule:

$$ \text{assign } \vec{X} \to \omega_j \;\text{ if }\; p(\omega_j\,|\,x_1, \ldots, x_N) = \max_k \left[\frac{\prod_{i=1}^{R} \big[1 + \delta f(\omega_k, x_i)\big]\,p(\omega_k)}{p(x_1, \ldots, x_R)}\right]. \tag{41} $$

Expanding the product and collecting infinitesimals of higher order (via the function O(2)), we obtain:

$$ \text{assign } \vec{X} \to \omega_j \;\text{ if }\; p(\omega_j\,|\,x_1, \ldots, x_N) = \max_k \left[\frac{\big(1 + \sum_{i=1}^{R} \delta f(\omega_k, x_i) + O(2)\big)\,p(\omega_k)}{p(x_1, \ldots, x_R)}\right]. \tag{42} $$

Eliminating O(2), and resubstituting Eq. (40), we find:

$$ \text{assign } \vec{X} \to \omega_j \;\text{ if }\; p(\omega_j\,|\,x_1, \ldots, x_N) = \max_k \left[\frac{\big(1 + \sum_{i=1}^{R} \{p(x_i\,|\,\omega_k) - 1\}\big)\,p(\omega_k)}{p(x_1, \ldots, x_R)}\right]. \tag{43} $$

Or:

$$ \text{assign } \vec{X} \to \omega_j \;\text{ if }\; p(\omega_j\,|\,x_1, \ldots, x_N) = \max_k \left[\frac{(1 - R)\,p(\omega_k)}{p(x_1, \ldots, x_R)} + \frac{\sum_{i=1}^{R} p(x_i\,|\,\omega_k)\,p(\omega_k)}{p(x_1, \ldots, x_R)}\right], \tag{44} $$

which is the equivalent of the classical ‘‘sum rule’’:

$$ \text{assign } \vec{X} \to \omega_j \;\text{ if }\; p(\omega_j\,|\,x_1, \ldots, x_N) = \max_k \left[\frac{\sum_{i=1}^{R} p(x_i\,|\,\omega_k)\,p(\omega_k)}{p(x_1, \ldots, x_R)}\right] \tag{45} $$

when the unconditional class probabilities p(ω_k) are close to equality.
This, however, is identical to our original decision rule for the unfiltered inverse Radon transformation. Hence, we may state that the unfiltered inverse Radon PDF reconstruction is, within a Bayesian decision-making context, the equivalent of the sum rule decision-making scheme under the specified probabilistic constraints (and the minor additional imposition of the unconditional class probabilities having approached equality), and will thus produce near-optimal results only when the two conditions are satisfied (i.e., that the pattern vector components are statistically independent, and that there exists a high class membership ambiguity owing to similar PDF morphologies). The unfiltered inverse Radon decision-making scheme then recreates the product rule under the less constrictive (and therefore more common) condition of a high class membership ambiguity alone, a condition, however, that must still presuppose very major constraints on the N-dimensional PDF morphology if the equality is to hold. Very many other classical combination rules are derived from combinations of these preconditions (see Kittler et al., 1998) and thus come to resemble, to some degree, the unfiltered inverse transform. Without exception, however, they will all impose very considerable constraints on the implied N-dimensional PDF reconstruction. When viewed in this morphological regard, it is clear that the lack of universal application of classical methods of combination, however effective they may be within their typical domains of application, is (by an inversion of the above process) attributable to these implicit constrictions on the reconstructive process, to which these methods have been shown to offer an approximation. The only way in which we can free ourselves of these restrictions (on the assumption that we have obtained error-free PDFs [see later]) is then to apply the filtered inverse Radon transform in its entirety, since this inherently neither assumes nor imposes any morphological (and therefore probabilistic) constraints on the final N-dimensional PDF, other than those already implicit in the original PDF data. On model-theoretical grounds this would therefore represent an optimal solution to the implied problem of N-dimensional PDF reconstruction, it having been shown, by an inversion of the arguments above, that at least one aspect of every method of classifier combination is, in some (not necessarily immediately obvious) way, the implicit recovery of an N-dimensional PDF. We have now to consider whether the above argument is modified by the fact that the various classifier PDFs, in consequence of having been derived from a finite set of stochastically distributed pattern data points, will invariably, to some extent, deviate from the ‘‘true’’ (if only hypothetically existent) probability density functions.
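To make the contrast between the product rule [Eq. (36)] and the sum rule [Eq. (45)] concrete, the following toy calculation (a sketch with invented numbers, not drawn from the original text) shows the two rules disagreeing on the same per-feature evidence:

```python
import numpy as np

# Rows index classes omega_k, columns index features x_i; entries are the
# per-feature likelihoods p(x_i | omega_k). Values below are invented.
likelihoods = np.array([[0.90, 0.90, 0.02],   # class 0: one near-zero estimate
                        [0.50, 0.50, 0.50]])  # class 1: uniform evidence
priors = np.array([0.5, 0.5])

product_scores = priors * likelihoods.prod(axis=1)  # Eq. (36) numerator
sum_scores = priors * likelihoods.sum(axis=1)       # Eq. (45) numerator

# The shared denominator p(x_1, ..., x_R) does not affect the argmax.
print("product rule picks class", product_scores.argmax())  # -> 1
print("sum rule picks class", sum_scores.argmax())          # -> 0
```

The single near-zero likelihood effectively vetoes class 0 under the product rule, anticipating the estimation-error analysis of the next section.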
E. Estimation Error

We have observed in the preceding section that the unfiltered inverse transform equates to the sum rule decision scheme (with some additional probabilistic constraints) and would thus appear, on purely reconstructive grounds, to be a rather poor method of combining classifiers. However, we find that this is not in general the case, the sum rule often, in fact, achieving a better classification performance than the product rule, despite inherently making fewer impositions on the form of the N-dimensional pattern PDF that has been implicitly reconstructed. The reason for this is that the sum rule exhibits a pronounced robustness to estimation errors, which we demonstrate in the following way (paralleling the discussion in Kittler et al. (1998), albeit in terms of the prior probabilities): denoting by P̂ the hypothetical PDF from which the pattern data originally derived, we have that the PDF, P, constructed by the classifier under consideration is related to the accented quantity via the error value e_ij as follows:

$$ \hat{P}(x_{ij}\,|\,\omega_j) = P(x_{ij}\,|\,\omega_j) + e_{ij}. \tag{46} $$
Now, by inspection of the decision rule formula, Eq. (45), we can extract the essential summation component of the sum rule to be the formula:

$$ \sum_{i}^{n} \hat{P}(x_{ij}\,|\,\omega_j) = \sum_{i}^{n} \big[P(x_{ij}\,|\,\omega_j) + e_{ij}\big] = \left[\sum_{i}^{n} P(x_{ij}\,|\,\omega_j)\right] \left[1 + \frac{\sum_{i}^{n} e_{ij}}{\sum_{i}^{n} P(x_{ij}\,|\,\omega_j)}\right]. \tag{47} $$
The latter term in square brackets we denote the error factor. The equivalent component to the summation above in product rule terms would then be the following:

$$ \prod_{i}^{n} \hat{P}(x_{ij}\,|\,\omega_j) = \prod_{i}^{n} \big(P(x_{ij}\,|\,\omega_j) + e_{ij}\big) \tag{48} $$

$$ \approx \left[\prod_{i}^{n} P(x_{ij}\,|\,\omega_j)\right] \left[1 + \sum_{i}^{n} \frac{e_{ij}}{P(x_{ij}\,|\,\omega_j)}\right] \tag{49} $$

(assuming higher-order e_ij terms to be negligible). The latter square-bracketed term is then the error factor associated with the product rule.
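Before comparing the two analytically, a toy calculation (invented values, merely illustrative) makes the difference in sensitivity tangible:

```python
import numpy as np

# Numeric illustration of the error factors in Eqs. (47) and (49): a
# single small likelihood estimate dominates the product-rule error
# factor, while the sum-rule factor stays near unity.
P = np.array([0.60, 0.55, 0.05])   # per-feature likelihoods (invented)
e = np.array([0.02, 0.02, 0.02])   # uniform estimation errors (invented)

sum_error_factor = 1 + e.sum() / P.sum()      # Eq. (47): approx. 1.05
product_error_factor = 1 + (e / P).sum()      # Eq. (49): approx. 1.47
print(sum_error_factor, product_error_factor)
```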
We see immediately that, within the error factor for the product rule, each e_ij is amplified by the term:

$$ \frac{1}{P(x_{ij}\,|\,\omega_j)} \;\gg\; \frac{1}{P(\vec{X}\,|\,\omega_j)}, \tag{50} $$

as opposed to the sum rule, for which the corresponding amplification term is:

$$ \frac{1}{\sum_{i}^{n} P(x_{ij}\,|\,\omega_j)} = \frac{1}{P(\vec{X}\,|\,\omega_j)}. \tag{51} $$

Thus, the sum rule exhibits a very much greater degree of stability in relation to estimation errors. It may be shown (see Kittler et al., 1998) that the sum and product rules represent opposite poles, in terms of their sensitivity to errors, of a very large range of classifier combination techniques, most of the remaining combinatorial strategies being implicitly derived from either or both sets of the probabilistic assumptions that underlie these two polar combination methods, and which hence fall somewhere between them in terms of their robustness to estimation error. We would expect, then, given that unfiltered tomographic PDF reconstruction is essentially identical in operation to the sum decision rule for classifier combination, that the process will exhibit a similar robustness to estimation error, and therefore, if it is indeed the case that the sum rule represents an optimal solution to the problem, a near-optimal combinatorial method, both in this sense and in the former sense of being an ideal information-theoretic N-dimensional PDF recovery procedure. In addition, because we have stated at the outset that poorly performing feature/classifier combinations are explicitly rejected at the feature selection stage (since they represent inherently poor models of the PDF), this argument becomes even more forceful. We need, however, to consider the consequence of filtering on this argument if we are to make the case without reservation. In general this will not be simple, and it will depend to a very great extent on the nature of the classifier. This is because filtering acts, in essence, as an edge-enhancing, gradient-negating convolution (similar in effect to the familiar ‘‘Mexican hat’’ filter) and will thus act differently on differing classification procedures. For instance, errors caused by exceptional outliers within a ‘‘nearest-neighbor’’ classification framework would tend to be exacerbated by filtration, whereas a continuous curve-fitting classifier presenting too smooth a PDF might, in consequence, have its estimation error reduced by the act of filtration. We cannot, therefore, consider that there is any single systematic effect of the deblurring process on the final estimation error of the reconstructed N-dimensional PDF. Intuitively, however, we may state that the error
will remain of a similar order to that of the sum rule decision scheme (on the assumption of an overall cancellation of the negative and positive aspects of filtration): a more formal investigation of this point will constitute a later section of the article. We should further add, at this point, that an exact morphological reconstruction of the various class PDFs within an N-dimensional pattern space will have less of a consequence for the overall performance of the decision scheme if the classes are well separated (which is why binary class delimiters such as neural networks, with PDFs that in consequence exhibit only two values of the probability density, namely zero and a constant, may yet still exhibit a good classification ability despite being unrepresentative of the ‘‘true’’ PDF). Thus, conventional methods of combination might, within such a scenario, approach the theoretical performance levels of the tomographic method. We should reiterate, however, that the latter will always represent the optimal (in the sense of least biased) decision-making scheme, albeit at some additional computational cost. Whether, in this instance, the full N-dimensional filtered Radon inversion would be justified for such a small increment in performance would be a matter of appropriate judgment. When, however, there exists a high degree of class membership ambiguity, the performance enhancement of the proposed technique should, we expect, be very marked. In consequence of this absence of a general solution to the issue of estimation error, we shall, in Section V, attempt to give practical and model-based measures of the resilience of the method to this source of error, finding in doing so that our intuitive arguments of this section are, to a large extent, empirically justified.

F. Strategy for the Combination of Overlapping Feature Sets

We have thus far considered tomographic reconstruction theory only in terms of distinct feature sets: the contrary situation must now be addressed if we are to arrive at a universally optimized solution to the problem of classifier combination. Before embarking on this investigation we should, however, reiterate just how exceptional it is to find overlapping feature sets among the classifiers within a combination when feature selection is explicitly carried out within a combinatorial context (see Windridge and Kittler, 2000b). The specific question that we are seeking to address is therefore what strategy to adopt when presented with overlapping feature sets on attempting our tomographic reconstruction of the complete N-dimensional pattern-space probability density function. This is clearly not a problem for classical methods of combination, which consider combination in probabilistic, not morphological, terms, and which do not thus consider the implicit ambiguity
in PDF representations as presenting any particular difficulty. Indeed, classical techniques such as majority voting may actively assist us in our tomographic endeavor by explicitly eliminating superfluous PDF characterizations, and, as such, are not in any way mutually exclusive to our methodology. In general, though, it will not be obvious whether the current and classical methodologies are of simultaneous applicability. There is, however, another perspective from which we may view the action of classical combination methods in regard to overlapping feature sets (as opposed to implicit unfiltered tomographic reconstruction, which would apply only to distinct feature sets), and that is as methods for refining the PDF morphology of the particular class under consideration. This is because all such methods of combination will propose a probabilistic output for a given pattern vector input (even if the input and/or output are ostensibly in terms of a class, as opposed to a probability, there is still an underlying PDF that may be straightforwardly reconstructed by exhaustively scanning across the pattern space). If the collective decision-making process for overlapping feature sets is then more effective than that for the classifiers individually, this is because the aggregate PDF is closer to the ‘‘true’’ probability density distribution. In terms of one of the more familiar combinatorial methods, the ‘‘weighted mean’’ decision system, the mechanics of PDF refinement are fairly intuitive: in this case, the PDFs are combined via summing in appropriate ratio, with the final PDF recovered after a later normalizing step; and similarly, though perhaps less obviously, for the other decision schemes that have been discussed. Thus, we see that conventional combination methods, by virtue of not having specified the nature of the feature sets to which they apply, have tended to conflate two absolutely distinct methods of improving classification performance: namely (insofar as the feature sets are distinct), classifier combination has gained its advantages by being an implicit tomographic reconstruction of the N-dimensional pattern space PDF, and (insofar as the feature sets are overlapping), the advantage is obtained via a refinement of the features’ PDF morphology. If we are to set about obtaining an optimal solution to the problem of classifier combination, it is therefore clear that we shall have to apply these two differing mechanisms in their appropriate and rigorously distinguished domains of operation. That is, we should retain the classical methods of combination, but employ them only within the nontomographic domain (to which they constitute only an imperfect approximation); that is, solely within the domain of overlapping classifiers, where they can be treated simply as methods of PDF refinement. However, to return to the imperfect situation in which we are presented with overlapping features within a tomographic framework, and to consider
a concrete case, suppose that there are two classifiers (A and B) that contain, respectively, the preferred feature sets {1, 2} and {2, 3} after feature selection. We then wish to obtain the best possible classification performance from the combination of features and classifiers available. There are a number of possibilities open to us; for example, we might:

1. Establish which of the two classifiers, A or B, is the better classifier of feature 2 alone, and then apply the filtered inverse Radon transformation to features 1, 2, and 3 separately (feature 1 already being associated with classifier A, and feature 3 with classifier B). Note that we can envisage the first part of this as, in a sense, an implicit weighted majority vote decision scheme applied to both of the classifiers containing feature 2; this observation shall later help us to generalize our differing approaches within a unified framework.

2. Establish which of the two classifiers can least afford, in terms of classification performance, to lose feature 2, and (supposing that this is classifier A) perform the filtered inverse Radon transformation on the data sets A(1,2) and B(3) (the terms within the brackets being the features associated with the classifier outside the bracket). Note that there is still an implicit majority vote at the outset of this procedure, though not so obviously as in the previous case. We also note, without rigorous proof, that we might expect this to be the better option on intuitive grounds, since it involves neither the addition of features rejected by the feature selection process (see later) nor the tomographic reconstruction of spaces that are already fully defined (as for 1).

3. Establish whether one of the two classifiers (A or B) is the better classifier of feature 2 alone, or whether a weighted mean combination of classifiers A and B sharing feature 2 constitutes the better classifier of that feature, and then deploy the filtered inverse Radon transformation on the three features within their designated classifiers individually. Note that we might consider this a generalization of strategy 1, permitting the two classical combination methods (majority vote and weighted mean) to vie for the representation of feature 2’s PDF prior to inverse Radon transformation. We might similarly have included any of the other classical combination methods.

4. Establish which is the better classifier (either A or B) of the entire pattern space of features 1, 2, and 3 and consider only that classifier’s output (an implicit weighted majority vote applied to the two classifiers’ output).

5. Generate the two 3D PDFs of the pattern space consisting of the features 1, 2, and 3 via classifiers A and B, and then combine through any of the classical methods of classifier combination. This may then be considered simply as a generalization of the preceding option. Note that we do not
expect either of these possibilities to generate particularly good classifications, despite containing the full 3D pattern space within the two classifiers (therefore avoiding a necessarily ambiguous tomographic reconstruction of it), because the space implicitly then includes features rejected by the feature selection process, designed, as it is, specifically to exclude those features that do not lend themselves to the generation of accurate PDFs.

We begin to see in the above (by no means exhaustive) list of strategies that a number of consistent patterns emerge. We should therefore like to generalize all of the above into a unified framework by giving a formal (and therefore necessarily algebraic) description of the various approaches to the problem. Suppose, therefore, that we have a battery, T, of techniques for PDF combination excluding the Radon method, and a set of features F_A associated with classifier A and a set of features F_B associated with classifier B. Then, in the reconstruction of the pattern space F_A ∪ F_B, we can set about generalizing the combination techniques in each of the above instances in the following manner. We first denote the best-performing classical PDF combination technique by ⋆{X}(F) (the star-operator extracting the optimum classifier from the total body of classifiers, X, with that classifier acting on the feature set F); the converse of this, the operator that extracts the worst-performing classifier, we denote by °{}. The filtered inverse Radon combination of classifiers A, B, and C containing arbitrary arrangements of nonoverlapping features we shall then denote by R_i[A, B, C]. The additional functional operator F^X is then introduced, which acts to extract the feature set from a particular classifying entity X (whether it be a single or compound classification scheme, i.e., a solitary classifier or combination via conventional methods of classifiers with identical feature sets). We shall make the further assumption that the filtered inverse Radon transformation of a single PDF, R_i[A], constitutes an identity operation (i.e., R_i[A] = A), as required for algebraic consistency, and further that there exists a ‘‘zero’’ of the algebra such that A({}) (i.e., the classifier A acting on the empty set) produces the value ‘‘0’’, such that R_i[X, 0] ≡ R_i[X]. Under the algebraic formalism we have therefore evolved, the list above would be written:

1. R_i[A(F_A ∉ F_A ∩ F_B), ⋆{A(F_A ∩ F_B), B(F_A ∩ F_B)}, B(F_B ∉ F_A ∩ F_B)]
2. R_i[⋆T{A(F_A ∉ F_A ∩ F_B), B(F_B ∉ F_A ∩ F_B)}(⋆F^T{A(F_A ∉ F_A ∩ F_B), B(F_B ∉ F_A ∩ F_B)}), °T{A(F_A ∉ F_A ∩ F_B), B(F_B ∉ F_A ∩ F_B)}(°F^T{A(F_A ∉ F_A ∩ F_B), B(F_B ∉ F_A ∩ F_B)} ∪ (F_A ∩ F_B))]
3. as 1
4. ⋆T{A(F_A ∪ F_B), B(F_A ∪ F_B)}
5. as 4.
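As a hypothetical sketch of strategy 2 above (names, helpers, and the accuracy criterion are all assumptions for illustration, not constructs from the original text), the allocation of the shared feature might be decided and the fusion dispatched as follows:

```python
# accuracy(clf, feats) is an assumed scalar performance estimate;
# radon_fuse is an assumed stand-in for the filtered inverse Radon
# combination R_i[...] over distinct feature sets.
def strategy_2(clf_a, clf_b, accuracy, radon_fuse):
    # How much does each classifier lose without the shared feature 2?
    loss_a = accuracy(clf_a, {1, 2}) - accuracy(clf_a, {1})
    loss_b = accuracy(clf_b, {2, 3}) - accuracy(clf_b, {3})
    if loss_a >= loss_b:          # A can least afford to lose feature 2
        return radon_fuse([(clf_a, {1, 2}), (clf_b, {3})])
    return radon_fuse([(clf_a, {1}), (clf_b, {2, 3})])
```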
With this common framework in place, we may now seek to generalize options 1 and 2 by defining the feature sets F′_A and F′_B such that F′_A ⊆ F_A and F′_B ⊆ F_B. That is, F′_A and F′_B are subsets of their respective originals, permitting the empty and isomorphic sets [{} and F_X] as appropriate. In conjunction with this, we further generalize the operator ⋆ to ⋆O(z) such that it now extracts the optimal classifier with respect to every possible feature set z: that is, O(z) may be considered a function in its own right, although with respect to feature sets rather than classifiers as for ⋆, and one that multiplies the number of options instead of reducing them. Thus ⋆O(z) might be considered ‘‘O(z) followed by ⋆.’’ This will permit us to exploit a redundancy in relation to ⋆ later on. Within this regard, the generalization of options 1 and 2 (and therefore 3) would appear:

$$ R_i\big[A\big(F_A \notin {}^{\star O(F'_A, F'_B)}F^{T}\{A(F'_A), B(F'_B)\}\big),\; {}^{\star O(F'_A, F'_B)}T\{A(F'_A), B(F'_B)\},\; B\big(F'_B \notin {}^{\star O(F'_A, F'_B)}F^{T}\{A(F'_A), B(F'_B)\}\big)\big] \tag{52} $$

However, because we have specified that R_i[X, 0] ≡ R_i[X] and R_i[X] = X, we see that the above formulation can also be made to subsume options 4 and 5 by setting F′_A = F_A ∪ F_B and F′_B = F_A ∪ F_B (that is, explicitly abandoning the imposed limitation that F′_A and F′_B be subsets of the original feature sets), such that the above form becomes:

$$ R_i\big[A\big(F_A \notin {}^{\star O(F'_A, F'_B)}F^{T}\{A(F'_A), B(F'_B)\}\big),\; {}^{\star O(F'_A, F'_B)}T\{A(F'_A), B(F'_B)\},\; B\big(F_B \notin {}^{\star O(F'_A, F'_B)}F^{T}\{A(F'_A), B(F'_B)\}\big)\big] = R_i\big[0,\; {}^{\star}T\{A(F_A \cup F_B), B(F_A \cup F_B)\},\; 0\big] \tag{53} $$

$$ = R_i\big[{}^{\star}T\{A(F_A \cup F_B), B(F_A \cup F_B)\}\big] = {}^{\star}T\{A(F_A \cup F_B), B(F_A \cup F_B)\} $$

(equals option 4). In Eq. (52) we have then obtained a very general form for the optimal strategy for dealing with overlapping feature sets, one that may be made completely general for the case of two classifiers by noting that the operator ⋆ is, in effect, a weighted majority vote combination scheme, which will therefore belong to the total body of nontomographic combination methods, T. Hence, by inverting this consideration, and applying it to the above, we see that we can obtain the exhaustive combination strategy:

$$ R_i\big[A\big(F_A \notin {}^{\star O(F'_A, F'_B)}F^{T}\{A(F'_A), B(F'_B)\}\big),\; {}^{\star O(F'_A, F'_B)}T\{A(F'_A), B(F'_B)\},\; B\big(F_B \notin {}^{\star O(F'_A, F'_B)}F^{T}\{A(F'_A), B(F'_B)\}\big)\big]. \tag{54} $$
We might, furthermore, consider no longer restricting the overlap of feature sets to be merely one feature among two classifiers, permitting instead α features to overlap among β classifiers. However, we begin to see that any such process would involve a very major modification of the feature selection algorithm, on which we are already beginning to encroach. Thus we can begin to appreciate that, in seeking to obtain optimality, there can be no rigorous distinction between the two processes of feature selection and classifier combination. In fact, the difficulty of overlapping feature sets that we have been seeking to address only really arises when we have been failing to rigorously distinguish between the classifier combinations that are, in effect, single classifiers, and the classifier combinations that are tomographically reconstructive in nature. We might therefore suppose that, if this distinction were built into the feature selection process, such that the final combination process were a purely tomographic procedure in relation to distinct feature sets contained within single (or classically combined) classifiers, the difficulty would never have arisen. This is indeed the case, and an optimal solution to the problem of classifier combination implemented from the level of the feature selection algorithm onward is outlined in the following section.

G. Fully General Solution to the Combination Problem: Unity of Combination and Feature Selection Processes

To summarize our findings thus far: throughout the investigation we have found it necessary to postulate (and clarify the nature of) an apparent double aspect to the functionality of conventional classifier combination, one facet of which may be considered the refinement of PDF morphology, and therefore a form of classification in its own right, and the other being that of tomographic reconstruction, insofar as the feature sets belonging to the classifiers within the combination are distinct. Classical techniques of combination tend to conflate these two disparate aspects through not having made a rigorous distinction between those classifier combinations that, in effect, act as a single classifier and those combinations that may be considered to act on entirely distinct orthogonal projections of a single PDF encompassing the whole of the N-dimensional pattern space. We, in contrast, have found it necessary, in seeking an optimal solution to the combination problem, to make this distinction completely formal. Explicitly separating the two, however, will involve reverting to a stage prior to combination, and addressing the nature of the feature selection process itself. Thus, we find we must take a unified perspective on the apparently separate issues of feature selection and classifier combination if we are to achieve our aim of attaining an optimal solution.
The essence of the unity that we are seeking will lie in ensuring that we exhaust, at the feature selection stage, those possibilities of classifier combination that serve only to act as single classifiers, with classifier/feature set combinations then being chosen by the feature selector only on the basis of their suitability for tomographic combination by the optimal filtered process. This basis will clearly center on the principle of supplying classifiers with distinct feature sets to the tomographic combination. The precise methodology of this procedure is therefore as follows. Besides the classifiers (a, b, c, …, n_c), we must also consider as being classifiers in their own right every possible combination of these classifiers via the various nontomographic techniques (1, 2, 3, …, n_0) that exist for conventional classifier combination. That is, we require the various combinations ab₁, ac₁, …; ab₂, ac₂, …; abc₁, abd₁, … etc. (with the appropriate combination method indicated by the numeric subscript). We must, however, also consider the possibilities of the form {ab₁}{bcd₂}₃ (that is, the associative composition by method 3 of the pseudoclassifiers ab₁ and bcd₂), wherein the preceding classifier combinations may themselves be combined by any of the conventional combination methods. Thus, the total set of classifiers of the first kind (that is to say, nonassociative combinations) now numbers:

$$ {}^{n_c}C_1 + n_0 \left(\sum_{i=2}^{n_c} {}^{n_c}C_i\right) = {}^{n_c}C_1 + n_0\big([1 + 1]^{n_c} - 1 - n_c\big) \quad \text{(via the binomial theorem)} $$
$$ = n_c + n_0\big(2^{n_c} - 1 - n_c\big). \tag{55} $$

By a similar progression, we arrive at the total number of higher-order associative combinations as being (progressively) ${}^{n_c + n_0(2^{n_c} - 1 - n_c)}C_2 + {}^{n_c + n_0(2^{n_c} - 1 - n_c - n_0)}C_3 + \ldots$, giving an overall total of classifiers of both varieties of the number:

$$ {}^{P}C_1 + n_0 \left(\sum_{i=2}^{P} {}^{P}C_i\right) = {}^{P}C_1 + n_0\big([1 + 1]^{P} - 1 - P\big) \quad \text{(via the binomial theorem)} $$
$$ = P + n_0\big(2^{P} - 1 - P\big), \tag{56} $$

where P = n_c + n_0(2^{n_c} − 1 − n_c). Note that, in general, there will be tautologies, and consequently simplifications, in the descriptions of the above terms: for instance, {ab₁}{cd₁}₁ would be the equivalent of abcd₁ if 1 is the (weighted) majority vote scheme.
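The bookkeeping of Eqs. (55) and (56) is readily checked numerically; the following sketch (with arbitrary example values for n_c and n_0) is ours, not from the original text:

```python
# Classifier counts of Eqs. (55) and (56), for n_c base classifiers and
# n_0 conventional (nontomographic) combination methods.
def first_kind_count(n_c, n_0):
    # Eq. (55): n_c singletons plus n_0 combinations per nontrivial subset.
    return n_c + n_0 * (2**n_c - 1 - n_c)

def total_count(n_c, n_0):
    P = first_kind_count(n_c, n_0)
    return P + n_0 * (2**P - 1 - P)    # Eq. (56)

print(first_kind_count(3, 2))  # 3 + 2*(8 - 1 - 3) = 11
print(total_count(3, 2))       # 11 + 2*(2**11 - 1 - 11) = 4083
```

The explosive growth of the total even for small n_c and n_0 underlines why the nonexhaustive modifications discussed below matter in practice.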
Whether it will be possible in general to exploit such redundancies for the purpose of saving computation time will depend entirely on the nature of the combination scheme. With all of the compound classifiers that may be legitimately considered to act as single classifiers thus constructed, we may then go on, in a reverse of the usual procedure, to specify the appropriate feature selection algorithm. We need not in consequence worry, as we would otherwise have to if we meant to obtain an optimal solution, about selecting features for the original classifiers on the basis of their ability to act in combination (at least in conventional terms), because we have inherently constructed all of their combinations prior to feature selection: feature selection can then be conducted on a purely tomographic basis among the original classifiers and their composites. Thus, we test (exhaustively, if we require optimality) those feature combinations consisting only of distinct feature sets distributed among the classifiers and pseudoclassifiers, with final feature set selection occurring only on the basis of the features’ collective ability to classify within the tomographic regime. This is illustrated pictographically in Figure 7 for the two combination rules: sum and product. Hence we implicitly test all of the necessarily uncorrelated (see later) tomographically reconstructed subspaces of the n-dimensional pattern space against those spanned by single classifiers or pseudoclassifiers, with the maximum possible n-dimensional reconstructive information being extracted from the data in consequence of this implied competition between decorrelated though stochastically averaged and correlated but stochastically variable representations of the same probability density functions, the criterion function being the final arbiter of the outcome. How exactly the above considerations might modify a typical, nonexhaustive feature selection algorithm, should we wish to exploit the principle within a less computationally intense framework, would depend entirely on its nature. For instance, a sequential forward selection algorithm that might have selected feature/classifier combinations by choosing features in sequence from the total set of possibilities, allocating the selected feature to the classifier that makes the best use of it in terms of classification performance, with the processing cycle then returning to its original state, would now, under our imposed modification, become such that features are removed from the total set after they have been allocated to a particular classifier (or pseudoclassifier) prior to final tomographic reconstruction and comparison on the basis of classification performance, with no feature thus appearing more than once within the total body of feature sets, which are then hence guaranteed to be distinct. More exhaustive feature selection methods may require a correspondingly more complex modification to
ensure that distinct feature sets are obtained; however, in general, there will always exist a reasonably evident approach to achieving this. As an additional note to conclude this section, we should also like to address the question of the optimality of the filtering procedure. In fact, a complete inversion of the tomographic reconstruction formula [Eq. (4)] through filtration (that is, construction of an appropriate v function) does not always constitute a mathematically analytic problem, the difficulty (by inspection of [Eq. 7]) being essentially one of deconvolution: a notoriously ill-posed problem in consequence of convolution being an information-destroying procedure when the convolving function is bandwidth limited to any degree (or at least when more so than the data undergoing convolution).
Figure 7. Unified feature selection and tomographic classifier fusion.
However, since we are inherently working with discrete data (due to the computational necessity of sampling the PDF at discrete intervals), this latter point does not apply to our technique, the PDF data being bandwidth limited to exactly the same extent as the filtering function. There is then no fundamental difficulty in obtaining an optimal filtering function via inversion of Eq. (4), merely the pragmatic one of establishing whether this is best done by analytic methods, or whether it would be better approached by numeric means. The discrete nature of the computational representation of the filter function, however, will ensure that either of the methods will suffice.

H. Summary of Methodological Approach

We have, then, in the preceding section, set out the basis for a morphologically optimal method of classifier combination via a tomographic analogy of what we now appreciate to be a major aspect of the process, the assemblage of Radon transform data, finding, in delineating this aspect, that classifier combination is inseparable from the feature selection process. Our assertion of the optimality of our method then centers on its being a full completion of the partial tomographic reconstruction process implicit in all conventional methods of classifier combination. The only other considerations that we need address in this regard are, first, that of the remaining aspect of combination as implicit refinement of the PDF morphologies and, second, the robustness of the reconstructive procedure in relation to estimation error. The former point is necessarily now addressed at the level of feature selection and hence, within our unified perspective, may now be carried out at an optimal level through having distinguished it from the purely tomographic aspects of classical combination. The latter concern, the robustness of the procedure to estimation error, has been argued to be of the order of that of the sum rule, the previously optimal procedure in this regard, although exact calculation was omitted due to the dependence of the filtering procedure on the nature of the input PDFs. A number of practical utilizations of the outlined methodology will be given in the remaining sections of this article, the findings in relation to which will suggest that, aside from the usefulness of the knowledge of the existence, and specification, of a theoretically optimal limit to the performance of classifier combination, the implementation of such a procedure can lead to very substantial real-world performance gains, and at a potentially small computational cost.
III. Postcombination Tomographic Filtration

A. Introduction

In setting out the framework of our tomographic methodology, we found it natural to specify prefiltering of the Radon transforms as the appropriate method of removing the purely systematic morphology arising from the use of the back-projection operator. We can, however, equally well postfilter the back-projection of the unfiltered Radon transforms, if it can be shown that the two methods are equivalent in terms of the resultant probability density function. Indeed, given the graphical nature of the particular tomographic process to be outlined, the latter method is in fact the more intuitive, in that we commence the key distinction of our optimal methodology, the deconvolution of the systematic morphology, at the end point of the previously optimal fusion method (namely, the sum rule method), with the superfluous systematic geometrical aspects of the sum fusion then being straightforwardly inferred by visual inspection of the initial and final PDFs. The problem of deconvolution, however, is in general an ill-posed one, in that there are multiple potential solutions to the problem. We therefore generally tend to favor a specific deconvolution on a priori grounds, the very specification of which, moreover, often dictates the method of deconvolution in its entirety. Perhaps the two canonical representations of this, in the sense that they collectively represent the extremes of the gamut of possibilities, are the maximum entropy (see, for example, Cornwell and Evans, 1985) and the Högbom (1974) algorithms, which, respectively, presume the piecewise continuity (strictly, minimum information-theoretic complexity) and the discreteness of the final solution. Because of the unique form of our tomographic problem, however, we shall find it useful to specify our own method of deconvolution, based on only one a priori assumption, namely, that in the case of any deconvolutional ambiguity that gives rise to a choice between imposing an arbitrary axial asymmetry on the final PDF and one that does not, the latter alternative will always be the favored one. This is equivalent to insisting that, in the absence of any information to the contrary, the feature axes will inherently constitute a favored orientation in relation to the data, taking precedence over any of the other potential orientational axes. Such an imposition amounts, in fact, to the specification of a fully decorrelated reconstructive space with regard to the constituent features, and is actually the default outcome of the prefiltering approach, thus rendering the a priori stance in relation to postfiltering a fully necessitated one, given their equivalence.
We thus set about implementing this consideration within the context of a modified version of the Högbom algorithm; that is, a Högbom algorithm to the degree that it involves the recursive subtraction of infinitesimal simulacra of the systematic artifacts [the ''blurring'' function of Eq. (7)] from the back-projected (sum rule) data; modified to the extent that when the algorithm is required to subtract several systematic simulacra such that their overlap could itself be interpreted as a geometric simulacrum (in which case there is an ambiguity as to the distinction between ''overlap'' and ''simulacrum''), then all of these correlated entities are treated as equally indicative of the underlying ''deblurred'' morphology. Thus, in essence, we implement a recursive Högbom deconvolution algorithm with an additional intermediate stage mapping the various correlations between the proposed subtractions to ensure that the artificially imposed and unrepresentative dichotomy between the subtractions and their intersections does not have an impact on the final recovered PDF.

In broad terms, then, this process is implemented in the following manner: In carrying out an explicitly 2D implementation of the mathematical stages that give rise to Eq. (29), we found that the unfiltered 2D inverse Radon transformation equates to:

$$P(x, y)_{\mathrm{recovered}} = P_1(x) + P_2(y) \qquad (57)$$

for the two classifier density functions $P_1(x)$ and $P_2(y)$, distributed respectively over the variables x and y (ignoring normalization considerations). Now, we also have from Eq. (7) that the unfiltered inverse Radon transform is equal to:

$$f_{\mathrm{recovered}} = \hat R(f_{\mathrm{original}}) = f_{\mathrm{original}} \ast B(x, y) \qquad (58)$$

with $B(x, y)$ the ''blurring function'' or systematic artifact. Therefore, if we set $f_{\mathrm{original}}$ to be the delta function $\delta(x_1, y_1)$, such that $f_{\mathrm{recovered}} = B(x - x_1, y - y_1)$, then we have that:

$$P_1(x) + P_2(y) = B(x - x_1, y - y_1). \qquad (59)$$

On the assumption that $P_1(x)$ and $P_2(y)$ are representative of their respective projections of the 2D PDF $f_{\mathrm{original}} = \delta(x_1, y_1)$, such that Eq. (1) holds (that is, $P_1(x) = \delta(x - x_1)$ and $P_2(y) = \delta(y - y_1)$), we then obtain the equivalence:

$$B(x - x_1, y - y_1) = \delta(x - x_1) + \delta(y - y_1). \qquad (60)$$

Removing the position dependences $x_1$ and $y_1$ by coordinate transformations, this gives the pure blurring artifact to be the cross-shaped function:

$$B(x, y) = \delta(x) + \delta(y). \qquad (61)$$
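As a concrete check of Eqs. (57) through (61), the following minimal sketch (our illustration; the grid size and delta location are arbitrary choices) back-projects the two marginal projections of a discrete delta function and confirms that the recovered function is exactly the cross-shaped artifact:

import numpy as np

# Back-project (sum rule) the two 1D marginals of a discrete 2D delta
# function and confirm the cross-shaped artifact of Eq. (61).
N = 64                       # grid resolution (arbitrary)
x1, y1 = 20, 40              # location of the delta (arbitrary)

f_original = np.zeros((N, N))
f_original[x1, y1] = 1.0

P1 = f_original.sum(axis=1)  # projection onto the x feature axis
P2 = f_original.sum(axis=0)  # projection onto the y feature axis

# Unfiltered back-projection, Eq. (57): P(x, y)_recovered = P1(x) + P2(y)
f_recovered = P1[:, None] + P2[None, :]

# The recovered function is non-zero exactly on the cross
# {x = x1} union {y = y1}, as Eq. (59) predicts.
cross = np.zeros((N, N))
cross[x1, :] += 1.0
cross[:, y1] += 1.0
assert np.allclose(f_recovered, cross)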
Our modified Högbom algorithm therefore involves the recursive subtraction of an infinitesimal version of this artifact, shifted appropriately, from all discrete 2D PDF values (recovered via back-projection) within some small fixed percentage of the maximum value. A scalar quantity proportional to the infinitesimal magnitude of this value (which we denote C) is then added to the existing (initially zero) quantity associated with the value's coordinates. This latter matrix will then constitute the proposed PDF deconvolution at the termination of the procedure, which occurs when the initial matrix first generates negative values on subtraction of the infinitesimal blurring artifact.

The feature specific to our postfiltering approach, namely the a priori assumption of the priority of the feature axes, becomes apparent in the particular way in which we deal with the (almost ubiquitous) situation in which multiple infinitesimal blurring artifacts are to be simultaneously subtracted from the data. As we have indicated, there is an ambiguity as to what constitutes an artifact and what constitutes the overlap of an artifact. We see from an inspection of the form of $B(x, y)$ that the proposed subtraction of artifacts centered at (say) $(x_a, y_a)$ and $(x_b, y_b)$ would lead to a double subtraction of the infinitesimal value C from the points $(x_a, y_a)$, $(x_a, y_b)$, $(x_b, y_a)$, and $(x_b, y_b)$, without a corresponding registration of the points $(x_a, y_b)$ and $(x_b, y_a)$ in the final deconvolution matrix. This occurs because the intersection of two blurring artifacts has itself the precise form of a blurring artifact, and the claim to primacy of the original artifacts over the intersecting region is thus invalidated. Thus, we must seek to correlate all of the proposed subtractions with each other to establish whether there exist any overlapping regions that might themselves have to be considered constitutive of subtractive entities and consequently registered alongside the originals in the final deconvolution matrix.

It is thus clear that any such postfiltration methodology must invariably involve the nesting of a series of complex conditional tests and, hence, require considerable computational expenditure. The following subsection reconfigures the proposed postfiltration system to achieve this on feasible time scales.
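To make the loop just described concrete, here is a minimal sketch; the function name, the tolerance parameter tol, and the Cartesian-product closure of the peak set are our simplified rendering of the correlation stage, not the chapter's own implementation:

import numpy as np

def modified_hogbom(sum_rule, C=1e-3, tol=0.01):
    # sum_rule: 2D array, the unfiltered back-projection (sum rule space).
    # C:        infinitesimal subtraction magnitude.
    # tol:      fractional tolerance defining the current peak set.
    data = sum_rule.astype(float).copy()
    decon = np.zeros_like(data)
    while True:
        peak = data.max()
        if peak <= 0:
            break
        # All points within a small fixed percentage of the maximum.
        xs, ys = np.nonzero(data >= (1 - tol) * peak)
        # Equal treatment of artifacts and their overlaps: close the peak
        # set under the Cartesian product of its x and y coordinates, so
        # that intersections of crosses are registered alongside them.
        ux, uy = np.unique(xs), np.unique(ys)
        trial = data.copy()
        for x in ux:
            trial[x, :] -= C      # one arm of each cross artifact
        for y in uy:
            trial[:, y] -= C      # the other arm
        if trial.min() < 0:       # terminate on first negative value
            break
        data = trial
        decon[np.ix_(ux, uy)] += C    # register the closed peak set
    return decon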
B. An Economic Approach to Postcombination Tomographic Filtration

We have thus argued that the most straightforward approach to removing the possibility of negative PDF values in the tomographically reconstructed feature space is that of unfiltered (post-) deconvolution, via an adaptation of the Högbom (1974) deconvolution algorithm. The iterative nature of this technique allows a piece-by-piece removal of systematic artifacts, such that in its unmodified and mathematically ideal form, the procedure can be
considered to impose an a priori condition of least possible correspondence of the recovered morphology to the feature-axis geometry (Briggs and Cornwell, 1992). Thus, the procedure embodies a distinct methodology for distinguishing between the degenerate solutions that all methods of deconvolution must address whenever zeros exist in the Fourier transform of the entity to be deconvolved. Moreover, it invariably generates positive-definite solutions.

Endeavoring to reduce the computation time involved in this procedure will involve establishing the degenerate form that tomographic reconstruction adopts under the particular geometric constraints of pattern-space reconstruction. In quantitative terms, this will permit a reduction in the number of computational cycles required to execute the procedure from $X^{3n-1}$ to $X^{n-1}$ (n being the dimensionality of the problem and X its sampling resolution). This several-orders-of-magnitude reduction in the computational complexity of the problem brings the tomographic method well within the realms of practical feasibility, as well as giving a more intuitive description of the process in graphical terms, perhaps serving as a backdrop for future extensions of the technique.

C. Nature of Högbom Deconvolution in the Sum Rule Domain

Following from the above illustration, the next two subsections of the discussion of Högbom deconvolution in the sum rule domain shall be confined to a 2D reconstructive space (that is to say, with two single-featured PDFs constituting the classifiers in the combination) for reasons, again, of conceptual simplicity as well as ready graphical perspicuity. That the generic results derived for this space are straightforwardly generalizable to an arbitrarily dimensioned reconstructive space (one consisting of an arbitrarily large set of constituent classifiers of unconstrained feature-space dimension) follows from the recursive argument for progressively building up the reconstructive PDF space set out in Section II.B.

It was hence established in the preceding introduction to postfiltration methods that the sum rule space postconvolution artifact for two feature dimensions is equivalent to a ''cross'' of infinitesimal width (i.e., $P_{\mathrm{artifact}}(x, y) = \delta(x) + \delta(y)$). It is consequently this entity (modified appropriately to account for the discrete sampling of the PDF inherent in a computational methodology) that we are seeking to remove via recursive Högbom subtraction. In the 2D case we have specified (Figure 8), this occurs as follows. A counter value, z, is set at the peak value of the sum rule space, with a recursive scanning cycle then initiated to establish the set of all positions within a probability density value . . .

6. Subtract the resolution parameter dz from each peak value $P_{a_i}, \forall i$, and set an iteration parameter (say, t) to zero.
7. Subtract a quantity $|X_1| \cdot |X_2| \cdots |X_{i-1}| \cdot |X_{i+1}| \cdots |X_n| \cdot dz$ from the current peak value of each classifier, $P_{a_i}$; $|X_j|$ being the scalar values derived in Step 5 (that is, the number of coordinate vectors $\{\tilde X_i\}$ of dimensionality $a_i$ counted by the PDF hyper-area-establishing procedure above). Note, especially, the absence of $|X_i|$ in the product entity.
8. Establish the new hyper-area value associated with the subtraction in Step 7 (that is, the hyper area between the probability density ordinates representing the previous and current peak values, as per Step 4).
9. Allocate a value $N - t \cdot dz$ to those points in the deconvolution matrix representing novel coordinates established after the manner of Step 4. That is, the Cartesian product difference: $[(\{\tilde X_1\}_{\mathrm{old}} \cup \{\tilde X_1\}_{\mathrm{new}}) \times (\{\tilde X_2\}_{\mathrm{old}} \cup \{\tilde X_2\}_{\mathrm{new}}) \times \cdots \times (\{\tilde X_n\}_{\mathrm{old}} \cup \{\tilde X_n\}_{\mathrm{new}})] - [\{\tilde X_1\}_{\mathrm{old}} \times \{\tilde X_2\}_{\mathrm{old}} \times \cdots \times \{\tilde X_n\}_{\mathrm{old}}]$ (t the cycle count number, N as above).
10. Increment the cycle counter, t, by 1 and go to Step 7 while $P_{a_i} > 0, \forall i$.
11. After termination of the major cycle in Steps 7 through 10, subtract a value $t \cdot dz$ from each point of the deconvolution matrices to establish true PDFs, if required (see footnote 5).
12. Repeat from Step 2 for the remaining classes in the sequence $\omega_1, \omega_2, \ldots, \omega_m$.
13. Construct the modified Bayes-optimal decision boundaries at points of transition of the most probable class PDFs (see footnote 5).
2. In a memory-restricted environment, it is alternatively possible to perform the iterations in Steps 7 through 11 simultaneously for the respective classes, retaining only those points of coincidence between the various class probabilities: a significantly smaller set than the matrix specified in Step 5. The total memory footprint for this configuration is of the order $\{X\}^{a_1} + \{X\}^{a_2} + \ldots + \{X\}^{a_n}$, rather than the former $\{X\}^{a_1 + a_2 + \ldots + a_n}$ (for feature spaces of uniform dimensional size X); which is to say, an equivalent memory requirement to conventional linear methods of combination.
In functional mapping terms, we thus seek to repeatedly perform the following conditional iteration, executed while $\exists \tilde X_t : P_t(\tilde X_t) > 0$:

$$P_t^{\mathrm{NEW}}(\tilde X) = \begin{cases} P_t^{\max} - \Delta P_t & \forall \tilde X_t : P_t(\tilde X_t) = P_t^{\max} \\ P_t(\tilde X) & \text{otherwise} \end{cases}$$

$$\text{where } \Delta P_t = \Delta z \cdot |\tilde X_1 : P_1(\tilde X_1) = P_1^{\max}| \cdot |\tilde X_2 : P_2(\tilde X_2) = P_2^{\max}| \cdots |\tilde X_{t-1} : P_{t-1}(\tilde X_{t-1}) = P_{t-1}^{\max}| \cdot |\tilde X_{t+1} : P_{t+1}(\tilde X_{t+1}) = P_{t+1}^{\max}| \cdots |\tilde X_n : P_n(\tilde X_n) = P_n^{\max}|$$

$$P_{\mathrm{TOM}}^{\mathrm{NEW}}(\tilde X_{\mathrm{TOM}}) = \begin{cases} P_{\mathrm{TOM}}(\tilde X_{\mathrm{TOM}}) + \Delta z & \forall \tilde X_{\mathrm{TOM}}, t : P_t(\tilde X_{\mathrm{TOM}} . \tilde X_t) > P_t^{\max} - \Delta P_t \\ P_{\mathrm{TOM}}(\tilde X_{\mathrm{TOM}}) & \text{otherwise} \end{cases}$$

$$\forall \tilde X_{\mathrm{TOM}} : P_{\mathrm{TOM}}(\tilde X_{\mathrm{TOM}}) = P_{\mathrm{TOM}}^{\mathrm{NEW}}(\tilde X_{\mathrm{TOM}}); \qquad \forall \tilde X, t : P_t(\tilde X) = P_t^{\mathrm{NEW}}(\tilde X)$$
The final $P_{\mathrm{TOM}}$ function is the tomographically reconstructed probability density function (which is initiated with uniform zero values: $P_{\mathrm{TOM}}(\tilde X_{\mathrm{TOM}}) = 0\ \forall \tilde X_{\mathrm{TOM}}$).

E. Final Considerations on the Postcombination Approach

We have thus set out to significantly reduce the computation time involved in the postfiltration form of the morphologically optimal tomographic fusion strategy, and we have achieved a reduction of many orders of magnitude. This is sufficient that the method no longer poses any significant cost obstacle to the implementation of the procedure with current computer technology (though note, this is not necessarily more efficient than performing a full classification of the composite feature space; we are assuming that underfitting constraints prevent this possibility at the feature selection level). The basis of this efficiency gain is the appreciation that, when viewed in terms of the constituent PDFs, the three chief computational components within the recursive procedure (the peak-seek, the analysis of the correlation between detected peaks, and the subtraction/registration of those correlated components) need not be performed on an individual basis, potentially reducing the iteration requirement from $X^n[X^n[X^{n-1} + X]]$ computational cycles to $X^n$ computational cycles, with the further possibility of an order-of-magnitude decrease in this figure for point-wise continuous classifiers. This is
in addition to gains arising from the requirement that the dz parameter effectively vary throughout the procedure. It might thus be anticipated that this PDF-centered approach could ultimately lend itself to a future reinterpretation of the morphologically optimal methodology for multiple expert fusion without any explicit reference to tomography theory, being rendered instead in the more familiar terminology of probability theory and correlation analysis.
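As an indication of how the economized iteration of Steps 6 through 10 might look in practice, the following is our own minimal sketch for n single-featured classifiers; the function name and the dz default are assumptions, and the $N - t \cdot dz$ renormalization of Steps 9 and 11 is omitted for brevity:

import numpy as np

def economic_tomographic_fusion(pdfs, dz=1e-3):
    # pdfs: list of 1D arrays, the classifier PDFs for one class.
    # Returns the tomographically fused n-dimensional PDF estimate.
    pdfs = [p.astype(float).copy() for p in pdfs]
    fused = np.zeros(tuple(len(p) for p in pdfs))
    while all(p.max() > 0 for p in pdfs):
        # Peak set of every classifier at the current iteration.
        peak_sets = [np.nonzero(p >= p.max() - dz)[0] for p in pdfs]
        sizes = [len(s) for s in peak_sets]
        # Step 7: subtract dz scaled by the product of the *other*
        # classifiers' peak-set sizes from each classifier's peaks.
        for i, p in enumerate(pdfs):
            others = int(np.prod([s for j, s in enumerate(sizes) if j != i]))
            p[peak_sets[i]] -= others * dz
        # Register dz over the Cartesian product of the peak sets (the
        # hypercuboid jointly indicated by all of the classifiers).
        fused[np.ix_(*peak_sets)] += dz
    return fused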
IV. An Example Application

Using the postfiltration method thus outlined we shall, in the following section,3 seek to give a practical implementation of the methodology in regard to real-world data, demonstrating the sort of classification performance that we might thereby expect from a given set of classifiers under typical conditions.

3. ©2003 IEEE. Reprinted, with permission, from IEEE PAMI, Vol. 25, No. 3, March 2003.

A. Test Data Characteristics

In setting about this goal, we have used data consisting of a set of expertly classified geological survey images, with subsequent image processing carried out via a battery of 26 cell-based processes for texture characterization, chosen without regard to the particular nature of the classification problem. Hence, at the outset, a particularly high feature redundancy was anticipated for the corresponding 26-dimensional pattern vector. One such image, characterized as delineating three distinct strata classes, and exhibiting a high degree of class membership ambiguity among those classes, gave rise to a purely 2D reconstructive feature space on being classified from among a bank of four potential classifiers of suitably distinct character (nearest-neighbor, neural net, Gaussian, and quadratic classifiers), when allocated features on a sequential forward selection basis. That is, the feature selection process involved the consecutive and independent allocation of the same two respective features to two of the four possible classifiers for each of the three classes. Thus, the feature selection gave rise to a classification that explicitly excluded the combination of features within single classifiers. We shall assert that, despite being of an inherently inexhaustive sequential type (within which only two classifiers are allocated a single feature before overclassification sets in and the procedure terminates), the feature selection procedure may almost certainly be considered the
equivalent of the exhaustive variety for this particular case, by virtue of the fact that any possible combination of features within one particular classifier, as selected by some putatively exhaustive feature selection algorithm, would almost invariably include the first feature allocated to that classifier on a sequential forward selection basis. Thus, the fact that the addition of any of the remaining features to this classifier within the latter regime actually degrades the performance (the method implicitly testing all of the possible feature additions) would strongly suggest that the exhaustive procedure would not find an alternative optimal solution, the overclassification effect predominating for this image.

In addition to this implicit exhaustivity of the feature selection mechanism, this particular data set lends itself to our practical investigation on the grounds that, the selected features numbering merely two, only the first stage of the n-dimensional tomographic reconstruction algorithm need be implemented. More conveniently still, the resulting classifier PDF data lend themselves to immediate and uncomplicated graphical representation, should any of the morphological aspects of the preceding mathematical argument lack transparency, given its necessarily abstract nature.

B. Results of Application

The three class PDFs before back-projection are shown in Figures 10, 11, and 12 for the two feature axes (a normalization equalizing the extents of these axes for the purposes of display). The corresponding 2D PDF reconstructions for the various classes obtained via the sum rule (that is, unfiltered inverse Radon transformation) are indicated in Figures 13, 14, and 15. Filtering has the effect of rendering the reconstructed class morphology as shown in Figures 16, 17, and 18. The pronounced rectilinearity is a direct consequence of imposing the a priori precedence of the feature axes throughout the above deconvolution procedure, or, equivalently, of giving exactly equal precedence to the overlap of systematic artifacts as to the artifacts themselves, in the absence of any prior assumptions as to the morphology of the reconstructed space. Should we wish to do so, it would be possible within the postfiltering approach to impose an alternative constraint on the deconvolution without having to substantially redesign the procedure, somewhat after the fashion of maximum entropy deconvolution, which in fact requires the imposition of such a priori information (Cornwell and Evans, 1985).
Figure 10. Class 1 feature PDFs.
Figure 11. Class 2 feature PDFs.
Figure 12. Class 3 feature PDFs.
Figure 13. Class 1 2D reconstruction via the sum rule.
Figure 14. Class 2 2D reconstruction via the sum rule.
Figure 15. Class 3 2D reconstruction via the sum rule.
Figure 16. Class 1 2D reconstruction after filtration.
Figure 17. Class 2 2D reconstruction after filtration.
Figure 18. Class 3 2D reconstruction after filtration.
Figure 19. Composite superposition of unfiltered PDFs.
Figure 20. Composite superposition of filtered PDFs.
The composite superpositions of the three class PDFs over the reconstructed space, such that only the maximum of the respective probability densities is indicated, are as shown in Figure 20 for the filtered case and in Figure 19 for the unfiltered case. However, probably the more indicative rendering of the distinction between the two approaches to classifier combination is in terms of the decision boundaries of the respective reconstructed spaces; these are shown in Figure 22 for the filtered space and in Figure 21 for the unfiltered space. It is immediately evident that the cross-like extensions along the feature axes associated with clusterings of higher probability densities are no longer evident in the filtered space.

For this particular case, the most dramatic changes to the morphology of the decision space occur at some distance from the class probability maxima, and thus it is only the outlying pattern vectors that tend to be reclassified under the filtered regime, which hence represents only a relatively minor percentile change in the overall classification rate (the probability of misclassification, though, undergoes a far more substantial percentage change). In the more general scenario, however, it is entirely possible that a substantial fraction of a class's extent within the unfiltered reconstructed probability space may in fact be occluded by the sampling geometry of another class's reconstructed PDF, in which case a very substantial percentile change in the overall classification rate would be expected.
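The decision boundaries referred to here follow mechanically from the reconstructed class PDFs; the sketch below is our own illustration (the names are ours), assuming the class PDFs are sampled on a common grid:

import numpy as np

def decision_map(class_pdfs):
    # Assign each grid point to its most probable class; class_pdfs is a
    # list of equally sized 2D reconstructed PDFs, as in Figures 19 and 20.
    return np.stack(class_pdfs).argmax(axis=0)

def boundary_mask(decisions):
    # Mark the transitions of the winning class, i.e., the decision
    # boundaries of Figures 21 and 22.
    mask = np.zeros_like(decisions, dtype=bool)
    mask[:-1, :] |= np.diff(decisions, axis=0) != 0
    mask[:, :-1] |= np.diff(decisions, axis=1) != 0
    return mask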
Figure 21. Decision boundaries for the unfiltered case.
Figure 22. Decision boundaries for the filtered case.
However, within our particular example, the probabilities of misclassification for 2D PDFs constructed from 1000 of the 10,000 possible samples for the filtered and unfiltered reconstructed spaces, respectively, are then 0.0472 and 0.0709: a reduction of roughly one third in the misclassification rate.
V. Dimensionality Issues: Empirical and Theoretical Constraints on the Relative Performance of Tomographic Classifier Fusion

With the preceding example thus defining an approximate expectation of the level of performance improvement due to tomographic filtration, we should now like to turn to a more systematic quantification of both the classification performance and the estimation-error robustness of the tomographic classifier fusion methodology.4 In particular, we seek to confirm that the tomographic methodology represents a generally optimal strategy across the entire range of problem dimensionalities, and at a sufficient margin to justify the general advocacy of its use.

4. This section ©World Scientific (to appear in the International Journal of Pattern Recognition and Artificial Intelligence).

In Section II it was stated without proof that the expected error resilience of the tomographic method ought to be similar to that of the sum rule (the optimal combination strategy in terms of robustness to estimation error; Kittler et al., 1998), since the back-projection aspect of the tomographic fusion approach imposes exactly the same averaging process with respect to stochastic variation. A precise calculation was omitted since it depends critically on the interaction between the filtering mechanism and the morphological characteristics of the classifier (which is not something we would wish to specify in advance, the tomographic method being intended as a ''black box'' approach, to which novel methods of classification may be appended as developed). Given this theoretical limitation on characterizing the error resilience of the proposed method, it is necessary to base any attempted quantification of the resilience to estimation error instead on practical trials and model solutions to build a convincing case.

More generally, though, we have yet to fully establish the most significant performance statistic for the tomographic combination method in relation to the conventional alternatives: the effect on the misclassification rate. The very limited example of this statistic given in the preceding section indicates a substantial reduction of the misclassification rate. However, momentary consideration would indicate that it is not possible to guarantee an equivalent performance response for combinations of higher dimensionality without a great deal of further analysis. Indeed, this is self-evidently not the case if the classifiers constituting the combination exhibit any degree of estimation error, since error resilience scales differently with dimensionality for the sum and product rule combination schemes (see discussion in Kittler et al., 1998). It is therefore necessary, in any reasonable attempt to quantify the general performance of tomographic combination, to establish performance across
the range of feature-space dimensionalities: we should in particular like an assurance that the tomographic method remains the optimal choice at higher dimensionalities within a representative range of scenarios. Sections V.A and V.B therefore detail our attempts to achieve this at the practical and theoretical levels, respectively.

Giving any comparative performance benchmark for the tomographic combination method requires that we test it against a representative sample of the remaining combination schemes. Kittler et al. (1998) have demonstrated that the majority of commonly used decision rules can be considered to derive from either the sum or the product decision rules. It is therefore against these two methodologies, in particular, that we shall choose to benchmark the tomographic combination system outlined in Section III.D.3.

A. Relative Performance of Tomographic Classifier Fusion in Empirical Tests

For the practical, as opposed to the mathematical, aspect of this investigation, the ''real-world'' data upon which we shall perform this experimental comparison are derived from the same set of expertly classified geological survey images utilized earlier (i.e., with the 26 dimensions of the pattern vectors corresponding to 26 distinct cell-based processes for texture characterization). The arbitrary nature of these processes means that the totality of the data set simultaneously exhibits all three of the distinct characteristics of, respectively, large-scale feature redundancy, feature independence, and its converse, feature dependency, within its various feature subsets: that is, very largely the full range of possible behaviors with regard to feature selection, classification, and classifier combination.

Also, since we are primarily interested in testing the relative capabilities of the combination schemes, we shall seek both to homogenize the classifiers constituting the combination and to make them as representative of the pattern data as possible. Thus, rather than the customary arrangement in which feature sets are allocated to a morphologically disparate set of classifiers on the basis of their individual representative strengths, we shall instead artificially impose a uniform classification scheme, a probability density function derived by regularly spaced block-density histogramming of the pattern data, upon each of the tested feature subsets constituting the combination. Furthermore, in order that we might establish a direct measure of the classification performance of the various combination schemes, we shall impose the condition that the composite feature-space PDF of i dimensions which we are implicitly reconstructing by classifier combination is that obtained by a block-density histogramming of the original i-dimensional space. In other words, we are designating this histogrammed i-dimensional PDF the reference against which the combination schemes are to be measured.
For this approach to have general validity, it is necessary that a large number of pattern vectors be sampled per histogram, even at the extremity of the tested dimensionality range. Thus, we are also required to impose a relatively small number of bins per feature (r) to maintain reasonable count statistics at the extremity of the range: of the order of r = 4, given our 125,000 pattern vectors and eight-dimensional range. Because of the need to establish a meaningful performance comparison across the dimensional range, it is additionally necessary to derive each of the tested multidimensional composite reference feature-space PDFs from the same experimental source. Hence we obtain the various i-dimensional spaces via projection of the complete n-dimensional pattern space, finally averaging over all $^nC_i$ performance figures thus obtained. Clearly, as the dimensionality i varies, the averages thus obtained are subject to a statistical fluctuation associated with low-number statistics (becoming asymptotic at i = n, when only one subspace exists), and hence the tested sequence is required to terminate well short of this value (coupled with the aforementioned consideration of avoiding undersampling of the prior PDF at higher dimensionalities).

The reason it shall only prove necessary to consider the combination configuration consisting of i 1D classifiers (that is, combinations with one feature per classifier) is that we are principally interested in characterizing the variation of combination performance in relation to a uniform ''morphological information shortfall.'' That is, we are primarily interested in the extent to which a combination scheme can make use of the $r \cdot i$ possible classifier ordinate values to reconstruct the $r^i$ possible coordinates of the prior PDF: introducing additional combinations of classifiers containing differing numbers of features would tend only to obscure this perspective without generating any additional insight into the combination processes not already encompassed by the latter approach. The experimental format for the real-world combination test is therefore illustrated as such in Figure 23.

We should clarify that the test scenario is in no way intended to represent a plausible real-world situation when feature selection is explicitly taken into consideration. Given that we are in a position to obtain sufficient pattern vectors so as to be able to constrain a plausible model of the i-dimensional prior PDF, the most effective feature selection strategy (presuming a reasonably flexible set of classifier morphologies to choose from) would, most naturally, be to allocate the maximal i features to the best performing classifier of the ensemble to guarantee retention of the maximal quantity of discrimination information. We have, however, imposed the one-feature-per-classifier limitation in order that we might mimic the generalized situation in which any one-classifier parameterization of the whole i-dimensional space would likely be subject to serious overparameterization error, and
therefore disposed to reduce the classification rate in relation to a combination of classifiers of lower, but better sampled, feature dimensionalities. Of course, this condition being an external restriction means that, in fact, we do have access to a plausible model for the i-dimensional prior PDF as required for the purposes of performance evaluation.

The specified experimental scenario should thus be considered from the context of the broader tomographic perspective, within which feature selection can be envisaged as seeking an appropriate balance between the mutually exclusive requirements of maximizing the retention of class-discriminant morphology information through the allocation of spaces of higher feature dimensionalities to the classifiers, and minimizing the dangers of overclassification through the allocation of lower feature-space dimensionalities to classifiers.

Figure 23. Experimental format.
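The experimental format of Figure 23 reduces, in outline, to the following compressed sketch (ours, not the chapter's code; resubstitution testing, the [0, 1) feature scaling, and all of the names are our simplifying assumptions), shown here with sum rule fusion only:

import numpy as np
from itertools import combinations

def mean_error_rate(patterns, labels, i, r=4):
    # patterns: (N, n) array of pattern vectors scaled to [0, 1).
    # labels:   (N,) integer class labels; i: tested dimensionality;
    # r:        bins per feature (r = 4 in the text).
    n = patterns.shape[1]
    bins = np.minimum((patterns * r).astype(int), r - 1)
    classes = np.unique(labels)
    rates = []
    for subset in combinations(range(n), i):   # all nCi feature subsets
        cols = list(subset)
        errors = 0
        for b, true_c in zip(bins[:, cols], labels):
            # One 1D block-density histogram classifier per feature,
            # fused by the sum rule.
            scores = []
            for c in classes:
                mask = labels == c
                s = sum(np.bincount(bins[mask, f], minlength=r)[b[k]]
                        for k, f in enumerate(cols)) / mask.sum()
                scores.append(s)
            errors += int(classes[int(np.argmax(scores))] != true_c)
        rates.append(errors / len(labels))
    return float(np.mean(rates))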
1. Response to Estimation Error

The remaining aspect of the investigation, the assessment of the resilience to estimation error of the three fusion methods, is addressed in the above experimental context by the straightforward simulation of classifier error through adding uniform stochastic noise to each of the classifier density histograms (simulating, in effect, estimation error arising from an insufficient degree of parameter freedom among the classifiers, rather than estimation error attributable to, say, incorrect, or over-, parameterization). The tomographic performance results for the ''real-world'' geological survey data are thus as depicted in Figure 27 (see Section V.F), alongside an analysis of their comparative significance. Placing the experiment in the widest context, however, requires that we turn to a more constrained model scenario.

B. Relative Performance of Tomographic Classifier Fusion on Model Data

The significance of the findings of the preceding investigation is, then, best established in relation to an absolute baseline against which the performance on real-world data may be graded. Any such proposed performance indicator must thus seek to determine the effect of classifier combination on the classification performance in a way that is independent of both pattern data and classifier morphology. It so transpires that one of the very few classes of mathematically tractable characterizations of the algorithmic procedure of Section III.D.3 occurs in relation to prior PDFs composed of orthogonally gridded histograms of randomized density (hence fulfilling the requisite test conditions of independence of pattern data morphology in the case where every such distribution is considered). Furthermore, it is natural to suppose that prior probability density distributions so derived will, when averaged over the ensemble, naturally constitute a generalized performance minimum for the tomographic methodology, as a consequence of its specifically seeking to reconstruct the overall pattern-space PDF through correlating morphology across the separate classifiers: a randomized morphology effectively undermines this agenda by decorrelating the differing subregions of the composite PDF, permitting the isolation of the required ''absolute'' performance statistic.5
5. A performance maximum for the tomographic method is correspondingly established when the composite prior PDFs correspond to unimodal distributions that are capable of undergoing decomposition into intersecting hypercubes, in which case the tomographic combination performance achieves Bayes optimality (on the assumption of ideal constituent classifiers).
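The estimation-error simulation just described in Section V.A.1 amounts, in outline, to the following sketch (the function name and the noise amplitude epsilon are our assumptions):

import numpy as np

rng = np.random.default_rng(0)

def add_estimation_error(histogram, epsilon):
    # Simulate classifier estimation error by adding uniform stochastic
    # noise to a density histogram, then restoring valid-density form.
    noisy = histogram + rng.uniform(-epsilon, epsilon, histogram.shape)
    noisy = np.clip(noisy, 0.0, None)     # densities stay non-negative
    return noisy / noisy.sum()            # renormalize to unit mass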
In combination with this argument, there is also the consideration that the randomization of the PDF morphology takes place with respect to a coordinatization aligned with the feature axes: tomographic methods, however, in applying a prior filtration to the back-projected Radon data, implicitly seek to override metrics dictated by the feature axes in favor of those constrained solely by the underlying classifier morphology. There are therefore two distinct sets of reasons for supposing that the specified ensemble of prior PDF forms constitutes a generalized performance minimum with regard to the tomographic combination method, which (when combined with their unique mathematical tractability) naturally cements them as the choice for mathematical analysis of the relative tomographic combination performance. The derivation of this quantity will therefore occupy the majority of the remainder of the section.

An additional benefit arising from the elucidation of the model data performance statistics for all three of the tested combination methods is that, in doing so, we uncover a great many of the mathematical processes that underlie the performance/dimensionality scaling phenomenon for classifier combiners in general. Remarkably, however, we shall demonstrate that the tomographic fusion method, notwithstanding the specified PDF restrictions, still exceeds the performance of the sum and product rules by a considerable margin.
C. Tomographic Model Solution

The composite prior PDF format for the model solution is then as illustrated in Figure 24 for the first three dimensionalities in the range, with the obvious extrapolation to higher dimensionalities. The per-ordinal resolution, r, of the gridded composite PDF having thus been lowered to a value of 2, it becomes possible to uniquely grade the ordinal projections in the manner represented in Figure 25, the ordinal disparity thus now being the single distinguishing parameter between classifier morphologies.

Figure 24. Randomized composite PDF morphology of dimensionality i = 1, 2, 3.

Figure 25. Projected ordinal PDF values for the various 1D classifiers.

That this reformatting is permissible within the context of the model data performance test is a consequence of the tomographic combination methodology's situation-specific independence of ordinate translations when the indicated prior PDF constraints are imposed, along with its more generalized independence of axial permutations, occurring irrespective of any PDF model constraint. It shall therefore be the case, throughout this subsection, that we continue the convention already adopted in Figure 25; namely, that calligraphic figures (such as $\mathcal{A}$) denote such magnitude ordering of ordinal density values.

The process outlined in Section III.D.3 may now be seen as a sequence of subtractions bringing the respective classifier ordinate values, ${}^x\!A_2$, into equality with the values of their neighboring ordinates, ${}^x\!A_1$, the number and magnitude of the subtractions enacted with respect to each set of i-dimensional ordinates thus dictating the tomographically estimated PDF value at the corresponding coordinate constructed by their amalgamation. Since each subtraction removes a constant quantity, $\propto$ (probability density $\times$ number of ordinates), from each PDF, the corresponding per-coordinate increment in the proposed composite PDF at each iteration increases in the geometric sequence $2^1, 2^2, 2^3, \ldots$. Hence the actual value of the composite PDF proposed by tomographic fusion at a particular i-dimensional coordinate is thus:
$$P_{\mathrm{tom.}}(\vec a) = {}^1\!A_i + \left(\tfrac{1}{2}\right)^{i-1}\left({}^2\!A_i - {}^1\!A_i\right) + \sum_{m=1}^{\bar a - 2}\left(\tfrac{1}{2}\right)^{i-m}\left({}^1\!A_{i-m+1} - {}^2\!A_{i-m+1} + {}^2\!A_{i-m} - {}^1\!A_{i-m}\right) \qquad (69)$$

where $\vec a$ is the coordinate $(a_1, a_2, a_3, a_4 \ldots a_i)$ ($a_x$ may take the values 1 or 2, denoting the minimum and maximum ordinate values, respectively). The term $\bar a$ refers to the minimum x value for which $a_x = 1$. (We also specify that ${}^0\!A = 0$ for consistency.)

Thus, we see that the predicted PDF value is governed by a cumulation of the disparities between ordinal projections, rather than simply by those particular ordinates intersecting the point under consideration, as is the case for the sum and product rules (and indeed any other linear combination technique). In this way, even under the simplified scenario dealt with here, the tomographic technique involves all of the information contained within the ordinal projections (as the classifier PDFs) to generate the predicted prior probability density value.

The particular quantity that will be of interest to us in establishing the classification performance of the tomographic combination method is the PDF of the predicted composite probability density value at a particular i-dimensional coordinate in relation to a given composite probability value. That is, since we are seeking to establish a morphology-independent classifier combination performance estimate, we shall derive a predicted composite feature-space PDF value distribution function from the ensemble of all possible prior composite PDF morphologies within the terms set out above.

The first step in this process is to establish the ensemble average PDFs of the individual classifiers' projected ordinate values (${}^x\!A_{1,2}$), in relation to a particular fixed prior PDF value, X, occurring at the coordinate $(a_1 = 1, a_2 = 1, a_3 = 1 \ldots a_i = 1)$ (that is, fixed relative to the ensemble averaging over all prior composite PDFs consistent with this condition). Note that we are now in the original coordinate system, so the superscript numeral has no bearing on the relative value of A. The value of the prior density function at each i-dimensional coordinate (excluding $(a_1 = 1, a_2 = 1, a_3 = 1 \ldots a_i = 1)$) is thus permitted to take, independently, a uniformly distributed random value in the interval [0, 1]. (We need not consider the issue of normalization at this stage.)

Once this quantity has been established, the resulting formulation will then permit a calculation of the disparity values, the predicted composite PDF value being constructed, as indicated, by a series of iterations whose total number is governed by the index number of the first positive disparity value (of the i total); that is, the first pair of feature ordinate values, ${}^x\!A_{1,2}$, for
which the unconstrained ordinate value ${}^x\!A_2$ is greater than the constrained one, ${}^x\!A_1$. The degree to which the probability distribution of ordinal disparities is constrained by the actual value of the point under consideration depends, primarily, on the dimensionality i of the problem, a point we can elucidate by commencing with the calculation of the distribution of those ordinate projection values, ${}^x\!A_2$, that do not intersect the point under consideration (and are therefore not in any way constrained by it, given the randomness inherent in the PDF specification). This quantity is derived via convolution of the PDFs of the independent histogram density parameters comprising the composite feature-space PDF, ${}^x\!A_2$ being essentially a sum over independent random variates:

$$P({}^x\!A_2)\,d\,{}^x\!A_2 = \overbrace{u \ast u \ast \cdots \ast u}^{2^{i-1}\ \mathrm{convolutions}}\,d\,{}^x\!A_2 \equiv (u\ast)^{2^{i-1}} \qquad (70)$$

(with the latter term adopted as a convention throughout; u is the probability density of the uniformly distributed random variate with limits [0, 1]). That is, the distribution of ${}^x\!A_2$ approaches the Gaussian form in the limit $i = \infty$ via the central limit theorem. Eq. (70) may be written without explicit convolution as:

$$P_i({}^x\!A_2) = \frac{1}{2(i-1)!}\sum_{k=0}^{i}(-1)^k \binom{i}{k}\,({}^x\!A_2 - k)^{i-1}\,\mathrm{sgn}({}^x\!A_2 - k) \qquad (71)$$

via the characteristic function method. Conversely, those ordinate projection values that do intersect the point under consideration (being therefore partially constrained by it), ${}^m\!A_1$, are distributed thus:

$$P({}^m\!A_1 - X)\,d\,{}^m\!A_1 = \overbrace{u \ast u \ast \cdots \ast u}^{2^{i-1}-1\ \mathrm{convolutions}}\,d\,{}^m\!A_1 \equiv (u\ast)^{2^{i-1}-1} \qquad (72)$$

where the constraining factor that the point $(a_1 = 1, a_2 = 1, a_3 = 1 \ldots a_i = 1)$ must equate to the value X acts to displace the distribution (minus one of the convolutions) by that same value (a point that may be readily confirmed by setting one of the u in Eq. [70] to a delta function centered on X). The probability that any given feature, j, has a disparity $\Delta_j = {}^j\!A_2 - {}^j\!A_1$ between ordinate projections is hence:

$$P(\Delta_j|X)\,d\Delta_j = \int_0^{\Delta}(u\ast)^{2^{i-1}-1}({}^j\!A_1 - X)\,(u\ast)^{2^{i-1}}({}^j\!A_1 + \Delta_j)\,d({}^j\!A_1)\,d\Delta_j \qquad (73)$$

(with a negative value indicating ${}^j\!A_1 > {}^j\!A_2$).
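The self-convolved uniform densities of Eqs. (70) and (71) are straightforward to verify numerically; the sketch below (ours) implements the closed form with the convolution count as an explicit parameter m and checks it against direct sampling:

import numpy as np
from math import comb, factorial

def uniform_convolution_pdf(x, m):
    # Density of the sum of m independent U[0, 1] variates, i.e., the
    # m-fold self-convolution (u*)^m in the notation of Eq. (70).
    return sum((-1) ** k * comb(m, k) * (x - k) ** (m - 1) * np.sign(x - k)
               for k in range(m + 1)) / (2 * factorial(m - 1))

# Monte Carlo check for i = 3, where Eq. (70) prescribes 2^(i-1) = 4
# convolutions for the unconstrained ordinate projections.
rng = np.random.default_rng(1)
m = 4
samples = rng.uniform(size=(100_000, m)).sum(axis=1)
hist, edges = np.histogram(samples, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
assert np.allclose(hist, uniform_convolution_pdf(centers, m), atol=0.05)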
Recognizing Eq. (73) as essentially a convolution, with one of the functions having undergone the ordinate inversion ($A \to -A$), giving a total of $2^i - 1$ convolutions of the uniform distribution, we may rewrite Eq. (73) as:

$$P(\Delta_j|X)\,d\Delta_j = (u\ast)^{2^i - 1}(\Delta - X - 2^{i-1})\,d\Delta \qquad (74)$$

(the $2^{i-1}$ term recentering the distribution to account for the [now implicit] ordinate inversion). Critically, while the individual sets of A values for each of the ordinal projections are not independent of each other, their differentials $\Delta$ are (perturbations of any particular $\Delta$ value affect the ${}^x\!A_2$ and ${}^x\!A_1$ values of the other ordinates symmetrically). Hence, we can derive the probability distribution of the predicted tomographic combination rule X values ($P_{\mathrm{tom.}}$) by considering the various $\Delta_j$ values independently, and making the appropriate $\Delta$-for-A substitutions in Eq. (69).

This comes about in the following way. In the original coordinate system, the value $\bar a$ (the minimum x value for which $a_x = 1$) becomes instead i minus the largest x for which the corresponding $\Delta_x$ value is less than zero (that is, for which ${}^x\!A_1 > {}^x\!A_2$), and the summation is hence over those $\Delta$ magnitudes that are greater than $|\Delta_x|$. The revised format of Eq. (69) is therefore:

$$P_{\mathrm{tom.}}(a_1, a_2, \ldots a_i) = A_{\min} + \left(\tfrac{1}{2}\right)^{i-1}|\Delta_i| - \left(\tfrac{1}{2}\right)^{i-\bar a}|\Delta_{i-\bar a}| + \sum_{m=1}^{\bar a - 1}\left(\tfrac{1}{2}\right)^{i-m}|\Delta_{i-m}| \qquad (75)$$

(for $\bar a > 2$; otherwise we would be required to remove the second and third terms as appropriate). Thus, for a given $\bar a$, the probability of a particular predicted prior PDF value, $P_{\mathrm{tom.}}$, occurring at the point $(1, 1, \ldots)$ with respect to an actual underlying value, X, is:

$$P(P_{\mathrm{tom.}}|\bar a, X) = P(A_{\min} 2^{i-1}) \ast P(|\Delta_i| 2^{i-1}) \ast P(|\Delta_{i-\bar a}| 2^{i-\bar a}) \ast P(|\Delta_{i-1}| 2^{i-1}) \ast P(|\Delta_{i-2}| 2^{i-2}) \ast \cdots \ast P(|\Delta_{i-\bar a+1}| 2^{i-\bar a+1}) \qquad (76)$$
ð76Þ
The probability that a particular D value is the most positive of the negative D values (i.e., that D ¼ Da¯ ) is given by: PðD ¼ Da¯ j¯aÞ ¼ iC1 ½PðD > Da¯ & D < 0Þ½1 PðD > Da¯ & D < 0Þi1 ð77Þ
254
WINDRIDGE
¼ i C1
Z
Da¯
Z PðDÞdD 1
0
Da¯
i1 PðDÞdD
ð78Þ
0
(the probability with which the number of terms, a¯ , is distributed being:) Z 1 i Ca¯ ½PðD < Da¯ & D > 0Þa¯ Pð¯ajDa¯ Þ ¼ ð79Þ 0 ½1 PðD < Da¯ & D > 0Þi¯a PðDÞdD ¼
Z
Z
1 i
Ca¯
a¯
Da¯
PðDÞdD 0
0
1
Z
i¯a
Da¯
PðDÞdD
PðDÞdD: ð80Þ
0
Given, then, that there are $\bar a$ terms, the probability distribution of those $\Delta$ terms that do form the summation (that is, the $\Delta_k$ such that $1 < k < \bar a$) is:

$$P(\Delta_k|\Delta_{\bar a}, \bar a) = {}^{\bar a}C_k\,[P(\Delta < \Delta_k)]^{k-1}\,P(\Delta = \Delta_k)\,[P(\Delta > \Delta_k\ \&\ \Delta < \Delta_{\bar a})]^{\bar a - 1}\,[P(\Delta > 0)]^{i-\bar a}. \qquad (81)$$

Thus, substituting the above into Eq. (76) and eliminating $\bar a$ by summing over every possibility, we eventually obtain the sought quantity, $P_{\mathrm{tom.}}$: the probability distribution of the predicted composite probability density value at $(1, 1, 1 \ldots)$ under tomographic combination with respect to the true value X:

$$P(P_{\mathrm{tom.}}|X) = \sum_{\bar a = 1}^{i} \int_{-1}^{1} \left[\prod_{k=1}^{k=\bar a} P(\Delta_k|\Delta_{\bar a}, \bar a)\right] P(\bar a|\Delta_{\bar a})\,P(\Delta_{\bar a})\,d\Delta_{\bar a}\; P(P_{\mathrm{tom.}}|\bar a, X) \qquad (82)$$
(the outstanding term, $A_{\min}$, in the above being eliminated by summing over the two possibilities $A_{\min} = {}^x\!A_1$ and $A_{\min} = {}^x\!A_2$). For our purposes, it will be sufficient to carry out this integration numerically.

Given that the tomographic method makes optimal use of the morphological information contained in the classifiers constituting the combination, the variance of this distribution then gives us some indication of the absolute loss of composite PDF descriptivity that occurs following feature selection with respect to increasing dimensionality (since we have averaged over a full set of randomized morphologies). However, it is the loss of classification information with which we are most concerned. The average misclassification rate with respect to the full gamut of i-dimensional PDF morphologies under the tomographic scheme is, then, given as the integral:

$$\int_{X_1=0}^{X_1=1}\int_{X_2=X_1}^{X_2=1}\int_{P^2_{\mathrm{tom.}}=0}^{P^2_{\mathrm{tom.}}=1}\int_{P^1_{\mathrm{tom.}}=P^2_{\mathrm{tom.}}}^{P^1_{\mathrm{tom.}}=1} X_2\,P(P^1_{\mathrm{tom.}}|X_1)\,P(P^2_{\mathrm{tom.}}|X_2)\,dP^1_{\mathrm{tom.}}\,dP^2_{\mathrm{tom.}}\,dX_1\,dX_2$$
$$+\; \int_{X_2=0}^{X_2=1}\int_{X_1=X_2}^{X_1=1}\int_{P^1_{\mathrm{tom.}}=0}^{P^1_{\mathrm{tom.}}=1}\int_{P^2_{\mathrm{tom.}}=P^1_{\mathrm{tom.}}}^{P^2_{\mathrm{tom.}}=1} X_1\,P(P^1_{\mathrm{tom.}}|X_1)\,P(P^2_{\mathrm{tom.}}|X_2)\,dP^1_{\mathrm{tom.}}\,dP^2_{\mathrm{tom.}}\,dX_1\,dX_2 \qquad (83)$$

where the subscripts/superscripts 1 and 2 indicate class labels. That is, we implicitly sum over the two sets of possibilities for which class misattribution errors occur: $\{P^1_{\mathrm{tom.}} > P^2_{\mathrm{tom.}}$ when $X_2 > X_1\}$ and $\{P^2_{\mathrm{tom.}} > P^1_{\mathrm{tom.}}$ when $X_1 > X_2\}$. A numerically computed graph of the outcome of this equation for a range of dimensionalities is shown in Figure 26.

Figure 26. Classification rate versus dimensionality for the model data.

D. Sum Rule Model Solution

Turning now to the equivalent formulation for the sum rule combination scheme, the predicted value of the composite feature-space probability density at point $(1, 1, 1 \ldots)$ for a randomized morphology, $(P_{\mathrm{sum}}|X)$, is given, for an underlying value X, by the formula:
$$P_{\mathrm{sum}}|X = \sum_{m=0}^{m=i} {}^m\!A_1 \Big/ \left(i\,2^{i-1}\right) \qquad (84)$$
(we here introduce a normalization $(i\,2^{i-1})$ for consistency with the stochastic approach above). The calculation of the way in which this quantity is distributed is complicated by the fact that many of the terms implicit in the individual summation, ${}^x\!A_1$, are also implicit in a number of the other summations (specifically, at the various intersections of the hyperplanes represented by the ${}^x\!A_1$). However, by explicitly acknowledging that each of the constituent hyperplanes essentially constitutes a sum over all of the points of the composite posterior PDF having coordinates with consecutive ordinates held at unity, we can isolate the various independent coordinate values in multiples:

$$P(P_{\mathrm{sum}}|\tilde X) = \sum_{k=0}^{k=i}\left[k \sum_{m=1}^{m={}^iC_k} X_m\right] \qquad (85)$$

(where the $X_m$ are independently selected from the distribution u). The summation over every $X_m$ for a particular k thus represents the set of coordinates having equal numbers, k, of ordinals of value 1. When the ensemble average is sought over every possible randomized morphological permutation of the composite prior pattern-space PDF in the previous manner, the predicted prior PDF value at $(1, 1, 1 \ldots)$ is thus distributed as:

$$P(H|\tilde X) = \left(1\,(u\ast)^{{}^iC_1}\right) \ast \left(2\,(u\ast)^{{}^iC_2}\right) \ast \left(3\,(u\ast)^{{}^iC_3}\right) \ast \cdots \ast \left(i\,(u\ast)^{{}^iC_i}\right). \qquad (86)$$
The calculation of the misclassification rate is then achieved as before [Eq. (83)], via an integration over every probability for which the predicted value of class 1 at $(1, 1, 1 \ldots)$, $P^1_{\mathrm{sum}}$, is of the opposing magnitude to the equivalent point of class 2, $P^2_{\mathrm{sum}}$, in relation to the actual value disparity. (See Figure 26 for a numerical calculation of this quantity against dimensionality.)

E. Product Rule Model Solution

The calculation of the misclassification rate with respect to randomized morphology for the product rule is considerably more involved than in the previous cases, as a consequence of the proliferation of terms with mixed products of higher variate powers as dimensionality increases. As such, the misclassification rate versus dimensionality calculation may, very possibly, not be generally formalizable except on a case-by-case basis. A partial mathematical
treatment may, however, be encompassed by approximation, that is, by explicitly assuming the independence of the summed terms in each ordinal projection from their counterparts in the remaining ordinal projections. The probability density distribution of each ordinal is then as derived previously:

$$P({}^x\!A_2)\,d\,{}^x\!A_2 = (u\ast)^{2^{i-1}}. \qquad (87)$$

However, it is the multiplicative value density of these terms with which we are primarily interested:

$$P(P_{\mathrm{prod.}}|X) = P({}^1\!A_1 \cdot {}^2\!A_1 \cdots {}^z\!A_1 \,|\, X) \qquad (88)$$

(overlooking normalization considerations). Hence, we need to apply a logarithmic substitution to render the distribution tractable as a convolution of random variates:

$$P(\log(P_{\mathrm{prod.}})) = P\big(\log({}^1\!A_1) + \log({}^2\!A_1) + \cdots + \log({}^z\!A_1)\big) \qquad (89)$$

$$\Rightarrow\; P(\log(P_{\mathrm{prod.}})) = P(\log({}^1\!A_1)) \ast P(\log({}^2\!A_1)) \ast \cdots \ast P(\log({}^z\!A_1)). \qquad (90)$$
The distribution resulting from this substitution is thus approximately lognormal (increasingly so as dimensionality increases, via the multiplicative central limit theorem). Monte Carlo simulation for the lower end of the dimensional range tends to confirm the accuracy of the adopted approximation.

The performance results in terms of the correct class attribution rate for the product rule are given in Figure 26 via the formulation of Eq. (83), which, along with the previous results, thus serves as our baseline performance, ''noise response'' model over the dimensional range; direct comparison with the results for the real-world geological survey data given in Figure 27 is thus invited.

We reemphasize in passing that the vertical ordinate of Figure 27 represents the ensemble average error rate: the error bars thus refer to the unbiased estimate of the standard deviation of the mean of the error rate (a figure which would otherwise be partially dictated by the number of samples, $= {}^{26}C_i$, contributing to each test point of the dimensional range). This consideration, however, does not supersede the fact of an inevitably incremental correlation among the individual samples as the dimensionality increases (in consequence of a greater degree of overlapping among the feature subsets), manifesting itself as a decreasing sample variance with increasing i (without, in principle, affecting the mean to any great degree). Hence we opt to terminate the sequence at a figure significantly smaller than the total dimensionality to mitigate the impact of this effect. We also note that in the wider interpretation of Figures 26 and 27, the horizontal graph axis could be equivalently labeled ''classifier number''
rather than ''composite feature space dimensionality,'' the results being intended to be at least indicative of the more general fusion scenario for which classifiers are not necessarily limited to representing single features (via the argument of Section V.A).

Figure 27. Misclassification rate versus dimensionality for the real-world data (note linearization of vertical scale).
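The Monte Carlo confirmation mentioned above can be indicated in outline. The sketch below (ours; all names are assumptions, and normalization is ignored, as in the text) samples randomized composite PDFs and records the sum and product rule predictions at the corner (1, 1, ..., 1) against the true cell value X:

import numpy as np

rng = np.random.default_rng(2)

def model_corner_predictions(i, trials=10_000):
    # Randomized 2**i-cell composite PDFs, per Sections V.C through V.E.
    X, sums, prods = [], [], []
    for _ in range(trials):
        pdf = rng.uniform(size=(2,) * i)     # randomized morphology
        X.append(pdf[(0,) * i])              # true corner value
        # Each classifier j sees only the projection onto its own axis.
        margins = [pdf.sum(axis=tuple(k for k in range(i) if k != j))
                   for j in range(i)]
        corner = [m[0] for m in margins]     # ordinates through the corner
        sums.append(sum(corner))             # sum rule prediction
        prods.append(float(np.prod(corner))) # product rule prediction
    return np.array(X), np.array(sums), np.array(prods)

# The empirical distributions of sums and prods conditioned on X estimate
# P(P_sum | X) and P(P_prod | X) for comparison with Figure 26.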
every classifier ordinal, rather than just those constituting the implicitly reconstructed coordinate), is able to recover a greater extent of the composite pattern space’s PDF lost during the feature selection process than can existing methods (as represented by the sum and product rules via the argument of Kittler et al., 1998). The diVerences between the reconstructive abilities of the methods are encapsulated in the distributions P(Ptom.|X ), P(Psum|X ), and P(Pprod.|X ), describing the deviation from the prior composite probability density value at the implicitly reconstructed pattern space coordinate. That the performance results of the three methods are not especially diVerent relative to the simulated Bayes error rate (which descends logarithmically) is thus speculated to be a consequence of the fact that the variances of these distributions are of similar orders of magnitude for constant dimensionalities, being governed chiefly by the number of self-convolutions of the uniform function, with higher numbers thus progressively approximating the Gaussian form (or log-Gaussian form, in the case of the product rule). This ‘‘convolution number’’ being the same for the sum, product, and tomographic combination methods means that the diVerence between the techniques, the ‘‘methodological signature’’ as it were, is manifested solely by the diVerences in the shape of the distribution functions. Thus, in retrospect, we can appreciate that any combination method postulating a composite PDF solution consistent with the ordinal projection values (as the sum, product, and tomographic methods all do) will prove to give very similar classification accuracies when the class morphologies are randomized over all ensembles. It is, as stated earlier, only when correlated classifier morphologies can be related to each other that advantages of tomographic combination come significantly to light. Turning then to the real-world performance tests, for which we postulated that correlated morphology among the classifiers is the rule rather that the exception, the equivalent results for the rock-strata data (Figure 27) would seem to indicate that the ability of the tomographic combiner to correlate morphology between the i discrete classifiers is much more in evidence, with a clear performance advantage over the sum and product rules developing with increasing dimensionality. In terms of the point-by-point relationship between the three combining methodologies, it would appear that the tomographic method more closely mimics the performance of the product rule than the sum rule, despite its origins in the latter technique. We hypothesize that this is a consequence of actual independence in the original PDFs being recovered by the tomographic method (which is feasible, given that, on inspection, the prior PDFs have an approximately similar morphology to the Gaussian distribution of uniform covariance). It should be noted, however, that the tomographic
estimation-error graph more closely parallels that of the sum rule than the product rule (as we would conceivably expect, given the results of baseline performance measure tests of Figure 26). Thus, in a sense, Figure 26 and its attendant mathematical derivations can be considered to additionally serve as an indicator of the isolated eVects of estimation error on the respective tomographic, sum, and product combination rules (the estimation error plots in Figure 27 then correspondingly being seen as contextual indicators of the eVects of estimation error). That is, the point-by-point randomness of the prior PDFs in the model solution gives rise to a noise function at each of the classifier ordinates (the sum of the independent random variables being binomially distributed) that behaves similarly to the simulated estimation error of the second investigation, albeit without the context of the real-world classifier PDFs. G. Conclusions to Dimensionality Tests In terms of the advocation of a general combination strategy for an unfamiliar classification problem on the basis of the tests we have conducted, it would seem that the tomographic method is the indicated approach, both in terms of its reconstructive ability and its estimation-error resilience (for which the method approaches the performance of the sum rule, without that technique’s reconstructive deficiencies). In particular, these advantages would appear to scale favorably with the number of classifiers constituting the combination. It must, however, be clearly understood that the scatter of data points in Figure 27 is such that it is not possible to guarantee in all cases (or even much more than half of the cases for the lower dimensionalities) that the tomographic method is optimal (it being always possible to consider composite pattern space PDF morphologies that favor either of the alternative strategies). Our argument, we emphasize, is with respect to arbitrary underlying PDF morphologies,6 for which the presence of back-projection artifacts implied by conventional linear combination methods (the gamut of which the sum and product rules are deemed to collectively encompass) are taken to be generally unrepresentative. It is interesting to note, however, on the evidence of Figure 27, that in the real-world scenario, despite the presumed presence of these artifacts, the product rule would appear to be significantly better at composite PDF morphology recovery than the sum rule. This is presumably a consequence of the fact that the reconstruction artifacts are suppressed (but, note, not fully removed) via repeated 6
This advantage, however, is generally offset by the multiplicative cumulation of estimation-error effects for all but ideal classifiers.

We have thus provided performance statistics to complement the earlier theoretical assertion that tomographic combination recovers the greatest degree of the composite pattern space PDF morphology lost during feature selection (the precise quantity of recoverable information being indicated by the disparity between the Bayesian and tomographic error rates in Figures 26 and 27, for the artificial and real situations, respectively). Moreover, we have demonstrated that the tomographic method, as well as having the best underlying performance rate, has an error resilience similar to that of the sum rule combination methodology, thereby uniting the best of the two aspects of combination through which classification performance is improved, the morphologically reconstructive and the error negating; these two aspects were previously partially, though separately, represented within the product and sum rules, respectively (the two baseline rules are sketched below for reference).
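For reference, the sum and product rules discussed throughout (following Kittler et al., 1998) reduce, in the equal-prior case, to averaging and multiplying the classifiers' posterior estimates. The sketch below is our own minimal Python/NumPy rendering, with the array p of example posteriors purely hypothetical; it exhibits the product rule's "veto" behavior, whereby a single near-zero estimate dominates the combination — the mechanism behind the multiplicative cumulation of estimation error noted above.

    import numpy as np

    def sum_rule(posteriors):
        # posteriors: (n_classifiers, n_classes) array of posterior estimates.
        return int(np.argmax(posteriors.mean(axis=0)))

    def product_rule(posteriors):
        # A single near-zero estimate can veto a class outright, which is why
        # estimation error cumulates multiplicatively under this rule.
        return int(np.argmax(posteriors.prod(axis=0)))

    # Three hypothetical classifiers, two classes: the third classifier's
    # near-zero estimate for class 0 vetoes it under the product rule only.
    p = np.array([[0.90, 0.10],
                  [0.80, 0.20],
                  [0.01, 0.99]])
    print(sum_rule(p), product_rule(p))   # prints: 0 1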
VI. Morphology-Centered Classifier Combination: Retrospect, Prospect

In this article, we have set out an analogy in which the range of classifier combination strategies represents, insofar as the feature sets are distinct, the incomplete tomographic reconstruction of the combined pattern space probability density function from the Radon transform data presented by the feature selection process. After accommodating the specific issues arising from the higher dimensionality and lower angular sample rates of the Radon transforms within this regime, the metaphor immediately indicated a methodology for performance optimization through the application of the prefiltering convolution of Eq. (7) to the classifier PDFs prior to back-projection. We have thus achieved an optimization of classifier combination that proceeds from a priori grounds, rather than the more usual approach of optimizing from within a preexisting combination strategy selected on contingent or heuristic grounds (see, for example, Intrator and Cohen, 2002; Breiman, 1996; Drucker et al., 1994; Bruzzone et al., 2002; and Dietterich and Bakiri, 1995).

Our assertion of the morphological optimality of our method then centers on it being a full completion of the partial tomographic reconstruction process implicit in all conventional methods of classifier combination, the only other considerations that we need to address in this regard being, first, that of the remaining aspect of combination as implicit refinement of the
PDF morphologies and, second, the robustness of the reconstructive procedure in relation to estimation error. The former point is necessarily now addressed at the level of feature selection and hence, within our unifying perspective, may be carried out at an optimal level through having distinguished it from the purely tomographic aspects of classical combination. The latter concern, the robustness of the procedure to estimation error, is argued to be of the order of that of the sum rule, the previously optimal procedure in this regard; and although exact calculation was omitted owing to the dependence of the filtering procedure on the nature of the input PDFs, we found evidence to support this claim in the practical and model-theoretical experiments of Section V.

Another major area of concern, set out in detail in Section III.B, was to significantly reduce the computation time involved in tomographic fusion. By reinterpreting the methodology in graphical correlation terms, we were able to achieve a reduction of many orders of magnitude. This is sufficient that the method no longer poses any significant cost obstacle to the implementation of the procedure with current computer technology. The exact basis of this efficiency gain was the appreciation that, when viewed in terms of the constituent PDFs, the three chief computational components within the recursive procedure (the peak-seek, the analysis of the correlation between detected peaks, and the subtraction/registration of those correlated components) need not be performed on an individual basis, potentially reducing the iteration requirement from X^n[X^n[X^(n-1) + X]] computational cycles to X^n computational cycles, with the further possibility of an order-of-magnitude decrease in this figure for point-wise continuous classifiers. This is in addition to gains arising from the requirement that the dz parameter effectively vary throughout the procedure. (The basic, unoptimized Högbom cycle is sketched at the end of this section for reference.)

It was further anticipated that this PDF-centered approach might ultimately lend itself to a future reinterpretation of the optimal methodology for multiple expert fusion without any explicit reference to tomography theory, being rendered instead in the more familiar terminology of probability theory.

Implementing (in Section V) this economized strategy on a set of practical and model scenarios over a range of dimensionalities, we argued that, on the basis of the tests we have conducted, the tomographic method is the indicated general combination strategy for an unfamiliar classification problem, both in terms of its reconstructive ability and its estimation-error resilience (for which the method mimics the performance of the sum rule, without that technique's reconstructive deficiencies). In particular, these advantages would appear to scale favorably with the number of classifiers constituting the combination.
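As a point of reference for the economies described above, the following sketch gives the basic, unoptimized Högbom cycle in two dimensions: locate the residual peak, subtract a gain-scaled copy of the point spread function (in our setting, the axial cross-artifact of unfiltered back-projection) centered at that peak, and accumulate the subtracted component. This is our own schematic Python/NumPy rendering; the gain, iteration, and threshold parameters are illustrative, and the per-peak economies and variable dz of the actual procedure (Section III.D) are deliberately omitted.

    import numpy as np

    def hogbom_clean(dirty, psf, gain=0.1, n_iter=500, threshold=1e-3):
        # Schematic Hogbom cycle: find the residual peak, subtract a
        # gain-scaled, peak-centered copy of the point spread function,
        # and accumulate the removed component in a model image.
        # dirty: 2D array to deconvolve; psf: centered, square, odd-sized.
        residual = dirty.copy()
        model = np.zeros_like(dirty)
        half = psf.shape[0] // 2
        for _ in range(n_iter):
            iy, ix = np.unravel_index(np.argmax(residual), residual.shape)
            peak = residual[iy, ix]
            if peak < threshold:        # stopping criterion
                break
            model[iy, ix] += gain * peak
            # Subtract the shifted, scaled PSF, clipping at the image edges.
            y0, y1 = max(0, iy - half), min(residual.shape[0], iy + half + 1)
            x0, x1 = max(0, ix - half), min(residual.shape[1], ix + half + 1)
            residual[y0:y1, x0:x1] -= gain * peak * psf[
                half - (iy - y0): half + (y1 - iy),
                half - (ix - x0): half + (x1 - ix)]
        return model, residual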
However, it should be clearly understood in making this argument that the scatter of data points in Figure 27 is such that it is not possible to guarantee in all cases (or even in much more than half of the cases at the lower dimensionalities) that the tomographic method is optimal, it being always possible to consider composite pattern-space PDF morphologies that favor either of the alternative strategies. Our argument, we emphasize, is with respect to arbitrary underlying PDF morphologies (as distinct from the randomized morphologies of our model solution), for which the back-projection artifacts implied by conventional linear combination methods (the gamut of which the sum and product rules are deemed to collectively encompass) are taken to be generally unrepresentative. It is interesting to note, however, on the evidence of Figure 27, that in the real-world scenario, despite the presumed presence of these artifacts, the product rule would appear to be significantly better at composite PDF morphology recovery than the sum rule. This is presumably a consequence of the fact that the reconstruction artifacts are suppressed (but, note, not fully removed) via repeated multiplication. This advantage, however, is generally offset by the multiplicative cumulation of estimation-error effects for all but ideal classifiers.

A. Outlook

With respect to the prospects for further improving the tomographic combination methodology, one possibility is to note that the modified Högbom method specified in Section III.D.3 inherently regards the rectanguloid hypercube as its deconvolution primitive, and thus constitutes only a partial realization of the potential for applying tomographic filtration to combined classifiers (the central idea of which is the removal of all axial bias from back-projected Radon data). Clearly, while the rectanguloid hypercube primitive serves to remove much of the feature-axial alignment imposed by classifier combination (in particular, the elongated striations depicted in Figure 14), it still exhibits an obvious axial alignment on the local scale. Thus, there is scope for future methodological improvement in introducing more rotationally symmetric primitives (for instance, hyperovoids, which would be capable of reconstructing complete Gaussians).

Another, complementary, approach is to seek to increase the computational performance of the tomographic method, which at present, though considerably economized (Windridge and Kittler, 2003b) and parametrically tunable to a high degree, falls significantly behind the linear sum and product methods. To remedy this situation, it is necessary to use a prefiltration approach.
That is, we should have to apply a filtering convolution to the individual classifier PDFs and combine via the sum rule, imposing a positivity condition on the output (a minimal sketch of this scheme is given below). Such a method, while conjectured to be somewhat less accurate than the current approach, would have the benefit of scaling linearly in operation time with the number of classifiers. Determining exactly what the accuracy deficit might be for such a procedure would be the basis of further empirical and theoretical investigation.

It might also be of interest to consider, alongside an investigation of this type, the suitability of differing base classifiers as candidates for prefiltration. In principle, our method is completely independent of the underlying base classifier type (as argued in Section II.A on a theoretical level, and also demonstrated experimentally in Section IV in relation to neural-net, quadratic, and Gaussian classifiers in various combinations). However, the practical payoff in each case might not justify the effort. Efficient prefiltration prior to tomographic reconstruction requires that there exist a mathematical representation of the base classifier PDFs amenable to convolution (such as could, for instance, be determined even for non-Bayesian classifiers like decision trees by the explicit elaboration of their recursive division of the feature space into hyperplane sections). If such a direct formulation is not straightforwardly available, it would be necessary to construct it artificially via sampling and interpolation, in itself a form of PDF estimation and thus prone to an additional source of estimation error. In some cases, one may therefore be justified in compromising accuracy in the interests of reducing computation time. Equally, one may prefer to handle combination in a manner conceptually congruent with the underlying base classifier paradigm (for instance, using a neural-net combination layer to combine neural nets). However, while morphologically consistent, such approaches (unlike tomographic combination) can never be considered morphologically unbiased.
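A minimal sketch of the conjectured prefiltration scheme, again in Python/NumPy and again our own construction: because the appropriate kernel of Eq. (7) depends on the input PDFs, a generic ramp (|f|) filter is used here purely as a stand-in, and the two-classifier, two-feature setting is the simplest possible case. Each classifier's one-dimensional PDF is filtered once, the filtered marginals are back-projected by summation over the composite space, and the positivity condition is imposed by clamping negative values to zero; the cost is one filtering pass per classifier, hence the linear scaling noted above.

    import numpy as np

    def ramp_filter(pdf_1d):
        # Fourier-domain |f| filter -- a generic stand-in for the
        # PDF-dependent prefiltering kernel of Eq. (7).
        freqs = np.fft.fftfreq(pdf_1d.size)
        return np.real(np.fft.ifft(np.fft.fft(pdf_1d) * np.abs(freqs)))

    def prefiltered_sum_fusion(pdf_x, pdf_y):
        # Back-project the filtered 1D marginals by summation over the 2D
        # composite pattern space, then impose the positivity condition.
        fused = ramp_filter(pdf_x)[:, None] + ramp_filter(pdf_y)[None, :]
        return np.clip(fused, 0.0, None)

    # Hypothetical example: two Gaussian classifier marginals on a common grid.
    t = np.linspace(-3.0, 3.0, 256)
    g1 = np.exp(-(t - 0.5)**2)
    g2 = np.exp(-(t + 0.5)**2)
    print(prefiltered_sum_fusion(g1, g2).shape)   # (256, 256)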
In conclusion, we have undertaken a series of experimental investigations to demonstrate the utility of our theoretical understanding that tomographic combination methodologies have the capability to reconstruct the composite pattern space PDF morphology lost during feature selection in the most morphologically unbiased fashion. Classifiers in this scenario can hence be considered to act as "morphology probes" within the context of the feature selection process, classifier morphology being matched to training data morphology as appropriate throughout the procedure. The method thus exists at a meta-level to both classification and (via the argument of Section II.G) classifier combination in its conventional form, and hence should not be invalidated by future developments in the field, in particular new forms of classification: even if a particular classifier were to arise that is capable of representing the composite pattern space without any reconstructive loss, that classifier would simply be appended to the range of possible morphology descriptors available to the optimal feature selection mechanism of Section II.G. Furthermore, we have demonstrated that the procedure has an error resilience comparable to that of the sum rule combination methodology, thereby combining both of the aspects of combination through which classification performance is improved, the morphologically reconstructive and the error negating.

Acknowledgments

This research was conducted at the University of Surrey, UK, supported by, and within the framework of, EPSRC research grant number GR/M61320. Particular thanks are due to Josef Kittler for his unstinting technical and practical guidance throughout the long gestation and eventual fruition of this project. Personal thanks are due to my wife, Andrea, for her equally unwavering emotional support throughout.

References

Ali, K. M., and Pazzani, M. J. (1995). On the link between error correlation and error reduction in decision tree ensembles. Technical Report 95-38. Irvine, CA: Department of Information and Computer Science, University of California, Irvine.

Breiman, L. (1996). Bagging predictors. Machine Learning 24(2), 123–140.

Briggs, D. S., and Cornwell, T. J. (1992). An alternative interpretation for the physical basis of CLEAN, in Astronomical Data Analysis Software and Systems I, A.S.P. Conference Series, 25, edited by Diana M. Worrall, Chris Biemesderfer, and Jeannette Barnes, p. 170.

Bruzzone, L., Cossu, R., and Vernazza, G. (2002). Combining parametric and nonparametric algorithms for a partially unsupervised classification of multitemporal remote-sensing images. Inform. Fusion 3, 289–297.

Cornwell, T. J., and Evans, K. F. (1985). A simple maximum entropy deconvolution algorithm. Astron. Astrophys. 143, 77–83.

Dietterich, T. G., and Bakiri, G. (1995). Solving multiclass learning problems via error-correcting output codes. J. Artificial Intell. Res. 2, 263–286.

Drucker, H., Cortes, C., Jackel, L. D., Lecun, Y., and Vapnik, V. (1994). Boosting and other ensemble methods. Neural Computation 6(6), 1289–1301.

Freund, Y., and Schapire, R. E. (1996). Experiments with a new boosting algorithm, in Proceedings 13th International Conference on Machine Learning, pp. 148–156.

Ho, T. K. (1998). The random subspace method for constructing decision forests. IEEE Trans. Pattern Analysis Mach. Intell. 20(8), 832–844.

Ho, T. K., Hull, J. J., and Srihari, S. N. (1994). Decision combination in multiple classifier systems. IEEE Trans. Pattern Analysis Mach. Intell. 16(1), 66–75.
Högbom, J. A. (1974). Aperture synthesis with a non-regular distribution of interferometer baselines. Astron. Astrophys. Suppl. Ser. 15, 417–426.

Intrator, N., and Cohen, S. (2002). Automatic model selection in a hybrid perceptron/radial network. Inform. Fusion 3, 259–266.

Jacobs, R. A. (1991). Methods for combining experts' probability assessments. Neural Computation 3, 79–87.

Kanal, L. (1974). Patterns in pattern recognition. IEEE Trans. Inform. Theory IT-20(4), 697–721.

Kittler, J. (1997). Improving recognition rates by classifier combination: A review, in 1st IAPR TC1 Workshop on Statistical Techniques in Pattern Recognition, June 1997, pp. 205–210.

Kittler, J., Hatef, M., Duin, R. P. W., and Matas, J. (1998). On combining classifiers. IEEE Trans. Pattern Analysis Mach. Intell. 20(3), 226–239.

Lam, L., and Suen, C. Y. (1995). Optimal combinations of pattern classifiers. Pattern Recog. Lett. 16(9), 945–954.

Melville, P., Shah, N., Mihalkova, L., and Mooney, R. J. (2004). Experiments on ensembles with missing and noisy data. LNCS 3077, 293–302.

Natterer, F. (1997). Algorithms in tomography, in The State of the Art in Numerical Analysis, edited by I. S. Duff and G. A. Watson. Clarendon Press.

Rahman, A. F. R., and Fairhurst, M. C. (1997). A new hybrid approach in combining multiple experts to recognise handwritten numerals. Pattern Recog. Lett. 18, 781–790.

Rahman, A. F. R., and Fairhurst, M. C. (1998). An evaluation of multi-expert configurations for the recognition of handwritten numerals. Pattern Recog. Lett. 31, 1255–1273.

Windridge, D., and Kittler, J. (2000a). A generalised solution to the problem of multiple expert fusion. Technical Report VSSP-TR-5. Surrey, UK: University of Surrey.

Windridge, D., and Kittler, J. (2000b). Combined classifier optimisation via feature selection. LNCS 1876, 487–495.

Windridge, D., and Kittler, J. (2003a). A morphologically optimal strategy for classifier combination: Multiple expert fusion as a tomographic process. IEEE Trans. Pattern Analysis Mach. Intell. 25(3), 343–353.

Windridge, D., and Kittler, J. (2003b). Economic tomographic classifier fusion: Eliminating redundant Högbom deconvolution cycles in the sum-rule domain. Technical Report VSSP-TR-1. Surrey, UK: University of Surrey.

Woods, K., Kegelmeyer, W. P., and Bowyer, K. (1997). Combination of multiple classifiers using local accuracy estimates. IEEE Trans. Pattern Analysis Mach. Intell. 19, 405–410.

Xu, L., Krzyzak, A., and Suen, C. Y. (1994). Methods of combining multiple classifiers and their application to handwriting recognition. IEEE Trans. Systems, Man, and Cybernetics 22(11), 1539–1549.