ADVANCES IN IMAGING AND ELECTRON PHYSICS VOLUME 120
EDITOR-IN-CHIEF
PETER W. HAWKES CEMES-CNRS Toulouse, France
ASSOCIATE EDITORS
BENJAMIN KAZAN, Xerox Corporation, Palo Alto Research Center, Palo Alto, California

TOM MULVEY, Department of Electronic Engineering and Applied Physics, Aston University, Birmingham, United Kingdom
Advances in Imaging and Electron Physics

EDITED BY
PETER W. HAWKES
CEMES-CNRS, Toulouse, France

VOLUME 120

ACADEMIC PRESS
An Elsevier Science Imprint
San Diego San Francisco New York Boston London Sydney Tokyo
This book is printed on acid-free paper.
Copyright 2002, Elsevier Science (USA) All Rights Reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the Publisher. The appearance of the code at the bottom of the first page of a chapter in this book indicates the Publisher's consent that copies of the chapter may be made for personal or internal use of specific clients. This consent is given on the condition, however, that the copier pay the stated per-copy fee through the Copyright Clearance Center, Inc. (222 Rosewood Drive, Danvers, Massachusetts 01923), for copying beyond that permitted by Sections 107 or 108 of the U.S. Copyright Law. This consent does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating new collective works, or for resale. Copy fees for pre-2002 chapters are as shown on the title pages: If no fee code appears on the title page, the copy fee is the same as for current chapters. 1076-5670/02 $35.00 Explicit permission from Academic Press is not required to reproduce a maximum of two figures or tables from an Academic Press chapter in another scientific or research publication provided that the material has not been credited to another source and that full credit to the Academic Press chapter is given.
Academic Press An Elsevier Science Imprint 525 B Street, Suite 1900, San Diego, California 92101-4495, USA http://www.academicpress.com
Academic Press, Harcourt Place, 32 Jamestown Road, London NW1 7BY, UK

International Standard Serial Number: 1076-5670
International Standard Book Number: 0-12-014762-9

PRINTED IN THE UNITED STATES OF AMERICA
02 03 04 05 06 MB 9 8 7 6 5 4 3 2 1
Dedicated in gratitude to
Peter W. Hawkes
for twenty years of distinguished editorial achievement

Advances in Electronics and Electron Physics, first published in 1948, subsequently combined with Advances in Image Pick-up and Display and Advances in Optical and Electron Microscopy, and ultimately titled Advances in Imaging and Electron Physics, could not possibly have come into more capable hands than those of Peter Hawkes. I met Peter during my very first brief excursion beyond the iron curtain, at the 10th International Congress on Electron Microscopy in Hamburg in 1982, the year he took over the editing of this important serial. My short trip, barely a week in length, cost me more than my year's salary. But what it showed me was that the names in electron microscopy also had faces, and that one of them belonged to Peter, whom I knew from his papers and publications. We spent some time talking about the lectures we had just heard, exploring our common interest in electron optics and our common hobby of electron optical aberrations, for which Peter was the true guru and I just a humble beginner. This helped me to recognize the importance of meeting people and talking to them, something I had hardly known during my country's isolation. Two years later in Budapest, at the EUREM meeting, we were already talking like old friends. Since then we have exchanged many letters and reprints; and, thanks to e-mail, we have also been able to write two joint papers. During various official and unofficial gatherings it was always a great pleasure to meet him, whether for lunch, dinner, or a cup of coffee; he is such excellent company. I have always admired the extent of Peter's knowledge, because his interests are not limited to aberrations in electron optics but extend to image processing, and because he is so conversant with the history of electron optics and microscopy.
The time he spent editing the many volumes of AIEP did not prevent him from publishing more than 100 papers of his own during the past twenty years; the
citations number well over 1000. His most important single contribution has been the book, Principles of Electron Optics, co-authored by E. Kasper and published by Academic Press in 1989, which is an invaluable starting point for any research in the field. He has evaluated numerous PhD theses; those reviews alone, if published, could make a special volume. Such enormous scholarly activity would suggest that he must spend all of his time behind a desk, sorting piles of papers, but nothing could be further from the truth. Peter is a respected leader and organizer whose name appears in the proceedings of countless conferences on electron microscopy and particle optics. He also served as the first President of the European Society of Electron Microscopy and now represents the French EM society in that body. BOHUMILA LENCOVA
I have known Peter from his earliest PhD research at the Cavendish Laboratory, Cambridge, under the supervision of V. E. Cosslett, of happy memory, to the present day. He is an outstanding researcher and a brilliant scientific editor in addition to being an active physicist, a scientific globe trotter and a man of parts. National and scientific barriers do not seem to exist in his mind. His courteous nature, paired with his strong sense of humor, indeed of fun, allows him to remain on friendly terms with those who might disagree with him. Due to his great editorial and diplomatic skills, he has been able to take on a wide range of scientific contributions from all over the world. Peter has undoubtedly raised the standards of electron optical publication worldwide by his careful attention to detail and scientific accuracy, as well as providing electron microscopists with an extraordinarily broad range of electron-optical literature. One of his greatest achievements was to take on Advances in Imaging and Electron Physics Volume 96, "The Growth of Electron Microscopy," to which all members of IFSEM were invited to contribute. Previous attempts by IFSEM to organize the production of such a volume had failed because of the complexities of their membership and the difficulties in many countries in producing a professional text in English. Peter Hawkes suggested that Academic Press could undertake such a task and his offer was accepted. I agreed to act as editor of this volume. It was indeed an enormous task, brilliantly handled by Academic Press, under Peter's constant guidance and his diplomatic manner. His work with the volume did not prevent Peter from his extensive collaboration with E. Kasper in the definitive books on electron optics or in the final "polishing up" of the well-known books of Ludwig Reimer. TOM MULVEY
CONTENTS
CONTRIBUTORS . . . . . . . . . . . . . . . . . . . . . . . . ix
PREFACE . . . . . . . . . . . . . . . . . . . . . . . . . . xi
FUTURE CONTRIBUTIONS . . . . . . . . . . . . . . . . . . . . xiii
A Review of Image Segmentation Techniques Integrating Region and Boundary Information
X. CUFÍ, X. MUÑOZ, J. FREIXENET, AND J. MARTÍ

I. Introduction . . . . . . . . . . . . . . . . . . . . . . 1
II. Embedded Integration . . . . . . . . . . . . . . . . . . 6
III. Postprocessing Integration . . . . . . . . . . . . . . . 17
IV. Summary . . . . . . . . . . . . . . . . . . . . . . . . 31
V. Conclusions and Further Work . . . . . . . . . . . . . . 35
References . . . . . . . . . . . . . . . . . . . . . . . . . 36
Mirror Corrector for Low-Voltage Electron Microscopes
P. HARTEL, D. PREIKSZAS, R. SPEHR, H. MÜLLER, AND H. ROSE

I. Introduction . . . . . . . . . . . . . . . . . . . . . . 42
II. General Considerations . . . . . . . . . . . . . . . . . 44
III. The Spectromicroscope "SMART" . . . . . . . . . . . . . 52
IV. Mechanical Design of the Mirror Corrector . . . . . . . 72
V. Testing of the Mirror Corrector . . . . . . . . . . . . . 84
VI. Conclusion . . . . . . . . . . . . . . . . . . . . . . . 128
Appendix: Addition of Refractive Powers in the Two-Lens System . . . 130
References . . . . . . . . . . . . . . . . . . . . . . . . . 132
Characterization of Texture in Scanning Electron Microscope Images
J. L. LADAGA AND R. D. BONETTO

I. Introduction . . . . . . . . . . . . . . . . . . . . . . 136
II. The Variogram as a Surface Characterization Tool . . . . 136
III. Variogram Use for Texture Characterization of Digital Images . . . 146
IV. Two Examples of Application in SEM Images . . . . . . . 174
V. Conclusions . . . . . . . . . . . . . . . . . . . . . . . 183
Appendix I: Correlation between Fourier Power Spectrum Maximum and Variogram Characteristic Minimum . . . 183
Appendix II: Theoretical Example to Show the Correlation between the Fourier Power Spectrum Maximum and the Variogram Characteristic Minimum . . . 186
References . . . . . . . . . . . . . . . . . . . . . . . . . 189
Degradation Identification and Model Parameter Estimation in Discontinuity-Adaptive Visual Reconstruction
A. TONAZZINI AND L. BEDINI

I. Introduction . . . . . . . . . . . . . . . . . . . . . . 194
II. Fully Bayesian Approach to Unsupervised Blind Restoration . . . 202
III. The MAP-ML Method . . . . . . . . . . . . . . . . . . . 206
IV. MAP Estimation of the Image Field . . . . . . . . . . . 208
V. ML Estimation of the Degradation Parameters . . . . . . . 215
VI. ML Estimation of the Model Parameters . . . . . . . . . 217
VII. The Overall Architecture for the Fully Blind and Unsupervised Restoration . . . 227
VIII. Adaptive Smoothing and Edge Tracking . . . . . . . . . 231
IX. Experimental Results: The Blind Restoration Subcase . . 238
X. Experimental Results: The Unsupervised Restoration Subcase . . . 247
XI. Experimental Results: The Fully Unsupervised Blind Restoration Case . . . 270
XII. Conclusions . . . . . . . . . . . . . . . . . . . . . . 279
References . . . . . . . . . . . . . . . . . . . . . . . . . 280

INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
CONTRIBUTORS
Numbers in parentheses indicate the pages on which the authors' contribution begins.
L. BEDINI (193), Institute for the Elaboration of Information, Area della Ricerca CNR di Pisa, I-56124 Pisa, Italy

R. D. BONETTO (135), Center of Investigation and Development in Catalytic Processes, National Council of Scientific and Technical Investigations, Universidad Nacional de La Plata, 1900 La Plata, Argentina

X. CUFÍ (1), Computer Vision and Robotics Group, Department EIA-IIiA, University of Girona, 17071 Girona, Spain

J. FREIXENET (1), Computer Vision and Robotics Group, Department EIA-IIiA, University of Girona, 17071 Girona, Spain

P. HARTEL (41), Darmstadt University of Technology, Institute of Applied Physics, D-64289 Darmstadt, Germany

J. L. LADAGA (135), Laser Laboratory, Department of Physics, Faculty of Engineering, Universidad de Buenos Aires, 1063 Buenos Aires, Argentina

J. MARTÍ (1), Computer Vision and Robotics Group, Department EIA-IIiA, University of Girona, 17071 Girona, Spain

H. MÜLLER (41), Darmstadt University of Technology, Institute of Applied Physics, D-64289 Darmstadt, Germany

X. MUÑOZ (1), Computer Vision and Robotics Group, Department EIA-IIiA, University of Girona, 17071 Girona, Spain

D. PREIKSZAS (41), Darmstadt University of Technology, Institute of Applied Physics, D-64289 Darmstadt, Germany

H. ROSE (41), Darmstadt University of Technology, Institute of Applied Physics, D-64289 Darmstadt, Germany

R. SPEHR (41), Darmstadt University of Technology, Institute of Applied Physics, D-64289 Darmstadt, Germany

A. TONAZZINI (193), Institute for the Elaboration of Information, Area della Ricerca CNR di Pisa, I-56124 Pisa, Italy
PREFACE
The contributions to this volume form a good balance between imaging and electron physics, with chapters on segmentation, texture analysis of scanning electron microscope images, blind image restoration, and an extremely thorough account of a new aberration corrector for the scanning electron microscope.

We begin with a careful discussion from the Computer Vision and Robotics Group of the University of Girona of the integration of boundary and region information into segmentation techniques. The authors make a clear distinction between embedded integration and postprocessing. This is not usually a high priority, and an analysis of the problems and advantages of such an approach is therefore all the more welcome.

The second, very long contribution describes the ambitious mirror aberration corrector that is currently being developed by a consortium of German organizations with government support, namely, the universities of Darmstadt, Clausthal and Würzburg, the Fritz-Haber Institute in Berlin, and LEO Elektronenmikroskopie. Although correctors have now been developed for high-resolution transmission and scanning transmission electron microscopes, there is no satisfactory corrector for direct imaging in the low-voltage domain. The design described at length here is an attempt to remedy this omission, at the cost of a relatively complex electron path between source, specimen, and detector. Two families of microscopes are considered: low-energy electron microscopes (LEEM), operating at energies below 15 keV, and scanning electron microscopes; if the specimen is illuminated with photons, then the microscope will be a photoemission electron microscope (PEEM). The ultimate aim of the project is to incorporate such a corrector into a combined PEEM-LEEM known as SMART, a "SpectroMicroscope for All Relevant Techniques". The authors take us carefully through all aspects of the instrumentation, discussing both the optics and the mechanical requirements. A particularly interesting section is devoted to testing the device and to troubleshooting, from which the reader can assess the difficulty of putting this design into practice and its chances of joining the growing number of corrected instruments.

The third contribution, by J. L. Ladaga and R. D. Bonetto from Buenos Aires and La Plata, is again concerned with the scanning electron microscope. Interest in digital processing of the SEM image arose soon after the instrument became commercially available, though the first attempts could not of course be fully digital, and SEM image processing has now reached a high degree of sophistication; some such tools are routinely supplied with these instruments. Here the theme is texture characterization, and the preferred tool is the variogram, from which the fractal dimension can be deduced. The authors present the basic ideas and their own contributions to this approach very clearly and conclude with several examples showing the power of the technique.

We conclude with a very complete account by A. Tonazzini and L. Bedini from Pisa of ways of identifying image degradation and of restoring degraded images based on blind restoration. The authors introduce us to the Bayesian approach to restoration and explain in great detail how fully blind and unsupervised restoration can be achieved. The authors' own contribution is fully described and placed in the context of other attempts to solve the difficulties that arise. This account is on the scale of a short monograph and brings out very clearly the practical merits of their methods.

It only remains for me to thank all the authors for the trouble they have taken to make their surveys complete and accessible to non-specialists. As usual, I conclude with a list of forthcoming articles, many of which will be published in the course of 2002.
FUTURE CONTRIBUTIONS
T. Aach
Lapped transforms

G. Abbate
New developments in liquid-crystal-based photonic devices

S. Ando
Gradient operators and edge and corner detection

A. Arnéodo, N. Decoster, P. Kestener and S. Roux
A wavelet-based method for multifractal image analysis

M. Barnabei and L. Montefusco
Algebraic aspects of signal and image processing

C. Beeli
Structure and microscopy of quasicrystals

I. Bloch
Fuzzy distance measures in image processing

G. Borgefors
Distance transforms

B. L. Breton, D. McMullan and K. C. A. Smith (Eds)
Sir Charles Oatley and the scanning electron microscope

A. Carini, G. L. Sicuranza and E. Mumolo
V-vector algebra and Volterra filters

Y. Cho
Scanning nonlinear dielectric microscopy

E. R. Davies
Mean, median and mode filters

H. Delingette
Surface reconstruction based on simplex meshes

A. Diaspro
Two-photon excitation in microscopy

R. G. Forbes
Liquid metal ion sources
E. Förster and F. N. Chukhovsky
X-ray optics

A. Fox
The critical-voltage effect

L. Frank and I. Müllerová
Scanning low-energy electron microscopy

M. Freeman and G. M. Steeves
Ultrafast scanning tunneling microscopy

A. Garcia
Sampling theory

L. Godo and V. Torra
Aggregation operators

P. W. Hawkes
Electron optics and electron microscopy: conference proceedings and abstracts as source material

M. I. Herrera
The development of electron microscopy in Spain

J. S. Hesthaven
Higher-order accuracy computational methods for time-domain electromagnetics

K. Ishizuka
Contrast transfer and crystal images

I. P. Jones
ALCHEMI

W. S. Kerwin and J. Prince
The kriging update model

B. Kessler
Orthogonal multiwavelets

A. Khursheed (vol. 122)
Add-on lens attachments for the scanning electron microscope

G. Kögel
Positron microscopy

W. Krakow
Sideband imaging
N. Krueger
The application of statistical and deterministic regularities in biological and artificial vision systems

B. Lahme
Karhunen-Loève decomposition

C. L. Matson
Back-propagation through turbid media

P. G. Merli, M. Vittori Antisari and G. Calestani, eds (vol. 123)
Aspects of Electron Microscopy

S. Mikoshiba and F. L. Curzon
Plasma displays

M. A. O'Keefe
Electron image simulation

N. Papamarkos and A. Kesidis
The inverse Hough transform

M. G. A. Paris and G. d'Ariano
Quantum tomography

C. Passow
Geometric methods of treating energy transport phenomena

E. Petajan
HDTV

F. A. Ponce
Nitride semiconductors for high-brightness blue and green light emission

T.-C. Poon
Scanning optical holography

H. de Raedt, K. F. L. Michielsen and J. Th. M. Hosson
Aspects of mathematical morphology

E. Rau
Energy analysers for electron microscopes

H. Rauch
The wave-particle dualism

R. de Ridder
Neural networks in nonlinear image processing

D. Saad, R. Vicente and A. Kabashima
Error-correcting codes
O. Scherzer
Regularization techniques

G. Schmahl
X-ray microscopy

S. Shirai
CRT gun design methods

T. Soma
Focus-deflection systems and their applications

I. Talmon
Study of complex fluids by transmission electron microscopy

M. Tonouchi
Terahertz radiation imaging

N. M. Towghi
lp norm optimal filters

T. Tsutsui and Z. Dechun
Organic electroluminescence, materials and devices

Y. Uchikawa
Electron gun optics

D. van Dyck
Very high resolution electron microscopy

J. S. Walker
Tree-adapted wavelet shrinkage

C. D. Wright and E. W. Hill
Magnetic force microscopy

F. Yang and M. Paindavoine
Pre-filtering for pattern recognition using wavelet transforms and neural networks

M. Yeadon
Instrumentation for surface studies

S. Zaefferer
Computer-aided crystallographic analysis in TEM
ADVANCES IN IMAGING AND ELECTRON PHYSICS, VOL. 120
A Review of Image Segmentation Techniques Integrating Region and Boundary Information

X. CUFÍ, X. MUÑOZ, J. FREIXENET, AND J. MARTÍ
Computer Vision and Robotics Group, EIA-IIiA, University of Girona, 17071 Girona, Spain
I. Introduction . . . . . . . . . . . . . . . . . . . . . . 1
   A. Integration Techniques: Embedded versus Postprocessing . . . 4
   B. Related Work . . . . . . . . . . . . . . . . . . . . . 5
II. Embedded Integration . . . . . . . . . . . . . . . . . . 6
   A. Guidance of Seed Placement . . . . . . . . . . . . . . 7
   B. Control of Growing Criterion . . . . . . . . . . . . . 10
      1. Integration in Split-and-Merge Algorithms . . . . . 10
      2. Integration in Region-Growing Algorithms . . . . . . 12
   C. Fuzzy Logic . . . . . . . . . . . . . . . . . . . . . . 14
III. Postprocessing Integration . . . . . . . . . . . . . . . 17
   A. Oversegmentation . . . . . . . . . . . . . . . . . . . 18
   B. Boundary Refinement . . . . . . . . . . . . . . . . . . 21
      1. Boundary Refinement by Snakes . . . . . . . . . . . 23
   C. Selection . . . . . . . . . . . . . . . . . . . . . . . 27
IV. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 31
   A. Disadvantages of Both Strategies . . . . . . . . . . . 33
V. Conclusions and Further Work . . . . . . . . . . . . . . . 35
References . . . . . . . . . . . . . . . . . . . . . . . . . 36
I. INTRODUCTION
One of the first and most important operations in image analysis and computer vision is segmentation (R. Haralick and R. Shapiro, 1992-1993; Rosenfeld and Kak, 1982). The aim of image segmentation is the domain-independent partition of the image into a set of regions which are visually distinct and uniform with respect to some property, such as gray level, texture, or color. Segmentation can be considered the first step and key issue in object recognition, scene understanding, and image understanding. Its application areas vary from industrial quality control to medicine, robot navigation, geophysical exploration, military applications, and so forth. In all these areas, the quality of the final results depends largely on the quality of the segmentation. The problem of segmentation has been, and still is, an important research field, and many segmentation methods have been proposed in the literature (Fu and Mui, 1981; R. M. Haralick and L. G. Shapiro, 1985; Nevatia, 1986;
Pal and Pal, 1993; Riseman and Arbib, 1977; Zucker, 1977). In general, segmentation methods are based on two basic properties of the pixels in relation to their local neighborhood: discontinuity and similarity. Methods that are based on some discontinuity property of the pixels are called boundary-based methods, whereas methods based on some similarity property are called region-based methods. More specifically,

• The boundary approach uses the postulate that abrupt changes occur with regard to the features of the pixels (e.g., abrupt changes in gray values) at the boundary between two regions. To find these positions, one can choose from two basic approaches: first- and second-order differentiation. In the first case, a gradient mask (Roberts, 1965, and Sobel, 1970, are well-known examples) is convolved with the image to obtain the gradient vector ∇f associated with each pixel. Edges are the places where the magnitude of the gradient vector ‖∇f‖ is a local maximum along the direction of the gradient vector, φ(∇f). For this purpose, the local value of the gradient magnitude must be compared with the values of the gradient estimated along this orientation and at unit distance on either side away from the pixel. After this process of nonmaxima suppression takes place, the values of the gradient vectors that remain are thresholded, and only pixels with a gradient magnitude exceeding the threshold are considered as edge pixels (Petrou and Bosdogianni, 1999). In the second-order derivative class, optimal edges (maxima of gradient magnitude) are found by searching for places where the second derivative is zero. The isotropic generalization of the second derivative to two dimensions is the Laplacian (Prewitt, 1970). However, when gradient operators are applied to an image, the zeros rarely fall exactly on a pixel. It is possible to isolate these zeros by finding zero crossings: places where one pixel is positive and a neighbor is negative (or vice versa). Ideally, edges of images should correspond to boundaries of homogeneous objects and object surfaces.

• The region approach tries to isolate areas of images that are homogeneous according to a given set of characteristics. Candidate areas may be grown, shrunk, merged, split, created, or destroyed during the segmentation process. There are two typical region-based segmentation algorithms: region-growing and split-and-merge algorithms. Region growing (Adams and Bischof, 1994; Zucker, 1976) is one of the simplest and most popular algorithms; it starts by choosing a starting point, or seed pixel. The region then grows by adding neighboring pixels that are similar according to a certain homogeneity criterion, increasing the size of the region step by step. Typical split-and-merge techniques (Chen and Pavlidis, 1980; Fukada, 1980) consist of two basic steps. First, the whole image is considered as one region. If this region does not comply with a homogeneity
criterion, the region is split into four quadrants and each quadrant is tested in the same way until every square region created in this way contains homogeneous pixels. Next, in a second step, all adjacent regions with similar attributes may be merged upon compliance with other criteria. Unfortunately, both techniques, boundary-based and region-based, often fail to produce accurate segmentation, although the locations where they fail are not necessarily identical. On the one hand, in boundary-based methods, if an image is noisy or if its region attributes differ by only a small amount between regions, characteristics very common in natural scenes, edge detection may result in spurious and broken edges. This occurs mainly because such methods rely entirely on the local information available in the image; very few pixels are used to detect the desired features. Edge-linking techniques can be employed to bridge short gaps in such a region boundary, although doing so is generally considered an extremely difficult task. On the other hand, region-based methods always provide closed-contour regions and make use of relatively large neighborhoods in order to obtain sufficient information to allow the algorithm to decide whether to aggregate a pixel into a region. Consequently, the region approach tends to sacrifice resolution and detail in the image to gain a sample large enough for the calculation of useful statistics for local properties. This sacrifice can result in segmentation errors at the boundaries of the regions and in failure to distinguish regions that would be small in comparison with the block size used. Further, in the absence of a priori information, reasonable starting seed points and stopping criteria are often difficult to choose. Finally, both approaches sometimes suffer from a lack of knowledge because they rely on the use of ill-defined hard thresholds that may lead to wrong decisions (Salotti and Garbay, 1992). 
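The first-order boundary approach sketched above (Sobel masks, gradient magnitude, thresholding) can be made concrete with a minimal NumPy example. This is an illustration of ours, not code from the surveyed literature; the function name, the synthetic step image, and the threshold are all illustrative, and non-maxima suppression along φ(∇f) is omitted for brevity.

```python
import numpy as np

def sobel_edges(img, thresh):
    """Convolve the Sobel masks with `img`, compute the gradient
    magnitude (a discrete estimate of the norm of the gradient vector),
    and keep pixels whose magnitude exceeds `thresh`."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):          # skip the one-pixel border
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    mag = np.hypot(gx, gy)             # gradient magnitude
    return mag > thresh                # boolean edge map

# Synthetic step image: dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 10.0
edges = sobel_edges(img, thresh=1.0)   # edge pixels flank the step
```

On this step image the detected edge pixels straddle the intensity jump (two columns wide), which is exactly the localization ambiguity that non-maxima suppression is meant to resolve.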
In the task of segmentation of some complex pictures, such as outdoor and natural images, it is often difficult to obtain satisfactory results by using only one approach to image segmentation. Taking into account the complementary nature of the edge-based and region-based information, it is possible to alleviate the problems related to each of them considered separately. The tendency toward the integration of several techniques seems to be the best way to produce better results. The difficulty in achieving this lies in that even though the two approaches yield complementary information, they involve conflicting and incommensurate objectives. Thus, as observed by Pavlidis and Liow (1990), although integration has long been a desirable goal, achieving it is a nontrivial task. In the 1990s, numerous techniques for integrating region and boundary information were proposed. One of the principal characteristics that permits classification of these approaches is the time of fusion: embedded in the region detection, or after both processes (Falah et al., 1994):
• Embedded integration can be described as integration through the definition of new parameters or a new decision criterion for the region-based segmentation. First, the edge information is extracted, and, second, this information is then used within the segmentation algorithm, which is mainly based on regions. For example, edge information can be used to define the seed points from which regions are grown.

• Postprocessing integration is performed after image processing by using the two approaches (boundary-based and region-based techniques). Edge information and region information are extracted independently in a preliminary step and then integrated.
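The region-growing procedure that embedded schemes build on can be sketched as follows. This is an illustrative sketch of ours (the function name, tolerance, and synthetic image are assumptions, not taken from any surveyed method): a region grows from a seed by aggregating 4-connected neighbors that satisfy a simple homogeneity criterion, here closeness to the running region mean.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol):
    """Grow a region from `seed`, aggregating 4-connected neighbors whose
    gray value lies within `tol` of the current region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    frontier = deque([seed])
    while frontier:
        i, j = frontier.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                if abs(img[ni, nj] - total / count) <= tol:  # homogeneity test
                    mask[ni, nj] = True
                    total += img[ni, nj]
                    count += 1
                    frontier.append((ni, nj))
    return mask

img = np.zeros((8, 8))
img[:, 4:] = 10.0            # two homogeneous halves
# In an embedded scheme the seed would be derived from the edge map
# (e.g., a pixel far from every detected edge); here it is fixed by hand.
region = grow_region(img, seed=(2, 1), tol=2.0)
```

With the seed in the dark half, growth stops at the intensity step, so the recovered region is exactly the left half of the image; an embedded method would automate the seed choice from the edge information.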
Although many surveys on image segmentation have been published, as stated previously, none focuses specifically on the integration of region and boundary information. As a way to overcome this deficiency, this article discusses the most current and most relevant segmentation techniques that integrate region and boundary information. The remainder of this article is structured as follows: A discussion of embedded and postprocessing strategies and the related work concludes the Introduction. Section II defines and classifies the different approaches to embedded integration, whereas Section III analyzes the proposals for the postprocessing strategy. Section IV summarizes the advantages and disadvantages of the various approaches. Finally, the results of our study are summarized in Section V.
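Before turning to the two strategies in detail, a toy example may help fix ideas for the postprocessing case. The measure below is our own illustration (not a method from the surveyed papers): given region labels and an edge map computed independently, it scores how well the region boundaries coincide with the detected edges, so that poorly agreeing segmentations can be flagged for refinement.

```python
import numpy as np

def boundary_edge_agreement(labels, edge_map):
    """Fraction of region-boundary pixels that coincide with independently
    detected edge pixels; a low value flags a segmentation to refine."""
    h, w = labels.shape
    boundary = np.zeros((h, w), dtype=bool)
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # horizontal label changes
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]   # vertical label changes
    n = boundary.sum()
    return float((boundary & edge_map).sum()) / n if n else 1.0

labels = np.zeros((6, 6), dtype=int)
labels[:, 3:] = 1                       # region result: two vertical bands
edge_map = np.zeros((6, 6), dtype=bool)
edge_map[:, 2] = True                   # edge detector fired along the border
score = boundary_edge_agreement(labels, edge_map)
```

Here the two independently computed results agree perfectly (score 1.0); in practice the a posteriori fusion step would use such evidence to modify or refine the initial segmentation.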
A. Integration Techniques: Embedded versus Postprocessing

Many cooperative methods have been developed, all with the common objective of improving the segmentation by using integration. However, the fusion of boundary information and region information has been attempted through many different approaches. The result is a set of techniques that contains very disparate tendencies. As many authors have proposed (Falah et al., 1994; Le Moigne and Tilton, 1995), one of the main characteristics that allows classification of the integration techniques is the time of fusion. This concept refers to the moment during the segmentation process when the integration of the dual sets of information is performed. This property allows us to distinguish two basic groups among the integration proposals: embedded and postprocessing. The techniques based on embedded integration start with the extraction of the edge map. This information is then used in the region-detection algorithm, in which the boundary information is combined with the region information to carry out the segmentation of the image. A basic scheme of this method is indicated in Figure 1a. The additional information contributed by the edge detection can be employed in the definition of new parameters or new decision criteria.
A REVIEW OF INTEGRATED IMAGE SEGMENTATION TECHNIQUES
FIGURE 1. Strategy schemes for region and boundary integration according to the time of fusion: (a) embedded integration; (b) postprocessing integration.
The aim of this integration strategy is to use boundary information as a means of avoiding many of the common problems of region-based techniques. Conversely, the techniques based on postprocessing integration extract edge information and region information independently, as depicted in the scheme of Figure 1b. This preliminary step results in two segmented images obtained by using the classical techniques of both approaches, so they probably have the typical faults generated by the use of a single isolated method. An a posteriori fusion process then tries to exploit the dual information in order to modify, or refine, the initial segmentation obtained by a single technique. The aim of this strategy is the improvement of the initial results and the production of a more accurate segmentation. In the following sections (Sections II and III), we describe several key approaches that we have classified as embedded or postprocessing. Within the embedded methods we differentiate between those that use boundary information for seed-placement purposes and those that use this information to establish an appropriate decision criterion. Within the postprocessing methods, we differentiate three approaches: oversegmentation, boundary refinement, and selection evaluation. We discuss each of these approaches in depth and, in some cases, emphasize aspects related to the implementation of the methods (region-growing or split-and-merge) or to the use of fuzzy logic, which has been considered in a number of proposals.
B. Related Work
Brief mention of the integration of region and boundary information for segmentation can be found in the introductory sections of several papers. As a
X. CUFI ET AL.
first reference, Pavlidis and Liow (1990) introduced some earlier papers that emphasized the integration of such information. In 1994 Falah et al. identified two basic strategies for achieving the integration of the dual information, boundaries and regions. The first strategy (postprocessing) is described as the use of edge information to control or refine a region segmentation process. The second strategy (embedded) is to integrate edge detection and region extraction in the same process. The classification proposed by Falah et al. has been adopted by us and is discussed in this article. Taking a different view, Le Moigne and Tilton (1995), considering the general problem of data fusion, identified two levels of fusion: pixel and symbol. In a pixel-level integration between edges and regions, the decision for integration is made individually on each pixel, whereas the symbol-level integration is made on the basis of selected features, which simplifies the problem. In the same paper, these authors discussed embedded and postprocessing strategies and presented important arguments on the supposed superiority of the postprocessing strategy. They argued that the a posteriori fusion yields a more general approach because, for the initial task, it can employ any type of boundary and region segmentation. A different viewpoint regarding the integration of edge and region information for segmentation is the use of dynamic contours (snakes). In this sense, Chan et al. (1996) reviewed several such approaches, pointing out that integration is a way to reduce the limitations of traditional deformable contours.
II. EMBEDDED INTEGRATION
The embedded integration strategy consists of using the edge information, previously extracted, within a region segmentation algorithm. It is well known that in most region-based segmentation algorithms, the manner in which initial regions are formed and the criteria for growing them are set a priori. Hence, the resulting segmentation will inevitably depend on the choice of initial region growth points (Kittler and Illingworth, 1985), whereas the region's shape will depend on the particular growth chosen (Kohler, 1981). Some proposals try to use boundary information in order to avoid these problems. According to the manner in which this information is used, it is possible to distinguish two tendencies:
1. Guidance of seed placement: Edge information is used as a guide to choose the most suitable position to place the seed (or seeds) of the region-growing process.
2. Control of growing criteria: Edge information is included in the definition of the decision criterion which controls the growth of the region.
A. Guidance of Seed Placement
In 1992 Benois and Barba presented a segmentation technique that combined contour detection and a split-and-merge procedure of region growing. In this technique, the boundary information is used to choose the growing centers. More specifically, the original idea of the method is the placement of the seeds on the skeleton of nonclosed regions obtained by edge detection. The technique starts with contour detection and extraction, according to the algorithm proposed in Moulet and Barba (1988), which finds the most evident frontiers of homogeneous regions. The contours obtained as a result of this overall procedure are of high quality, but they are not always closed. Then, a region-growing procedure is used to close these regions and to obtain a more precise segmentation. Hence, in order to obtain a uniformly spread speed of region growing constrained by the original contours, the growing centers should be chosen as far as possible from these contours. To do so, the algorithm chooses them on the skeleton defined by the set of the original contours. The skeleton is computed by the Rosenfeld method of local maxima distance. Finally, the region-growing process is realized in the following steps: a splitting process that divides the initial image into homogeneous rectangular blocks, then a merging process grouping these blocks around growing centers to obtain the final segments. A similar work was proposed by Sinclair (1999), who presented an integrated segmentation algorithm. First, the Voronoi image generated from the edge image is used to derive the placement of the seeds. The intensity at each point in a Voronoi image is the distance to the closest edge. Second, the peaks in the Voronoi image, reflecting the farthest points from the contours, are used as seed points for region growing.
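Sinclair's seed-placement step (a distance-to-edge image whose local maxima become seeds) can be sketched in pure Python. This is a minimal illustration, not Sinclair's implementation: it uses a 4-neighbor breadth-first (city-block) distance in place of the true Euclidean Voronoi image, and a simple 8-neighborhood test for the local maxima.

```python
from collections import deque

def distance_to_edges(edges):
    """Multi-source BFS: city-block distance from each pixel to the
    nearest edge pixel. `edges` is a 2-D list of 0/1 values."""
    h, w = len(edges), len(edges[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if edges[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def pick_seeds(dist):
    """Local maxima of the distance image: the points farthest from
    any contour, used as region-growing seeds."""
    h, w = len(dist), len(dist[0])
    seeds = []
    for y in range(h):
        for x in range(w):
            if dist[y][x] and all(
                    dist[y][x] >= dist[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w):
                seeds.append((y, x))
    return seeds
```

For a vertical edge line, the seeds returned lie along the image borders, as far from the contour as possible.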
In the growth, two criteria are used in order to attach unassigned pixels: the difference in color between the candidate pixel and the boundary member pixel must be less than a set threshold, and the difference in color between the candidate and the mean color of the region must be less than a second, larger threshold. In this sense, these criteria take into account local and global region information for the aggregation of a new pixel to a region. This could be especially interesting for blurred regions. From another integration aspect, edges recovered from the image act as hard barriers through which regions are not allowed to grow. Figure 2 shows the images generated during the segmentation process, including the Voronoi image, which guides the placement of the region-growing centers. Moghaddamzadeh and Bourbakis (1997) proposed an algorithm that uses edge detection to guide initialization of an a posteriori region-growing process. Actually, this work is not specifically oriented to the placement of the seeds for the a posteriori growing process, but is focused on establishing a specific order for the processes of growing. As is well known, one disadvantage of the
FIGURE 2. The Sinclair (1999) approach using the Voronoi image. (a) Original image. (b) Edges extracted from the original color image. (c) Voronoi image computed from the edge image. (d) Final segmentation.
region-growing and merging processes is their inherently sequential nature. Hence, the final segmentation results depend on the order in which regions are grown or merged. The objective of this proposal is to simulate the order by which we humans separate segments from each other in an image; that is, from large to small. To achieve this, an edge-detection technique is applied to the image to separate large and crisp segments from the rest. The threshold of the edge-detection algorithm is fixed low enough to detect even the weakest edge pixels in order to separate regions from each other. Next, the regions obtained (considering a region as an area enclosed by edges) are sequentially expanded, starting from the largest segment and finishing with the smallest. Expanding a segment refers to merging adjacent pixels with the segment, on the basis of some conditions. Two fuzzy techniques are then proposed to expand the large segments and/or to find the smaller ones. Another proposal, which uses the edge information to initialize the seeds of a posteriori region growing, has been presented by Cufí et al. (2000). Like the proposal of Moghaddamzadeh and Bourbakis, Cufí et al.'s proposal takes into account seed placement as well as the order by which the regions start the growing process. However, Moghaddamzadeh and Bourbakis give priority to the largest regions, whereas Cufí et al. prefer a concurrent growing, giving the same opportunities to all regions. The basic scheme of their technique is shown in Figure 3. The technique begins by detecting the main contours of the image following the edge extraction algorithm discussed in Cufí and Casals (1996). For each extracted contour, the algorithm places a set of growing centers at each side and along it.
It is assumed that the whole set of seeds of one side of the contour belong to the same region. Then, these seeds are
FIGURE 3. Scheme of the segmentation technique proposed by Cuff et al. (2000). The method is composed of four basic steps: (1) main contour detection, (2) analysis of the seeds, (3) adjustment of the homogeneity criterion, and (4) concurrent region growing.
used as samples of the corresponding regions and analyzed in the chromatic space in order to establish appropriate criteria for the posterior growing processes. The goal is to know a priori some characteristics of regions with the aim of adjusting the homogeneity criterion to the region's characteristics. Finally, the seeds simultaneously start a concurrent growth using the criterion established for each region, which is based on clustering analysis and convex hull construction.
B. Control of Growing Criterion
Another way to carry out the integration from an embedded strategy is to incorporate the edge information into the growing criterion of a region-based segmentation algorithm. Thus, the edge information is included in the definition of the decision criterion that controls the growth of the region. As discussed in the Introduction, region-growing and split-and-merge algorithms are the typical region-based segmentation algorithms. Although both share the essential concept of homogeneity, they differ in the decisions taken while carrying out the segmentation process. For this reason, and to facilitate the analysis of the surveyed algorithms, we present these two types of approaches in separate subsections.
1. Integration in Split-and-Merge Algorithms Bonnin and his colleagues (1989) proposed a region extraction based on a split-and-merge algorithm controlled by edge detection. The method incorporates boundary information into the homogeneity criterion of the regions to guide the region-detection procedure. The criterion to decide the split of a region takes into account edge and intensity characteristics. More specifically, if there is no edge point on the patch and if the intensity homogeneity constraints are satisfied, the region is stored; otherwise, the patch is divided into four subpatches, and the process is recursively repeated. The homogeneity intensity criterion is necessary because of possible failures of the edge detector. After the split phase, the contours are thinned and chained into edges relative to the boundaries of the initial regions. Later, a final merging process takes into account edge information in order to solve possible oversegmentation problems. In this last step, two adjacent initial regions are merged only if there are no edges on the common boundary. The general structure of the method is depicted in Figure 4, where it can be observed that edge information guides the split-and-merge procedure in both steps of the algorithm: first, to decide the split of a region, and second, in the final merging phase, to solve the possible oversegmentation.
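A minimal sketch of an edge-controlled split criterion in the spirit of Bonnin et al.: a patch is kept only if it contains no edge point and satisfies an intensity-homogeneity constraint; otherwise it is divided into four subpatches, recursively. The quadtree layout, the intensity-range test, and the thresholds are assumptions for illustration, not the authors' exact criterion.

```python
def split_regions(img, edges, x, y, w, h, max_range=20, min_size=1):
    """Recursively split the patch (x, y, w, h) into four subpatches
    unless it is edge-free and intensity-homogeneous."""
    vals = [img[j][i] for j in range(y, y + h) for i in range(x, x + w)]
    has_edge = any(edges[j][i]
                   for j in range(y, y + h) for i in range(x, x + w))
    homogeneous = not has_edge and max(vals) - min(vals) <= max_range
    if homogeneous or w <= min_size or h <= min_size:
        return [(x, y, w, h)]          # store the patch as a region
    hw, hh = w // 2, h // 2            # split into four subpatches
    out = []
    for sx, sy, sw, sh in ((x, y, hw, hh), (x + hw, y, w - hw, hh),
                           (x, y + hh, hw, h - hh),
                           (x + hw, y + hh, w - hw, h - hh)):
        out += split_regions(img, edges, sx, sy, sw, sh,
                             max_range, min_size)
    return out
```

A final merge phase (not sketched here) would then rejoin adjacent patches whose common boundary carries no edge points.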
FIGURE 4. Scheme of the segmentation technique proposed by Bonnin et al. (1989). The edge information guides the split-and-merge procedure in both steps of the algorithm: first, to decide the split of a region, and second, in the final merging phase, to solve the possible oversegmentation.
The split-and-merge algorithm cooperating with an edge extractor was also proposed in the work of Buvry, Zagrouba et al. (1994). The proposed algorithm follows the basic idea introduced by Bonnin, considering the edge segmentation in the step of merging. However, a rule-based system was added to improve the initial segmentation. A scheme of the proposed algorithm is illustrated in Figure 5. These authors argued that the split-and-merge segmentation algorithm creates many horizontal or vertical boundaries without any physical meaning. To solve this problem, the authors defined a rule-based system dealing with this type of boundary. Specifically, the gradient mean of each boundary is used to decide if the boundary has a physical reality.
FIGURE 5. Segmentation technique proposed by Buvry, Zagrouba et al. (1994). Edge information is used to guide the split-and-merge region segmentation. Finally, a set of rules improves the initial segmentation by removing boundaries without corresponding edge information. Prewitt op., Prewitt operator.
In 1997, Buvry, Senard et al. reviewed the work presented in Buvry, Zagrouba et al.'s publication (1994) and proposed a new hierarchical region-detection algorithm for stereovision applications taking into account the gradient image. The method yields a hierarchical coarse-to-fine segmentation in which each region is validated by exploiting the gradient information. At each level of the segmentation process, a threshold is computed and the gradient image is binarized according to this threshold. Each closed area is labeled by applying a classical coloring process and defines a new region. Edge information is also used to determine whether the split process is finished or the next partition must be computed. To do so, a gradient histogram of all pixels belonging to the region is calculated and its characteristics (mean, maximum, and entropy) are analyzed. A proposal for enriching segmentation by an irregular pyramidal structure with edge information can be found in the work of Bertolino and Montanvert (1996). In this algorithm, a graph of adjacent regions is computed and modified according to the edge map obtained from the original image. Each graph edge* is weighted with a pair of values (r, c), which represent, respectively, the number of region elements and contour elements in the common boundary of both regions. Then, the algorithm goes through the graph and at each graph edge decides whether to forbid or favor the fusion between adjacent regions. The use of edge information in a split-and-merge algorithm need not be limited to the decision criterion. In this sense, Gevers and Smeulders presented, in 1997, a technique that extends the possibilities of this integration. Their proposal uses edge information to decide how the partition of the region should be made, or, in other words, where to split the region. The idea is to adjust this decision to boundary information and to split the region following the edges contained in it.
In reference to previous works, these authors argued that although the quad-tree scheme is simple to implement and computationally efficient, its major drawback is that the image tessellation process is unable to adapt the tessellation grid to the underlying structure of the image. For this reason they proposed to employ the incremental Delaunay triangulation, which is capable of forming grid edges of arbitrary orientation and position. The tessellation grid, defined by the Delaunay triangulation, is adjusted to the semantics of the image data. In the splitting phase, if a global similarity criterion is not satisfied, pixels lying on image boundaries are determined by using local difference measures and are used as new vertices to locally refine the tessellation grid.
2. Integration in Region-Growing Algorithms
One of the first integrations of edge information into a region-growing algorithm can be found in the work of Xiaohan et al. (1992), in which edge
*To avoid confusion, we designate a graph edge as an edge that joins two nodes in a graph.
information was included in the decision criterion. A classic region-growing algorithm generally takes into account only the contrast between the current pixel and the region in order to decide whether to merge them. Xiaohan et al. proposed a region-growing technique that includes gradient information in the homogeneity criterion to make this decision. The proposed combination of region growing and gradient information can be expressed in the following formula:
x(i, j) = |X_AV − f(i, j)|
z(i, j) = (1 − φ) x(i, j) + φ G(i, j)        (1)
where X_AV is the average gray value of the region, which is updated pixel by pixel. The contrast of the current pixel with respect to the region is denoted by x(i, j). The parameter φ controls the weight of the gradient G(i, j). Finally, the weighted sum of the local and the global contrast is the final homogeneity measure, z(i, j). Following this expression, the proposed algorithm can be described in only two steps:
Step 1: If z(i, j) is less than a given threshold τ, then the current pixel is merged into the region.
Step 2: Otherwise, the local maximum of the gradients in a small neighborhood of the current pixel is searched along the direction of region growing. The procedure stops at the pixel with the local gradient maximum.
The first step of the algorithm describes the growing of the region guided by the proposed homogeneity criterion. The second tries to avoid the typical error of region-based segmentation techniques, that is, the inaccuracy of the detected boundaries, by bringing the result of the segmentation into coincidence with the edge map. A similar integration proposal was suggested by Falah et al. in 1994. In this work the gradient information is included in the decision criterion to restrict the growth of regions. At each iteration, only pixels having low gradient values (below a certain threshold) are allowed to be aggregated into the growing region. Another interesting aspect of this work is the choice of the seeds for the process of region growing. This selection uses the redundancy between the results obtained by several region segmentations (with different thresholds and different directions of image scanning), with the aim of placing the seeds in positions that have a high degree of certainty of belonging to a homogeneous region. In 1992 Salotti and Garbay developed a theoretical framework for an integrated segmentation system.
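The merging test of Eq. (1), Step 1 of Xiaohan et al.'s procedure, can be written down directly; the values of φ and τ below are arbitrary illustrations, not values from the paper.

```python
def merge_decision(pixel_value, region_mean, gradient, phi=0.3, tau=25.0):
    """Eq. (1): z = (1 - phi) * |X_AV - f(i, j)| + phi * G(i, j).
    The pixel is merged when z falls below the threshold tau."""
    x = abs(region_mean - pixel_value)   # global contrast to the region
    z = (1 - phi) * x + phi * gradient   # weighted by the local gradient
    return z < tau

def updated_mean(mean, count, new_value):
    """X_AV is updated pixel by pixel as the region grows."""
    return (mean * count + new_value) / (count + 1)
```

A pixel close to the region mean and sitting on a weak gradient is merged; a distant pixel on a strong gradient is rejected, which is where Step 2's search for the local gradient maximum takes over.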
The core of the problem with traditional segmentation methods, as noted by these authors, relates to the autarchy of the methods and to the schedule of conditions defined from a priori assumptions. As a way to solve this problem, major directives to control each decision are
presented: to accumulate local information before taking difficult decisions; to use processes exploiting complementary information to cooperate successfully; to defer difficult decisions until more information is available; and, finally, to enable easy context switches to ensure an opportunistic cooperation. The main idea of these directives is that each decision must be strongly controlled. This implies that a massive collaboration must be carried out and that the segmentation task should not necessarily be achieved before the beginning of the high-level process. Finally, all these principles are used in a segmentation system with a region-growing process as the main module. Pixels that seem difficult to classify because there is insufficient information for a sure decision are given to an edge-detection unit that must decide whether or not they correspond to an edge. The same directives were followed in a later work (Bellet et al., 1994), which presents an edge-following technique that uses region-based information to compute adaptive thresholds. In situations when it is difficult to follow the high gradient, complementary information is requested and successfully obtained through the emergence of regions on both sides of the edge. A child edge process is then created with a threshold adapted to lower gradient values. Moreover, these authors introduced the adaptability of the aggregation criterion to the region's characteristics: several types of regions are distinguished and defined. The region-growing method dynamically identifies the type of the analyzed region, and a specific adapted criterion is used.
C. Fuzzy Logic
A current trend in segmentation techniques that deserves special attention is the use of fuzzy logic (Bezdek et al., 1999). The role of fuzzy sets in segmentation techniques is becoming more important (Lambert and Carron, 1999; Pham and Prince, 1999), and the integration techniques are in the mainstream of this tendency. In this sense, we want to emphasize the growing interest of researchers in incorporating fuzzy logic methods into integrated segmentation. This interest arises mainly because region-based and boundary-based methods are developed from complementary approaches and do not share a common measure. Hence, fuzzy logic offers the possibility of solving this problem, as it is especially suited to carrying out the fusion of information of a diverse nature (Kong and Kosko, 1992; Moghaddamzadeh and Bourbakis, 1997). In the case of embedded integration of edge information into a region-growing procedure (Krishnan et al., 1994; Steudel and Glesner, 1999), the fuzzy rule-based homogeneity criterion offers several advantages over ordinary feature aggregation methods. Among these advantages is its short development time as a result of the existing set of tools and methodologies for the development of fuzzy rule-based systems. An existing rule-based system can
easily be modified or extended to meet the specific requirements of a certain application. Furthermore, it does not require full knowledge of the process, and it is intuitive to understand because of its human-like semantics. In addition, it is possible to include such linguistic concepts as shape, size, and color, which are difficult to handle with most other mathematical methods. A key work in using fuzzy logic was by Steudel and Glesner (1999), in which the segmentation is carried out on the basis of a region-growing algorithm that uses a fuzzy rule-based system for the evaluation of the homogeneity criterion. These authors noted several negative aspects of using only the intensity difference for segmentation:
• Oversegmentation of the image
• Annoying false contours
• Contours that are not sufficiently smooth
Therefore, new features are introduced into the rule base of the fuzzy rule-based system, which results in a better and more robust partitioning of the image while maintaining a small and compact rule base. The proposed homogeneity criterion is composed of a set of four fuzzy rules. The main criterion is the difference between the average intensity A of a region Rj and the pixel i_n under investigation. The corresponding fuzzy rule is as follows:
R1: IF DIFFERENCE IS SMALL THEN HOMOGENEOUS ELSE NOT_HOMOGENEOUS
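Rule R1 can be made concrete with a membership function for the fuzzy set SMALL. The ramp shape and breakpoints below are assumptions for illustration; Steudel and Glesner do not prescribe them here.

```python
def mu_small(diff, lo=10.0, hi=40.0):
    """Membership of the intensity difference in SMALL: 1 below lo,
    0 above hi, linear ramp in between (assumed breakpoints)."""
    if diff <= lo:
        return 1.0
    if diff >= hi:
        return 0.0
    return (hi - diff) / (hi - lo)

def rule_r1(pixel_value, region_mean):
    """Degree to which R1 declares the candidate pixel HOMOGENEOUS."""
    return mu_small(abs(region_mean - pixel_value))
```

Instead of a hard yes/no merge test, the rule returns a degree between 0 and 1 that can later be combined with the outputs of the other rules.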
Another important feature for the segmentation of regions is the gradient at
the position of the pixel to be merged. A new pixel may be merged into a region Rj when the gradient at that location is low. Conversely, when the gradient is too high, the pixel definitely does not belong to the region and should not be merged. In terms of a fuzzy rule,
R2: IF GRADIENT IS LOW THEN PROBABLY HOMOGENEOUS ELSE NOT_HOMOGENEOUS
With this rule, an adjacent pixel i_n satisfies the premise of rule R2 with a degree of μ_LOW(GRADIENT(i_n)). The two remaining rules refer to the size and the shape of regions, in order to avoid very small regions and to favor compact regions with smooth contours. A complete scheme of this proposal is shown in Figure 6. Krishnan et al. (1994) described a boundary extraction algorithm based on the integration of fuzzy rule-based region growing and fuzzy rule-based edge detection. The properties of homogeneity and edge information of each
FIGURE 6. Fuzzy segmentation technique by Steudel and Glesner (1999). The method is composed of a set of fuzzy rules corresponding to the main properties of the regions: intensity, gradient, shape, and size. The combined result of these rules indicates the desirability of aggregating a new pixel into the region. H, homogeneous; NH, not homogeneous; PH, probably homogeneous; PNH, probably not homogeneous. (Reprinted from Pattern Recognition, vol. 32, no. 11, A. Steudel and M. Glesner, Fuzzy Segmented Image Coding Using Orthonormal Bases and Derivative Chain Coding, page 1830, © 1999, with permission from Elsevier Science.)
candidate along the search directions are evaluated and compared with the properties of the seed. The fuzzy output values of edge detection and a similarity measure of the candidate pixel can be used to determine the test for the boundary pixel. This proposal was applied to colonoscopic images for the identification of closed boundaries of the intestinal lumen, to facilitate diagnosis of colon abnormalities. Another proposal for the integration of boundary information into the region-growing process was presented by Gambotto (1993), in which edge information was used to stop the growing process. The algorithm starts with the gradient image and an initial seed that must be located inside the connected region. Then, pixels that are adjacent to the region are iteratively merged if they satisfy a similarity criterion. A second criterion is used to stop this growth. The criteria assume that the gradient takes a high value over a large part of the region boundary. Thus, growth termination is based on the average gradient, F(n), computed over the region boundary following the expression
F(n) = Σ G(k, l) / P(n)        (2)
where P(n) is the perimeter of the region R(n), and G(k, l) is the value of the modulus of the gradient of pixels on the region boundary. The iterative growing process is then continued until the maximum of the global contrast function, F, is detected. Gambotto points out that the cooperation between region growing and contour detection is desirable because the assumption of homogeneous regions is usually too restrictive. With this approach, the class of regions that can be characterized is wider than that characterized by using smooth gray-level variations alone.
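The termination measure of Eq. (2) can be sketched as follows, taking the boundary of a region to be its pixels with at least one 4-neighbor outside the region (an assumption for illustration; the paper's boundary definition may differ).

```python
def boundary_contrast(region, gradient):
    """Eq. (2): F(n) = sum of G(k, l) over the region boundary,
    divided by the perimeter P(n). `region` is a set of (y, x)
    pixel coordinates; `gradient` is a 2-D list of moduli."""
    total, perimeter = 0.0, 0
    for (y, x) in region:
        # a boundary pixel has a 4-neighbor that is not in the region
        if any((y + dy, x + dx) not in region
               for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))):
            total += gradient[y][x]
            perimeter += 1
    return total / perimeter if perimeter else 0.0
```

Growing would continue while F(n) keeps increasing and stop once this average boundary gradient passes its maximum.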
III. POSTPROCESSING INTEGRATION
In contrast to the works analyzed up to this point, which follow an embedded strategy, the postprocessing strategy carries out the integration a posteriori, after the segmentation of the image by region-based and boundary-based algorithms. Region information and edge information are extracted in a preliminary step and then integrated. Postprocessing integration is based on fusing the results of single segmentation methods, attempting to combine the map of regions (generally with thick and inaccurate boundaries) and the map of edge outputs (generally with fine and sharp lines, but dislocated), with the aim of providing an accurate and meaningful segmentation. Most researchers agree on differentiating embedded methods from postprocessing methods. We have identified different approaches for performing postprocessing tasks:
1. Oversegmentation: This approach consists of using a segmentation method with parameters specifically fixed to obtain an oversegmented result. Then, additional information from other segmentation techniques is used to eliminate false boundaries that do not correspond to regions.
2. Boundary refinement: This approach considers the region segmentation result as a first approximation, with regions well defined but with inaccurate boundaries. Information from edge detection is used to refine region boundaries and to obtain a more precise result.
3. Selection evaluation: In this approach, edge information is used to evaluate the quality of different region-based segmentation results, with the aim of choosing the best. This third set of techniques deals with the difficulty of establishing adequate stopping criteria and thresholds in region segmentation.
A. Oversegmentation
The oversegmentation approach has emerged because of the difficulty of establishing an adequate homogeneity criterion for region growing. As Pavlidis and Liow (1990) suggested, the major reason that region growing produces false boundaries is that the definition of region uniformity is too strict, as when the definition insists on approximately constant brightness while in reality brightness may vary linearly within a region. It is very difficult to find uniformity criteria that match these requirements exactly and do not generate false boundaries. In summary, these authors argued that the results can be significantly improved if all region boundaries qualified as edges are checked, rather than attempting to fine-tune the uniformity criteria. A basic scheme is shown in Figure 7. A first proposal can be found in the work of Monga et al. (Gagalowicz and Monga, 1986; Wrobel and Monga, 1987). The algorithm starts with a region-growing or a split-and-merge procedure, in which the parameters have been set up so that an oversegmented image results. Then the region-merging process is controlled by edge information, which helps to remove false contours generated by region segmentation. Every initial boundary is checked by analyzing its coherence with the edge map: real boundaries must have high gradient values, while low values correspond to false contours. According to this assumption, two adjacent regions are merged if the average gradient on their boundary is lower than a fixed threshold. In 1992, Kong and Kosko included fuzzy logic in the algorithm proposed by Monga et al. As Monga et al. did, Kong and Kosko computed gradient information, which they called the high-frequency characteristic h, to eliminate false contours:
h = |high-frequency components along the boundary| / (length of the boundary)        (3)
A REVIEW OF I N T E G R A T E D I M A G E S E G M E N T A T I O N TECHNIQUES
FIGURE 15. Overall view of the beam separator with built-in pole pieces. The frame also serves as a vacuum chamber. A vacuum box of copper, which is fastened with membrane bellows to the outer frame, is located between the pole plates.
LOW-VOLTAGE ELECTRON MICROSCOPES
2. Framework of the Beam Separator

The framework of the beam separator establishes the connection of the surrounding components with the beam separator (see Fig. 15). The mechanical stability results from four welded plates of nonmagnetizable stainless steel with a thickness of 30 mm, which are shown in Figure 16. In the side walls,
FIGURE 16. Photographs of the framework of the beam separator. (Upper photograph, foreground) The mirror-side flange. (Lower photograph) The copper vacuum box with membrane bellows and upper pump supports as well as the outer surfaces of the beam separator.
P. HARTEL ET AL.
flange inserts are fixed, fitted to each other with a precision of ±0.1 mm in all three spatial directions. The flange inserts are precision-turned components with fittings for holding the field lenses and for coupling up of the other electron-optical components. They form, at the same time, the separation wall between the evacuated beam region and the surroundings. Toward the outside they present a standard ultra-high-vacuum DN 150 CF flange. The flange inserts thus define the position of the ideal optic axis. The beam separator must, relative to this, be positioned with an accuracy of ±0.1 mm. This can be achieved by displacing the beam separator on the reference surfaces, whose heights are guaranteed to ±0.02 mm. The fittings for centering the field lens and its outside diameter are finished to ±0.02 mm. A stainless-steel tube with a wall thickness of 10 mm binds the electron mirror and the multipoles directly to the mirror-side flange insert (see Fig. 17). Thus the position of the mirror relative to the optic axis is defined to an accuracy of about 0.1 mm. The electron source and the transfer lens system are adjusted by means of centering devices in the outer flange, while for the actual positioning of the objective lens suitable calipers are available. Between the pole plates of the beam separator there is a vacuum box with a height of 6.6 mm. The beam separator with a pole plate separation of 7 mm can therefore be adjusted to a small extent in height. The vacuum box consists of two symmetric halves. In both halves, in the region of the optic axis and in a straight connecting line of the oppositely placed flanges, there are milled channels of a depth of 2 mm and a width of 16 mm. The two halves were hard-soldered together (with nonmagnetic silver solder) with membrane bellows in the corners and two pump supports in the center of the vacuum box.
The membrane bellows balance the different thermal expansions of the stainless-steel frame and the copper-finished vacuum box during the bakeout times. The endpieces of the bellows are screwed, vacuum tight, onto the flange inserts with special aluminum seals. The upper pump support is also used for fixing the vacuum box, in order to relieve the (soft) copper box and the bellows mechanically.
B. Field Lenses and Electron Mirror
For the electrostatic field lenses and the tetrode mirror, one can choose between two well-known manufacturing procedures. In the first method, the electrodes are shrunk onto ceramic tubes, while in the second method, two electrodes are insulated from each other by three ceramic spheres. The second method seemed technologically simpler to carry out. Moreover, it permits a slightly smaller construction height for a lens. This is especially significant for the field lenses, since the center of the lens should lie as close as possible to the intermediate image (near the edge of the beam separator).
FIGURE 17. General view of the tetrode mirror, multipoles, and a field lens together with magnetic screening and the vacuum chamber made of Mu-metal. Between the multipoles and the field lens is a direct entry to the optic axis. Here, for instance, a test specimen can be introduced. The high voltage is led in straight lines to the mirror electrodes over suitably placed flanges.
A well-proven material for the electrodes is titanium. It has no problems of magnetic inclusions and most titanium oxides are electrically conducting. The precision spheres made of aluminum oxide are pore-free. The deviations from nominal diameter and sphericity lie, for spheres up to a diameter of 25 mm, below 3 μm. For the approximate selection of sphere size, a rule of thumb can be applied: the diameter in millimeters should be at least the maximum potential difference in kilovolts between the two electrodes. In this case, the
voltage drop per unit length on the spherical surface lies below 1.3 kV/mm, if one assumes a quarter of the sphere's girth as an insulation path. Care should be taken that the spheres are hidden from the optic axis to avoid charging effects. The manufacturing of a lens (or the electron mirror) proceeds in several steps. First, the electrodes are premachined and the sockets for the spheres attached. Second, the lens is assembled with hardened steel spheres of comparable accuracy. With a hydraulic press, one presses in the final seating of the spheres into the sockets. In this way, the relative position of the electrodes is fixed. After the pressing, one puts all the electrodes of a lens, in one setting, in a lathe for final machining of one side. Finally, the back face of each electrode is individually finished. The use of special feeds enables the finishing of a lens to be carried out within the machining accuracy (better than 10 μm) of a conventional lathe. In the field lenses* (electrostatic einzel lenses), a maximum voltage of 10 kV on the central electrode is adequate for all modes of the SMART. Correspondingly, spheres with a diameter of 10 mm were used. The maximum local field strength on the electrode surfaces is less than 10 kV/mm. This avoids any electric breakdown problems during the operation of the lens. The design of the lens is not symmetric with respect to the midplane of the lens, as one may deduce from Figures 17 and 18. In this way, the center of the lens is only 13 mm away from the beam separator's edge and, at the same time, the spheres are hidden from the beam. The electrode geometry of the tetrode mirror was determined in such a way that the chromatic and spherical aberrations of the objective lens can be simultaneously corrected in all operational modes without overstepping a maximum local field strength of 10 kV/mm.
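The rule of thumb can be checked with a few lines: if the sphere diameter in millimeters equals the potential difference in kilovolts, and a quarter of the sphere's circumference is taken as the insulation path, the surface voltage drop comes out as 4/π ≈ 1.27 kV/mm regardless of the voltage, just below the 1.3 kV/mm quoted above. A minimal sketch; the function names are our own:

```python
import math

def min_sphere_diameter_mm(delta_u_kv):
    """Rule of thumb: sphere diameter (mm) >= potential difference (kV)."""
    return delta_u_kv

def surface_voltage_drop_kv_per_mm(delta_u_kv, diameter_mm):
    """Voltage drop per unit length along the insulation path,
    taken as a quarter of the sphere's circumference."""
    insulation_path_mm = math.pi * diameter_mm / 4.0
    return delta_u_kv / insulation_path_mm

# With the rule-of-thumb diameter the drop is 4/pi, independent of voltage.
drop = surface_voltage_drop_kv_per_mm(10.0, min_sphere_diameter_mm(10.0))
```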
The spheres were so dimensioned that at the reversing electrode a voltage of 20 kV can be applied, while between the other electrodes the potential difference remains less than 15 kV. As an option for the alignment of the optic axis, the reversing electrode contains a small transmission hole. Behind it, a Faraday cup is mounted for measuring the electron intensity. The optic axis is adjusted by two multipole elements, whose construction is described in Section IV.C.1. The direct fixing of the mirror and the multipoles at the side wall of the beam separator has two advantages: The position of the mirror is independent of the accuracy of the vacuum chamber. This is very favorable, since with a welded construction with Mu-metal the best tolerance that can be guaranteed is around ±1 mm. Moreover, the vacuum chamber and mirror are connected mechanically only by means of the framework of the beam separator. Hence, the intrinsic stability of the electron-optical corrector system is increased.

*The field lens was calculated and designed by S. Planck as part of her diploma thesis.
FIGURE 18. Field lens and electron mirror. Both elements are based on the same principle. The insulation and positioning of one electrode relative to another is ensured by three high-precision ceramic spheres.
C. Multipoles

1. Electric-Magnetic Multipole Elements

Between the mirror and the beam separator, two electric-magnetic multipole elements are situated with a separation of 100 mm. Figure 17 shows a longitudinal cross section through the multipole arrangement. Both multipoles consist of an electrostatic dodecapole and a magnetic octopole. In addition to their role as a double-deflection element (for aligning separately the optic axes for incoming electrons and those reflected by the mirror), they can serve as stigmators for the compensation of residual aberrations. With the dodecapole, quadrupole and hexapole fields of any desired azimuthal orientation and, with slight loss of quality, octopole fields can be produced. The magnetic octopole should be used exclusively for the generation of dipole and quadrupole fields. Since the necessary magnetic field strength for the stigmators is relatively low, pole pieces can be dispensed with. Figure 19 shows the construction of a multipole element. The electrostatic multipole consists of 12 molybdenum wires, led through two ceramic holders. The holders are mounted on titanium tubes that at the same time determine the effective field length. The wires are surrounded by a winding support of bronze on which eight adjustable coils are wound. The effective length of the magnetic fields is set by an external cylinder and two disks of high-permeability material. The effective length of the electric and of the magnetic field lies around 25 mm. The connection of the dodecapole is established by attached contacts, while the coils and their leads are screwed into a ceramic ring (see Fig. 19). The two multipoles are fastened to the baseplate of the mirror. The mirror-side multipole is centered on a cylinder of aluminum that carries the second multipole. Generally, with screw connections in ultra-high-vacuum devices, care must be taken over bakeout capability. Therefore, nuts and threads of different nonmagnetic materials were chosen.
In the immediate vicinity of the electron beam, molybdenum, titanium, and bronze were used exclusively.
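Producing quadrupole, hexapole, or (coarsely sampled) octopole fields of arbitrary azimuth with a twelve-electrode dodecapole amounts to sampling a cosine at the electrode positions. A schematic sketch; the function name and normalization are our own:

```python
import math

def dodecapole_voltages(order, amplitude, azimuth_rad, n_electrodes=12):
    """Electrode voltages approximating a 2*order-pole field:
    V_k = A * cos(order * (theta_k - azimuth)) on equally spaced
    electrodes. order=2 gives a quadrupole, order=3 a hexapole;
    order=4 (octopole) is only coarsely sampled by 12 electrodes,
    hence the slight loss of quality mentioned above."""
    return [
        amplitude * math.cos(order * (2 * math.pi * k / n_electrodes - azimuth_rad))
        for k in range(n_electrodes)
    ]

# Quadrupole excitation; changing azimuth_rad rotates the field pattern.
quad = dodecapole_voltages(order=2, amplitude=1.0, azimuth_rad=0.0)
```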
2. Additional Magnetic Deflection Elements

Additional deflection elements were needed for testing the individual components of the mirror corrector in a conventional SEM. The microscope was split between the aperture and the objective lens, in order to insert adapter flanges (see also Section V.A). The deflection elements provide a fine adjustment of the optic axes of the corrector and the microscope under test, as well as the intentional deflection of the illuminating electron beam for characterizing the electron-optical properties of the corrector.
FIGURE 19. Electric-magnetic multipole elements. (Left) The multipoles as finally mounted and connected. (Top right) The combined electric dodecapole with a magnetic octopole superimposed. (Bottom right) The assembled (mirror-side) multipole, hidden behind the aluminum cylinder. The arrows indicate where the individual components are situated in the final assembly.

The double-deflection element shown in Figure 20 is mounted on the adapter flange between the electron source of the SEM and the framework of the beam separator. A further single-deflection element of the same type is located between the beam separator and the objective lens of the test microscope. The construction of the deflecting elements needs to satisfy only high-vacuum requirements. This greatly simplifies the design. Magnetic deflection elements were chosen, since the mechanical outlay is easier in comparison with that needed for electrical elements. Moreover, the electrical requirements are simpler, since facing coils can be connected in series. The coil bobbins are attached to brass tubes of 7-mm outside diameter. A bobbin consists of two anchor-shaped bearing plates, whose separation is fixed by soldered rods. The coils
FIGURE 20. Magnetic double-deflection element. In the test bed, the deflection element is located between the aperture of the modified SEM and the beam separator.
are wound with 28 turns of lacquered copper wire. The desired geometry was held to better than 0.2 mm.
V. TESTING OF THE MIRROR CORRECTOR
A. Measurement Arrangement

The various components of the SMART must be tested individually because of the complexity of the system as a whole. The testing of the mirror corrector, consisting of the beam separator and the tetrode mirror, in the finished apparatus was impossible in view of the time available. The construction of
a similar system with an electron source, an objective lens, and a projector system also seemed unreasonable. Furthermore, the beam separator had to be investigated separately. In this first test phase, the imaging properties should be characterized for a 90° deflection by one quadrant of the beam separator. In the second phase the tetrode mirror must be attached. For this purpose two quadrants of the beam separator must be used. For both phases of the test, a suitable electron-optical bench must be available. Since only one quadrant of the beam separator was to be tested, a test bed such as that of a direct-imaging LEEM was not possible, since for the electron illumination and the imaging, one quadrant of the beam separator is needed for each function. If, instead, one illuminates the specimen with photons from an ultraviolet lamp (PEEM), one is forced into time-consuming ultra-high-vacuum technology and has additionally to contend with intensity problems. There remained two sensible possibilities for analyzing the image quality of both parts of the mirror corrector. The components to be investigated could be integrated in a TEM or in an SEM. The schematic construction of a TEM is shown in Figure 21. The condenser is operated so that the specimen is illuminated normally or slightly convergently. The objective lens and the projector lenses image the transmitted electrons with adjustable magnification in the observation plane. New electron-optical components can be tested by inserting them in place of a lens
FIGURE 21. Schematic diagram of a transmission electron microscope. The condenser lens system provides an almost-uniform illumination of the specimen. The objective lens and the projector lens system image the exit plane of the specimen with variable magnification in the observation plane.
behind the specimen or by placing the components in an intermediate plane. The latter corresponds to the position of the mirror corrector in the SMART. For testing, on the one hand, one can make use of the illuminating system as usual to illuminate, for example, a copper mesh. This would then be imaged through the component to be tested and magnified by the projector. The distortions of the image thus obtained allow one to draw conclusions about the imaging properties of the component being characterized. On the other hand, by removing the specimen, one can set up the optical system as a whole in such a way that an image of the source appears in the observation plane. By arranging double-deflection elements in front of and possibly behind the component, one can generate aberration figures; the positional deviation of the image of the source can be analyzed in terms of displacement and tilt. Finally, diffractograms of amorphous specimens can be used to analyze the aberrations. If an SEM, the construction of which is shown in Figure 22, is used, the individual components are inserted in the beam path between the illuminating system and the objective lens. The aberrations in the components to be tested lead to a distorted scanning spot at the specimen. On the one hand, the imaging properties can be determined by the achievable point or edge resolution of the modified equipment compared with theoretical predictions. On the other
FIGURE 22. Schematic diagram of the unmodified SEM. In front of the aperture plane, the zoom condenser produces a demagnified intermediate image of the source, with variable magnification. This image is further demagnified by the objective lens and imaged on the specimen as a scanning probe. The scanning coils tilt the illuminating beam in such a way that a square object region can be scanned distortion free. The detector signal from each raster point is displayed on a monitor whose deflector coils are synchronized with the displacement of the probe.
hand, aberration figures can again be obtained by inserting a double-deflection element in front of the new components. The quality of the scanning remains unaffected, since the scanning coils lie in the path of rays behind the new component close to the objective lens. We used an available commercial SEM. The beam separator was integrated into the microscope without difficulty by means of two adapter flanges. A suitable TEM was not available to us. Building a TEM as a test bed, from individual components from various pieces of apparatus, seemed unreasonable, since the complete peripheral equipment, such as the vacuum system, current and voltage supplies, and control system, would have to be built from scratch. A big advantage of the SEM is that it usually functions with electrons with the nominal energy E_n = 15 keV used in the SMART, so that standard test specimens can be employed. A disadvantage of the SEM is that it does not transfer a large field of view. The size of the transferred field of view of the components to be tested can be checked only sequentially with the aid of additional deflecting elements. This disadvantage is, however, more than compensated for by the advantages. The arrangement of the electron-optical elements in the scanning electron microscope ZEISS DSM 960 placed at our disposal is shown in Figure 22. The microscope is a purely magnetic system. The electrons emitted by the thermionic cathode are accelerated to a selectable nominal energy between 1 and 30 keV. The two condenser lenses produce a demagnified intermediate image of the source at a selectable magnification. The diameter of the electron beam used to form the scanning spot by means of the objective lens is limited by a manually operated aperture mechanism, which is also used to center the beam onto the objective lens. Two pairs of crossed coils serve as scanning elements, so that scanning can take place about a point, preferably the coma-free point of the objective lens.
In the section of the column, there are also situated two quadrupole windings, rotated by 45° relative to each other, that serve as stigmators. The secondary electrons emitted from the specimen and/or backscattered electrons are recorded by a side-mounted detector. This consists of a gridlike collector whose bias voltage can be varied in order to distinguish between secondary and backscattered electrons. As shown in Figure 23, the unmodified microscope reaches a resolution limit of 14 nm at an accelerating voltage of 10 kV and a working distance (WD) of 4 mm. The WD is defined as the separation of the specimen from the front pole-piece face of the objective lens. The resolution achieved is in agreement with that of theoretical calculations for an aperture with a diameter of 40 μm. As the intermediate image of the source is located 66 mm in front of the aperture plane, the aperture angle with respect to the intermediate image
FIGURE 23. The optimum resolution of the scanning electron microscope DSM 960 is 14 nm at an accelerating voltage of 10 kV and a working distance (WD) of 4 mm. The image at the top is taken at a magnification of 100,000. To extract the intensity profile below, we averaged the intensity values inside the box, indicated above, along the vertical direction.
amounts to 0.3 mrad. If one further assumes a full width at half maximum of the energy distribution ΔE = 3 eV, the diameters of the scanning spot are found to be d70% = 10 nm and d90% = 17 nm. The indices denote the percentage of the electrons that are focused into a circle of the given diameter. As a first step, the framework of the beam separator was mounted between the specimen chamber (with objective lens and scanning coils) and the illuminating system including the aperture (see Fig. 24 without the electron mirror on the left). The modified peripheral equipment (vacuum system, water cooling, wiring) and most notably the first field lens were tested. In the beginning, the lengthened microscope column was very sensitive to stray magnetic fields with a frequency of 50 Hz. On the one hand, this was caused by the absence
FIGURE 24. Arrangement for testing the mirror. Two electric-magnetic multipole elements are located between the tetrode mirror and the upper field lens. They serve, on the one hand, as double-deflector elements for the independent adjustment of the optic axis of incoming and outgoing electrons, and, on the other hand, as complex stigmators for the compensation of residual aberrations in the system as a whole. By switching off the beam separator, one can operate the scanning microscope in straight transmission. The resulting additional drift length then leads to an increased diameter of the electron bundle in the objective lens. o f magnetic shielding in the region o f the adapter flanges; on the other hand, the pole pieces of the b e a m separator were found to have an u n e x p e c t e d l y low shielding factor. The m e a s u r e m e n t s and the necessary changes o f the construction are discussed in Section V.B. After the successful test o f the first field lens, the b e a m separator was added to the system as shown in Figure 25. Initially, t h e i m a g i n g properties of the different field lenses No. 1 to No. 4 differed considerably b e c a u s e o f technological difficulties, w h i c h had to be solved (see Section V.C). Additional deflecting elements in the region o f the adapter flanges were attached to the
FIGURE 25. Test bed for the characterization of the beam separator. The beam separator together with two electrostatic field lenses is integrated into the SEM by means of adapter flanges between the aperture and the objective lens. A double-deflection element is needed for aligning the optic axis with the axes of the upper field lens and of the beam separator. In order for the electrons to strike the objective lens centrally a further deflection element behind the lower field lens is necessary.
microscope for alignment of the optic axes of all imaging elements and the recording of aberration figures. The characterization and improvement of the electron-optical properties of the beam separator close to the theoretical predictions is summarized in Section V.D. In addition, the chromatic and spherical aberrations of the system without the mirror were measured as described in Section V.E. Finally, the complete mirror corrector was installed in the test bed. The arrangement can be seen in Figure 24. The tetrode mirror and the two electric-magnetic multipole elements (working as stigmator and as double-deflection element) are assembled sideways at the framework of the beam separator. In this setup the simultaneous correction of chromatic and spherical aberrations was proven beyond any doubt. The theoretical resolution limit of 4.5 nm with
a position of the intermediate image of the source about 135 mm in front of the edge of the beam separator and an aperture angle with respect to this plane of 0.3 mrad has not been reached so far. The results obtained with the electron mirror are presented in Section V.F.
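The aperture angles quoted in this section follow from simple geometry: half the aperture diameter divided by the distance between aperture plane and intermediate image. A quick check with the numbers given for the unmodified microscope (40 μm aperture, intermediate image 66 mm in front of the aperture plane); the function name is our own:

```python
def aperture_half_angle_mrad(aperture_diameter_um, distance_mm):
    """Half-angle subtended by the aperture at the intermediate image."""
    radius_mm = aperture_diameter_um * 1e-3 / 2.0
    return radius_mm / distance_mm * 1e3  # rad -> mrad

# Unmodified DSM 960: ~0.3 mrad, matching the value quoted earlier.
angle = aperture_half_angle_mrad(40.0, 66.0)
```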
B. Improvement of Magnetic Shielding

The SEM with inbuilt beam separator was extremely sensitive to stray magnetic fields, both in direct transmission through the deactivated beam separator and in the first tests of a 90° deflection. The initial presumption that this was due to insufficient screening in the region of the field lenses over a length of some 50 mm could not be confirmed. A further improvement of the shielding beyond that provided for the test bed was necessary, since for high-resolution micrographs the stray magnetic field was not sufficiently reduced. The measured deflection of the electron beam with an amplitude A = 200 nm in the object plane was so large that it could not be explained by the four 5-mm-long gaps at which no material of high permeability was present. The strength B of the magnetic flux required to produce so large a ray deviation can be estimated geometrically with the aid of Figure 26. A deviation A in the specimen plane corresponds to a (virtual) displacement of the intermediate image by Ā = M · A, in which the magnification of the objective lens
FIGURE 26. Sketch of the geometric construction used to estimate the influence of the stray magnetic fields in the test microscope. The four regions without screening in the neighborhood of the beam separator are combined into one region. The beam separator itself is omitted for clarity.
amounts to M = 24 in the case of a simple 90° deflection through the beam separator. If one assumes that a homogeneous magnetic field B over a length l of 20 mm at a distance d = 130 mm from the intermediate image is responsible for the displacement of the electron bundle, one obtains for small deviation angles, α = l/r = Ā/d, a necessary strength of the magnetic flux density on the axis of

|B| = sqrt(2 U_a m / e) · (M · A) / (d · l) = 8.3 μT    (21)

to produce the measured deflection A. This relation results from equating the Lorentz and centripetal forces:

|F| = e v |B| = m v² / r    (22)

and from the nonrelativistic energy relation E_n = e U_a = (m/2) v². The measurement of the amplitude of the stray field in the vicinity of the microscope yielded values ranging from 0.1 to 0.4 μT. This shows that the observed displacement of the electron probe in the specimen plane of several hundred nanometers cannot be explained by the air gaps in the screening alone. However, the strong influence of the perturbations became reasonable after measurement of the falloff of stray fields between the pole pieces of the beam separator. To create a well-defined stray magnetic field, we placed a large coil 80 cm underneath the beam separator and supplied it with an adjustable alternating voltage at a frequency of ν = 50 Hz. The measurements were performed with a small pickup coil. The induced voltage was amplified and displayed on an oscilloscope or measured with a digital voltmeter. Curve (d) in Figure 27 shows the amplitude of the field created by the excitation coil with the beam separator removed. If the beam separator is brought into the field (a), one can see that the field is strongly damped up to a few millimeters in front of the edge, as in the case of measurements along the symmetry axis of a shielding cylinder (e). For the beam separator, however, in contrast to a cylindrical opening, there is a strong edge increase by a factor of 2. Thereafter, the stray field decreases slowly over several plateaus right into the center of the beam separator. The plateaus thereby reflect the inner structure of the beam separator: they correspond to regions between two grooves in the surface of the pole-piece plates. With a cylinder, there are no such edge effects. In the latter case the value of the magnetic flux density at the edge amounts to just one third of the maximum value outside and decreases nearly exponentially below the detectable range. The beam separator therefore screens the stray fields insufficiently.
Its screening properties can, however, be significantly improved by simple
[Figure 27 curve legend: (a) beam separator alone; (b) beam separator with ring; (c) beam separator with ring and side sheets; (d) stray field without beam separator; (e) screening cylinder.]
FIGURE 27. Decline of the vertical magnetic field for a frequency of 50 Hz in the midsection of the beam separator. The magnetic field is generated by a coil 80 cm below the beam separator. The damping can be improved considerably by attaching a ring of Mu-metal at the front face as well as by sealing the side flanges with Mu-metal. For comparison, the magnetic field without the presence of the high-permeability material is shown, as well as the field reduction along the central axis of a cylinder of Mu-metal with a wall thickness of 1.5 mm and a diameter of 96 mm.
methods. If a Mu-metal ring (inner diameter 94 mm, outer diameter 112 mm, thickness 7 mm) is pressed on the side surface of the beam separator on which the measurement is performed (b), then the magnetic field at the beam axis is reduced by a factor of 2. A further improvement of the screening can be obtained by covering the remaining side surfaces with Mu-metal sheets (c). The Mu-metal rings that were originally fixed at a distance of 1 mm from the beam separator edge are now pressed onto the side surface with springs. This ensures an improvement of the screening by more than a factor of 2.
This measurement cannot be brought into agreement with the estimate obtained by the method of the magnetic circuit (see, for example, Joos, 1989). While the measurement yields an attenuation of the stray field by a factor of 5-10 with respect to the external field, according to the method of the magnetic circuit (apart from small deviations in the edge area) the magnetic flux density in the air gap B_a should be constant, attenuated by a factor

B_a / B_0 = (1/μ_r) · (A_y + A_a) / (A_y + A_a/μ_r) ≈ 10⁻⁴    (23)
with respect to the external magnetic flux density B_0. In this case, the permeability μ_r of the pole-piece plate was taken to be 50,000. The cross-sectional area of the four yokes amounted to A_y = 169 cm² and that of the air gap A_a = 615 cm². The method of the magnetic circuit fails on account of the special three-dimensional structure of the beam separator with its unfavorable width-to-height relationship of the air gap and the sensitivity of high-permeability material to the skin effect even at low frequencies (see, for example, Joos, 1989). The penetration depth t, which denotes the value at which an external homogeneous alternating field is reduced by a factor 1/e in a plane plate of conductivity σ and permeability μ_r μ_0, amounts in the present case to

t = 1 / sqrt(π ν σ μ_r μ_0) = 0.14 mm    (24)

where we have assumed a conductivity of 5 m/(Ω·mm²) at a frequency of ν = 50 Hz. This means that the total magnetic flux generated by stray fields is transported along the surface of the pole plates. In this connection, the measured edge increase is understandable. The refinement of the method of the magnetic circuit, taking into account the skin effect through changed cross sections (surface × penetration depth) and small air gaps between pole plates and yokes, does reduce the discrepancy between theory and measurement. However, all this does not yet succeed in explaining the measured results. The measurements shown in Figure 28 demonstrate the influence of the width-to-height relationship of the air gap between two yokes in a simple model system. The massive pole plates were replaced by 1.5-mm-thick Mu-metal sheets. Every two yokes were fastened back to back. One obvious difference between the beam separator (a) and the model system (b-d) is the missing edge structure. This is due to the small thickness of the sheets compared with that of the massive pole plates. All measurements show, however, the formation of a plateau in the field strength starting at a depth of 40 mm. The height of the plateau sinks rapidly with decreasing yoke separation d. Therefore, an effective improvement of the screening effect of the beam separator can be achieved by
LOW-VOLTAGE ELECTRON MICROSCOPES
95
FIGURE 28. Decrease of the vertical magnetic field at a frequency of 50 Hz for different yoke distances in a simple model system. This system consists of the four yokes of a beam separator and two Mu-metal sheets of thickness 1.5 mm. The magnetic field was induced by a coil placed 80 cm below the sheets. Between the two sheets, the field strength falls to a plateau, whose height decreases with decreasing yoke separation. In the beam separator the field strength again sinks to a plateau, if one neglects the influence of the grooves. The absence of edge enhancement in the model system can be explained by the smaller thickness of the sheets compared with that of the pole plates.
a modification of the yokes. The long gap along the side faces in the region of the edges can be closed off in the neighborhood of the optic axis. This is possible without covering the field-producing coils. Such modified yokes are required if the corrector is installed in the SMART. For the test bed we decided to cover the beam separator with an additional U-shaped Mu-metal sheet with a thickness of 1.5 mm. The screening plate is shown in Figure 33 (see page 103). It is fastened with two screws on the flange
96
P. HARTEL ET AL.
(right), which is not used for the testing. Without heat treatment the screening factor was 2, but it increased to 5 after heating in a vacuum oven. The performance of the screening sheet and the application of Mu-metal rings onto the side faces of the beam separator are cumulative and adequate for the test bench. The total improvement obtained is shown in Figure 29. The upper image was taken without any shielding in the region of the beam separator. It shows an astigmatic distortion of the probe of about 1 μm caused by the static magnetic flux of the earth's magnetic field. The flux passes from the pole plates of the beam separator to the iron circuit of the objective lens. This corresponds to the construction of a magnetic cylinder lens. Magnetic cylinder lenses image astigmatically (with some exceptions which require a special shape of the field and special specimen positions). A temporary solution for the problem is to compensate for the (integral) magnetic flux with a coil wound over the adapter flange with an outer diameter of 200 mm (see Fig. 24). The result achieved with an excitation of 10 A-turns is shown in the central image of Figure 29. The resolution limit amounts to 200 nm. With another coil on the upper surface of the beam separator, no further improvement of the resolution was achieved. Therefore a nearly seamless junction of additional screening elements and the side surfaces of the beam separator is mandatory for the SMART. At the same time this also damps out dynamic stray fields. However, the simple improvements of the screening described in this section are sufficient for the test equipment. At high magnification, as shown in Figure 29, gold clusters with a full half-width of 40 nm are still visible. The theoretically predicted resolution limit for the extended (about 380-mm) column lies at 20 nm for the aperture in use (diameter, 40 μm).
C. Field Lenses
Field lens No. 1, built as a prototype, was tested before the beam separator was completely assembled. For this, a setup as shown in Figure 24 was used. Instead of the mirror, a blind flange was fitted. For the investigation of the field lens, the objective lens was switched off and the electron bundle was focused instead by the (lower) field lens. The performance of the field lens is documented in the upper photograph in Figure 30. Because of the inadequate screening over a length of some 50 mm above and below the beam separator, the influence of stray magnetic fields of a frequency of 50 Hz is visible as a wavy distortion of the copper mesh: for the chosen exposure time, the start of the sweep of each line is synchronized with the mains. Each line of the image thus begins at the same mains phase, which leads to an almost identical displacement of the scanning spot in
FIGURE 29. Images taken with the beam separator in straight transmission. (Top image) Owing to inadequate screening around the lower field lens, the image is unsharp. (Center image) Compensation for the static magnetic flux between the beam separator and the objective lens by means of the compensation coil (see Fig. 24) reduces the unsharpness considerably. (Bottom image) The theoretical resolution limit was reached for the first time with the aid of all the additional screening measures in operation.
FIGURE 30. Imaging characteristics of the field lens with the beam separator switched off. The influence of the stray magnetic fields of frequency 50 Hz is reduced by an order of magnitude through the improvement of the magnetic shielding. The electron probe is formed with the lower field lens instead of the objective lens.

the object plane by the stray field. The scan time for a row of pixels is around 60 ms. The attainable point resolution of the field lens at higher magnification can only be estimated, at around 800 nm, on account of the strong stray field. The position of the field lens is unfavorable for an SEM, with a very large distance of 160 mm to the specimen and 425 mm from the intermediate image. For an aperture diameter of 70 μm, a best resolution of only 300 nm can be attained. On the basis of these results, three further lenses of the same type were put in service.
The lower image in Figure 30 was taken with the measuring equipment shown in Figure 24 in straight transmission. The beam separator, electron mirror, and objective lens were switched off. The recording time corresponded to that of the upper image, while the magnification was eight times higher. Above and below the beam separator, screening cylinders were built in. The beam separator was enclosed in a U-shaped Mu-metal sheet. The bright image regions indicate the crossing points of a structured, orthogonal copper grid; the support film appears dark. Since the scanning coils lie in the region of the magnetic field of the objective lens (which is switched off, but usually causes Larmor rotation), the copper strips seem not to intersect at right angles; the image is slightly sheared. Influences of stray fields of frequency 50 Hz are no longer detectable in this micrograph. At the start of the tests of the simple 90° deflection, field lens No. 1 was mounted in the lower position and lens No. 2 in the upper position of the beam separator. The optic axes of the field lenses determine the position of the beam separator. Deviations of the beam separator from the ideal position of up to ±0.1 mm are permissible according to theoretical calculations. A suitable displacement of the beam separator belongs to the basic adjustment of the system. For this, the aperture is adjusted so that the upper field lens is irradiated centrally by the electron bundle. A well-tried test method for the adjustment is that of wobbling. For this, one superimposes on the direct voltage of several kilovolts an alternating voltage component of some 100 V with a frequency of about 1 Hz. During the periodic defocusing due to the alternating voltage, the image may be unsharp; however, it should not drift in position. Thus, it is guaranteed that the central trajectory of the electron bundle in the lens coincides with an unrefracted central beam.
In a second step, the lower lens can be centered by a horizontal displacement of the beam separator. After this procedure, the scanning spot, and thus the image position, exhibited a displacement of 244 μm, depending on whether the electrons were focused by the upper or by the lower field lens. This means that despite the wobbling, at least one of the field lenses deflects the beam, assuming a length of 172 mm to the specimen plane, by an angle of 1.4 mrad. A beam tilt of this magnitude is tolerable in the SMART, where the excitation of the field lenses remains more or less constant. However, since the tetrode mirror, with appreciably higher requirements on accuracy, was completed by the same procedure, an explanation for the beam tilt was necessary. The behavior of both field lenses was investigated by the so-called through-focus method. For a fixed image position, one lowers the refractive power of one of the lenses while raising the refractive power of the other lens. The measurement of the image shift as a function of the voltage on the relevant central electrodes is shown in Figure 31. This figure shows clearly that field lens No. 1 is responsible for the tilt of the beam. The angle of deflection is proportional to
FIGURE 31. Image shift on through-focusing from the upper to the lower field lens as a function of the voltage U (kV) on the corresponding middle electrode. While field lens No. 1 exhibits a large linear contribution of 50 μm/kV, the initial increase at field lens No. 2 remains less than 2.5 μm/kV.
the voltage applied at the central electrode. This behavior is incompatible with an ideal, but displaced, round lens. In that case the beam tilt would be proportional to the product of the displacement and the refractive power. The refractive power of a thin electrostatic einzel lens is, in the nonrelativistic case, given by

1/f = (3/16) ∫ [Φ′(z)/Φ(z)]² dz    (25)
where Φ(z) denotes the electric potential on the axis. For small voltages U on the central electrode, the refractive power can be estimated as

1/f ≈ (3/16) (l_eff/d²) (U/U_a)²    (26)
where U_a = 15 kV is the accelerating voltage of the electrons, l_eff is a length of the order of magnitude of the lens extension, and d is the electrode separation. This formula holds under the assumption |U| << U_a.

CHARACTERIZATION OF TEXTURE IN SCANNING

139

Γ_d ~ N(δ) δ^d → { ∞ for d < D; 0 for d > D }  as δ → 0    (2)

where N(δ) is the number of boxes of size δ necessary to cover the T set of points. The Hausdorff-Besicovitch dimension is defined as a local property, since it measures the properties of a set in the limit for box sizes δ tending to zero. The Γ_d value for d = D is often finite, but it may be zero or infinite. In most known cases the Hausdorff-Besicovitch dimension corresponds to the integer D values for lines, planes, surfaces, spheres, and so forth. However, there are many sets for which the Hausdorff-Besicovitch dimension is not an integer; in such cases, after Mandelbrot, these dimensions are said to be fractal. If the Γ_d limit is a finite number different from zero, then for a small-enough δ this approximation is valid:

N(δ) ≈ δ^(−D)    (3)
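Equation (3) suggests estimating D from box counts at several scales. A minimal numerical sketch (the point set, box sizes, and helper name are illustrative assumptions, not from the text):

```python
import numpy as np

def box_count_dimension(points, sizes):
    """Estimate the box dimension: slope of ln N(delta) versus ln(1/delta)."""
    counts = []
    for delta in sizes:
        # count the distinct grid boxes of side delta occupied by the set
        boxes = {tuple(idx) for idx in np.floor(points / delta).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# sanity check: points densely sampled on a straight line should give D close to 1
pts = np.column_stack([np.linspace(0, 1, 20000)] * 2)
D = box_count_dimension(pts, sizes=[0.1, 0.05, 0.02, 0.01, 0.005])
print(round(D, 2))  # close to 1
```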
Then the fractal dimension can be determined by finding the slope of the ln N(δ) versus ln(δ) plot. The dimension D, determined by counting the number of boxes necessary to cover the set as a function of box size, is known as the box dimension (Feder, 1988). Fractals are naturally grouped into two categories: random and deterministic. The fractals in physics belong to the first category; however, it is better to discuss first some of the deterministic fractals, such as the von Koch curve. Figure 1 shows an iterative process for building this fractal curve. A segment is divided into three segments and the middle one is replaced by two equal segments that form part of an equilateral triangle. In the next building phase, four new segments replace each of these four segments, one third of the previous
FIGURE 1. The first (top) and fourth (bottom) generations of the von Koch self-similar fractal curve of fractal dimension D_s = log(4)/log(3) ≈ 1.262.
140
JUAN LADAGA AND RITA BONETTO
one long. This procedure is performed repeatedly, which produces the von Koch curve. Figure 1 shows the first (top) and fourth (bottom) generations of the von Koch curve. This figure has an exact self-similarity (each small portion, when magnified, reproduces exactly a larger one). As can easily be seen, the curve length is multiplied by four thirds at each step, and the limit curve has an infinite length in a finite area of the plane without self-intersection. The self-similarity property is one of the basic concepts of fractal geometry. A line segment, a bidimensional object, or a three-dimensional object presents self-similarity properties. These elements can be divided into N identical parts, each of which is scaled down by the ratio r = 1/N, r = 1/N^(1/2), or r = 1/N^(1/3), respectively. In general, it can be said that a D-dimensional self-similar object can be divided into N copies of itself, each scaled down by a factor r = 1/N^(1/D). The fractal, or similarity, dimension D_s is given by

D_s = log(N)/log(1/r)    (4)
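Equation (4) can be checked directly on the familiar cases (a trivial sketch; the example sets are standard ones, not taken from the text):

```python
import math

def similarity_dimension(N, r):
    """Eq. (4): Ds = log(N)/log(1/r) for a self-similar set of N copies at scale r."""
    return math.log(N) / math.log(1.0 / r)

print(similarity_dimension(3, 1/3))  # line segment: 1.0
print(similarity_dimension(4, 1/2))  # filled square: 2.0
print(similarity_dimension(4, 1/3))  # von Koch curve: about 1.262
```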
In the case of the von Koch curve, the fractal dimension is D_s = log(4)/log(3) ≈ 1.262. Real cases, such as coasts, are not exactly self-similar (i.e., each little portion looks like a bigger portion, but is not exactly identical to it). The fractal dimension concept, however, can also be applied to objects that are statistically self-similar. The dimension, in this case, is also given by Eq. (4). The similarity concept can be expressed more formally by taking T as a set of points in the Euclidean space of dimension E placed at positions Z = (Z₁, …, Z_E). The T set is self-similar with respect to a real scale factor r (0 < r < 1) if T is transformed into rT with points at positions rZ = (rZ₁, …, rZ_E). Therefore, a closed set T is self-similar with respect to an r scale factor when T is the union of N different and nonoverlapping subsets, each being congruent with rT (identical under translation and rotation transformations). In contrast, the set T is statistically self-similar if it is composed of N different subsets, each scaled down by the original factor r and identical to rT in all statistical respects. In many cases the studied sets are not self-similar but self-affine. An affine transformation transforms the points Z = (Z₁, …, Z_E) into new points Z′ = (r₁Z₁, …, r_E Z_E), where the scale factors r₁, …, r_E are not all the same. A closed set is self-affine with respect to a scale vector r = (r₁, …, r_E) if T is the union of N nonoverlapping subsets, each identical to rT in all statistical respects. The fractal dimension of a self-affine record is not univocally determined. The similarity dimension is not defined, since it exists only for self-similar fractals. As regards the box dimension, depending on the initial box size, two possible values are obtained: one called the local dimension, D = 2 − H, with
CHARACTERIZATION OF TEXTURE IN SCANNING
141
0 < H < 1, and another, D = 1, which is called the global dimension, for the bigger boxes (Feder, 1988, and references therein; Mandelbrot, 1985). That is to say, globally, a self-affine record is not fractal. The Hausdorff-Besicovitch local dimension also gives the value D = 2 − H. Another widely used method for measuring the fractal dimension is the Fourier power spectrum (P ∝ f^β). This consists of plotting, on a logarithmic scale, the power spectrum (P) versus the frequency (f). The straight-line slope, β, is related to the fractal dimension. For Brownian fractal surfaces (Barnsley et al., 1988),

D = 4 + β/2    (5)
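The slope β can be read off a log-log periodogram. A minimal sketch for a one-dimensional Brownian record, whose power spectrum falls as f^(−2); the frequency window is an illustrative choice, and Eq. (5) itself applies to Brownian fractal surfaces rather than to this profile:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=2**16))   # Brownian record: P(f) ~ f^(-2)

P = np.abs(np.fft.rfft(x))**2           # periodogram
f = np.fft.rfftfreq(x.size)
band = (f > 1e-3) & (f < 1e-1)          # fit away from DC and Nyquist

beta, _ = np.polyfit(np.log(f[band]), np.log(P[band]), 1)
print(round(beta, 1))  # close to -2
```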
2. Fractional Brownian Motion and Its Relationship to Self-Affine Records

In the one-dimensional random walk, a particle moves by jumping a step of length +η or −η in a time τ. If the step length follows a Gaussian distribution with ⟨η⟩ = 0, the probability distribution is

p(η, τ) = (1/√(4π D_e τ)) exp(−η²/(4 D_e τ))    (6)

and the variance of the process is

⟨η²⟩ = ∫₋∞^∞ η² p(η, τ) dη = 2 D_e τ    (7)

where the parameter D_e is the diffusion coefficient. The random walk study was an important development by Einstein (1905), which provided the theoretical basis for Brownian motion. Although the diffusion coefficient is a Gaussian distribution parameter that determines the variance by means of Eq. (7), this equation is more general and valid even when the jumps take place at regular intervals and when the probability distribution for the step length is discrete, continuous, or of some arbitrary shape (see Feder, 1988). Figure 2 shows the particle position as a function of time t, and Figure 3 the random step-length record. In these figures it can be observed that in Brownian motion it is not the particle position that is independent of time, but its displacement over a time interval, which is independent of the interval. The record corresponding to a random walk is scale invariant (i.e., it is statistically the same at different resolutions). This means that, independently of the number b of time steps τ between observations, the increments in the particle position constitute an independent Gaussian random process (Fig. 3)
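The variance law of Eqs. (7) and (8) can be checked by simulation (a sketch; the values of D_e, τ, and the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
De, tau, b = 0.5, 1.0, 100   # diffusion coefficient, step time, steps per observation
n_walks = 20000

# Gaussian steps with single-step variance 2*De*tau, Eq. (7)
steps = rng.normal(0.0, np.sqrt(2 * De * tau), size=(n_walks, b))
displacement = steps.sum(axis=1)  # position change over b steps

print(displacement.var())  # close to 2*De*b*tau = 100, Eq. (8)
```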
FIGURE 2. The particle position as a function of time t.
FIGURE 3. The random step length of the particle as a function of time t.
with ⟨η⟩ = 0 and variance equal to

⟨η²⟩ = 2 D_e b τ    (8)

The probability distribution follows the scaling relation

p(b^(1/2) η, b τ) = b^(−1/2) p(η, τ)    (9)
The previous equation says that the Brownian process is invariant in distribution under a transformation that changes the time scale by b and the length scale by √b. Therefore, the Brownian record is self-affine. Mandelbrot introduced the fractional Brownian motion concept (Mandelbrot, 1982; Mandelbrot and Van Ness, 1968), generalizing the Wiener equation (1923) for the Brownian particle increments:

Y(t + τ) − Y(t) ~ ξ τ^H    (10)

where H = 1/2 for Brownian motion and ξ is a random number from a Gaussian distribution. Changing the exponent from H = 1/2 to any real number in the range 0 < H < 1, we can prove that the fractional Brownian process has an average value of increments equal to zero,

⟨Y(t + τ) − Y(t)⟩ = 0    (11)

and increment variance ⟨(Y(t + τ) − Y(t))²⟩ = V(τ), given by

V(τ) = 2 D_e τ^(2H)    (12)
The exponent H was named the Hurst exponent by Mandelbrot. It is worth mentioning that, unlike in ordinary Brownian motion, in fractional Brownian motion the past and future increments are correlated. This is seen by calculating the correlation coefficient between increments, e.g., ΔY₋ (between −t and 0) and ΔY₊ (between 0 and t):

C(t) = Cov(ΔY₋, ΔY₊)/Var(ΔY)
     = ⟨[Y(0) − Y(−t)][Y(t) − Y(0)]⟩ / (2D|t|^(2H))
     = ⟨−(Y(t) − Y(0))² + (Y(t) − Y(−t))² − (Y(0) − Y(−t))²⟩ / (2·2D|t|^(2H))
     = (−2Dt^(2H) + 2D·2^(2H) t^(2H) − 2Dt^(2H)) / (2·2Dt^(2H))
     = 2^(2H−1) − 1    (13)
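Equation (13) can be evaluated directly to see the sign of the correlation (a trivial check):

```python
def increment_correlation(H):
    """Eq. (13): correlation between past and future fBm increments."""
    return 2**(2*H - 1) - 1

print(increment_correlation(0.5))  # 0.0: ordinary Brownian motion, uncorrelated
print(increment_correlation(0.7))  # positive: persistence
print(increment_correlation(0.3))  # negative: antipersistence
```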
Equation (13) implies that fractional Brownian motion with H ≠ 1/2 shows a correlation coefficient C(t) ≠ 0 independent of t, which exhibits persistence (H > 1/2) or antipersistence (H < 1/2). The foregoing analysis of Brownian motion can be generalized to roughness records of surfaces that present fractal Brownian behavior. In this case, the difference between two points of the three-dimensional component z (|Δz|) is the analogue of the step length η of the Brownian motion, and the other two dimensions (x and y) play the role of time. Equation (12) then takes the form
V(s) = ⟨(Δz)²⟩ ~ s^(2H)    (14)
where s is the step along the x axis or the y axis corresponding to the increment in z. For a self-affine record of a fractal Brownian surface, the fractal dimension D is related to H as

D = 3 − H    (15)
As H varies between 1 and 0, D varies between 2 (smooth surface) and 3 (completely rough surface). The coefficient H can be obtained from the slope of the variogram (log(V) vs. log(s)). This slope is equal to 2H, as can be deduced from Eq. (14).
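A minimal sketch of this variogram estimate on a synthetic Brownian profile (H = 1/2), where the fitted slope should be close to 2H = 1; the lag set and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.cumsum(rng.normal(size=200_000))   # Brownian height profile, H = 1/2

lags = np.array([1, 2, 4, 8, 16, 32, 64])
V = np.array([np.mean((z[s:] - z[:-s])**2) for s in lags])  # variogram, Eq. (14)

slope, _ = np.polyfit(np.log(lags), np.log(V), 1)
H = slope / 2
print(round(H, 2))  # close to 0.5; for a surface, Eq. (15) would then give D = 3 - H
```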
3. Some Examples of the Use of the Variogram The first step for measuring surfaces is to determine their elevation profile (height data). This can be accomplished by several methods; among these we mention contact profilometry, atomic force microscopy, interference microscopy, radiation scattering from rough surfaces (X-ray scattering, neutron scattering, radar, etc.), sonar, and so forth. Many authors have used the variogram to determine parameters that characterize the surface roughness. Burrough (1981) analyzed a great amount of environmental data (different soil characteristics, iron minerals in rocks) and obtained estimations of fractal dimensions. Mark and Aronson (1984) studied topographical surfaces and showed that in most of them there were scale ranges with different fractal dimensions, separated by distinct scale breaks. These breaks represent characteristic horizontal scales, at which surface behavior changes substantially. Sahimi et al. (1995) used the variogram to calculate the fractal dimension of porous media at field scale and thus to show the existence of antipersistence or negative correlations. One surface may behave differently according to the scale under study (Bunde and Havlin, 1996; Kaye, 1989, 1993; Mark and Aronson, 1984; Sahimi
et al., 1995; Sinha et al., 1988; Zhao et al., 1998). Another way of characterizing rough surface behavior is by defining other parameters apart from the fractal dimension. Many authors have worked with three-dimensional roughness, using different experimental techniques but a similar parametric approach. Sinha et al. (1988), with X-ray and neutron scattering from rough surfaces, used three parameters to characterize rough surfaces: the roughness coefficient H, which is related to the fractal dimension D; the root-mean-square (r.m.s.) roughness amplitude σ; and the cutoff length ξ. The suggested expression for the variance in the case of isotropic self-affine surfaces was

V(s_x, s_y) = V(s) = 2σ²[1 − e^(−(s/ξ)^(2H))]    (16)
where s = (s_x² + s_y²)^(1/2). V(s) tends toward 2σ² as s tends toward infinity, and Eq. (16) tends toward Eq. (14) for s << ξ.

and the two probability distributions are related as follows:

P(g|w, q) = ∫ Σ_l P(f, l, g|w, q) df    (7)

The EM algorithm is an iterative procedure for solving Eq. (6), making use of the associated distribution P(f, l, g|w, q). Each iteration consists of the following two steps:

E step. Compute the conditional expectation of log P(f, l, g|w, q), with respect to (f, l) and conditioned on the observed data and the current estimate (w^(k), q^(k)) of the parameters: Q(w, q|w^(k), q