Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2301
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Achille Braquelaire Jacques-Olivier Lachaud Anne Vialard (Eds.)
Discrete Geometry for Computer Imagery 10th International Conference, DGCI 2002 Bordeaux, France, April 3-5, 2002 Proceedings
Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors
Achille Braquelaire
Jacques-Olivier Lachaud
Anne Vialard
LaBRI, Université Bordeaux 1
351 cours de la Libération, 33405 Talence cedex, France
E-mail: {achille.braquelaire,lachaud,vialard}@labri.fr
Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Discrete geometry for computer imagery : 10th international conference ; proceedings / DGCI 2002, Bordeaux, France, April 3 - 5, 2002. Achille Braquelaire ... (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in computer science ; Vol. 2301) ISBN 3-540-43380-5
CR Subject Classification (1998): I.4, I.3.5, I.5, G.2, I.6.8, F.2.1 ISSN 0302-9743 ISBN 3-540-43380-5 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany Typesetting: Camera-ready by author, data conversion by PTP-Berlin, Stefan Sossna Printed on acid-free paper SPIN: 10846474 06/3142 543210
Preface
DGCI 2002, the tenth in a series of international conferences on Discrete Geometry for Computer Imagery, was held in Bordeaux, France, April 3–5, 2002. The aim of the conference was to present recent advances in both theoretical aspects and applications of discrete geometry. It was organized by the Laboratory of Computer Science of Bordeaux (Bordeaux 1 University) and sponsored by the International Association for Pattern Recognition (IAPR) and the French National Center of Scientific Research (CNRS). This DGCI conference confirmed the increasing interest of the computer imagery community in discrete geometry, with 67 papers submitted from 23 countries around the world. After reviewing, 35 contributions were accepted, of which 22 were selected for oral presentation and 13 for poster presentation. These contributions focus mainly on the following topics: Models for Discrete Geometry, Topology, Combinatorial Image Analysis, Morphological Analysis, Segmentation, Shape Representation and Recovery, and Applications of Discrete Geometry in Image Processing and Computer Graphics. The program was completed by invited lectures from three internationally known speakers: Alfred M. Bruckstein (Haifa Computer Science Dept., Israel), Gabor Herman (City University of New York, USA), and Walter Kropatsch (Technical University of Vienna, Austria). Many people contributed to the organization of the conference. In particular, we would like to thank all the authors who submitted papers and the invited speakers for their contributions. We would also like to thank the program committee and the board of reviewers for their careful reviews, and the members of the Steering Committee and of the Local Committee for their help.
We are grateful to the following institutions for their financial support: the Bordeaux 1 University, the Région Aquitaine, the CNRS, the LaBRI (Laboratory of Computer Science of Bordeaux), the ENSEIRB (National School of Engineers in Electronics, Computer Science, and Telecommunication of Bordeaux), and the City of Bordeaux. Finally, we thank all the participants, and we hope that they found the scientific program interesting and enjoyed their stay in the capital of Gasconha.
January 2002
Achille Braquelaire Jacques-Olivier Lachaud Anne Vialard
Organization
Conference Co-chairs
A. Braquelaire (LaBRI, Bordeaux, France)
J.P. Domenger (LaBRI, Bordeaux, France)
J.O. Lachaud (LaBRI, Bordeaux, France)
Steering Committee
E. Ahronovitz (France)
G. Bertrand (France)
G. Borgefors (Sweden)
J.M. Chassery (France)
A. Montanvert (France)
M. Nivat (France)
Program Committee
E. Andres (IRCOM-SIC, Poitiers, France)
A. Del Lungo (University of Siena, Italy)
U. Eckhardt (Universität Hamburg, Germany)
C. Fiorio (LIRMM, Montpellier, France)
R.W. Hall (Dept. of EE, University of Pittsburgh, USA)
T.Y. Kong (CUNY, New York, USA)
W. Kropatsch (TU Vienna, Austria)
A. Kuba (University of Szeged, Hungary)
J.O. Lachaud (LaBRI, Université Bordeaux 1, France)
R. Malgouyres (LLAIC, Université Clermont 1, France)
S. Miguet (ERIC, Université Lyon 2, France)
I. Ragnemalm (ISY, Dept. of EE, Linköping University, Sweden)
P. Soille (Joint Research Center, Ispra, Italy)
G. Szekely (ETH Zürich, Switzerland)
Local Organizing Committee
S. Alayrangues, A. Ali-Mhammad, G. de Dietrich, P. Desbarats, B. Kerautret, B. Taton, A. Vialard (all LaBRI, Bordeaux, France)
Referees
E. Ahronovitz, S. Alayrangues, E. Andres, E. Balogh, G. Bertrand, G. Borgefors, A. Braquelaire, R. Breton, S. Brunetti, J. Burguet, L. Buzer, J.M. Chassery, D. Coeurjolly, P. Costantini, A. Daurat, A. Del Lungo, M. Dudasne, U. Eckhardt, C. Fiorio, S. Fourey, A. Frosini, S. Gueorguieva, R.W. Hall, E. Katona, T.Y. Kong, W. Kropatsch, A. Kuba, J.O. Lachaud, R. Malgouyres, E. Mate, S. Miguet, A. Montanvert, A. Nagy, I. Nyström, K. Palagyi, A. Pasini, I. Ragnemalm, S. Rinaldi, G. Sanniti di Baja, P. Soille, G. Subrenat, G. Szekely, L. Tougne, A. Vialard

Sponsoring Institutions
CNRS
Région Aquitaine
Université Bordeaux 1
LaBRI
ENSEIRB
Communauté Urbaine de Bordeaux
Ville de Bordeaux
Table of Contents
Topology

Invited Paper: Abstraction Pyramids on Discrete Representations . . . . . . . . 1
W.G. Kropatsch
XPMaps and Topological Segmentation – A Unified Approach to Finite Topologies in the Plane . . . . . . . . 22
U. Köthe

Curves in Zn . . . . . . . . 34
G. Thürmer

Separation Theorems for Simplicity 26-Surfaces . . . . . . . . 45
J.C. Ciria, E. Domínguez, A.R. Francés

Topological Quadrangulations of Closed Triangulated Surfaces Using the Reeb Graph . . . . . . . . 57
F. Hétroy, D. Attali

Non-manifold Decomposition in Arbitrary Dimensions . . . . . . . . 69
L. De Floriani, M.M. Mesmoudi, F. Morando, E. Puppo
Combinatorial Image Analysis

4D Minimal Non-simple Sets . . . . . . . . 81
C.J. Gau, T. Yung Kong

Receptive Fields within the Combinatorial Pyramid Framework . . . . . . . . 92
L. Brun, W.G. Kropatsch

A New 3D 6-Subiteration Thinning Algorithm Based on P-Simple Points . . . . . . . . 102
C. Lohou, G. Bertrand

Monotonic Tree . . . . . . . . 114
Y. Song, A. Zhang

Displaying Image Neighborhood Hypergraphs Line-Graphs . . . . . . . . 124
S. Chastel, P. Colantoni, A. Bretto

The Reconstruction of a Bicolored Domino Tiling from Two Projections . . . . . . . . 136
A. Frosini, G. Simi
Morphological Analysis

Invited Paper: Digital Geometry for Image-Based Metrology . . . . . . . . 145
A.M. Bruckstein

Topological Reconstruction of Occluded Objects in Video Sequences . . . . . . . . 155
V. Agnus, C. Ronse

On the Strong Property of Connected Open-Close and Close-Open Filters . . . . . . . . 165
J. Crespo, V. Maojo, J.A. Sanandrés, H. Billhardt, A. Muñoz

Advances in the Analysis of Topographic Features on Discrete Images . . . . . . . . 175
P. Soille

Morphological Operations on Recursive Neighbourhoods . . . . . . . . 187
P.P. Jonker
Shape Representation

Computing the Diameter of a Point Set . . . . . . . . 197
G. Malandain, J.-D. Boissonnat

Shape Representation Using Trihedral Mesh Projections . . . . . . . . 209
L. Ros, K. Sugihara, F. Thomas

Topological Map Based Algorithms for 3D Image Segmentation . . . . . . . . 220
G. Damiand, P. Resch

On Characterization of Discrete Triangles by Discrete Moments . . . . . . . . 232
J. Žunić

Weighted Distance Transforms for Images Using Elongated Voxel Grids . . . . . . . . 244
I.-M. Sintorn, G. Borgefors

Robust Normalization of Shapes . . . . . . . . 255
J. Cortadellas, J. Amat, M. Frigola

Surface Area Estimation of Digitized 3D Objects Using Local Computations . . . . . . . . 267
J. Lindblad, I. Nyström
Models for Discrete Geometry

Invited Paper: An Abstract Theoretical Foundation of the Geometry of Digital Spaces . . . . . . . . 279
G.T. Herman

Concurrency of Line Segments in Uncertain Geometry . . . . . . . . 289
P. Veelaert
Discretization in 2D and 3D Orders . . . . . . . . 301
M. Couprie, G. Bertrand, Y. Kenmochi

Defining Discrete Objects for Polygonalization: The Standard Model . . . . . . . . 313
E. Andres

Visibility in Discrete Geometry: An Application to Discrete Geodesic Paths . . . . . . . . 326
D. Coeurjolly

Multi-scale Discrete Surfaces . . . . . . . . 338
J. Burguet, R. Malgouyres

Invertible Minkowski Sum of Polygons . . . . . . . . 350
K. Sugihara
Segmentation and Shape Recognition

Thinning Grayscale Well-Composed Images: A New Approach for Topological Coherent Image Segmentation . . . . . . . . 360
J. Marchadier, D. Arquès, S. Michelin

An Incremental Linear Time Algorithm for Digital Line and Plane Recognition Using a Linear Incremental Feasibility Problem . . . . . . . . 372
L. Buzer

Reconstruction of Animated Models from Images Using Constrained Deformable Surfaces . . . . . . . . 382
J. Starck, A. Hilton, J. Illingworth

Reconstruction of Binary Matrices from Absorbed Projections . . . . . . . . 392
E. Balogh, A. Kuba, A. Del Lungo, M. Nivat

A Simplified Recognition Algorithm of Digital Planes Pieces . . . . . . . . 404
M.M. Mesmoudi
Applications

Ridgelet Transform Based on Reveillès’ Discrete Lines . . . . . . . . 417
P. Carré, E. Andres

A Discrete Radiosity Method . . . . . . . . 428
R. Malgouyres
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Abstraction Pyramids on Discrete Representations Walter G. Kropatsch Institute for Computer-Aided Automation Pattern Recognition and Image Processing Group Vienna Univ. of Technology – Austria
Abstract. We review multilevel hierarchies under two special aspects: their potential for abstraction and for storing discrete representations. Motivated by claims to ‘bridge the representational gap between image and model features’ and by the growing importance of topological properties we discuss several extensions to dual graph pyramids and to topological maps: structural simplification should preserve important topological properties and content abstraction could be guided by an external knowledge base.
1 Introduction
At a panel of the last International Workshop on Visual Form (IWVF4), Sven Dickinson asked the following question, referring to several research issues of the past and of the future: “How do we bridge the representational gap between image features and coarse model features?” He identifies the one-to-one correspondence between

– salient image features (pixels, edges, corners, ...) and
– salient model features (generalized cylinders, polyhedra, invariant models, ...)

as a limiting assumption that makes prototypical or generic object recognition impossible. He suggested bridging, not eliminating, the representational gap, and focusing efforts on:

– region segmentation
– perceptual grouping
– image abstraction

Let us take these goals as a guideline to reconsider research efforts in the area of multiresolution discrete representations under the special viewpoint of abstraction and of representations that are discrete in nature. Regions as aggregations of primitive pixels play an extremely important role in nearly every image analysis task.

This work was supported by the Austrian Science Foundation under grants P14445-MAT and P14662-INF.

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 1–21, 2002. © Springer-Verlag Berlin Heidelberg 2002

Their internal properties (color, texture, shape, ...) help to identify them, and their external relations (adjacency, inclusion, similarity of properties) are used to build groups of regions having a particular meaning in a more abstract context. The union of the regions forming the group is again a region with both internal and external properties and relations. A representational concept that supports processes working at multiple levels of abstraction, with the possibility to access semantic knowledge from the external world, is extremely complex. We would like to highlight a few approaches that may have the potential to be extended into a future complex vision system bridging the representational gap identified by Dickinson. At the last DGCI, Udupa [35] considered surfaces as basic descriptive elements for representing boundaries between volumetric regions in 3D. He identifies three ‘important’ properties of such surfaces:

1. connected (topology)
2. oriented (combinatorial maps)
3. closed (Jordan boundary)

We would like to address some of these issues in the context of gradually generalizing our discrete image data across levels where geometry dominates up to levels of the hierarchy where topological properties become important. Based on experience with multiresolution pyramids, we present several conceptual extensions, with the aim of stimulating further research and collaboration, which is necessary to overcome the intrinsic complexity of the proposed system architecture by joint efforts, while leaving enough room for original contributions. The paper is organized as follows: after considering the formal definition of abstraction (section 2) and its consequences for representations, we review discrete representations, including a ‘natural’ example of vision based on an irregular sampling (section 3). Image pyramids are the main focus of section 4, where we present the basic ideas and properties of dual graph pyramids and of multilevel topological maps.
Abstraction in such multilevel structures can be done either by modifying the contents of a representational cell (section 5) or by ‘simplifying’ the structural arrangement of the cells while major topological properties are preserved (section 6). In this last section we present two simple 3D configurations which turned out to be hard to distinguish by current representations.
2 Visual Abstraction
By definition, abstraction extracts essential features and properties while neglecting unnecessary details. Two types of unnecessary details can be distinguished:

– redundancies
– data of minor importance

Details may be unnecessary in different contexts and under different objectives, which is reflected in different types of abstraction. In general, three different types of abstraction are distinguished:
Isolating abstraction: important aspects of one or more objects are extracted from their original context.

Generalizing abstraction: typical properties of a collection of objects are emphasized and summarized.

Idealizing abstraction: data are classified into a (finite) set of ideal models, with parameters approximating the data and with (symbolic) names/notions determining their semantic meaning.

These three types of abstraction have strong associations with well-known tasks in computer vision: recognition and object detection try to isolate the object from the background; perceptual grouping needs a high degree of generalization; and classification assigns data to ‘ideal’ classes, disregarding noise and measurement inaccuracies. In all three cases, abstraction drops certain data items which are considered less relevant. Hence the importance of the data needs to be computed to decide which items to drop during abstraction. The importance or relevance of an entity of a (discrete) description must be evaluated with respect to the purpose or goal of processing. The system may also change its focus according to changing goals: after knowing certain facts about the actual environment, aspects that were not relevant at first glance may gain importance. Representational schemes must be flexible enough to accommodate such attentional shifts in the objectives.
3 Discrete Representations
A discrete representation is associated with a countable variable which can be mapped into Zn [12]. A digital image is the result of sampling a continuous image at discrete locations, the sampling points. Usually this is a finite subset of ‘pixels’ of the discrete grid Z2. This discretization process maps any object of the continuous image into a discrete version if it is sufficiently large to be captured by the sensors at the sampling points. Resolution relates the unit distance of the sampling grid to a distance in reality. There exist different concepts to model the conversion between continuous and discrete representations. Recently, Brimkov et al. [5] introduced new schemes for object discretization in higher dimensions: k-neighbors in Zn, minimal cover, and super-cover. Besides their relevance for visualization purposes, such concepts also allow the continuous interpretation of discrete measurements. The properties of the continuous object, e.g. color, texture, and shape, as well as its relations to other (nearby) objects, are mapped into the discrete space, too. The most primitive discrete representation assigns to each sampling point a measurement, be it a gray or color value from a finite set or a binary value. Hence a digital image is a finite set of integer triples (ix, iy, ig) ∈ Z3. In order to express the connectivity or other geometric or topological properties, this set must be enhanced by a neighborhood relation. In the regular square grid arrangement of sampling points it is implicitly encoded as 4- or 8-neighborhood, with the well-known problems in conjunction with Jordan’s curve theorem. Note
that ALL the information about the image’s content is stored at the sampling points! The neighborhood of sampling points can also be represented explicitly: in this case the sampling grid is represented by a graph consisting of vertices corresponding to the sampling points and of edges connecting neighboring vertices. Although this data structure consumes more memory space, it has several advantages, among which we find the following:

– The sampling points need not be arranged in a regular grid.
– The edges can receive additional attributes, too.
– The edges may be determined either automatically or depending on the data.

Since a sub-sampling of a discrete representation is also a discrete representation, the arrangement of a human retina can be considered discrete, too. Fig. 1(a) shows a small portion of the sampling points of a monkey’s retina, which is similar to that of a human eye. In Fig. 1(b–f) we contrast a simple sampling concept on the natural but irregular grid with an artificial but regular grid of similar resolution. Sampling points within a certain distance of the line have been filled, and the radius has been varied. Let this distance be determined by a circle intersecting the continuous line. Optically similar effects can be observed: with a small radius the sequence of bold points has many gaps (Fig. 1b,c,e) and the line is not connected. As the radius increases, the density of black points also increases and the gaps are closed. With a large radius the line becomes thick, since sampling points not directly along the line also reach it (Fig. 1d,f). These effects are typical for discrete lines, and they are overcome in many different ways, but most approaches consider the regular grid. The problem arising with irregular grids is that there is no implicit neighbor definition! Usually Voronoi neighbors determine the neighborhood graph.
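The sampling experiment described above (cf. Fig. 1) can be sketched as a toy computation on a regular grid. This is an illustrative sketch rather than code from the paper: the line is given in implicit form ax + by + c = 0, and the function name is ours.

```python
import math

def sample_line(a, b, c, points, radius):
    """Return the sampling points whose distance to the line
    ax + by + c = 0 is at most `radius` (the 'filled' points)."""
    norm = math.hypot(a, b)
    return {p for p in points
            if abs(a * p[0] + b * p[1] + c) / norm <= radius}

# Regular 10 x 10 grid of sampling points.
grid = {(x, y) for x in range(10) for y in range(10)}

# Line y = x: a small radius leaves gaps (a disconnected discrete
# line), a large radius makes the discrete line thick.
thin = sample_line(1, -1, 0, grid, 0.3)
thick = sample_line(1, -1, 0, grid, 1.0)
```

With radius 0.3 only the points exactly on the diagonal are filled; radius 1.0 also fills the two adjacent diagonals, reproducing the gap-versus-thickness trade-off of Fig. 1.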
It would be interesting to know whether concepts for discrete straight lines, discrete planes in 3D and hyper-planes in nD, discrete circles, spheres, and hyper-spheres can also be recovered from irregular grids, and whether the involved computational processes are feasible. The retina example demonstrated that the neighborhood in irregular grids needs to be represented explicitly. This creates a new representational entity: the binary relation of an edge in the neighborhood graph. Together with the fact that a 2D image is embedded in the continuous image plane, the line segments connecting the end points of edges partition the image plane into connected faces, which are part of the dual graph. In n dimensions, n + 1 basic entities are sufficient to describe a discrete configuration embedded in the space spanned by the n coordinate axes, e.g. the cells of abstract cellular complexes [24]. Let us briefly recall some of the most frequent names used by different authors:

dimension | geometry     | spel [35]      | graph
0         | point        | pixel, pointel | vertex
1         | line         | linel          | edge
2         | face, region | surfel         | face
3         | volume       | voxel          |
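As a small illustration of making the neighborhood of irregular sampling points explicit, the following sketch builds a neighborhood graph with vertices and edges. The paper mentions Voronoi neighbors; for brevity we substitute a simple distance threshold (our assumption), which yields the same kind of vertex/edge structure.

```python
import math

def neighborhood_graph(points, max_dist):
    """Explicit neighborhood graph over irregularly placed sampling
    points: vertices are the points; an edge joins two points whose
    distance is at most `max_dist`. (A stand-in for the Voronoi
    neighbors mentioned in the text.)"""
    pts = list(points)
    edges = {frozenset((p, q))
             for i, p in enumerate(pts) for q in pts[i + 1:]
             if math.dist(p, q) <= max_dist}
    return set(pts), edges

# Three collinear points: only the two closest become neighbors.
V, E = neighborhood_graph([(0, 0), (1, 0), (3, 0)], 1.5)
```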
Fig. 1. Sampling a line with irregular and regular grids
The reason for reviewing these basic representational entities is to point out that in nearly all discretization concepts only one of them (be it the point or its ‘dual’ correspondent, e.g. the face in 2D or the voxel in 3D) carries the measured information. Only a few representational schemes allow the other entities to store information appropriate for the particular dimension of the manifold.
4 Pyramids
In this section we summarize the concepts developed for building and using multiresolution pyramids [33,21,28] and put the existing approaches into a general framework. The focus of the presentation is the representational framework, its components, and the processes that transfer data within the framework.

[Fig. 2. Multiresolution pyramid: (a) Pyramid concept, (b) Discrete levels]

A pyramid (Fig. 2) describes the contents of an image at multiple levels of resolution. The base level is a high resolution input image. Successive levels reduce the size of the data by a constant reduction factor λ > 1.0, while constant size local reduction windows relate one cell at the reduced level with a set of cells in the level directly below. Thus local independent (and parallel) processes propagate information up and down in the pyramid. The contents of a lower resolution cell is computed by means of a reduction function, the input of which are the descriptions of the cells in the reduction window. Sometimes the description of the lower resolution needs to be extrapolated to the higher resolution. This function is called the refinement or expansion function. It is used in Laplacian pyramids [11] and wavelets [30] to identify redundant information in the higher resolution and to reconstruct the original data. The number of levels n is limited by the reduction factor λ: n ≤ log(image size)/log(λ). The main computational advantage of image pyramids is due to this logarithmic complexity. We intend to extend the expressive power of these efficient structures by several generalizations. The reduction window and the reduction factor relate
two successive levels of a pyramid. In order to interpret a derived description at a higher level, this description should be related to the original input data in the base of the pyramid. This can be done by means of the receptive field (RF) of a given pyramidal cell ci: RF(ci) collects all cells (pixels) in the base level of which ci is the ancestor. Since our goal is to bring up the ‘relevant data’ for solving a particular task, let us give the term ‘resolution’ a more general meaning beyond the pure geometric definition. This is the basis of several pyramidal approaches, two of which are chosen as representatives: irregular graph pyramids and topological maps.
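The level bound n ≤ log(image size)/log(λ) and a plain mean-reduction pyramid can be sketched as follows. This is an illustrative sketch only: the non-overlapping 2 × 2 window and the function names are our assumptions, simpler than the weighted windows used in practice.

```python
import math

def max_levels(base_size, reduction_factor):
    """Upper bound on the number of levels: n <= log(size)/log(lambda)."""
    return int(math.log(base_size) / math.log(reduction_factor))

def build_pyramid(image, window=2):
    """Mean-reduction pyramid for a square gray-level image whose side
    is a power of `window`: each reduced cell averages the cells of
    its window x window reduction window in the level below."""
    levels = [image]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // window
        levels.append([[sum(prev[window * i + a][window * j + b]
                            for a in range(window) for b in range(window))
                        / window ** 2
                        for j in range(n)]
                       for i in range(n)])
    return levels

levels = build_pyramid([[1.0] * 4 for _ in range(4)])  # 4x4 -> 2x2 -> 1x1
```

The loop stops at the 1 × 1 apex; for a 512-pixel side and λ = 2 the bound gives at most 9 reduction steps, the logarithmic complexity mentioned in the text.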
4.1 Irregular Graph Pyramids
A graph pyramid is a pyramid where each level is a graph G(V, E) consisting of vertices V and of edges E relating two vertices. In the base level, pixels are the vertices, and two vertices are related by an edge if the two corresponding pixels are neighbors. This graph is also called the neighborhood graph. The content of the graph is stored in attributes attached to both vertices and edges. Initially only the attributes of the vertices receive the gray values of the pixels. In order to correctly represent the embedding of the graph in the image plane [19], we additionally store the dual graph Ḡ(V̄, Ē) at each level. Let us denote the original graph as the primal graph. In general, a graph pyramid can be generated bottom-up as follows:

while further abstraction is possible do
1. determine contraction kernels;
2. perform dual graph contraction and simplification of the dual graph;
3. apply reduction functions to compute the content of the new reduced level.

The complete formalism of dual graph contraction is described in [28]. Let us explain it here by means of a small window of our line example (Fig. 1).

[Fig. 3. Neighborhood graph G0 and contraction kernel N0,1]
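The generation loop above can be illustrated by a toy contraction step. This is our simplified sketch, not the full dual-graph formalism of [28]: kernels are given as (survivor, child) pairs, and self-loops and parallel edges are dropped unconditionally, whereas the paper removes them only when the corresponding dual face has degree at most two.

```python
def contract(vertices, edges, kernels):
    """One contraction step on the primal graph: merge each child
    into its surviving kernel root, then discard self-loops and
    duplicate multi-edges (simplified; see the lead-in caveat)."""
    parent = {v: v for v in vertices}
    for survivor, child in kernels:   # kernels are depth-1 trees
        parent[child] = survivor

    def find(v):                      # follow parent links to the root
        while parent[v] != v:
            v = parent[v]
        return v

    new_vertices = {find(v) for v in vertices}
    new_edges = {frozenset((find(u), find(v)))
                 for u, v in edges if find(u) != find(v)}
    return new_vertices, new_edges

# Contracting two kernels of a 4-cycle leaves two vertices, one edge.
top_v, top_e = contract({1, 2, 3, 4},
                        {(1, 2), (2, 3), (3, 4), (4, 1)},
                        [(1, 2), (3, 4)])
```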
The first step determines what information in the current top level is important and what can be dropped. A contraction kernel is a (small) sub-tree of the top level, the root of which is chosen to survive. Fig. 3 shows the window and the selected contraction kernels, each surrounded by an oval. The selection criteria in this case contract only edges inside connected components, except for isolated black vertices, which are allowed to merge with their background. All the edges of the contraction trees are dually contracted during step 2. Dual contraction of an edge e (formally denoted by G/{e}) consists of contracting e and removing the corresponding dual edge ē from the dual graph (formally denoted by Ḡ \ {ē}). This preserves duality, and the dual graph need not be constructed from the contracted primal graph G at the next level. Since the contraction of an edge may yield multi-edges and self-loops, there is a second phase of step 2 which removes all redundant multi-edges and self-loops. Note that not all such edges can be removed without destroying the topology of the graph: if the cycle formed by the multi-edge or the self-loop surrounds another part of the data, its removal would corrupt the connectivity! Fortunately this can be decided locally in the dual graph, since faces of degree two (having the double edge as boundary) and faces of degree one (boundary = self-loop) cannot contain any further elements in their interior. Since removal and contraction are dual operations, the removal of a self-loop or of one of the double edges can be done by contracting the corresponding dual edges in the dual graph.

[Fig. 4. Dually contracted graph G1 and contraction kernel N1,2]

The dual contraction of our example remains a simple graph G1 without self-loops and multi-edges (Fig. 4). Step 2 generates a reduced pair of dual graphs. Their contents is derived in step 3 from the level below using the reduction function. In our example reduction is very simple: the surviving vertex inherits the color of its son. In the only case where the contraction kernel contains two different colors, the isolated vertex is always chosen as the surviving vertex. The result of another dual contraction is shown in Fig. 5. The selection rules and the reduction function are the same as in the first iteration. The result shows that the gaps in the original sampling have been bridged and the three surviving black vertices are connected after two iterations. This fact
[Fig. 5. Graph G2 after two steps of dual graph contraction]
could be used in a top-down verification step which checks the reliability of closing the gap in the more general context. There are lots of useful properties of the resulting graph pyramids. If the plane graph is transformed into a combinatorial map, the transcribed operations form the combinatorial pyramid [6,7]. This framework allowed us to prove several of the above mentioned properties, and it links dual graph pyramids with topological maps, which extend the scope to three dimensions. The following table summarizes dual graph contraction in terms of the control parameters used for abstraction and the conditions to preserve topology:

Level | representation        | contract / remove                     | conditions
0     | (G0, Ḡ0)              | ↓ contraction kernel N0,1             | forest, depth 1
      | (G0/N0,1, Ḡ0 \ N̄0,1)  | ↓ redundant multi-edges, self-loops   | deg v̄ ≤ 2
1     | (G1, Ḡ1)              | ↓ contraction kernel N1,2             | forest, depth 1
⋮     | ⋮                     | ⋮                                     | ⋮

4.2 The Topological Map
Fiorio, Bertrand, and Damiand developed a method that derives the topologically correct region adjacencies both in 2D and in 3D [1,13]. Their base representations are combinatorial and generalized maps. The derivation proceeds in several levels, similar to a pyramid, with the difference that the lower levels are no longer used after the higher level has been created. In 2D the following levels are identified:

Level | representation           | merge                 | conditions
0     | complete inter-pixel map | ↓ adjacent faces      | same label
1     | line map                 | ↓ lines (l1, v, l2)   | deg v = 2 and colinear(l1, l2)
2     | border map               | ↓ lines (l1, v, l2)   | deg v = 2
3     | topological map          |                       |
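The level 1 → level 2 merge above, which removes a degree-2 vertex v between two collinear lines (l1, v, l2), can be sketched as follows. This is a simplified stand-in (names ours) that assumes each vertex is shared by at most two segments:

```python
def merge_collinear(segments):
    """Repeatedly fuse segment pairs (l1, v, l2) that share an end
    point v and are collinear, as in the level 1 -> level 2 step."""
    def collinear(p, q, r):
        # Zero cross product <=> p, q, r lie on one line.
        return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

    segs = [list(s) for s in segments]
    changed = True
    while changed:
        changed = False
        for i, a in enumerate(segs):
            for j, b in enumerate(segs):
                if i != j and a[1] == b[0] and collinear(a[0], a[1], b[1]):
                    segs[i] = [a[0], b[1]]   # fuse a and b at vertex a[1]
                    del segs[j]
                    changed = True
                    break
            if changed:
                break
    return segs

# Two collinear segments merge; the perpendicular one survives.
merged = merge_collinear([((0, 0), (1, 0)), ((1, 0), (2, 0)), ((2, 0), (2, 1))])
```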
In addition to the topological map, an inclusion tree is generated and maintained to cope with holes in regions. The resulting representation is computed by a sequential scan-line procedure comparing 2 × 2 windows with prestored ‘precodes’, each requiring a special treatment. The current approach has no limit on the size of the reduction window, in the sense that any number of elements (‘darts’) of level i may be merged into a single element at level i + 1. However, a large set of fused elements could be decomposed into a hierarchy of locally independent fusions to allow a parallel implementation. The extension to three dimensions is straightforward and contains six different levels [13]:

Level | representation           | merge                 | conditions
0     | complete inter-voxel map | ↓ adjacent volumes    | same label
1     | line map                 | ↓ faces (f1, l, f2)   | deg l ≤ 2, coplanar(f1, f2)
2     | level 2 map              | ↓ lines (l1, v, l2)   | deg l = 2, colinear(l1, l2)
3     | border map               | ↓ faces (f1, l, f2)   | deg v ≤ 2
4     | level 4 map              | ↓ lines (l1, v, l2)   | deg v = 2, l1 = l2
5     | topological map          |                       |

Besides the inclusion tree that captures the 3D holes, fictive edges are maintained to prevent the disconnection of faces or lines. Additional conditions are implicitly expressed to prevent disconnections and the suppression of basic elements. The process proceeds similarly to the 2D case: 2 × 2 × 2 windows are compared with a set of ‘pre-codes’, and the corresponding procedure is executed for each code. The levels and several categorizations permit a drastic reduction of the number of cases compared with an exhaustive enumeration.
5 Abstraction in Pyramids
In order to discuss the role of abstraction in general multiresolution hierarchies, let us consider the structure of the representation and the content stored in the representational units separately. In our generalization we allow the resolution cell to take other simply connected shapes and describe the content by a more complex ‘language’. The first generalization is a direct continuation of the observations of Bister et al. [4] to overcome the limited representational capabilities of rigid regular pyramids. It necessitates considering in more detail the structure of a resolution level, which was implicitly coded as a matrix in regular sampling grids. A similar conclusion was expressed by De Floriani et al. [14], where a ‘multi-complex’ was presented as a unifying framework for many multiresolution regular cell complexes, and extensions to cope with non-regular shapes are envisioned. Since irregular structures reduce the importance of explicitly representing geometry, topological aspects become relevant. These aspects will be discussed in more detail in Section 6.

Abstraction Pyramids on Discrete Representations

11

The second generalization started with the work of Hartley [20], who allowed a resolution cell to contain more than one (gray) value and interpreted these values as the parameters of a globally defined model.

5.1 Content Models and Reduction Functions
In the topological map approach each cell contains a label identifying the membership of the cell in the class of all those cells having the same label. In this case the contents of the cells merged during the reduction process can be propagated by simple inheritance: the fused cell ‘inherits’ its label from its children, and it does not matter from which, since all have the same label. In classical gray level pyramids the content of a cell is a gray value, which is summarized by the mean or a weighted mean of the values in the reduction window. Such reduction functions have been efficiently used in Gaussian pyramids. Laplacian pyramids, ratio pyramids and wavelet pyramids identified the loss of information that occurs in the reduced level and stored the missing information in the hierarchical structure, where it can be retrieved when the original base level is reconstructed.

All these approaches use one single globally defined model, which must be flexible enough to adapt its parameters to approximate the data. In our generalization we would like to go one step further and allow different models to be used in different resolution cells, as there are usually different objects at different locations of an image. The models could be identified by a name or a symbol and may be interrelated by semantic constraints.

Simple experiments have been done with images of line drawings. This research used the experience gained with a regular 2 × 2/2 curve pyramid [25] and the chain pyramid [31] in the more flexible framework of graph pyramids. The model describes symbolically the way a curve intersects the discrete segments of the boundary of a cell, and the reduction function consists of the transitive closure of the symbols collected in the reduction window. The concept worked well in areas where the density of curves was low; however, due to the rigidity of the regular pyramid, ambiguities arose when several curves appeared within the same receptive field.
This limitation could be effectively overcome with irregular pyramids, in which we can limit the receptive field of a cell to a single curve. Fig. 6 gives an impression of the complexity of the data that have been processed in the minimum line property preserving (MLPP) pyramid in order to find and identify components in a technical drawing. More details can be found in [8]. The content abstraction in this representation has the following features:
– models are identified by (discrete) names (empty cell, line end, line crosses edge, junction); no parameters were used;
– adjacent models have to be consistent (‘good continuation’);
– contraction kernels were selected such that only one consistent curve is covered in one receptive field;
Fig. 6. Technical drawing used to build an MLPP-pyramid
– a few symbolic rules of the form

    new model → local generic sub-graph with model assignments    (1)
governed this selection process;
– the reduced content appears on the left-hand side of rule (1).

One may notice a certain similarity between the selection rule (1) and the use of pre-codes in the efficient computation of topological maps [2,1,13]. In both cases the knowledge about the models, and about the configurations in which they are allowed to occur, needs to be stored in a knowledge base. In order to determine the best possible abstractions, the local configurations at a given level of the pyramid must be compared with the reduction possibilities given in the knowledge base. This would typically involve matching the local configuration with the right-hand sides of rules stored in the knowledge base. Such a match may not always be perfect; one may allow a number of outliers. The match results in a goodness of match, which can be determined for all local configurations. The selection can then choose the locally best candidates as contraction kernels and reduce the contents according to the generic models that matched the local configuration. The goodness of match may also depend on a global objective function, allowing the overall purpose, task, or intention to influence the selection process.

5.2 The Knowledge Base
The knowledge to be used in the pyramid can be organized in many different ways. It is needed in selecting surviving cells and as the model for the reduction
function. Since the aim is to achieve abstraction, it should also provide, besides a goodness of fit, a measure of relevance or importance, which may depend on varying goals of the system. These variations in the goals further need to be communicated to the system; hence the language of interaction must be part of the knowledge base. It is certainly impossible to exhaustively enumerate all the possibilities to organize the knowledge. We list a few exemplary ways (1., 2.) found in existing approaches and propose a structure (3.) that fits the concept sketched above.
1. The most common way to enter semantic knowledge into a system is by implicit coding. In the pyramid building process it appears as the parameters that control the selection function, or as the frequently used filters in the reduction function. Although often computationally efficient, implicit coding has no flexibility to adapt to the data, and any change requires a modification of the program code. This type of knowledge representation can be found in most regular pyramid approaches based on linear filters.
2. In order to separate the knowledge from the process working on the data, globally coded rules or states are associated with specific procedures to treat the data. This knowledge is pre-compiled and is accessed through indices based on locally computed features. As examples, the pre-codes for building the topological map and the syntactical rules used for the line drawing application (Fig. 6) can be mentioned. These types of knowledge representations are more flexible than implicit coding, since the knowledge base can be extended or adapted to the special needs of an application. However, the knowledge is used mostly in a deterministic way, without assigning special priorities to items important for the current task. Furthermore, the knowledge is in all cases compiled manually, which limits the scope of the application.
While only a few ‘rules’ were needed for the line drawing example, compiling application-dependent rules in general would require an enormous amount of effort. The same holds for the definition of the pre-codes of the topological map: the 2D construction needs only 12 pre-codes, but this grows to more than 4000 cases in 3D. Although this number can be substantially reduced using the six levels and further categorization, the extension to higher dimensions seems prohibitive.
3. The concept presented in the previous section suggests that the knowledge base can provide pre-stored local configurations that can be used to identify potential local contraction kernels. There are several approaches pointing in this direction, e.g. the one presented by Kittler et al. [22], where three interrelated levels express the knowledge of the system: the measurements, the image features, and the object cues. In addition, each ‘configuration’ must be associated with the following:
– A function computing the importance based on the goodness of match between the data configuration and the pre-stored configuration.
– The more abstract description needs to be identified, e.g. by a name or a symbol.
– The reduction function associated with the new identity calculates the specific parameters (attributes of the survivors) from the attributes of the data configuration.

Such a knowledge base could be realized as a formal (graph) grammar. However, as the examples have already demonstrated, to be effective the knowledge needs a non-negligible degree of complexity. This has consequences for both knowledge retrieval and updating.

Knowledge retrieval requires a high degree of internal organization to quickly access all those configurations that must be checked in a particular case; exhaustive search may not be feasible. One could imagine a structure similar to the data pyramid, since abstract terms also have neighbors, e.g. associations in an abstract sense, the more specific terms they are derived from, and the more abstract terms they are part of. Let us call this the abstraction pyramid, in contrast to the data pyramid having the image data in the base. In this case the possible configurations to be checked in a particular part of the data pyramid could be local neighbors, in the abstraction pyramid, of the abstract term associated with the data cell. Note that a simplified version has been proposed for regular pyramids by P. Burt: the pattern tree [10].

The second consequence of the high complexity of the knowledge base concerns the updating: both interactive user interfaces and learning strategies could be integrated with the concept of the abstraction pyramid. However, a wide field of research is still necessary before such systems could be used in ‘real’ applications.
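The matching step described above can be sketched in a few lines. All names, patterns, and weights here are invented for illustration; local configurations are simplified to multisets of model labels, and the goodness of match is just the negated outlier count weighted by a relevance value:

```python
# Hypothetical knowledge base (invented entries): each stored configuration
# maps a generic local pattern (a multiset of model labels) to an abstract
# name and a relevance weight reflecting its task-dependent importance.
from collections import Counter

KNOWLEDGE_BASE = [
    ("line end", Counter({"line": 1, "empty": 3}), 1.0),
    ("junction", Counter({"line": 3}), 2.0),
]

def goodness_of_match(pattern, observed, max_outliers=1):
    """Count label mismatches between stored pattern and observed neighborhood;
    reject the match if more than max_outliers labels disagree."""
    outliers = sum(((pattern - observed) + (observed - pattern)).values())
    return None if outliers > max_outliers else -outliers

def best_abstraction(observed):
    """Pick the stored configuration with the highest relevance-weighted score,
    or None when nothing in the knowledge base matches well enough."""
    best, best_score = None, None
    for name, pattern, relevance in KNOWLEDGE_BASE:
        g = goodness_of_match(pattern, observed)
        if g is None:
            continue
        score = relevance + g
        if best_score is None or score > best_score:
            best, best_score = name, score
    return best
```

In a full system the score would additionally feed a global objective function, and the winning entries would become the contraction kernels.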
6 Preserving Topology
Objects mapped into images remain connected if they are neither occluded by other objects nor disturbed by noise. Neither the projection nor the discretization separates the two corresponding adjacent regions in the image. A similar property holds for the adjacency of objects. Hence the connectivity and the adjacency of regions and of boundary segments are very important properties which should not be lost by abstraction. Several authors have studied operations that allow the modification of the data structure, e.g. its reduction, while the topological properties of the represented objects and their background are preserved (e.g. [23,32,3,26,15,9]). In the following we first look at the simpler cases in two dimensions and refer to the dual graph pyramid. Then some considerations about the reduction operations in 3D, based on the recent results of Damiand, are discussed.

6.1 Preserving Topology in 2D
Table 1 summarizes the necessary primitive operations. The Euler number characterizes the topology of a description given in terms of points (#P), lines (#L) and faces (#F). Since we aim at preserving its value, the sum of the changes must be zero:

    ∆#P − ∆#L + ∆#F = 0.
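As a quick numerical check of the invariant (not part of the original method), the Euler number of an n × m grid subdivision of the plane — counting the unbounded outer face — always equals 2:

```python
# Sanity check of the planar Euler formula #P - #L + #F = 2 on an n x m grid
# subdivision (the face count includes the unbounded outer face).
def euler_number(n, m):
    points = (n + 1) * (m + 1)             # grid vertices
    lines = n * (m + 1) + m * (n + 1)      # horizontal + vertical edges
    faces = n * m + 1                      # grid cells plus the outer face
    return points - lines + faces

assert all(euler_number(n, m) == 2 for n in range(1, 5) for m in range(1, 5))
```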
Table 1. Topology Preserving Operations in 2D

                       Points   Lines    Faces   Config.       PRE-CONDITION   CCL
Euler                  #P       −#L      +#F                   = const.
Incr.                  ∆#P      −∆#L     +∆#F                  = 0             Euler
Contract(l, p0)        −1       −1       0       (p1, l, p0)   p1 ≠ p0         same label
Remove(l, f0)          0        −1       −1      (fx, l, f0)   fx ≠ f0         deg(f0) ≤ 2
Any incr. by a         −a       −b       −c                    b = a + c
contr. and c remov.    [= (−1, −1) × a contractions + (−1, −1) × c removals]
First we observe the changes introduced by contracting an edge l bounded by the two points p0, p1. This eliminates one of the points (i.e. p0) and the edge l; hence it does not change the Euler characteristic. The only pre-condition is to avoid contracting a self-loop.

If we remove an edge l, the number of points remains the same, but two faces f0, fx are merged into one (fx), which reduces the number of faces by one. If we had the same face on both sides of the edge, i.e. fx = f0, the edge would be a bridge in G, the removal of which would disconnect G. If one of the end points of l had degree 1, the removal of its only connection to the remaining structure would isolate it. Both cases are excluded from removal by the pre-condition fx ≠ f0. The second pre-condition, deg(f0) ≤ 2, identifies a redundant self-loop or a redundant multi-edge: in the latter case f0 is bounded by two parallel edges connecting the same end points. This configuration is simplified in the second phase of dual graph contraction.

What about other operations? It is clear that the elimination of an edge must be accompanied by the removal of either a point or a face to preserve the Euler number. So we cannot have fewer elements involved in a topology preserving modification. But we can also show the following: contraction and removal are the ONLY operations needed to reduce the structure while preserving the topology. Any other topology-preserving operation can be achieved by appropriate combinations of contractions and removals. If we want to remove a number a of points and a number c of faces, we also have to remove a number b = a + c of edges to preserve the Euler number. This can be achieved by a contractions and c removals.

Pre-conditions for individual operations can be extended to sets of operations to allow a different order of execution or even parallelism: the requirement for a contraction kernel to form a FOREST is such an extension.
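The counting argument can be made concrete in a few lines. This toy sketch only tracks the triple (#P, #L, #F); a real implementation must additionally verify the pre-conditions on the actual graph (no contraction of self-loops, no removal of bridges):

```python
# Euler bookkeeping for the two primitive operations of Table 1: a state is
# the triple (#P, #L, #F); each operation applies the deltas from the table,
# so #P - #L + #F is invariant under any sequence of them.
def contract(state):
    p, l, f = state
    return (p - 1, l - 1, f)      # one point and one edge disappear

def remove(state):
    p, l, f = state
    return (p, l - 1, f - 1)      # one edge and one face disappear

def euler(state):
    p, l, f = state
    return p - l + f

state = (9, 12, 5)                # e.g. a 2x2 grid with outer face: 9 - 12 + 5 = 2
for op in [contract, remove, contract, remove, remove]:
    assert euler(op(state)) == euler(state)
    state = op(state)
```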
If the edges of a cycle were contracted, the last one would need to be a self-loop, which cannot be contracted. Hence sets of edges to be contracted must be acyclic.

6.2 What Remains after Repetitions?
We can repeat contracting edges whose end points carry the same label, and removing all unnecessary self-loops and multi-edges, until no further contraction
nor removal is possible. Note that a very similar strategy is used to create the border map in [2,1] and the topological map in [13]. At convergence we have the following conditions:
1. All edges (p1, l, p2) with different end points have different labels: lab(p1) ≠ lab(p2).
2. A surviving self-loop (p, l, p) separates two different faces (f1, l, f2), and the inner face has degree deg(f1) > 2. Since any tree of the dual graph would have been eliminated by the rule deg(f1) ≤ 2, starting from the leaves up to the root, there must be a cycle C ∈ G, and inside this cycle there exists a point p3 ∈ C with lab(p3) ≠ lab(p).
3. All faces have three or more sides: deg(f) ≥ 3.
4. Pseudo or fictive edges are self-loops (p0 = p1) which cannot be contracted in the primal graph and which separate two faces with deg(f0) > 2 and deg(f1) > 2. Such edges were first observed in [29] as an artifact of topology-preserving contraction. They connect the boundary of a hole to the surrounding ‘main land’. Holes can equivalently be represented by an inclusion tree as in [13].
5. Fictive edges appear arbitrarily placed and depend only on the order of contractions and removals. Similar observations can be found in the topological 3D-map [13], where fictive edges appear as the last ones before disconnecting a face or a boundary during the merging of faces and lines (for the process of successive region merging see [1,2]).
6. For each hole there remains exactly one fictive edge (as indicated by the Betti number [17]).
7. Fictive edges are not delineated between two regions as all other edges are. Hence they can be continuously deformed, and their end points can be moved along the boundary as long as the edge remains fully embedded inside the face. Other fictive edges are not excluded from being traversed by the end point! We conjecture that an arrangement of fictive edges can be transformed into any other legal arrangement of fictive edges.
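Ignoring the embedding (and hence faces, holes, and fictive edges), the fixpoint of condition 1 can be sketched on a plain labeled graph with invented names: a single union-find sweep contracts all same-label edges transitively, self-loops vanish, and duplicate edges collapse:

```python
# Toy sketch (invented names): contract every edge whose end points carry the
# same label; each surviving edge then joins differently labeled vertices
# (condition 1). Without an embedding, face degrees and fictive edges are
# not modeled, so self-loops simply vanish and multi-edges collapse.
def reduce_to_convergence(labels, edges):
    """labels: {vertex: label}; edges: iterable of (u, v) pairs."""
    parent = {v: v for v in labels}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:                       # contract same-label edges
        if labels[find(u)] == labels[find(v)]:
            parent[find(u)] = find(v)

    survivors = set()
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                         # self-loops disappear
            survivors.add(frozenset((ru, rv)))   # multi-edges collapse
    return {find(v): labels[find(v)] for v in labels}, survivors
```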
Algorithms for continuous deformation [34] or [18] may find a new application in re-arranging fictive edges.

6.3 Preserving Topology in 3D
The primitive operations to build the 3D topological map merge voxels (V-Fusion), faces (F-Fusion) and linels (L-Fusion). In analogy to the 2D case, Table 2 summarizes the necessary primitive operations in 3D:
1. V-Fusion, F-Fusion and L-Fusion are the ONLY operations needed; the reasoning is the same as in 2D. Any other topology-preserving operation can be achieved by appropriate combinations.
2. Pre-conditions in 3D are non-trivial except for volumes: a line may delimit more than two faces, and a point may be the intersection of more than two lines. Damiand [13] lists additional constraints: no disconnection and no suppression of any face or line should be possible.
Table 2. Topology Preserving Operations in 3D

            Pts.   Lin.    Fac.   Vol.   Config.       PRE-CONDITION   CCL
Euler       #P     −#L     +#F    −#V                  = const.
Incr.       ∆#P    −∆#L    +∆#F   −∆#V                 = 0             Euler
V-Fusion                   −1     −1     (v1, f, v2)   v1 ≠ v2         same label
F-Fusion           −1      −1            (f1, l, f2)   f1 ≠ f2         deg l ≤ 2
L-Fusion    −1     −1                    (l1, p, l2)   l1 ≠ l2         deg p ≤ 2
3. It remains to be checked whether the pre-conditions for individual operations can be extended to sets of operations.
4. Semantic control (i.e. checking for the same label, as in CCL) occurs only in the initial fusions; all other operations are automatic simplifications.

6.4 What Should Remain in 3D after Repetition?
Fig. 7. Are the two 3D configurations the same?
It is not surprising that the complexity of the minimal topological cases to be considered grows with dimension. In 2D only holes have to be considered, and they are correctly represented by either an inclusion tree or additional fictive edges. In 3D, tunnels may additionally be present, both in the foreground and in the background. Configurations like the two tori in Fig. 7 need to be distinguished. The answer ‘there are two tori’ is not wrong, but it is unsatisfactory since it does not distinguish the two depicted configurations. Most representations do not even have the necessary relation to express the interlacing between the two tori in the second case. In addition, the tori are not connected, so the description must involve the topology of the surrounding background.

Let us sketch a possible solution using fictive elements. A fictive surface intersecting the tunnel could be used to make a torus (genus 1) homotopic to a sphere (genus 0). This fictive surface is not fixed geometrically in space, but another object, like the second torus, would intersect it and create a hole. The boundary of the hole would be connected by a fictive edge to the outer boundary of the fictive surface added to the torus. An identical reasoning applies to the second torus, creating in total two fictive surfaces and one fictive edge as depicted in Fig. 8.

Fig. 8. Two tori with two fictive surfaces and one fictive edge

The consequences for abstract representations open a wide range for further research and may address deep mathematical problems in different fields: knot theory, Morse complexes, algebraic topology, . . . (see [17,16]).
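As a numerical aside (not part of the paper's construction), the Euler characteristic χ = #P − #L + #F distinguishes the two surfaces involved: χ = 2 − 2g for a closed orientable surface of genus g, so a sphere has χ = 2 and a torus χ = 0:

```python
# The Euler characteristic chi = #P - #L + #F separates sphere (genus 0,
# chi = 2) from torus (genus 1, chi = 0). An n x m quad grid with wrap-around
# in both directions is a cell decomposition of the torus.
def chi_torus_grid(n, m):
    points = n * m          # wrap-around: no extra boundary row or column
    lines = 2 * n * m       # one horizontal and one vertical edge per point
    faces = n * m
    return points - lines + faces

def chi_cube():
    return 8 - 12 + 6       # the cube surface is homeomorphic to a sphere

assert chi_cube() == 2              # genus 0
assert chi_torus_grid(3, 4) == 0    # genus 1: chi = 2 - 2*1
```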
7 Conclusion
We motivated our discussion in the introduction by Dickinson's and Udupa's claims to ‘bridge the representational gap’, to ‘focus on image abstraction’, and to study ‘topological properties’. We first discussed the basic fields, abstraction and discrete representation, in more detail. It seems that there are far fewer concepts working on discrete irregular grids than on their regular counterparts. We then recalled two pyramidal approaches having the potential to cope with irregular grids as well. These pyramids have some useful properties, e.g.
1. they show the necessity of using multi-edges and self-loops to preserve the topology;
2. they allow primitive operations to be combined at one level (i.e. collected by the contraction kernel) and across several levels of the pyramid (i.e. equivalent contraction kernels [27]);
3. repeated contraction converges to specific properties which are preserved during contraction;
4. termination criteria allow abstraction to be stopped before a certain property is lost;
5. pseudo/fictive elements characterize topological relations: a fictive edge characterizes a hole, a fictive face characterizes a tunnel, . . .

7.1 Open Problems
There are numerous open problems partly addressed in the paper. Let us just enumerate a few important issues:
Fig. 9. The Olympic problem
1. extensions to 3D, 4D, 5D (see [35] for current data sources);
2. how to represent multiple interlaced tori, chains, or the Olympic rings (Fig. 9) in a topologically correct representation?
3. re-insertion of removed edges/darts (as in reconstruction with wavelets): after reducing a level of the pyramid, the data remaining in the level below could be checked for redundancies so that only the differences needed for loss-less reconstruction are stored;
4. repeated contraction has several control parameters which allow adaptation to specific applications:
   – different selection criteria
   – different termination criteria
   – different attributes
   – different reduction functions
5. these control parameters could be organized in a knowledge base to allow further user interaction and automation through learning.
References

[1] Y. Bertrand, G. Damiand, and C. Fiorio. Topological Encoding of 3D Segmented Images. In G. Borgefors, I. Nyström, and G. Sanniti di Baja, editors, Proceedings DGCI'00, Discrete Geometry for Computer Imagery, volume 1953 of Lecture Notes in Computer Science, pages 311–324, Uppsala, Sweden, 2000. Springer, Berlin Heidelberg, New York.
[2] Y. Bertrand, C. Fiorio, and Y. Pennaneach. Border Map: A Topological Representation for nD Image Analysis. In G. Bertrand, M. Couprie, and L. Perroton, editors, Discrete Geometry for Computer Imagery, DGCI'99, volume 1568 of Lecture Notes in Computer Science, pages 242–257, Marne-la-Vallée, France, 1999. Springer, Berlin Heidelberg, New York.
[3] J. C. Bezdek and N. R. Pal. An index of topological preservation for feature extraction. Pattern Recognition, 28(3):381–391, March 1995.
[4] M. Bister, J. Cornelis, and A. Rosenfeld. A critical view of pyramid segmentation algorithms. Pattern Recognition Letters, 11(9):605–617, September 1990.
[5] V. E. Brimkov, E. Andres, and R. P. Barneva. Object Discretization in Higher Dimensions. In G. Borgefors, I. Nyström, and G. Sanniti di Baja, editors, Proceedings DGCI'00, Discrete Geometry for Computer Imagery, volume 1953 of Lecture Notes in Computer Science, pages 210–221, Uppsala, Sweden, 2000. Springer, Berlin Heidelberg, New York.
[6] L. Brun and W. G. Kropatsch. Contraction Kernels and Combinatorial Maps. In J.-M. Jolion, W. G. Kropatsch, and M. Vento, editors, Graph-based Representations in Pattern Recognition, GbR 2001, pages 12–21. CUEN, 2001. ISBN 88 7146 579-2.
[7] L. Brun and W. G. Kropatsch. Introduction to Combinatorial Pyramids. In print, 2001. Winterschool "Digital and Image Geometry", 17.12.2000–22.12.2000.
[8] M. Burge and W. G. Kropatsch. A Minimal Line Property Preserving Representation of Line Images. Computing, Devoted Issue on Image Processing, 62:355–368, 1999.
[9] J. Burguet and R. Malgouyres. Strong Thinning and Polyhedrization of the Surface of a Voxel Object. In G. Borgefors, I. Nyström, and G. Sanniti di Baja, editors, Proceedings DGCI'00, Discrete Geometry for Computer Imagery, volume 1953 of Lecture Notes in Computer Science, pages 222–234, Uppsala, Sweden, 2000. Springer, Berlin Heidelberg, New York.
[10] P. J. Burt. Attention mechanisms for vision in a dynamic world. In Proc. 9th International Conference on Pattern Recognition, pages 977–987, Rome, Italy, November 1988. IEEE Comp. Soc.
[11] P. J. Burt and E. H. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, COM-31(4):532–540, April 1983.
[12] J.-M. Chassery and A. Montanvert. Géométrie discrète en analyse d'images. Traité des Nouvelles Technologies, série Images. HERMES, Paris, France, 1991.
[13] G. Damiand. Définition et étude d'un modèle topologique minimal de représentation d'images 2d et 3d. PhD thesis, LIRMM, Université de Montpellier, 2001.
[14] L. De Floriani, P. Magillo, and E. Puppo. Multiresolution Representation of Shapes Based on Cell Complexes. In G. Bertrand, M. Couprie, and L. Perroton, editors, Discrete Geometry for Computer Imagery, DGCI'99, volume 1568 of Lecture Notes in Computer Science, pages 3–18, Marne-la-Vallée, France, 1999. Springer, Berlin Heidelberg, New York.
[15] T. K. Dey, H. Edelsbrunner, S. Guha, and D. V. Nekhayev. Topology Preserving Edge Contraction. In print, 1999.
[16] H. Edelsbrunner, J. Harer, and A. Zomorodian. Hierarchical Morse Complexes for Piecewise Linear 2-Manifolds. In SCG'01, June 3–5, 2001, Medford, Mass., USA, ACM 1-58113-357-X/01/0006, 2001.
[17] R. Forman. Combinatorial Differential Topology and Geometry. In print, 1999.
[18] S. Fourey and R. Malgouyres. Intersection Number of Paths Lying on a Digital Surface and a New Jordan Theorem. In G. Bertrand, M. Couprie, and L. Perroton, editors, Discrete Geometry for Computer Imagery, DGCI'99, volume 1568 of Lecture Notes in Computer Science, pages 104–117, Marne-la-Vallée, France, 1999. Springer, Berlin Heidelberg, New York.
[19] R. Glantz and W. G. Kropatsch. Plane Embedding of Dually Contracted Graphs. In G. Borgefors, I. Nyström, and G. Sanniti di Baja, editors, Proceedings DGCI'00, Discrete Geometry for Computer Imagery, volume 1953 of Lecture Notes in Computer Science, pages 348–357, Uppsala, Sweden, 2000. Springer, Berlin Heidelberg, New York.
[20] R. L. Hartley. Multi-Scale Models in Image Analysis. PhD thesis, University of Maryland, Computer Science Center, 1984.
[21] J.-M. Jolion and A. Rosenfeld. A Pyramid Framework for Early Vision. Kluwer Academic Publishers, 1994.
[22] J. Kittler, K. Messer, W. Christmas, B. Levienaise-Obadia, and D. Koubaroulis. Generation of semantic cues for sports video annotation. In Proceedings IEEE International Conference on Image Processing, ICIP 2001, pages 26–29, Thessaloniki, Greece, 2001.
[23] T. Kong and A. Rosenfeld. Digital topology: a comparison of the graph-based and topological approaches. In G. Reed, A. Roscoe, and R. Wachter, editors, Topology and Category Theory in Computer Science, pages 273–289, Oxford, 1991. Oxford University Press.
[24] V. A. Kovalevsky. Digital Geometry Based on the Topology of Abstract Cellular Complexes. In J.-M. Chassery, J. Francon, A. Montanvert, and J.-P. Réveillès, editors, Géométrie Discrète en Imagerie, Fondements et Applications, pages 259–284, Strasbourg, France, September 1993.
[25] W. G. Kropatsch. Preserving contours in dual pyramids. In Proc. 9th International Conference on Pattern Recognition, pages 563–565, Rome, Italy, November 1988. IEEE Comp. Soc.
[26] W. G. Kropatsch. Property Preserving Hierarchical Graph Transformations. In C. Arcelli, L. P. Cordella, and G. Sanniti di Baja, editors, Advances in Visual Form Analysis, pages 340–349. World Scientific Publishing Company, 1997.
[27] W. G. Kropatsch. From equivalent weighting functions to equivalent contraction kernels. In E. Wenger and L. I. Dimitrov, editors, Digital Image Processing and Computer Graphics (DIP-97): Applications in Humanities and Natural Sciences, volume 3346, pages 310–320. SPIE, 1998.
[28] W. G. Kropatsch, A. Leonardis, and H. Bischof. Hierarchical, adaptive and robust methods for image understanding. Surveys on Mathematics for Industry, 9:1–47, 1999.
[29] W. G. Kropatsch and H. Macho. Finding the structure of connected components using dual irregular pyramids. In Cinquième Colloque DGCI, pages 147–158. LLAIC1, Université d'Auvergne, ISBN 2-87663-040-0, September 1995.
[30] S. G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-11(7):674–693, July 1989.
[31] P. Meer, C. A. Sher, and A. Rosenfeld. The chain pyramid: Hierarchical contour processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-12(4):363–376, April 1990.
[32] P. F. Nacken. Image segmentation by connectivity preserving relinking in hierarchical graph structures. Pattern Recognition, 28(6):907–920, June 1995.
[33] A. Rosenfeld, editor. Multiresolution Image Processing and Analysis. Springer, Berlin, 1984.
[34] A. Rosenfeld and A. Nakamura. Local deformations of digital curves. Pattern Recognition Letters, 18:613–620, 1997.
[35] J. K. Udupa. Go Digital, Go Fuzzy. In G. Borgefors, I. Nyström, and G. Sanniti di Baja, editors, Proceedings DGCI'00, Discrete Geometry for Computer Imagery, volume 1953 of Lecture Notes in Computer Science, pages 284–295, Uppsala, Sweden, 2000. Springer, Berlin Heidelberg, New York.
XPMaps and Topological Segmentation – A Unified Approach to Finite Topologies in the Plane

Ullrich Köthe

Cognitive Systems Group, University of Hamburg
[email protected]

Abstract. Finite topological spaces are now widely recognized as a valuable tool of image analysis. However, their practical application is complicated because there are so many different approaches. We show that there are close relationships between those approaches, which motivate the introduction of XPMaps as a concept that subsumes the important characteristics of the other approaches. The notion of topological segmentations then extends this concept to a particular class of labelings of XPMaps. We show that the new notions lead to significant simplifications from both a theoretical and a practical viewpoint.
1 Introduction
Many computer vision researchers now agree that the notion of finite topological spaces is very useful in image analysis and segmentation, as it allows for consistent descriptions of neighborhood relations, boundaries, and so on. Meanwhile, a number of different methods for the representation of finite topologies have been proposed, including cellular complexes [9], block complexes [10], the star topology [1], the Khalimsky grid [6,3], combinatorial maps [15,5] and border maps [2]. All methods approach the problem somewhat differently, but they have a lot in common as well. Unfortunately, the commonalities are not immediately apparent from the literature because most authors present their method in isolation or emphasize the differences to other methods. This makes it unnecessarily difficult to understand and compare the different approaches and to provide reusable implementations in a general image analysis framework.

In this contribution I'm going to propose a unified approach to finite topologies in the plane. I'll show that, under fairly general assumptions, the different models can be transformed into each other. This will motivate the introduction of two generalized concepts (the XPMap and the topological segmentation) which subsume most aspects of the existing concepts. The concept of an XPMap is essentially a formalization of the border map introduced in [2]. This paper complements a previous paper [7] where solutions for some software design and implementation problems of the new approach were presented. Here, we look at the problem from a theoretical viewpoint in order to show formally why the underlying unification is possible and which properties it has. Due to space constraints, most proofs had to be skipped. They can be found in the long version [8] of this paper, which is available from the author's WWW site.

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 22–33, 2002.
© Springer-Verlag Berlin Heidelberg 2002
XPMaps and Topological Segmentation
23
Fig. 1. The open stars of a vertex, arc and region (from left to right) in a square tessellation of the plane
2 Models for Finite Topological Spaces in the Plane
In this section, we will introduce the topological models we are going to analyze later. Here we only give definitions but do not yet discuss any relationships between them. The most fundamental definition of a finite topological space is obtained on the basis of a complete division of the Euclidean plane. A plane division is a finite set of disjoint cells whose union completely covers the plane. There are three types of cells:
– Vertices: V = {v1, ..., vn} is a set of distinct points of the plane.
– Arcs: A = {a1, ..., am} is a set of disjoint open arcs whose end points are vertices from V. (An arc is a homeomorphic image of the interval [0, 1]. An open arc is an arc less its end points. The two end points need not be distinct.)
– Regions: R = {r1, ..., rk} are the maximally connected components of the complement of the union of V and A. Since V ∪ A is a closed set, all regions are open.

The Euclidean topology of the plane induces a neighborhood relation between the cells, which is used to define the smallest open neighborhood (the open star) of each cell. The set of open stars forms a basis for a finite topological space:
Definition 1: Let T be a division of the Euclidean plane into cells (regions, arcs, vertices). Then the open star (smallest open neighborhood) of a cell z is the set of cells that intersect with every Euclidean neighborhood of the points in z:

open-star(z ∈ T) = { z′ ∈ T | ∀q ∈ z, ∀ε > 0 : Bε(q) ∩ z′ ≠ ∅ }
(Bε(q) denotes an open ball of radius ε around the point q.) The set of open stars is a basis of a finite topological space over the cells of the plane division. Two divisions of the plane are topologically equivalent if there exists a 1-to-1 mapping between the cells of the divisions such that the open stars are preserved. Some authors, e.g. Ahronovitz et al. [1], further require all regions to be convex polytopes. We will not do this in this paper. The most common example of a star topology is obtained by tessellating the plane into equal squares. Then the open stars of the vertices, arcs, and regions look as depicted in figure 1. The biggest drawback of the plane division definition is its reliance on geometric concepts, which prevents a separation between topology and geometry. Therefore, a number of definitions not relying on the Euclidean space exist. The best known is the abstract cellular complex, introduced to image analysis by Kovalevsky [9]:
Definition 2: A cell complex is a triple (Z, dim, B) where Z is a set of cells, dim is a function that associates a non-negative integer dimension to each cell, and B ⊂ Z × Z is the bounding relation that describes which cells bound other cells. A cell may bound only cells of larger dimension, and the bounding relation must be transitive. If the largest dimension is k, we speak of a k-complex.
24
U. Köthe
A cell complex becomes a topological space by additionally defining the open sets as follows: a set of cells is called open if, whenever a cell z belongs to the set, all cells bounded by z also belong to the set. Kovalevsky proved that every finite topological space can be represented as a cell complex. In image analysis, we want to represent the topological structure of 2-dimensional images, so we are naturally interested in 2-complexes whose bounding relation is consistent with a division of the plane as defined above. We will call these planar cell complexes. A special case of a cell complex is the block complex [10]. It is defined on a given cell complex by grouping cells into sets called block cells, or blocks for short. Blocks are defined by the property that they either contain a single 0-cell, or are homeomorphic to an open k-sphere. (A block is homeomorphic to an open k-sphere if it is isomorphic to a simply connected open set in some k-complex – see [10] for details.) A block's dimension equals the highest dimension of any of its cells.
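The cell-complex structure of Definition 2 and the open-set criterion just described can be captured in a few lines. The following sketch is illustrative only (the class and all names are mine, not from the paper): it stores cell dimensions and the bounding relation and tests whether a given set of cells is open.

```python
# Illustrative sketch (hypothetical encoding, not the paper's): a finite cell
# complex as cell dimensions plus a transitive bounding relation, with the
# open-set test of Definition 2.

class CellComplex:
    def __init__(self, dims, bounds):
        # dims: cell -> dimension; bounds: set of pairs (z, z2) meaning "z bounds z2"
        self.dims = dims
        self.bounds = set(bounds)
        for (z, z2) in self.bounds:
            # a cell may bound only cells of larger dimension
            assert dims[z] < dims[z2]

    def bounded_by(self, z):
        return {z2 for (z1, z2) in self.bounds if z1 == z}

    def is_open(self, cells):
        # A set is open iff, with every cell z, it contains all cells bounded by z.
        return all(self.bounded_by(z) <= set(cells) for z in cells)

# A tiny 1-complex: vertex v bounds the edges e1 and e2.
K = CellComplex({'v': 0, 'e1': 1, 'e2': 1},
                {('v', 'e1'), ('v', 'e2')})
print(K.is_open({'v', 'e1', 'e2'}))  # True: the open star of v
print(K.is_open({'v', 'e1'}))        # False: v also bounds e2
print(K.is_open({'e1'}))             # True: an edge alone is open
```

Note how the open star of the vertex is the smallest open set containing it, exactly as in the plane-division picture of figure 1.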
Definition 3: A block complex is a complete partition of a given cell complex into blocks. The bounding relation of the blocks is induced by the bounding relation of the contained cells: a block bounds another one if and only if it contains a cell that bounds a cell in the other block. Note that the induced bounding relation must meet the requirements of definition 2, i.e. blocks may only bound blocks of larger dimension, and the relation must be transitive. This means that all 1-blocks must be sequences of adjacent 0- and 1-cells, and all junctions between 1-blocks must be 0-blocks. Another approach to finite topological spaces originates from the field of graph theory – the combinatorial map [15, 5]:
Definition 4: A combinatorial map is a triple (D, σ, α) where D is a set of darts (also known as half-edges), σ is a permutation of the darts, and α is an involution of the darts. In this context, a permutation is a mapping that associates to each dart a unique predecessor and a unique successor. By following successor chains, we can identify the cycles or orbits of the permutation. An involution is a permutation where each orbit contains exactly two elements. It can be shown that the mapping ϕ = σ⁻¹ ∘ α is also a permutation.¹ The cycles of the σ permutation are called the nodes of the map, the cycles of α are the edges, and the cycles of ϕ the faces. A combinatorial map is planar (i.e. can be embedded in the plane) if its Euler characteristic equals two (cf. [15]):

n − e + f = 2    (1)
(with n, e, and f denoting the number of nodes, edges, and faces). Each planar combinatorial map has a unique dual map that is obtained by reversing the roles of the σ and ϕ orbits. It should be noted that the graph of a combinatorial map (i.e. the remains of the map when all faces are removed) must be connected, since each face is associated with exactly one orbit. Thus, faces in a combinatorial map may not have holes.
¹ Combinatorial maps are sometimes defined by using four darts per edge ("quad-edges" or "crosses"), e.g. [15]. This allows the realization of non-orientable manifolds and the definition of generalized maps (G-maps) that can be generalized to arbitrary dimensions [12]. However, the half-edge definition suffices in the present context.
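Definition 4 and the planarity test (1) are easy to experiment with. The sketch below is my own hypothetical encoding (permutations as dictionaries, darts as strings); it assumes the composition convention ϕ = σ⁻¹ ∘ α, and checks a triangle embedded in the plane, which has 3 nodes, 3 edges, and 2 faces (one bounded, one unbounded).

```python
# A hedged sketch of Definition 4 (names and encoding are mine, not the
# paper's): orbits of a permutation, and the Euler planarity test (1).

def orbits(perm):
    """Cycles of a permutation given as a dict dart -> successor dart."""
    seen, cycles = set(), []
    for d in perm:
        if d not in seen:
            cycle, x = [], d
            while x not in seen:
                seen.add(x)
                cycle.append(x)
                x = perm[x]
            cycles.append(tuple(cycle))
    return cycles

def is_planar(sigma, alpha):
    """Euler test n - e + f == 2, with phi = sigma^-1 o alpha (assumed convention)."""
    sigma_inv = {v: k for k, v in sigma.items()}
    phi = {d: sigma_inv[alpha[d]] for d in sigma}
    n, e, f = len(orbits(sigma)), len(orbits(alpha)), len(orbits(phi))
    return n - e + f == 2

# A triangle with vertices A, B, C: dart 'ab' points from A toward B, etc.
# alpha pairs the two darts of each edge; sigma cycles the darts around a vertex.
alpha = {'ab': 'ba', 'ba': 'ab', 'bc': 'cb', 'cb': 'bc', 'ca': 'ac', 'ac': 'ca'}
sigma = {'ab': 'ac', 'ac': 'ab', 'bc': 'ba', 'ba': 'bc', 'ca': 'cb', 'cb': 'ca'}
print(is_planar(sigma, alpha))  # True: 3 nodes - 3 edges + 2 faces = 2
```

For this symmetric example the face count is the same under either composition convention, since every σ-orbit has length two and hence σ⁻¹ = σ.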
The final approach we are going to deal with is Khalimsky's grid [6, 3]. It is defined as the product topology of two 1-dimensional topological spaces. In particular, one defines a connected ordered topological space (COTS) as a finite ordered set of points which alternate between being open and closed. The smallest open neighborhood of an open point is the point itself; that of a closed point is the point together with its predecessor and successor (in the case of a closed endpoint, the neighborhood consists of two points only). Khalimsky's grid is then defined as follows:
Definition 5: A Khalimsky grid is defined as the space X × Y with the product topology, where X and Y are COTS with at least three points each. The product topology defines the neighborhood of a grid point as N((x, y)) = N(x) × N(y). When x and y are both open, the resulting point (x, y) is also open. When they are both closed, (x, y) is closed as well, and its smallest neighborhood consists of the point and its 8 incident points. When an open point is combined with a closed point, a mixed point results, whose smallest neighborhood consists of the point and the two incident open points. These neighborhoods are analogous to the open stars in figure 1, with obvious modifications at the grid's border.
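The product-topology neighborhoods can be computed directly from the COTS definition. The following sketch uses my own (hypothetical) convention that odd coordinates are open and even coordinates are closed on a COTS {0, ..., size − 1}; the neighborhood sizes then match the cases of Definition 5.

```python
# A small sketch of the Khalimsky product topology. The parity convention
# (odd = open, even = closed) is my assumption, not fixed by the paper.

from itertools import product

def cots_neighborhood(x, size):
    """Smallest open neighborhood of point x in a COTS of `size` points."""
    if x % 2 == 1:                      # open point: the point itself
        return {x}
    return {p for p in (x - 1, x, x + 1) if 0 <= p < size}  # closed point

def grid_neighborhood(x, y, size):
    """Product topology: N((x, y)) = N(x) x N(y)."""
    return set(product(cots_neighborhood(x, size), cots_neighborhood(y, size)))

print(len(grid_neighborhood(1, 1, 5)))  # 1: open point
print(len(grid_neighborhood(2, 2, 5)))  # 9: closed point, itself + 8 incident
print(len(grid_neighborhood(1, 2, 5)))  # 3: mixed point, itself + 2 neighbors
print(len(grid_neighborhood(0, 0, 5)))  # 4: closed corner, clipped at the border
```

The three interior cases reproduce the open stars of regions, vertices, and arcs in figure 1, and the corner case shows the "obvious modification" at the grid's border.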
3 Relationships between Finite Planar Topological Spaces
In this section we show how the concepts defined above relate to each other by describing how a topological space represented in one concept can be translated into another. All missing proofs of the theorems can be found in [8].
Theorem 1: Any finite topology induced by a division of the plane can be represented by a planar cell complex. The transformation is achieved by the following algorithm: Associate a 0-, 1- or 2-cell with every vertex, arc, and region respectively. Define the bounding relation so that cell z bounds another cell z’ whenever z’ is part of the open star of z. In [8], we prove that the algorithm actually creates a valid cell complex. The transformation in the opposite direction is not so simple because not every abstract cell complex corresponds to a division of the plane. We postpone this discussion until after the treatment of combinatorial maps because we will need a map as an intermediate representation during this transformation. The transformation of a division of the plane into a combinatorial map is easy as long as the union of the vertices and arcs (the boundary set of the division) is a connected set. Otherwise, at least one region’s boundary consists of several connected components, which is inconsistent with the requirement that every orbit of the ϕ permutation corresponds to a distinct face. We will first discuss the transformation in the restricted case, and then extend the map definition for the general case.
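The algorithm of Theorem 1 amounts to reading the bounding relation off the open stars. The sketch below uses a hypothetical encoding of my own (open stars as dictionaries of cell names) for the smallest interesting division: a single vertex v on a loop arc a, which separates the plane into an inside region r1 and an outside region r2.

```python
# A sketch of the Theorem 1 transformation (data layout is mine, not the
# paper's): derive the bounding relation of the planar cell complex from
# the open stars of a plane division.

def bounding_relation(open_star):
    """Cell z bounds z2 whenever z2 lies in the open star of z (and z2 != z)."""
    return {(z, z2) for z, star in open_star.items() for z2 in star if z2 != z}

# One vertex v on a loop arc a, splitting the plane into regions r1 and r2.
open_star = {
    'v': {'v', 'a', 'r1', 'r2'},   # the star of a vertex holds all incident cells
    'a': {'a', 'r1', 'r2'},        # the star of an arc holds its bordering regions
    'r1': {'r1'}, 'r2': {'r2'},    # regions are open: their star is themselves
}
B = bounding_relation(open_star)
print(sorted(B))
# [('a', 'r1'), ('a', 'r2'), ('v', 'a'), ('v', 'r1'), ('v', 'r2')]
```

Note that the result is transitive (v bounds a, a bounds r1, and indeed v bounds r1) and every cell bounds only cells of larger dimension, as Definition 2 requires.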
Theorem 2: A finite topology induced by a division of the plane can be represented by a planar combinatorial map if the union of the arcs and vertices is connected. To create a map from a division of the plane, first associate two darts with each arc, one for either direction. If the arc is defined by the function a(t), 0 ≤ t ≤ 1, …

– A d-complex, with d > 0, is a regular complex if and only if the link of each vertex is a regular (d − 1)-complex. All 0-complexes are regular.
– A d-complex, with d > 1, is a pseudo-manifold if and only if the link of each vertex is a (d − 1)-pseudo-manifold. For d = 1, only regular 1-complexes (i.e. graphs) with at most two 1-simplexes incident at every vertex are pseudo-manifolds.

We define regularly adjacent complexes inductively as follows.

Definition 3. A regular abstract simplicial 1-complex is a regularly adjacent complex. A regular abstract simplicial d-complex is regularly adjacent if and only if the link of each vertex is a connected regularly adjacent (d − 1)-complex.

Vertices for which the link condition in the definition above is not satisfied are called singular vertices. The complex in Figure 1(a) is a pseudo-manifold which is not regularly adjacent. The link of vertex l is not connected, since it is represented by the two thick disconnected 1-simplexes {o, p}, {q, r}. For this reason, we say that vertex l is a singular vertex. The complex in (b) is regularly adjacent but not a pseudo-manifold. The link of each point is a connected graph. Vertices n and m in Figure 1(b) are singular vertices because they violate the link condition for pseudo-manifoldness. The complex in Figure 2(a) is a 3-pseudo-manifold but not regularly adjacent. This is due to the fact that the link of vertex p is represented by the strip of triangles pinched at q, which is not regularly adjacent since it has the same property as the complex in Figure 1(a). The complex in Figure 2(b) is a regularly adjacent 3-pseudo-manifold.
Therefore, pseudo-manifoldness and regular adjacency are two independent requirements for the regularity of a complex.

Definition 4. We say that a complex is a quasi-manifold if and only if it is both a pseudo-manifold and a regularly adjacent complex.
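The link and connectivity tests underlying these definitions are straightforward to compute. The following sketch (my own hypothetical representation: a complex given by its top simplexes as frozensets, and connectivity via shared vertices) reproduces the singular-vertex example of Figure 1(a), where two triangles meet only at vertex l.

```python
# A sketch (representation is my choice, not the authors'): the link of a
# vertex in an abstract simplicial complex, and the connectivity test used
# in the definition of regular adjacency.

def link(top, v):
    """Top simplexes of lk(v): drop v from every top simplex containing it."""
    return {s - {v} for s in top if v in s}

def is_connected(top):
    """Is the complex given by these top simplexes connected (via shared vertices)?"""
    top = [set(s) for s in top]
    if not top:
        return True
    reached = set(top[0])
    frontier = [top[0]]
    while frontier:
        cur = frontier.pop()
        for s in top:
            if s & cur and not s <= reached:
                reached |= s
                frontier.append(s)
    return all(s <= reached for s in top)

# Two triangles sharing only vertex l, as in Figure 1(a).
top = {frozenset('lop'), frozenset('lqr')}
print(link(top, 'l'))                # the two disjoint 1-simplexes {o,p} and {q,r}
print(is_connected(link(top, 'l')))  # False: l is a singular vertex
```

A connected link (e.g. two triangles sharing an edge) makes the same test return True, so this check is exactly the condition that fails at a singular vertex.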
Non-manifold Decomposition in Arbitrary Dimensions
Fig. 2. (a) An example of a 3-pseudo-manifold that is not regularly adjacent; (b) an example of a 3-pseudo-manifold which is regularly adjacent
The complex in Figure 2(b) is an example of a quasi-manifold. It is easy to reformulate this definition of quasi-manifolds in terms of link conditions. This yields the following inductive characterization of quasi-manifolds.

Proposition 2. A d-complex is a quasi-manifold if and only if, for d > 1, the link of each vertex is a connected (d − 1)-quasi-manifold. Only regular 1-pseudo-manifolds are quasi-manifolds.

A similar characterization can be given for d-quasi-manifolds with boundary by just considering pseudo-manifolds with boundary. We now introduce the definition of combinatorial manifolds and discuss their relation with quasi-manifolds.

Definition 5. A combinatorial d-pseudo-manifold Ω (possibly with boundary) is called a combinatorial d-manifold if and only if the link of every vertex v is a (d − 1)-complex that is combinatorially equivalent either to the boundary of the standard d-dimensional simplex, if v is an internal vertex, or to the standard (d − 1)-simplex, if v is a boundary vertex.

The following property, which can easily be proven by induction on the manifold dimension d, gives the relation between manifolds and quasi-manifolds.

Proposition 3. For any d ≥ 0, every combinatorial d-manifold is a d-quasi-manifold. Conversely, every 2-quasi-manifold is a 2-manifold; however, for d > 2, there exist d-quasi-manifolds that are not d-manifolds.

Figure 2(b) shows an example of a three-dimensional quasi-manifold which is not a combinatorial manifold, because the link of vertex p is equivalent to a 2-cylinder (and, hence, is not equivalent to the boundary of a sphere or to a disc).
5 Decomposition of a Non-manifold Complex
In this section we briefly describe an algorithmic procedure to decompose a d-complex into a natural assembly of quasi-manifolds of dimension h ≤ d. Details
L. De Floriani et al.
and proofs about the correctness and time complexity of the procedure are given in [4]. The decomposition procedure consists of two sub-procedures, which we will call SplitVertices and CheckTies. These two procedures are applied iteratively to the complex until no more singularities remain. It can be shown that a single iteration of procedure SplitVertices alone is sufficient to decompose a complex for d ≤ 2. For d ≥ 3, procedure CheckTies is necessary to remove situations such as those depicted in Figure 4, which are not detected by SplitVertices.

Procedure SplitVertices works recursively on the dimension d of the complex to decompose, on the basis of the following inductive process. A 1-complex is (isomorphic to) a graph, possibly with isolated vertices. We define the decomposition of this graph as follows.

– For every vertex v of degree o > 2 (i.e., with a number of incident 1-simplexes > 2) we introduce o distinct copies of v, one for each edge incident to v. This will split the graph into a set of disconnected points, cycles, and chains.
– If a chain of this expanded graph starts and ends on two copies of the same vertex, we convert it into a cycle by identifying these two copies.

The result of this decomposition is a graph whose connected components are either isolated points, or single cycles, or linear chains of adjacent edges. In this decomposition scheme, connected components are not further decomposed.

For d ≥ 2, we consider each vertex v of a d-complex Ω and recursively decompose its link lk(v) (which is a (d − 1)-complex) into an assembly of k connected (d − 1)-complexes. If k > 1, we call vertex v a splitting vertex. In this case, given the collection {li(v), 1 ≤ i ≤ k} of connected components of lk(v) (decomposed recursively), we modify the input complex Ω by replacing v with k copies of it. More precisely, for every i ∈ {1, ..., k}, vi replaces v in all cells of the star of v that are incident to the cells of component li(v).
We iterate this local decomposition process until no more splitting vertices remain. It can be shown that this procedure works in time linear in the number t of top simplexes in the input complex. The resulting decomposition depends on the order in which we examine the vertices of the input complex. In Figure 3 we give an example of two local decompositions of the 2-complex in (a). The first decomposition (b) starts by considering vertex v, while the second decomposition (c) starts by considering vertex u. The components resulting from the two choices are different.
Fig. 3. An example of decomposition depending on the order of choice of vertices.
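The 1-complex base case of SplitVertices can be sketched compactly. The code below is my own implementation of the rule described above, not the authors' code; for brevity it performs only the first step (copying every vertex of degree o > 2 once per incident edge) and omits the second step, which re-identifies the two endpoint copies of a chain that starts and ends on the same vertex.

```python
# A sketch of the graph (1-complex) case of SplitVertices: every vertex of
# degree o > 2 is replaced by o copies, one per incident edge. Copy names
# like 'c_1' are a hypothetical convention of mine.

from collections import defaultdict

def split_vertices_1d(edges):
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    counter = defaultdict(int)

    def copy(w):
        if degree[w] <= 2:
            return w               # non-splitting vertex: kept as is
        counter[w] += 1
        return f"{w}_{counter[w]}"  # fresh copy of a splitting vertex

    return [(copy(u), copy(v)) for u, v in edges]

# A "bowtie": two triangles glued at c (degree 4). Splitting c into 4 copies
# separates the graph into two disjoint chains.
edges = [('a', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'd'), ('d', 'e'), ('e', 'c')]
print(split_vertices_1d(edges))
# [('a', 'b'), ('b', 'c_1'), ('c_2', 'a'), ('c_3', 'd'), ('d', 'e'), ('e', 'c_4')]
```

In the full procedure, the chain a–b running between c_1 and c_2 (two copies of the same vertex) would then be closed back into a cycle, and similarly for d–e.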
For d-complexes, with d > 2, some singularities may remain after running procedure SplitVertices. Such singularities are due to vertices having a link
which remains made of a single connected component, but needs to be decomposed at some of its vertices (without disconnecting it) in order to fulfill the link conditions for quasi-manifold complexes. We call such vertices ties. Vertices p and q in Figure 4 are examples of ties. Note that vertices that are potential ties are detected by procedure SplitVertices, depending on the structure of their link components. However, such vertices are left unchanged at that stage, because some of these situations can be resolved while splitting some of their neighboring vertices afterwards. Procedure CheckTies tests only such potential ties. If a tie v is found, then we introduce a new copy of v, as above, for the closure of each top simplex in the star of v (see Figure 4(b) for an example). Note that it is sufficient to split just vertex p to remove the tie at q as well. Tie removal causes a sort of "explosion" of the complex in the neighborhood of a removed tie. This explosion may generate other singularities. Therefore, procedure SplitVertices is repeated only on those vertices that might be affected by tie removal, and the process is iterated until no more ties are found. It can be shown that a single application of procedure CheckTies has a complexity linear in the number of cells it visits. Moreover, the total number of iterations of the two procedures is bounded from above by the number t of top simplexes in the input complex. Therefore, the total decomposition process for d ≥ 3 is completed in time O(t²). However, we believe that this estimate is quite pessimistic. For instance, under the (reasonable) assumption that the number of cells incident to a vertex is bounded from above by a constant, it can be shown that the complexity of the whole decomposition is always Θ(dt).
Fig. 4. An example of the decomposition of a tie: the complex in (a) contains two ties, p and q. The tetrahedra in (b) correspond to the decomposition of tie p.
We can obtain the original complex Ω from the set of quasi-manifold components by identifying all vertex copies of each non-manifold vertex of Ω. Therefore, this set of quasi-manifold components, together with a simplicial map that identifies all vertex copies of every non-manifold vertex, completely describes the original complex Ω.
Fig. 5. A non-manifold 2-complex Ω and its decomposition, with the associated hypergraph
An example of the results of the decomposition for a 2-non-manifold is depicted in Figure 5. Consider the complex Ω in Figure 5(a). Points b, i, and f are the non-manifold points. The decomposed link of vertex b has six connected components: the cycles (c, e, f), (c, d, e), (h, i, m), and (p, q, r), the arc (i, f), and the isolated point a. The link of vertex i has two connected components: the cycle (b, h, j, m) and the arc (b, f, g). The link of f has two connected components: the cycle (b, c, e) and the arc (b, i, g). Therefore (b, i) and (i, g) are non-manifold edges. Figure 5(b) shows the manifold decomposition of the complex in Figure 5(a).
6 Concluding Remarks
The results in this paper show that any abstract simplicial complex can be decomposed into an assembly of subcomplexes, each of which belongs to the class of quasi-manifold complexes. The resulting decomposition is natural (i.e., not arbitrary) in the sense that it is obtained by induction and depends only on the order in which the vertices are selected. The decomposition described can be used to define a hierarchical two-level representation for non-manifolds. In the upper level we can describe, through a hypergraph, the way in which the quasi-manifold components are stitched together to form the original complex. In the lower level, we describe each quasi-manifold component. This idea is already well understood and developed for 2-dimensional complexes in [3]. For d-dimensional complexes with d > 2 we expect to find a compact data structure to encode d-quasi-manifolds. Simplex connectivity in a d-quasi-manifold can be completely described by giving the pairs of (d − 1)-adjacent d-simplexes. Finally, we note that our decomposition operates on a local basis and, therefore, the structure of the decomposition can be understood by local analysis. This notion will be used to define a measure of the amount of shape
complexity at each non-manifold point. This will be used to guide a shape simplification process which preserves the iconic content of an object (see [3] for details).
Acknowledgments. This work has been supported by the European Research Training Network "MINGLE - Multiresolution in Geometric Modelling", contract number HPRN-CT-1999-00117, and by the National Project "SPADA - Representation and processing of spatial data in Geographic Information Systems", funded by the Italian Ministry of Education, University and Research (MIUR).
References
[1] M.K. Agoston. Algebraic Topology: A First Course. Pure and Applied Mathematics, Marcel Dekker, 1976.
[2] B.G. Baumgart. Winged edge polyhedron representation. Technical Report CS-TR-72-320, Stanford University, Department of Computer Science, October 1972.
[3] L. De Floriani, P. Magillo, F. Morando, and E. Puppo. Non-manifold multi-tessellation: from meshes to iconic representations of 3D objects. In C. Arcelli, L.P. Cordella, and G. Sanniti di Baja, editors, Proceedings of the 4th International Workshop on Visual Form (IWVF4), volume 2059 of LNCS, page 654 ff., Berlin, 2001. Springer-Verlag.
[4] L. De Floriani, M.M. Mesmoudi, F. Morando, and E. Puppo. Decomposition of an n-dimensional complex into quasi-manifold components. Technical Report DISI-TR-01-11, Department of Computer and Information Sciences (DISI), University of Genova, Genova, Italy, 2001.
[5] H. Desaulnier and N. Stewart. An extension of manifold boundary representation to r-sets. ACM Transactions on Graphics, 11(1):40–60, 1992.
[6] D. Dobkin and M. Laszlo. Primitives for the manipulation of three-dimensional subdivisions. Algorithmica, 5(4):3–32, 1989.
[7] H. Edelsbrunner. Algorithms in Combinatorial Geometry. EATCS Monographs on Theoretical Computer Science. Springer-Verlag, 1987.
[8] H. Elter and P. Lienhardt. Different combinatorial models based on the map concept for the representation of subsets of cellular complexes. In Proc. IFIP TC 5/WG 5.10 Working Conference on Geometric Modeling in Computer Graphics, pages 193–212, 1993.
[9] R. Engelking and K. Svekhicki. Topology: A Geometric Approach. Heldermann Verlag, Berlin, 1992.
[10] B. Falcidieno and O. Ratto. Two-manifold cell-decomposition of r-sets. In A. Kilgour and L. Kjelldahl, editors, Computer Graphics Forum (EUROGRAPHICS '92 Proceedings), volume 11, number 3, pages 391–404, September 1992.
[11] A. Gueziec, G. Taubin, F. Lazarus, and W. Horn. Converting sets of polygons to manifold surfaces by cutting and stitching. In Conference Abstracts and Applications: SIGGRAPH 98, July 14–21, 1998, Orlando, FL, pages 245–245, New York, 1998. ACM Press.
[12] L. Guibas and J. Stolfi. Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams. ACM Transactions on Graphics, 4(2):74–123, April 1985.
[13] E.L. Gursoz, Y. Choi, and F.B. Prinz. Vertex-based representation of non-manifold boundaries. In M.J. Wozny, J.U. Turner, and K. Preiss, editors, Geometric Modeling for Product Engineering, pages 107–130. Elsevier Science Publishers B.V., North-Holland, 1990.
[14] I.A. Volodin, V.E. Kuznetsov, and A.T. Fomenko. The problem of discriminating algorithmically the standard three-dimensional sphere. Russian Math. Surveys, 29(5):71–172, 1974. Original Russian article in Uspekhi Mat. Nauk, 29 (1974), pp. 72–168.
[15] W.B.R. Lickorish. Simplicial moves on complexes and manifolds. Geometry and Topology Monographs: Proceedings of the Kirbyfest, 2:299–320, 1999.
[16] P. Lienhardt. Topological models for boundary representation: a comparison with n-dimensional generalized maps. Computer-Aided Design, 23(1):59–82, 1991.
[17] P. Lienhardt. Aspects in topology-based geometric modeling: possible tools for discrete geometry? In Proceedings of Discrete Geometry for Computer Imagery, LNCS 1347, pages 33–48, 1997.
[18] M. Mantyla. An Introduction to Solid Modeling. Computer Science Press, 1983.
[19] A.A. Markov. Unsolvability of the problem of homeomorphy. In International Congress of Mathematics, pages 300–306, 1958. In Russian.
[20] J. Popovic and H. Hoppe. Progressive simplicial complexes. In ACM Computer Graphics Proc., Annual Conference Series (SIGGRAPH '97), 1997.
[21] J. Rossignac and D. Cardoze. Matchmaker: manifold BReps for non-manifold r-sets. In W.F. Bronsvoort and D.C. Anderson, editors, Proceedings of the Fifth Symposium on Solid Modeling and Applications (SSMA-99), pages 31–41, New York, June 1999. ACM Press.
[22] J.R. Rossignac and M.A. O'Connor. SGC: a dimension-independent model for pointsets with internal structures and incomplete boundaries. In M.J. Wozny, J.U. Turner, and K. Preiss, editors, Geometric Modeling for Product Engineering, pages 145–180. Elsevier Science Publishers B.V. (North-Holland), Amsterdam, 1990.
[23] J. Stillwell. Classical Topology and Combinatorial Group Theory. Number 72 in Graduate Texts in Mathematics. Springer-Verlag, New York, 1993.
[24] A. Thompson. Thin position and the recognition problem for S³. Math. Res. Lett., 1:613–630, 1994.
[25] K. Weiler. Boundary graph operators for non-manifold geometric modeling topology representations. In M.J. Wozny, H.W. McLaughlin, and J.L. Encarnacao, editors, Geometric Modeling for CAD Applications, pages 37–66. North-Holland, 1988. Elsevier Science.
[26] K. Weiler. The radial edge data structure: a topological representation for non-manifold geometric boundary modeling. In M.J. Wozny, H.W. McLaughlin, and J.L. Encarnacao, editors, Geometric Modeling for CAD Applications, pages 3–36. North-Holland, 1988. Elsevier Science.
[27] K. Weiler. Topological Structures for Geometric Modeling. Ph.D. thesis, Computer and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY, August 1986.
[28] T.C. Woo. A combinatorial analysis of boundary data structure schemata. IEEE Computer Graphics and Applications, 5(3):19–27, March 1985.
4D Minimal Non-simple Sets

C.J. Gau¹ and T. Yung Kong²

¹ Department of Computer Science, Graduate School and University Center, City University of New York, New York, NY 10036, U.S.A.
[email protected]

² Department of Computer Science, Queens College, City University of New York, Flushing, NY 11367, U.S.A.
[email protected]

Abstract. One method of verifying that a given parallel thinning algorithm "preserves topology" is to show that no iteration ever deletes a minimal non-simple ("MNS") set of 1's. The practicality of this method depends on the fact that few types of set can be MNS without being a component. The problem of finding all such types of set has been solved (by Ronse, Hall, Ma, and the authors) for 2D and 3D Cartesian grids, and for 2D hexagonal and 3D face-centered cubic grids. Here we solve this problem for a 4D Cartesian grid, in the case where 80-adjacency is used on 1's and 8-adjacency on 0's.

Keywords. 4D xel, attachment, minimal non-simple, MNS, parallel thinning.
1 Introduction
Loosely speaking, a set of 1's in a binary image is said to be simple if parallel deletion of those 1's "preserves topology". A non-simple set of 1's must contain a minimal non-simple (MNS) set. So one way to prove that a given parallel thinning algorithm always "preserves topology" is to show that no iteration of that algorithm can ever delete an MNS set of 1's. The practicality of this method depends on the fact that there are few types of possible MNS sets, and even fewer types of sets that can be MNS without being components of the 1's. (Here we consider two sets of grid points to be of the same type if one set is a translate of the other.) The problem of identifying all such types of sets has been solved for 2D and 3D Cartesian grids, and for 2D hexagonal and 3D face-centered cubic grids. Ronse [1] introduced the concept of an MNS set (called an MND set in [1]), and solved the problem for a 2D Cartesian grid with (8,4) or (4,8) connectedness. Hall [2] solved the problem for a 2D hexagonal grid with (6,6) connectedness. Ma [3] solved the problem for a 3D Cartesian grid with (26,6), (6,26), (18,6), or
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 81–91, 2002. © Springer-Verlag Berlin Heidelberg 2002
(6,18) connectedness. The authors [4] solved the problem for a 3D face-centered cubic grid with (18,12), (12,12), or (12,18) connectedness. In this paper we solve the problem for a 4D Cartesian grid with (80,8) connectedness: We determine which sets of grid points can be MNS, and also determine which of those sets can be MNS without being a component of the 1’s. This gives a method of verifying that a given 4D parallel thinning algorithm always “preserves topology”. Such algorithms might be of use in temporal image analysis, in which a time sequence of 3D images is represented as a 4D image. As in [4] and [5], we do not work directly with grid points, but prefer to think in terms of their Voronoi neighborhoods. For a 4D Cartesian grid the Voronoi neighborhoods of grid points are “upright” 4-dimensional hypercubes. We will call such a hypercube a 4-xel. Assuming without loss of generality that the grid points have coordinates of the form (i1 + 0.5, i2 + 0.5, i3 + 0.5, i4 + 0.5) where the i’s are integers, the vertices of a 4-xel have integer coordinates and the edges of a 4-xel have length 1. We identify each binary image on a 4D Cartesian grid with the set of all 4-xels that are centered at the grid points which have value 1 in the image. This allows us to define a binary image simply as a finite set of 4-xels. The concepts of a simple 1 and a simple set of 1’s will be made precise by the concepts of a simple 4-xel and a simple set of 4-xels (in a binary image). We will use the same general approach to MNS sets as was used in [4] and [5]. Our work will be based on a characterization of simple 4-xels in terms of their attachment sets that was given in [6]. The attachment set of a 4-xel in a binary image is the subset of the boundary of the 4-xel that is shared with other 4-xels in the image.
2 n-Xels, 4D Binary Images, Xel-Complexes
An elementary 0-cell is a singleton set {i}, where i is an integer. An elementary 1-cell is a closed unit interval [i, i + 1] of the real line, where i is an integer. A 4-xel is a Cartesian product of 4 elementary 1-cells. More generally, for 0 ≤ n ≤ 4, an n-xel is a Cartesian product of n elementary 1-cells and 4 − n elementary 0-cells, in some order. If q is an n-xel for some n, then we say q is a xel. A 0-xel will also be called a vertex and a 1-xel will also be called an edge. A 4D binary image is a finite set of 4-xels. An n-xel is the trajectory of an (n − 1)-xel as it moves one unit in the positive or negative direction of the coordinate axis that is perpendicular to the (n − 1)-xel. This is illustrated in Figure 1. The reader may sometimes find it helpful to think of 4-xels as trajectories of 3-xels that move through a unit of time. If a xel y is a proper subset of a xel x, then we say y is a proper face of x, and write y < x. A xel x is a Cartesian product of elementary 1-cells and elementary 0-cells, and each proper face of x can be obtained from the product by replacing one or more of its elementary 1-cells [i, i + 1] with {i} or {i + 1}. A set K of xels is called a xel-complex if K is finite and, for every xel x ∈ K, every proper face of x is also in K. If y < x is a k-xel then we say y is a k-face of x.
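The face structure of a xel follows directly from the product definition. The sketch below uses a hypothetical encoding of my own: a xel is a 4-tuple whose entries are either (i,) (an elementary 0-cell {i}) or (i, i + 1) (an elementary 1-cell), and faces are enumerated by replacing 1-cells with their endpoints.

```python
# A sketch (encoding is my choice): xels as 4-tuples of elementary cells,
# with dimension and face enumeration per the product definition above.

from itertools import product

def dim(xel):
    """The dimension of a xel = the number of elementary 1-cells in the product."""
    return sum(1 for c in xel if len(c) == 2)

def faces(xel):
    """All faces of a xel (including itself), obtained by replacing
    elementary 1-cells [i, i+1] with {i} or {i+1}."""
    options = [[c] if len(c) == 1 else [c, (c[0],), (c[1],)] for c in xel]
    return set(product(*options))

q = ((0, 1), (0, 1), (0, 1), (0, 1))      # a 4-xel
fs = faces(q)
print(dim(q))                              # 4
print(len(fs) - 1)                         # 80 proper faces: 3^4 - 1
print(sum(1 for f in fs if dim(f) == 0))   # 16 vertices
print(sum(1 for f in fs if dim(f) == 3))   # 8 3-faces
```

The counts 80 and 8 foreshadow the two adjacency relations of the next paragraph: a 4-xel meets 80 others and shares a 3-face with 8 of them.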
4D Minimal Non-simple Sets
Fig. 1. An n-xel can be viewed as the trajectory of a moving (n − 1)-xel.
Each 4-xel intersects just 80 other 4-xels, and shares a 3-face with just 8 other 4-xels. Accordingly, two 4-xels are said to be 80-adjacent if they are distinct but intersect, and are said to be 8-adjacent if their intersection is a 3-xel. For any set S of 4-xels, and κ = 80 or 8, two 4-xels q1, q2 ∈ S are said to be κ-connected in S if they are related by the reflexive transitive closure of the κ-adjacency relation on S. This is an equivalence relation on S, and its equivalence classes are called the κ-components of S.
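These adjacency counts can be checked directly by identifying each 4-xel with the integer coordinates of its low corner (a hypothetical encoding, chosen here for convenience):

```python
from itertools import product

# Identify each 4-xel with the integer coordinates of its "low" corner,
# so the 4-xel p occupies [p1, p1+1] x ... x [p4, p4+1].
def intersects(p, q):
    """Two upright unit 4-cubes meet iff they differ by at most 1
    in every coordinate."""
    return all(abs(a - b) <= 1 for a, b in zip(p, q))

def shares_3_face(p, q):
    """They share a 3-xel iff they differ by 1 in exactly one
    coordinate and agree in the other three."""
    return sorted(abs(a - b) for a, b in zip(p, q)) == [0, 0, 0, 1]

origin = (0, 0, 0, 0)
others = [c for c in product(range(-2, 3), repeat=4) if c != origin]
n80 = sum(1 for c in others if intersects(origin, c))    # 80-adjacent
n8 = sum(1 for c in others if shares_3_face(origin, c))  # 8-adjacent
```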
3 Simple 4-Xels and Attachment Sets of 4-Xels
A 4-xel in a 4D binary image I is said to be simple in I if (loosely speaking) its deletion “preserves topology”. More precisely, if I is a 4D binary image we say that a 4-xel q ∈ I is simple if the polyhedron ⋃(I \ {q}) is a strong deformation retract of the polyhedron ⋃I. In other words, q is simple if and only if the union of the 4-xels in I can be continuously deformed over itself onto the union of the 4-xels in I \ {q}, in such a way that all points in ⋃(I \ {q}) at the start remain fixed throughout the deformation process. This definition of simpleness in terms of the polyhedra ⋃I and ⋃(I \ {q}) is consistent with the use of 80-connectedness as the definition of connectedness on the 4-xels in I and the use of 8-connectedness as the definition of connectedness on the 4-xels in the complement of I. Indeed, two 4-xels belong to the same 80-component of I if and only if they lie in the same connected component of ⋃I, and they belong to the same 8-component of the 4-xels in the complement of I if and only if their interiors lie in the same connected component of R⁴ \ ⋃I. Our definition of simpleness has the advantage of being independent of dimensionality and of the shapes of xels. (For example, the same definition of simpleness could be used in 3D binary images on a face-centered cubic grid, except that instead of 4-xels we would have rhombic dodecahedra belonging to a tessellation of 3-space by congruent rhombic dodecahedra. In fact our definition of simpleness would be appropriate even in 5- and higher-dimensional binary images.) The definition involves continuous deformation, but Theorems 1 and 2 below give essentially discrete sets of necessary and sufficient conditions for q to be simple in I.
C.J. Gau and T.Y. Kong
These two theorems depend on the concept of the attachment complex of a 4-xel q in I, which is denoted by Attach(q, I) and is defined by

Attach(q, I) = {f | f < q and f < y for some y ∈ I − {q}}

The closed polyhedral set ⋃Attach(q, I) is called the attachment set of q in I. This set is also given by ⋃Attach(q, I) = q ∩ ⋃(I \ {q}). The boundary complex of a 4-xel q, denoted by Boundary(q), is the set of all the proper faces of q. Evidently, Attach(q, I) ⊆ Boundary(q). For any xel-complex K, the Euler characteristic of K is the integer χ(K) defined by χ(K) = c0(K) − c1(K) + c2(K) − c3(K) + c4(K), where cn(K) is the number of n-xels in K. If P is the union of the xels of a xel-complex K, then we define χ(P) to be χ(K). It can be shown that if x is any xel then χ(x) = 1. We are now ready to state the above-mentioned essentially discrete sets of necessary and sufficient conditions for a 4-xel q to be simple.

Theorem 1. Let q be a 4-xel in a 4D binary image I. Then q is simple in I if and only if the following all hold:
1. Attach(q, I) is connected and nonempty.
2. Boundary(q) − Attach(q, I) is connected and nonempty.
3. ⋃Attach(q, I) is simply connected.

Theorem 2. Let q be a 4-xel in a 4D binary image I. Then q is simple in I if and only if the following all hold:
1. Attach(q, I) is connected.
2. Boundary(q) − Attach(q, I) is connected.
3. χ(Attach(q, I)) = 1.

Theorems 1 and 2 are Theorems 7 and 9 in [6], except that the definition of a simple 4-xel used in that paper might seem to be more stringent than the definition given above: In [6], a 4-xel q is said to be simple in a 4D binary image I if ⋃Attach(q, I) is a strong deformation retract of q. As is explained in [6], it follows from the main result of [7] that the three conditions of Theorem 1 are equivalent to the three conditions of Theorem 2. So the two theorems are equivalent. An elementary proof of the “if” parts of these theorems is given in [6].
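The fact that χ(x) = 1 for any xel can be checked by counting faces: an n-xel has C(n, k)·2^(n−k) k-faces among the faces of its closure. A small sketch (function names ours):

```python
from math import comb

def c(n, k):
    """Number of k-xels among the faces of an n-xel (itself included):
    choose the k coordinates that keep their 1-cell, and an endpoint
    for each of the remaining n - k."""
    return comb(n, k) * 2 ** (n - k)

def euler(counts):
    """chi(K) = c0(K) - c1(K) + c2(K) - ... for a xel-complex K."""
    return sum((-1) ** k * ck for k, ck in enumerate(counts))

n = 4
closure_counts = [c(n, k) for k in range(n + 1)]   # [16, 32, 24, 8, 1]
boundary_counts = closure_counts[:-1] + [0]        # Boundary(q): drop q itself

chi_x = euler(closure_counts)    # 1, as stated for any xel
chi_bd = euler(boundary_counts)  # 0
```

The same count gives χ(Boundary(q)) = 0 for a 4-xel q, which is consistent with the boundary of a 4-cube being a 3-sphere.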
A shorter proof can be given using results of algebraic topology—notably Fact 3.4 in [8] (which follows from Corollaries 3.2.5, 1.3.11, and 1.4.10, and Theorem 1.4.11, in [9]) and Corollary 8.3.11 in [10]. Although the “only if” parts of Theorems 1 and 2 are easy to prove if simple 4-xels are defined as in [6], we have to work a little harder to give a proof for the definition of simpleness used in this paper. However, standard techniques of algebraic topology suffice. Indeed, assuming q is simple in I, one can use the exact homology sequences of the pairs (q, ⋃Attach(q, I)) and (⋃I, ⋃(I \ {q})) together with the excision theorem to deduce that the reduced homology groups
of ⋃Attach(q, I) are all trivial. The three conditions of Theorem 2 follow from this and the Alexander duality theorem. The definition of simpleness used in [6] is in fact equivalent to the definition used in this paper, because both definitions are equivalent to the three conditions of Theorem 1 or 2. An advantage of the definition we are now using is that it involves only 4-xels (and does not involve their attachment sets). It may be worth mentioning here that, while Theorem 2 provides an easy way of determining whether or not a 4-xel in a 4D binary image is simple, there seems to be no analog of Theorem 2 for 5- and higher-dimensional images. Indeed, for n ≥ 5 the authors do not currently have an easy way to determine if the attachment set of an n-xel in an n-dimensional binary image is simply connected.
4 MNS Sets and the Main Theorem
A set D of 4-xels in a 4D binary image I is said to be simple in I if the elements of D can be arranged in a sequence in which each element is simple after all of its predecessors in the sequence have been removed from the image. In particular, the empty set is simple in I, and a singleton set {q} is simple in I if and only if q is a simple 4-xel of I. Since the deletion of a single simple 4-xel “preserves topology”, so does the parallel deletion of a simple set of 4-xels. More precisely, if D is a simple set of 4-xels of a 4D binary image I, then it follows from the definition of a simple 4-xel (and the transitivity of the relation “is a strong deformation retract of”) that ⋃(I \ D) is a strong deformation retract of ⋃I. We are interested in ways of proving that the set of 4-xels deleted at each iteration of a given parallel thinning algorithm for 4D binary images is always a simple set. (This would imply that the algorithm “preserves topology”.) Since a non-simple set must evidently contain a minimal non-simple (MNS) set, one method of proof would be to show that at each iteration no set of 4-xels that all satisfy the algorithm’s deletion condition can be an MNS set of the image. (A set D of 4-xels of I is an MNS set of I if and only if D is a non-simple set of I but every proper subset of D is a simple set of I.) In fact, this would show that the set of 4-xels deleted at each iteration is not only simple but also hereditarily simple—i.e., all of its subsets are simple sets of the image. One example of an MNS set in a 4D binary image is shown in Figure 2. The next four theorems state fundamental properties of MNS sets. These results were established for 3D binary images in [5]—see Propositions 4.3, 4.5, 4.6, and 4.7 in that paper—and can be proved for 4D images in the same way.

Theorem 3. Let D be a nonempty set of 4-xels in a 4D binary image I. Then D is MNS in I if and only if the following conditions both hold:
1. Each element q ∈ D is non-simple in I \ (D \ {q}).
2. Each element q ∈ D is simple in I \ D′ whenever D′ ⊊ D \ {q}.
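Theorem 3 turns minimal non-simplicity into a finite check, given a simpleness test. As an illustration only (not from the paper), the sketch below applies the two conditions to a toy 1D analogue in which xels are unit segments indexed by integers, and a segment is deemed simple iff exactly one of its two neighbors is present:

```python
from itertools import combinations

def is_simple_1d(image, q):
    """Toy stand-in for a simpleness test: in a 1D 'image' of unit
    segments indexed by integers, deleting q preserves topology iff
    exactly one of its two neighboring segments is present."""
    return ((q - 1) in image) + ((q + 1) in image) == 1

def proper_subsets(s):
    """All proper subsets of s (including the empty set when s is nonempty)."""
    s = sorted(s)
    return [set(c) for r in range(len(s)) for c in combinations(s, r)]

def is_mns(image, d, is_simple=is_simple_1d):
    """Check the two conditions of Theorem 3 for a nonempty set d."""
    image, d = set(image), set(d)
    # 1. Each q in d is non-simple once all other elements of d are deleted.
    cond1 = all(not is_simple(image - (d - {q}), q) for q in d)
    # 2. Each q in d is simple after deleting any proper subset of d - {q}.
    cond2 = all(is_simple(image - dp, q)
                for q in d for dp in proper_subsets(d - {q}))
    return cond1 and cond2

# {1} is MNS in {0, 1, 2}: the middle segment is non-simple, and the
# only proper subset of D \ {1} to check is vacuous.
```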
We say that a set D of 4-xels can be MNS if there is some 4D binary image I such that D is an MNS set of I. We say that a set D of 4-xels can be MNS without being a component if there is some 4D binary image I such that D is an MNS set of I and D is not an 80-component of I.

Theorem 4. A set of 4-xels can be MNS only if it is a subset of some 2 × 2 × 2 × 2 block of sixteen 4-xels.

Theorem 5. Let D be an MNS set of a 4D binary image I, and suppose D is not an 80-component of I. Then every element of D is 80-adjacent to a 4-xel of I that is not in D.

Theorem 6. If a set D of 4-xels can be MNS without being a component, then every subset D′ of D can be MNS without being a component.

We now state our Main Theorem, which identifies all sets of 4-xels that can be MNS, and all such sets that can be MNS without being a component:

Theorem 7 (Main Theorem). Let D be a set of 4-xels. Then:
1. D can be MNS if and only if D is contained in some 2 × 2 × 2 × 2 block of sixteen 4-xels.
2. D can be MNS without being a component if and only if D is a subset of some 2 × 2 × 2 block of eight 4-xels.

Note that there are four types of 2 × 2 × 2 block: Such a block could be a 1 × 2 × 2 × 2, a 2 × 1 × 2 × 2, a 2 × 2 × 1 × 2, or a 2 × 2 × 2 × 1 block.
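The block conditions of Theorem 7 are easy to test. In the sketch below (encoding hypothetical), a 4-xel is identified with the integer low corner of its hypercube; a set fits in a 2 × 2 × 2 × 2 block iff max − min ≤ 1 in every coordinate, and in a 2 × 2 × 2 block iff, in addition, some coordinate is constant:

```python
def spans(d):
    """Per-coordinate extent (max - min) of a set of 4-xels given by
    the integer coordinates of their low corners."""
    return [max(x[i] for x in d) - min(x[i] for x in d) for i in range(4)]

def fits_2x2x2x2(d):
    """True iff d fits in some 2 x 2 x 2 x 2 block of sixteen 4-xels."""
    return all(s <= 1 for s in spans(d))

def fits_2x2x2(d):
    """True iff d fits in a block with extent 2 in three coordinates
    and 1 in the remaining one (a "2 x 2 x 2 block" in the text)."""
    s = spans(d)
    return all(v <= 1 for v in s) and any(v == 0 for v in s)

# Two diagonally opposite 4-xels of a 2x2x2x2 block: by Theorem 7 this
# set can be MNS, but not without being a component.
d = [(0, 0, 0, 0), (1, 1, 1, 1)]
```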
5 Proof of the Main Theorem

5.1 Useful Results
The purpose of this subsection is to present three results that will be used in our proof of the Main Theorem. The first result is the Inclusion-Exclusion Principle for Euler characteristics, which is the following identity:

χ(K1 ∪ K2 ∪ · · · ∪ Kn) = Σ_{T ⊆ {1,2,...,n}, T ≠ ∅} (−1)^{|T|−1} χ(⋂_{i∈T} Ki)    (1)

This holds for arbitrary xel-complexes K1, K2, . . . , Kn. The identity follows from the Inclusion-Exclusion Principle for finite sets and the definition of χ(K). The second result is the next proposition, which is related to the following lemma:

Lemma 1. Let P be a union of xels and let x be an edge or a 2-xel such that x ⊄ P and χ(x ∩ P) = 1. Then one of the following is true:
1. x ∩ P consists of a single vertex of x.
2. x is a 2-xel and x ∩ P is one of the four edges of x.
3. x is a 2-xel and x ∩ P is a union of two edges of x that share a vertex.
4. x is a 2-xel and x ∩ P is a union of three of the four edges of x.
This lemma is easily verified by considering all possible forms of x ∩ P. From the lemma it is not hard to deduce Proposition 1 below, which will be used in section 5.3. We omit the proof of the lemma, but expect most readers will find it intuitively clear that all parts of the proposition are valid in each of the four cases of the lemma.

Proposition 1. Let q be a 4-xel. Let P be a union of xels in Boundary(q) and let x be an edge or a 2-xel in Boundary(q) such that χ(x ∩ P) = 1. Then:
1. P is connected if and only if P ∪ x is connected.
2. ⋃Boundary(q) \ P is connected if and only if ⋃Boundary(q) \ (P ∪ x) is connected.
3. χ(P) = χ(P ∪ x).

The following proposition is the third result. This will save us a lot of case-checking in section 5.2:

Proposition 2. Let q be a 4-xel, and let X be any nonempty set of xels in Boundary(q) that satisfies one of the following two conditions:
A. There is some vertex that belongs to all of the xels in X.
B. X = Y ∪ Z, where Y ∩ Z ≠ ∅, there is some vertex that belongs to all of the xels in Y, there is some vertex that belongs to all of the xels in Z, and no xel in Y \ Z intersects a xel in Z \ Y.
Then X satisfies the following conditions:
1. X is connected.
2. Boundary(q) \ X is connected.
3. χ(⋃X) = 1.

In fact condition A in this proposition is a special case of condition B (since we may take Y = Z = X in B). The proposition follows from Theorem 4.1 in [8]: Condition B implies that X is SN in the sense of [8], and so the theorem implies ⋃X is contractible. By standard results of algebraic topology (including the Alexander duality theorem) [10], this implies X satisfies conditions 1 – 3.

5.2 The “If” Parts of the Main Theorem
For any 4-xel q in a 4D binary image I, let A(q, I) = {q ∩ x | x ∈ I \ {q}} \ {∅}. Note that ⋃A(q, I) = ⋃Attach(q, I). To show that the “if” part of assertion 1 of the Main Theorem is valid, let D be a subset of a 2 × 2 × 2 × 2 block of 4-xels such that D is an 80-component of a 4D binary image I. We claim D is MNS in I. Evidently, D satisfies condition 1
Fig. 2. Let I be the 4D binary image consisting of the ten 4-xels shown in this figure. Then the 2×2×2 block of eight 4-xels in the center constitutes an MNS set of I. (In the figure a larger scale is used in the direction of the 4th coordinate axis than in the directions of the other three, so the 4-xels appear to have been stretched in that direction.) Notice that there is an edge, indicated by the thick gray line, which connects the two “end” 4-xels; this edge is shared by the eight central 4-xels. Let q be any one of those eight 4-xels. Then it is easy to deduce from Theorem 2 and Proposition 2 that q is simple in the image that remains after any proper subset of the other seven central 4-xels is deleted from I. But if we delete all seven of the other central 4-xels from I then q is non-simple in the remaining image (which consists just of q and the two end 4-xels). Hence, by Theorem 3, the central block of eight 4-xels is MNS in I as we claimed.
of Theorem 3. It remains to show that D also satisfies condition 2 of Theorem 3. Let q ∈ D and let I′ be obtained from I by deleting any proper subset of the other elements of D. We need to show that q is simple in I′. Let X = A(q, I′). As D is contained in a 2 × 2 × 2 × 2 block of 4-xels, the central vertex of that block belongs to all the 4-xels in D and hence to all the xels in X. So, since ⋃X = ⋃Attach(q, I′), it follows from Proposition 2 that the three conditions of Theorem 2 hold with I′ in place of I. Thus q is simple in I′, as required. To show that the “if” part of assertion 2 is also valid, let I be a 2 × 2 × 2 × 3 block of 4-xels and let D be its central 2 × 2 × 2 × 1 block (which is clearly not an 80-component of I). We claim D is MNS in I. If we can prove this then, by Theorem 6, the “if” part of assertion 2 is valid. By symmetry we may assume that I = {i1 × i2 × i3 × i4 | i1, i2, i3 ∈ {[0, 1], [1, 2]}, i4 ∈ {[0, 1], [1, 2], [2, 3]}} so that D = {i1 × i2 × i3 × [1, 2] | i1, i2, i3 ∈ {[0, 1], [1, 2]}}. Then D clearly satisfies condition 1 of Theorem 3. To show that D also satisfies condition 2 of Theorem 3, let q ∈ D and let I′ be obtained from I by deleting any proper subset of the other seven 4-xels in D. We need to show that q is simple in I′. Let D− = {i1 × i2 × i3 × [0, 1] | i1, i2, i3 ∈ {[0, 1], [1, 2]}} and let D+ = {i1 × i2 × i3 × [2, 3] | i1, i2, i3 ∈ {[0, 1], [1, 2]}}. Let X = A(q, I′). Since I′ ⊆ D− ∪ D ∪ D+, we have X = Y ∪ Z, where Y = A(q, I′ ∩ (D ∪ D−)) and Z = A(q, I′ ∩ (D ∪ D+)). Since D ∪ D− = {i1 × i2 × i3 × i4 | i1, i2, i3, i4 ∈ {[0, 1], [1, 2]}}, the vertex (1,1,1,1) belongs to all the 4-xels in D ∪ D− and hence to all the xels in Y. Similarly, the vertex (1,1,1,2)
belongs to all the xels in Z. Moreover, Y ∩ Z = A(q, I′ ∩ D) ≠ ∅ because at least one element of D \ {q} is in I′. Also, no xel in Y \ Z = A(q, I′ ∩ D−) intersects a xel in Z \ Y = A(q, I′ ∩ D+). Since ⋃X = ⋃Attach(q, I′), it follows from Proposition 2 that the three conditions of Theorem 2 hold with I′ in place of I, which implies q is simple in I′, as required.

5.3 The “Only If” Parts of the Main Theorem
The “only if” part of assertion 1 is just Theorem 4. To prove the “only if” part of assertion 2, let S be any MNS set of a 4D binary image I. By Theorem 4, S is contained in some 2 × 2 × 2 × 2 block. A set T of 4-xels that is contained in some 2 × 2 × 2 × 2 block will be called a spanning set if there is no 2 × 2 × 2 block that contains T. We now suppose that our MNS set S is a minimal spanning set—i.e., we suppose S is a spanning set but no proper subset of S is a spanning set—and deduce that S must be an 80-component of I. This will show that no minimal spanning set can be MNS without being a component, which (by Theorem 6) is enough to establish the “only if” part of assertion 2 of the Main Theorem, since every spanning set contains a minimal spanning set. For any two 4-xels p and q let p − q denote the vector from the centroid of q to the centroid of p. We define the l1-diameter of S to be max_{p,q∈S} ‖p − q‖₁, where ‖v‖₁ is the l1-norm of the vector v (i.e., the sum of the absolute values of the four components of v). Since S is a spanning set, the l1-diameter of S is at least 2, and is therefore equal to 2, 3, or 4.

Case 1: The l1-Diameter of S Is 4. In this case S = {q, a} for some 4-xels q and a such that ‖q − a‖₁ = 4. Note that q ∩ a consists of just one vertex, v say. Let P = Attach(q, I \ {a}), so that Attach(q, I) = P ∪ {v}. Since S is MNS in I, it follows from Theorem 3 that q is non-simple in I \ {a} but q is simple in I. The latter and Theorem 2 imply P ∪ {v} = Attach(q, I) is connected, and so either v ∈ P or P = ∅. But v ∈ P would imply Attach(q, I) = P ∪ {v} = P = Attach(q, I \ {a}), which (by Theorem 2) would make it impossible for q to be simple in I but non-simple in I \ {a}. Hence P = ∅ and so, by Theorem 5, S is an 80-component of I.

Case 2: The l1-Diameter of S Is 3. In this case it is readily confirmed that S = {q, a, b} for some 4-xels q, a, and b such that ‖q − a‖₁ = ‖q − b‖₁ = 3 and ‖a − b‖₁ = 2. Let q ∩ a = ea and q ∩ b = eb.
Then ea and eb are edges and ea ∩ eb consists of just a vertex. Let P = Attach(q, I \ {a, b}), so Attach(q, I) = P ∪ ea ∪ eb, Attach(q, I \ {a}) = P ∪ eb, and Attach(q, I \ {b}) = P ∪ ea. Since S is MNS in I, it follows from Theorem 3 that q is non-simple in I \ {a, b} but q is simple in I, in I \ {a}, and in I \ {b}. So it follows from Theorem 2 and Proposition 1 that neither χ(P ∩ ea) nor χ(P ∩ eb) is equal to 1.
As Attach(q, I) = P ∪ ea ∪ eb, Attach(q, I \ {a}) = P ∪ eb, Attach(q, I \ {b}) = P ∪ ea, and χ(ea) = χ(eb) = χ(ea ∩ eb) = 1, it follows from Theorem 2 and the Inclusion-Exclusion Principle for Euler characteristics that

1 = χ(P ∪ ea) = χ(P) + 1 − χ(P ∩ ea)
1 = χ(P ∪ eb) = χ(P) + 1 − χ(P ∩ eb)
1 = χ(P ∪ ea ∪ eb) = χ(P) + 1 + 1 − χ(P ∩ ea) − χ(P ∩ eb) − 1 + χ(P ∩ ea ∩ eb)

and therefore χ(P) = χ(P ∩ ea) = χ(P ∩ eb) = χ(P ∩ ea ∩ eb). So, since neither χ(P ∩ ea) nor χ(P ∩ eb) is equal to 1, χ(P ∩ ea ∩ eb) ≠ 1. Thus P ∩ ea ∩ eb = ∅ (since ea ∩ eb consists of just a vertex) and χ(P ∩ ea ∩ eb) = 0. Therefore χ(P) = χ(P ∩ ea) = χ(P ∩ eb) = 0, so P ∩ ea = ∅. Now if P ≠ ∅ then P ∪ ea is disconnected, which contradicts Theorem 2 because P ∪ ea = Attach(q, I \ {b}) and q is simple in I \ {b}. Hence P = ∅ and so, by Theorem 5, S is an 80-component of I.

Case 3: The l1-Diameter of S Is 2. In this case it is quite easy to verify that S = {q, a, b, c}, where ‖x − y‖₁ = 2 for all distinct x and y in S. Let q ∩ a = fa, q ∩ b = fb, and q ∩ c = fc. Then fa, fb, and fc are 2-xels, every pair of them share an edge, and fa ∩ fb ∩ fc consists of just a vertex. Let P = Attach(q, I \ {a, b, c}), so that Attach(q, I \ {a, b}) = P ∪ fc. Since S is MNS in I, it follows from Theorem 3 that q is non-simple in I \ {a, b, c} but q is simple in I, I \ {c}, I \ {b, c}, and I \ {a, b}. So it follows from Theorem 2 and Proposition 1 that χ(P ∩ fc) ≠ 1. Since Attach(q, I \ {b, c}) = P ∪ fa, it follows from Theorem 2, the fact that χ(x) = 1 for any xel x, and the Inclusion-Exclusion Principle for Euler characteristics that 1 = χ(P ∪ fa) = χ(P) + 1 − χ(P ∩ fa). Hence χ(P) = χ(P ∩ fa). By symmetrical arguments we must have

χ(P) = χ(P ∩ fa) = χ(P ∩ fb) = χ(P ∩ fc)    (2)

Similarly, since Attach(q, I \ {c}) = P ∪ fa ∪ fb, we have 1 = χ(P ∪ fa ∪ fb) = χ(P) + 1 + 1 − χ(P ∩ fa) − χ(P ∩ fb) − 1 + χ(P ∩ fa ∩ fb) and so by eqn. (2) we have χ(P) = χ(P ∩ fa ∩ fb). By symmetrical arguments we must have

χ(P) = χ(P ∩ fa ∩ fb) = χ(P ∩ fb ∩ fc) = χ(P ∩ fa ∩ fc)    (3)

Again, since Attach(q, I) = P ∪ fa ∪ fb ∪ fc, we have 1 = χ(P ∪ fa ∪ fb ∪ fc) = χ(P) + 1 + 1 + 1 − χ(P ∩ fa) − χ(P ∩ fb) − χ(P ∩ fc) − 1 − 1 − 1 + χ(P ∩ fa ∩ fb) + χ(P ∩ fb ∩ fc) + χ(P ∩ fa ∩ fc) + 1 − χ(P ∩ fa ∩ fb ∩ fc) and so by equations (2) and (3) we have:

χ(P) = χ(P ∩ fa ∩ fb ∩ fc)    (4)

Recalling that χ(P ∩ fc) ≠ 1, we see that equations (2) and (4) imply χ(P ∩ fa ∩ fb ∩ fc) ≠ 1. Thus P ∩ fa ∩ fb ∩ fc = ∅ (since fa ∩ fb ∩ fc consists of just
a vertex) and χ(P ∩ fa ∩ fb ∩ fc) = 0. Therefore, by equations (2), (3), and (4), the 2-xel fa satisfies χ(P ∩ fa) = χ(P ∩ fa ∩ fb) = 0. Since χ(P ∩ fa) = 0, P ∩ fa either is empty or is the union of the four edges of fa. But the latter is impossible because fa ∩ fb is an edge of fa that is not contained in P (since χ(P ∩ fa ∩ fb) = 0). Hence P ∩ fa = ∅. Now if P ≠ ∅ then P ∪ fa is disconnected, which contradicts Theorem 2 because P ∪ fa = Attach(q, I \ {b, c}) and q is simple in I \ {b, c}. So P = ∅ and, by Theorem 5, S is an 80-component of I.
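The Euler-characteristic bookkeeping used in Cases 2 and 3 rests on identity (1), which can be sanity-checked on explicit small complexes. The sketch below (representation ours) verifies the identity for the face complexes of two unit squares sharing an edge:

```python
from itertools import combinations, product

def closure(xel):
    """Face complex of a xel: each 1-cell is kept or replaced by an endpoint."""
    opts = [[(a, b)] if a == b else [(a, b), (a, a), (b, b)] for a, b in xel]
    return set(product(*opts))

def euler(K):
    """chi(K): alternating count of n-xels, n = number of 1-cells."""
    return sum((-1) ** sum(a < b for a, b in x) for x in K)

# Two unit squares sharing an edge, as xel-complexes.
K1 = closure(((0, 1), (0, 1)))
K2 = closure(((1, 2), (0, 1)))
complexes = [K1, K2]

# Right-hand side of the Inclusion-Exclusion Principle, identity (1).
union = set().union(*complexes)
rhs = 0
for r in range(1, len(complexes) + 1):
    for T in combinations(complexes, r):
        rhs += (-1) ** (r + 1) * euler(set.intersection(*T))
```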
6 Concluding Remarks
We have identified all types of sets of 4-xels that can be minimal non-simple (MNS) in a 4D binary image, and all types that can be MNS without being an 80-component of the image, when 4D 80-connectedness is used on 4-xels in the image and 4D 8-connectedness on 4-xels in its complement. This work is based on a characterization of simple 4-xels that was given in [6], and the Inclusion-Exclusion Principle for Euler characteristics.
References

1. Ronse, C.: Minimal test patterns for connectivity preservation in parallel thinning algorithms for binary digital images. Discrete Applied Mathematics 21 (1988) 67–79
2. Hall, R.W.: Tests for connectivity preservation for parallel reduction operators. Topology and Its Applications 46 (1992) 199–217
3. Ma, C.M.: On topology preservation in 3D thinning. CVGIP: Image Understanding 59 (1994) 328–339
4. Gau, C.J., Kong, T.Y.: Minimal nonsimple sets of voxels in binary images on a face-centered cubic grid. International Journal of Pattern Recognition and Artificial Intelligence 13 (1999) 485–502
5. Kong, T.Y.: On topology preservation in 2D and 3D thinning. International Journal of Pattern Recognition and Artificial Intelligence 9 (1995) 813–844
6. Kong, T.Y.: Topology preserving deletion of 1’s from 2-, 3- and 4-dimensional binary images. In Ahronovitz, E., Fiorio, C., eds.: Discrete Geometry for Computer Imagery: 7th International Workshop (DGCI ’97, Montpellier, France, December 1997), Proceedings. Springer (1997) 3–18
7. Kong, T.Y., Roscoe, A.W.: Characterizations of simply-connected finite polyhedra in 3-space. Bulletin of the London Mathematical Society 17 (1985) 575–578
8. Saha, P.K., Kong, T.Y., Rosenfeld, A.: Strongly normal sets of tiles in N dimensions. Electronic Notes in Theoretical Computer Science 46 (2001) URL: http://www.elsevier.nl/locate/entcs/volume46.html
9. Spanier, E.H.: Algebraic Topology. Springer (1989)
10. Maunder, C.R.F.: Algebraic Topology. Dover (1996)
Receptive Fields within the Combinatorial Pyramid Framework

Luc Brun¹ and Walter G. Kropatsch²

¹ Laboratoire d’Études et de Recherche en Informatique (EA 2618), Université de Reims, France
[email protected]
² Institute for Computer-aided Automation, Pattern Recognition and Image Processing Group, Vienna Univ. of Technology, Austria
[email protected]

Abstract. A hierarchical structure is a stack of successively reduced image representations. Each basic element of a hierarchical structure is the father of a set of elements in the level below. The transitive closure of this father-child relationship associates to each element of the hierarchy a set of basic elements in the base level image representation. Such a set, called a receptive field, defines the embedding of one element of the hierarchy on the original image. Using the father-child relationship, global properties of a receptive field may be computed in O(log(m)) parallel processing steps, where m is the diameter of the receptive field. Combinatorial pyramids are defined as a stack of successively reduced combinatorial maps, each combinatorial map being defined by two permutations acting on a set of half edges named darts. The basic element of a combinatorial pyramid is thus the dart. This paper defines the receptive field of each dart within a combinatorial pyramid and studies the main properties of these sets.
1 Introduction
Regular image pyramids were introduced in 1981/82 [11] as a stack of images with exponentially reduced resolution. Each image of this sequence is called a level. Such pyramids present several interesting properties within the image processing and analysis framework, such as [4]: the reduction of noise, the processing of local and global features within the same frame, and the efficiency of many computations on this structure. Using the neighborhood relationships defined on each image, the reduction window relates each pixel of the pyramid with a set of pixels defined in the level below. The pixels belonging to one reduction window are the children of the pixel which defines it. This father-child relationship may be extended by transitivity down to the base level image. The set of children of one pixel in the base level is named its receptive field (RF) and
The authors wish to thank the anonymous reviewers for their useful comments. This work was supported by the Austrian Science Foundation under P14445-MAT.
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 92–101, 2002. © Springer-Verlag Berlin Heidelberg 2002
defines the embedding of this pixel on the original image. Using the father-child relationship, global properties of a receptive field RF(v) with a diameter m may be computed in O(log(m)) parallel processing steps thanks to local computations. However, receptive fields defined within the regular pyramid framework are not necessarily connected [4]. Furthermore, the adjacency of two pixels v and w defined at level k may not be easily interpreted on the base level image. Indeed, the boundary between the receptive fields RF(v) and RF(w) associated to this adjacency at level k may be disconnected and even incomplete. Irregular pyramids, first introduced by Meer [16], Montanvert [17] and Jolion [13], are defined as a stack of successively reduced simple graphs (i.e. graphs without double edges nor self-loops). The base level graph may be built from a sampling grid using one pixel adjacency such as the 4-neighborhood. Each graph of the hierarchy is built from the graph below by selecting a set of vertices named surviving vertices and mapping each non-surviving vertex to a surviving one [17]. This mapping induces a father-child relationship between a surviving vertex and the set of non-surviving vertices mapped to it. The reduction window of one surviving vertex is then defined as its set of children. The receptive field of one surviving vertex is defined by the transitive closure of the father-child relationship. Using this reduction scheme, the receptive field of each vertex in the hierarchy is a connected set of vertices in the base level graph. However, using simple graphs, the adjacency between two vertices is encoded by only one edge, while the receptive fields of two vertices may share several boundaries. Edges in the hierarchy may thus encode non-connected sets of boundaries between the associated receptive fields.
Moreover, the lack of self-loops in simple graphs does not allow one to differentiate an adjacency relationship between two receptive fields from an inclusion relationship. The last two drawbacks may be overcome by using the Dual graph pyramids introduced by Kropatsch [14]. Using Kropatsch’s reduction scheme, the reduction operation is encoded by edge contractions [14]. This operation contracts one edge and its two end points into a single vertex. The contraction of a graph reduces the number of vertices while maintaining the connections to other vertices. As a consequence some redundant edges such as self-loops or double edges may occur. These redundant edges may be characterized in the dual of the contracted graph. The removal of such edges is called a dual decimation step. Since the reduction scheme requires both features of the initial graph and its dual, such pyramids are called Dual graph pyramids. Within such hierarchies, each receptive field is a connected set of vertices in the base level. Moreover, each edge between two vertices in the hierarchy encodes a unique connected boundary between the associated receptive fields. Finally, the use of self-loops within the hierarchy allows one to differentiate adjacency relationships between receptive fields from inclusion relations. The basic entity of all the above pyramids is the vertex/pixel. Combinatorial maps are based on darts. Hence receptive fields of combinatorial pyramids are expressed in terms of darts. Combinatorial pyramids are equivalent to Dual graph pyramids with the exception that they represent the orientation explicitly. The expected advantages of such hierarchies within the image analysis framework are presented in [9]. The remainder of this paper is organized as follows: In Section 2 we present the combinatorial map model together with the expected advantages of this model within the pyramid framework. In Section 3 we present the construction scheme of a combinatorial pyramid. Finally, Section 4 defines the notion of receptive field within the combinatorial pyramid framework and states its major properties.
2 Combinatorial Maps
Combinatorial maps and generalized combinatorial maps define a general framework which allows one to encode any subdivision of nD topological spaces, orientable or non-orientable, with or without boundaries. The concept of maps was first introduced by Edmonds [12] in 1960 and later extended by several authors [15]. This model has been applied to several fields of computer imagery such as geometrical modeling [3] and 2D segmentation [1,5]. An exhaustive comparison of combinatorial maps with other boundary representations such as cell-tuples and quad-edges is presented in [15]. Recent trends in combinatorial maps apply this framework to the segmentation of 3D images [6,2] and the encoding of hierarchies [8,9]. The remainder of this paper will be based on 2D combinatorial maps, which will be just called combinatorial maps. A combinatorial map may be seen as a planar graph encoding explicitly the orientation of edges around a given vertex. Fig. 1a) demonstrates the derivation of a combinatorial map from a plane graph. First, edges are split into two half edges called darts, each dart having its origin at the vertex it is attached to. The fact that two half-edges (darts) stem from the same edge is recorded in the reverse permutation α. A second permutation σ encodes the set of darts encountered when turning counterclockwise around a vertex (see e.g. the σ-orbit (−8, −3, 11, 4) encoding the central vertex in Fig. 1a)).
Fig. 1. A 3 × 3 grid encoded by a combinatorial map
The symbols α∗(d) and σ∗(d) stand, respectively, for the α and σ orbits of the dart d. More generally, if d is a dart and π a permutation we will denote the π-orbit of d by π∗(d). A combinatorial map G is the triplet G = (D, σ, α), where D is the set of darts and σ, α are two permutations defined on D such that α is an involution:

∀d ∈ D,  α²(d) = d    (1)
Note that, if the darts are encoded by positive and negative integers, the involution α may be implicitly encoded by the sign (Fig. 1a)). This convention is often used in practical implementations [5], where the combinatorial map is simply implemented by an array of integers encoding the permutation σ. Given a combinatorial map G = (D, σ, α), its dual is defined by Ḡ = (D, ϕ, α) with ϕ = σ ◦ α. The orbits of the permutation ϕ encode the sets of darts encountered when turning around a face (see e.g. the ϕ-orbit (1, 8, −3, −7) in Fig. 1a)). Note that, using a counterclockwise orientation for the permutation σ, each dart of a ϕ-orbit has its associated face on its right. Figures 1b) and 1c) illustrate an alternative representation of the combinatorial map encoding. Within such a representation, each dart is represented by one vertex, and one edge connects a dart d1 to a dart d2 iff either d2 = σ(d1) or d2 = ϕ(d1). Using this representation, the σ- and ϕ-orbits of the combinatorial map are represented by the faces of the oriented graph.
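For concreteness, the sign-encoded involution and the dual permutation ϕ = σ ◦ α can be sketched in a few lines of Python. The two-vertex map with two parallel edges used below is a made-up toy example, not the map of Fig. 1:

```python
def orbit(perm, d):
    """Return the orbit of dart d under the permutation perm (a dict)."""
    seq = [d]
    e = perm[d]
    while e != d:
        seq.append(e)
        e = perm[e]
    return tuple(seq)

# Darts are nonzero integers; the involution alpha is encoded by the sign.
alpha = lambda d: -d

# sigma: counterclockwise successor around each vertex.
# Toy map: two vertices joined by two parallel edges (darts 1/-1 and 2/-2).
sigma = {1: 2, 2: 1, -1: -2, -2: -1}

# phi = sigma o alpha traverses the darts of a face.
phi = {d: sigma[alpha(d)] for d in sigma}

print(orbit(sigma, 1))   # vertex orbit: (1, 2)
print(orbit(phi, 1))     # face orbit:   (1, -2)
```

The two faces recovered from ϕ, together with the two vertices and two edges, give Euler characteristic 2 − 2 + 2 = 2, as expected for a map on the sphere.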
3 Combinatorial Pyramids
The aim of combinatorial pyramids is to combine the advantages of combinatorial maps with the reduction scheme defined by Kropatsch [14] (see also Section 1). A combinatorial pyramid is thus defined by an initial combinatorial map successively reduced by a sequence of contraction or removal operations. In order to preserve the number of connected components of the initial combinatorial map, we forbid the removal of bridges and the contraction of self-loops. A self-loop in the initial combinatorial map becomes a bridge in its dual and vice-versa [10]. In the same way, a contraction operation in the initial combinatorial map is equivalent to a removal operation performed in its dual. Therefore, the exclusion of bridges and self-loops from removal and contraction operations, respectively, corresponds to the same constraint applied alternately to the dual combinatorial map and the original one. In order to avoid the contraction of self-loops, the set of edges to be contracted must form a forest of the initial combinatorial map, i.e. a set of edges that does not contain any cycle. A more formal definition may be found in [7][Def. 4]. A set of edges to be contracted satisfying the above requirement is called a contraction kernel:

Definition 1. Contraction Kernel. Given a connected combinatorial map G = (D, σ, α), the set K ⊂ D is called a contraction kernel iff K is a forest of G. The set SD = D − K is called the set of surviving darts.
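The forest condition of Definition 1 can be checked with a standard union-find cycle test. The sketch below works on candidate edges given as pairs of vertex identifiers rather than as dart pairs; this simplified representation is our assumption:

```python
def is_forest(edges):
    """True iff the given edge set contains no cycle (union-find)."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False   # u and v already connected: this edge closes a cycle
        parent[ru] = rv
    return True

# A path a-b-c is a forest; adding the edge (c, a) closes a cycle.
print(is_forest([("a", "b"), ("b", "c")]))              # True
print(is_forest([("a", "b"), ("b", "c"), ("c", "a")]))  # False
```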
L. Brun and W.G. Kropatsch
In the same way, a removal kernel is defined as a forest of the dual combinatorial map. This constraint ensures that no self-loop is contracted in the dual combinatorial map and thus that no bridge may be removed in the initial one. Within our framework, a removal kernel is used to remove the redundant edges created by contraction operations. These edges are characterized as self-loops or double edges [14]. A removal kernel is thus defined as a forest of the dual combinatorial map removing any redundant double edge and self-loop. Since one contraction encodes a merge between two regions, the set of darts encoding the adjacency relationships between the image and its background must be excluded from contraction operations. However, after a contraction step, some edges of the image boundary may become double edges. The removal of such an edge corresponds to the concatenation of the associated boundaries in the base level image. Such an operation preserves the image boundary and is thus allowed.

Contraction and removal kernels specify the set of edges which must be contracted or removed. The creation of the reduced combinatorial map from a contraction or a removal kernel is performed in parallel by using connecting walks [9]. Given a combinatorial map G = (D, σ, α), a kernel K and a surviving dart d ∈ SD = D − K, the connecting walk associated to d is equal to

CW(d) = d, ϕ(d), . . . , ϕ^(n−1)(d)  with  n = Min{p ∈ IN* | ϕ^p(d) ∈ SD}

if K is a contraction kernel, and

CW(d) = d, σ(d), . . . , σ^(n−1)(d)  with  n = Min{p ∈ IN* | σ^p(d) ∈ SD}

if K is a removal kernel. Given a kernel K and a surviving dart d ∈ SD such that CW(d) = d.d1 . . . dp, the successor of d within the reduced combinatorial map G′ = G/K = (SD, σ′, α) is retrieved from CW(d) by the following equations [9]:

ϕ′(d) = ϕ(dp) if K is a contraction kernel,
σ′(d) = σ(dp) if K is a removal kernel.   (2)
Note that, if K is a contraction kernel, the connecting walk CW(d) allows us to compute ϕ′(d). The σ-successor of d within the contracted combinatorial map may be retrieved from CW(α(d)) = α(d).d1 . . . dp. Indeed, using equations 1 and 2, we obtain: ϕ′(α(d)) = σ′(α(α(d))) = σ′(d) = ϕ(dp). Fig. 2a) shows the three connecting walks defined by the kernel α*(1, 7, 10), corresponding to the contracted map in Fig. 2b). In this example, we have ϕ′(−2) = 5, ϕ′(−5) = 3 and ϕ′(−3) = 8; the permutation ϕ remains unchanged for the other surviving darts. We thus have σ′(2) = 5, σ′(5) = 3 and σ′(3) = 8, while the permutation σ is unchanged for the other surviving darts. Based on the above property, we designed two algorithms to traverse the connecting walks defined by contraction and removal kernels, respectively [9]. One straightforward application of these algorithms consists in computing the reduced combinatorial maps. However, connecting walks may also be used to relate one combinatorial map of the pyramid with the one below. Indeed, we showed [8] that
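A connecting walk can be computed by iterating ϕ (or σ) until a surviving dart is reached; a minimal sketch follows. The permutation below is a made-up toy cycle, not the map of Fig. 2:

```python
def connecting_walk(d, succ, surviving):
    """Connecting walk CW(d) of a surviving dart d.

    succ is the permutation phi for a contraction kernel and sigma
    for a removal kernel.  Returns the walk and the successor of d
    in the reduced map (equation 2): succ(d_p)."""
    walk = [d]
    e = succ[d]
    while e not in surviving:   # darts of the kernel are traversed
        walk.append(e)
        e = succ[e]
    return walk, e

# Toy phi-cycle (1 2 3 4) in which the darts 2 and 3 belong to the kernel:
phi = {1: 2, 2: 3, 3: 4, 4: 1}
walk, succ_of_1 = connecting_walk(1, phi, surviving={1, 4})
print(walk, succ_of_1)   # [1, 2, 3] 4, i.e. phi'(1) = 4
```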
each non-surviving dart belongs to one and only one connecting walk. Therefore, the connecting walk CW(d) of each surviving dart d contains the dart d and all the non-surviving darts mapped to it. Such a sequence of darts corresponds to the reduction window associated to the surviving dart d.
4 Receptive Fields of Darts
The notion of connecting walks introduced in Section 3 allows us to build one reduced combinatorial map from an initial one and a contraction or removal kernel. Therefore, given a sequence of kernels K1, . . . , Kn and an initial combinatorial map G0 = (D, σ, α) defined from a planar sampling grid (e.g. the 4-neighborhood one), one can define the sequence of reduced combinatorial maps G0, G1, . . . , Gn where Gi = Gi−1/Ki = (SDi, σi, α) for each i ∈ {1, . . . , n}. Note that, according to the definition of surviving darts (Definition 1), we have SDi = SDi−1 − Ki = D − ∪_{j=1..i} Kj. Intuitively, one connecting walk CWi(d) defines the set of darts that we have to traverse in the level below in order to connect d to ϕi(d) if Ki is a contraction kernel, and d to σi(d) if Ki is a removal kernel (see equation 2). Let us consider the sequence of darts CDSi(d) that we have to traverse in the base level graph G0 to connect d to ϕi(d) if Ki is a contraction kernel, and d to σi(d) if Ki is a removal kernel. Such a sequence of darts is called a connecting dart sequence (CDS). Moreover, using the construction scheme described below, we showed [8] that the first dart of CDSi(d) is d. A connecting dart sequence CDSi(d) without its first dart will be denoted CDSi*(d). To construct a CDS, let us first suppose that Ki is a contraction kernel. If Ki+1 is a removal kernel, we have to traverse the sequence of darts CWi+1(d) = d.d1 . . . dp in Gi to connect d to σi+1(d). Each dart of CWi+1(d) is related to the following one by: d1 = σi(d), and dj+1 = σi(dj) for all j ∈ {1, . . . , p − 1}. Since Ki is a contraction kernel, CDSi(dj) connects dj to ϕi(dj) for any j in {1, . . . , p − 1}. However, CDSi(α(dj)) connects α(dj) to ϕi(α(dj)) = σi(α ◦ α(dj)) = σi(dj) (see equation 1).
The connection between dj and σi(dj) is thus performed by dj · CDSi*(α(dj)), and the connecting dart sequence of d at level i + 1 is equal to CDSi+1(d) = d1 · CDSi*(α(d1)) · · · dp · CDSi*(α(dp)). Fig. 2c) illustrates the connecting dart sequences defined by the application of the contraction kernel K1 = α*(1, 7, 10) followed by the removal kernel K2 = α*(3). The connecting walks defined by K1 are illustrated in Fig. 2a), while the contracted combinatorial map G1 = G0/K1 is represented in Fig. 2b) together with the connecting walks defined by K2. According to Fig. 2b), we have to traverse the dart 3 in G1 to connect 5 to σ2(5) = 8. Moreover, using Fig. 2a), the connection between 5 and 3 requires traversing the dart −10 in G0, while the connection between 3 and 8 requires traversing the darts −7 and 1. The connecting dart sequence associated to 5 is thus equal to CDS2(5) = 5 · CDS1*(−5) · 3 · CDS1*(−3) = 5 · −10 · 3 · −7 · 1 (Fig. 2c)).
Conversely, if both kernels Ki and Ki+1 are contraction kernels, the connecting dart sequence of d at level i + 1 is equal to [8]: CDSi(d) · CDSi(d1) · · · CDSi(dp). The same construction scheme holds if Ki is a removal kernel. Both cases are summarized in the following definition:

Definition 2. Given a combinatorial map G0 = (D, σ, α) and a sequence of contraction or removal kernels K1, K2, . . . , Kn, the connecting dart sequences are defined by the following recursive construction: ∀d ∈ D,
CDS0 (d) = d
For each level i in {1, . . . , n} and for each dart d in SDi:
– if Ki and Ki−1 have the same type: CDSi(d) = CDSi−1(d1) · · · CDSi−1(dp);
– if Ki and Ki−1 have different types: CDSi(d) = d1 · CDSi−1*(α(d1)) · · · dp · CDSi−1*(α(dp));
where (d1 . . . dp) is equal to CWi(d). The kernels K0 = ∅ and K1 have the same type by convention.

[Figure 2: a) G0 with CW1(−2), CW1(−3), CW1(−5); b) G1 with CW2(−8), CW2(5); c) CDS2(2), CDS2(5), CDS2(−8).]
Fig. 2. Receptive fields CDS2 are based on connecting walks CW1 and CW2 .
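Definition 2 can be turned into a short recursive sketch. The connecting-walk table below is restricted to the walks needed to reproduce the example CDS2(5) = 5·−10·3·−7·1 from Sect. 3 and Fig. 2; encoding these walks as a plain dictionary is our assumption:

```python
def cds(d, i, CW, ktype):
    """Connecting dart sequence CDS_i(d) (Definition 2).

    CW[i][d] is the connecting walk (d1, ..., dp) of d at level i
    (with d1 = d); ktype[i] is "C" (contraction) or "R" (removal);
    alpha(d) = -d under the sign convention of Sect. 2."""
    if i == 0:
        return [d]
    walk = CW[i][d]
    if ktype[i] == ktype[i - 1]:                       # same type
        return [e for dj in walk for e in cds(dj, i - 1, CW, ktype)]
    out = []                                           # different types
    for dj in walk:
        out.append(dj)
        out += cds(-dj, i - 1, CW, ktype)[1:]          # CDS*_{i-1}(alpha(dj))
    return out

# Data from the paper's example: K1 = alpha*(1,7,10) is a contraction
# kernel, K2 = alpha*(3) a removal kernel, and the relevant walks are
# CW1(-5) = (-5,-10), CW1(-3) = (-3,-7,1) and CW2(5) = (5,3).
CW = {1: {-5: [-5, -10], -3: [-3, -7, 1]}, 2: {5: [5, 3]}}
ktype = {0: "C", 1: "C", 2: "R"}   # K0 has the same type as K1 by convention
print(cds(5, 2, CW, ktype))        # [5, -10, 3, -7, 1] = CDS2(5)
```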
The set of connecting dart sequences defined at each level defines a partition of the initial set of darts D. Moreover, each connecting dart sequence CDSi(d) defined at level i by d ∈ SDi satisfies [8] CDSi*(d) ⊂ ∪_{j=1..i} Kj. Therefore, d is the only dart of CDSi(d) surviving up to level i, and CDSi*(d) encodes the set of non-surviving darts mapped to it at level i. Connecting dart sequences thus encode the notion of receptive field within the combinatorial map framework.
Using Definition 2, connecting dart sequences must be computed at each level from the level below. However, such a construction scheme may induce useless computations if one does not need all the connecting dart sequences defined from level 1 to level i. This recursive construction scheme may be avoided by using the following theorem [8]:

Theorem 1. Given a combinatorial map G0 = (D, σ, α) and a sequence of contraction or removal kernels K1, K2, . . . , Kn, the relation between the successive darts of a connecting dart sequence CDSi(d) = (d.d1 . . . dp), with i ∈ {1, . . . , n} and d ∈ SDi, is as follows:

d1 = ϕ(d) if Ki is a contraction kernel, d1 = σ(d) if Ki is a removal kernel;
∀j ∈ {2, . . . , p}: dj = ϕ(dj−1) if dj−1 has been contracted, dj = σ(dj−1) if dj−1 has been removed.

Therefore, connecting dart sequences may be computed directly from the base level graph G0, given the type of the kernel Ki and the type of the reduction operation applied to each dart in ∪_{j=1..i} Kj. This last information may be stored in each dart during the construction of the pyramid by a function state from ∪_{j=1..i} Kj to {Contracted, Removed}, such that state(d) is equal to either Contracted or Removed according to the type of the kernel applied to d before level i. Note that the storage of this function only adds one bit per dart. Algorithm 1 uses the function state and the properties established by Theorem 1 to compute one connecting dart sequence at level i from the base level graph G0. The complexity of this algorithm is linear in the length of the computed connecting dart sequence. Algorithms computing all the connecting dart sequences defined at level i in parallel are described in [8]. The complexity of these algorithms is O(log(Mi)), where Mi denotes the length of the longest connecting dart sequence defined at level i.
Moreover, by construction, connecting dart sequences encode the set of darts in G0 that we have to traverse to connect one surviving dart at level i to its ϕi- or σi-successor, according to Ki. Indeed, given one dart d ∈ SDi such that CDSi(d) = d.d1 . . . dp, we showed [8] that:

– if Ki is a contraction kernel: ϕi(d) = ϕ(dp) if dp has been contracted, ϕi(d) = σ(dp) if dp has been removed;   (3)
– if Ki is a removal kernel: σi(d) = ϕ(dp) if dp has been contracted, σi(d) = σ(dp) if dp has been removed.   (4)
Note that equations 3 and 4 are similar to equation 2 defined for connecting walks. Given an encoding of the function state and the set of surviving darts SDi, equations 3 and 4 combined with Algorithm 1 allow us to retrieve the reduced
cds compute_cds(map G0, dart d, level i) {
    cds CDS = d
    if (Ki is a contraction kernel)
        compute_cds_rec(G0, ϕ(d), i, CDS);
    else
        compute_cds_rec(G0, σ(d), i, CDS);
    return CDS
}

cds compute_cds_rec(map G0, dart d′, level i, cds CDS) {
    if (d′ ∈ SDi)
        return CDS;
    CDS = CDS.d′
    if (state(d′) = Contracted)
        compute_cds_rec(G0, ϕ(d′), i, CDS);
    else
        compute_cds_rec(G0, σ(d′), i, CDS);
    return CDS
}
Algorithm 1: Computation of the connecting dart sequence of a dart d at level i.
combinatorial map Gi without computing G1, . . . , Gi−1. This last property may be useful for the analysis of hierarchies, where the first levels of the pyramid often correspond to low level operations.
5 Conclusion
The presented concept of connecting dart sequences uses the principle of receptive fields for the first time within the combinatorial pyramid framework. The darts of a combinatorial map refine the pixel subdivision of a digital image; hence receptive fields based on darts can distinguish more configurations. Moreover, using the order defined on such sequences, we showed that any reduced combinatorial map may be retrieved directly from the base level. Higher level concepts are typically associated with higher level structural entities, such as a dart or a vertex at a high pyramid level. Using the receptive fields of these structural entities, properties and parameters of the corresponding high level concepts may be computed from the gray values, the colors or the coordinates of the discrete (pixel) measurements, possibly through complicated calculations or by using the hierarchical decomposition. In the future, we plan to study the interactions between the highly flexible contraction scheme of combinatorial pyramids and the efficiency of computing high level descriptions.
References

[1] E. Ahronovitz, J. Aubert, and C. Fiorio. The star-topology: a topology for image analysis. In 5th DGCI Proceedings, pages 107–116, 1995.
[2] Y. Bertrand, G. Damiand, and C. Fiorio. Topological map: minimal encoding of 3d segmented images. In J. M. Jolion, W. Kropatsch, and M. Vento, editors, 3rd Workshop on Graph-based Representations in Pattern Recognition, pages 64–73, Ischia (Italy), May 2001. IAPR-TC15, CUEN.
[3] Y. Bertrand and J. Dufourd. Algebraic specification of a 3D-modeler based on hypermaps. CVGIP: Graphical Models and Image Processing, 56(1):29–60, Jan. 1994.
[4] M. Bister, J. Cornelis, and A. Rosenfeld. A critical view of pyramid segmentation algorithms. Pattern Recognition Letters, 11(9):605–617, Sept. 1990.
[5] J. P. Braquelaire and L. Brun. Image segmentation with topological maps and inter-pixel representation. Journal of Visual Communication and Image Representation, 9(1), 1998.
[6] J. P. Braquelaire, P. Desbarats, and J. P. Domenger. 3d split and merge with 3-maps. In J. M. Jolion, W. Kropatsch, and M. Vento, editors, 3rd Workshop on Graph-based Representations in Pattern Recognition, pages 32–43, Ischia (Italy), May 2001. IAPR-TC15, CUEN.
[7] L. Brun and W. Kropatsch. Pyramids with combinatorial maps. Technical Report PRIP-TR-057, PRIP, TU Wien, 1999.
[8] L. Brun and W. Kropatsch. The construction of pyramids with combinatorial maps. Technical Report 63, Institute of Computer Aided Design, Vienna University of Technology, A-1040 Vienna, Austria, June 2000.
[9] L. Brun and W. Kropatsch. Contraction kernels and combinatorial maps. In J. M. Jolion, W. Kropatsch, and M. Vento, editors, 3rd IAPR-TC15 Workshop on Graph-based Representations in Pattern Recognition, pages 12–21, Ischia (Italy), May 2001. IAPR-TC15, CUEN.
[10] L. Brun and W. G. Kropatsch. Dual contraction of combinatorial maps. In W. G. Kropatsch and J.-M. Jolion, editors, 2nd IAPR-TC15 Workshop on Graph-based Representations, volume 126, pages 145–154, Haindorf, Austria, May 1999. Österreichische Computer Gesellschaft.
[11] P. Burt, T.-H. Hong, and A. Rosenfeld. Segmentation and estimation of image region properties through cooperative hierarchical computation. IEEE Transactions on Systems, Man and Cybernetics, 11(12):802–809, December 1981.
[12] J. Edmonds. A combinatorial representation for polyhedral surfaces. Notices of the American Mathematical Society, 7, 1960.
[13] J. Jolion and A. Montanvert. The adaptive pyramid: a framework for 2d image analysis. Computer Vision, Graphics, and Image Processing, 55(3):339–348, May 1992.
[14] W. G. Kropatsch. Building irregular pyramids by dual graph contraction. IEE Proceedings - Vision, Image and Signal Processing, 142(6):366–374, December 1995.
[15] P. Lienhardt. Topological models for boundary representations: a comparison with n-dimensional generalized maps. Computer-Aided Design, 23(1):59–82, 1991.
[16] P. Meer. Stochastic image pyramids. Computer Vision, Graphics, and Image Processing, 45:269–294, 1989.
[17] A. Montanvert, P. Meer, and A. Rosenfeld. Hierarchical image analysis using irregular tessellations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(4):307–316, April 1991.
A New 3D 6-Subiteration Thinning Algorithm Based on P-Simple Points

Christophe Lohou and Gilles Bertrand

Laboratoire d'Algorithmique et Architecture des Systèmes Informatiques (A2SI), École Supérieure d'Ingénieurs en Électrotechnique et Électronique (ESIEE), 2, bld Blaise Pascal, Cité Descartes, BP 99, F-93162 Noisy-le-Grand Cedex, France
{lohouc,bertrang}@esiee.fr
Abstract. In a recent study [1], we proposed a new methodology to build thinning algorithms based on the deletion of P-simple points. This methodology makes it possible to conceive a thinning algorithm A′ from an existing thinning algorithm A, such that A′ deletes at least all the points removed by A, while preserving the same end points. In this paper, by applying this methodology, we propose a new 6-subiteration thinning algorithm which deletes at least all the points removed by the 6-subiteration thinning algorithm proposed by Palágyi and Kuba [2].
1 Introduction
Some graphical applications require transforming objects while preserving their topology [2,3]. This leads to the well-known notion of simple point: a point in a binary image is said to be simple if its deletion from the image "preserves the topology" [4,5,6,7,8,9,10,11,12,13,14]. A process deleting simple points is called a thinning algorithm. During the thinning process, certain simple points are kept in order to preserve some geometrical properties of the object. Such points are called end points. The result obtained by a thinning algorithm is called a skeleton. A process deleting simple points in parallel may not preserve the topology. For example, a two-voxel-wide ribbon may vanish because all its points are simple. Therefore, a parallel thinning algorithm must use a certain deletion strategy in order to preserve the topology. For example, we may consider a deletion strategy based on subiterations, which consists in dividing a deletion iteration into several subiterations. These subiterations may be based on directions [15,16,17,2,18] or on subgrids [19,20]. Another deletion strategy consists in using an extended neighborhood; such a strategy may lead to fully parallel thinning algorithms [21,3,22]. One of the authors has proposed the notion of P-simple point [23]. A subset composed solely of P-simple points may be deleted at once while preserving the topology. Furthermore, a P-simple point may be locally characterized, once P is known. In a recent paper [1], we proposed a set P^x, locally defined for each point x and from a set P. This has permitted us to propose a new

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 102–113, 2002. © Springer-Verlag Berlin Heidelberg 2002
Fig. 1. (a) The 6-, 18-, and 26-neighbors of x, (b) the six major directions, (c) the used notations
thinning scheme, based on the deletion of P^x-simple points, which needs neither a preliminary labelling step nor the examination of an extended neighborhood, in contrast to the previously proposed thinning algorithms based on P-simple points [23]. In this paper, our purpose is to design a new 3D 6-subiteration thinning algorithm based on the deletion of P^x-simple points. We apply our general methodology proposed in [1]: from the 6-subiteration thinning algorithm devised by Palágyi and Kuba [2], we conceive a first thinning algorithm deleting P^x-simple points; then we improve it in such a way that it deletes at least all the points removed by Palágyi and Kuba's thinning algorithm, while preserving the same end points.
2 Basic Notions
A point x ∈ ZZ³ is defined by (x1, x2, x3) with xi ∈ ZZ. We consider the three neighborhoods:
N26(x) = {x′ ∈ ZZ³ : Max[|x′1 − x1|, |x′2 − x2|, |x′3 − x3|] ≤ 1},
N6(x) = {x′ ∈ ZZ³ : |x′1 − x1| + |x′2 − x2| + |x′3 − x3| ≤ 1}, and
N18(x) = {x′ ∈ ZZ³ : |x′1 − x1| + |x′2 − x2| + |x′3 − x3| ≤ 2} ∩ N26(x).
We define Nn*(x) = Nn(x) \ {x}. We call respectively 6-, 18-, 26-neighbors of x the points of N6*(x), N18*(x) \ N6*(x), and N26*(x) \ N18*(x); these points are respectively represented in Fig. 1 (a) by black triangles, black squares, and black circles. The 6-neighbors of x determine six major directions (Fig. 1 (b)): Up, Down, North, South, West, East; respectively denoted by U, D, N, S, W and E. Each point of N26*(x) may characterize one direction amongst the 26 that we can obtain from the 6 major ones, e.g. SW, USW, . . . Let Dir denote one of these 26 directions. The point in N26*(x) along the direction Dir is called the Dir-neighbor of x and is denoted by Dir(x). In the following, points in N26(x) are often denoted by pi, with 0 ≤ i ≤ 26 (Fig. 1 (c)); for example, p0 is the USW-neighbor of p13, i.e. p0 = USW(p13). Let X ⊆ ZZ³. The points belonging to X (resp. X̄, the complement of X in ZZ³) are called black points (resp. white points). Two points x and y are said to be n-adjacent if y ∈ Nn*(x) (n = 6, 18, 26). An n-path is a sequence of points x0, . . . , xk, with xi n-adjacent to xi−1 and
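These three neighborhoods follow directly from the distance definitions; a quick Python sketch:

```python
def N26(x):
    """The 27 points at Chebyshev distance <= 1 from x (x included)."""
    x1, x2, x3 = x
    return {(x1 + a, x2 + b, x3 + c)
            for a in (-1, 0, 1) for b in (-1, 0, 1) for c in (-1, 0, 1)}

def N6(x):
    """City-block distance <= 1 from x."""
    return {y for y in N26(x) if sum(abs(a - b) for a, b in zip(y, x)) <= 1}

def N18(x):
    """City-block distance <= 2, intersected with N26(x)."""
    return {y for y in N26(x) if sum(abs(a - b) for a, b in zip(y, x)) <= 2}

x = (0, 0, 0)
# N_n*(x) = N_n(x) \ {x} has 6, 18 and 26 elements respectively,
# and the 6-, 18- and 26-neighbors partition N26*(x): 6 + 12 + 8 = 26.
print(len(N6(x) - {x}), len(N18(x) - {x}), len(N26(x) - {x}))   # 6 18 26
print(len(N18(x) - N6(x)), len(N26(x) - N18(x)))                # 12 8
```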
C. Lohou and G. Bertrand
Fig. 2. Points belonging to X and X̄ are respectively represented by black discs and white circles. Only the point x in (d) is 26-simple
1 ≤ i ≤ k. If x0 = xk, the path is closed. Let X ⊆ ZZ³. Two points x ∈ X and y ∈ X are n-connected if they are linked by an n-path included in X. The equivalence classes relative to this relation are the n-connected components of X. If X is finite, the infinite connected component of X̄ is the background, and the other connected components of X̄ are the cavities. In order to have a correspondence between the topology of X and that of X̄, we have to consider two different kinds of adjacency, one for X and one for X̄ [6]: if we use an n-adjacency for X, we have to use another n̄-adjacency for X̄. In this paper, we only consider (n, n̄) = (26, 6). The presence of an n-hole in X is detected whenever there is a closed n-path in X that cannot be deformed, in X, into a single point (see [5] for further details). For example, a hollow ball has one cavity and no hole, a solid torus has one hole and no cavity, and a hollow torus has one cavity and two holes. Let X ⊆ ZZ³. A point x ∈ X is said to be n-simple if its removal does not "change the topology" of the image, in the sense that there is a one-to-one correspondence between the components and the holes of X and X̄ and the components and the holes of X \ {x} and X̄ ∪ {x} (see [5] for a precise definition). The set composed of all n-connected components of X is denoted by Cn(X). The set of all n-connected components of X that are n-adjacent to a point x is denoted by Cn^x(X). Let #X denote the number of elements of X. The topological numbers relative to X and x are the two numbers [10]: T6(x, X) = #C6^x[N18*(x) ∩ X] and T26(x, X) = #C26[N26*(x) ∩ X]. These numbers lead to a very concise characterization of 3D simple points [24]: x ∈ X is 26-simple for X if and only if T26(x, X) = 1 and T6(x, X̄) = 1. Some examples are given in Fig. 2. The topological numbers relative to x and X or X̄ are: (T26(x, X), T6(x, X̄)) = (1, 2), (2, 1), (1, 2), (1, 1) for the configurations (a), (b), (c) and (d), respectively.
Only the configuration in Fig. 2 (d) corresponds to a 26-simple point.
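The topological numbers and the resulting 26-simple-point test can be sketched with a small graph search over voxel sets. The helper names (`neighbors`, `components`, `T6_bar`) are ours, not from the paper:

```python
from itertools import product

def neighbors(p, n):
    """n-neighbors N_n*(p) of voxel p, for n in {6, 18, 26}."""
    out = set()
    for d in product((-1, 0, 1), repeat=3):
        s = sum(map(abs, d))
        if s == 0 or (n == 6 and s > 1) or (n == 18 and s > 2):
            continue
        out.add(tuple(pi + di for pi, di in zip(p, d)))
    return out

def components(S, n):
    """n-connected components of a finite voxel set S (depth-first search)."""
    S, comps = set(S), []
    while S:
        seed = S.pop()
        comp, stack = {seed}, [seed]
        while stack:
            for q in neighbors(stack.pop(), n) & S:
                S.discard(q); comp.add(q); stack.append(q)
        comps.append(comp)
    return comps

def T26(x, X):
    """T26(x, X) = #C26[N26*(x) ∩ X]."""
    return len(components(neighbors(x, 26) & X, 26))

def T6_bar(x, X):
    """T6(x, X̄) = #C6^x[N18*(x) ∩ X̄]: 6-components 6-adjacent to x."""
    back = neighbors(x, 18) - X
    return len([c for c in components(back, 6) if c & neighbors(x, 6)])

# An end point (exactly one 26-neighbor) is 26-simple: T26 = T6_bar = 1.
X = {(0, 0, 0), (1, 0, 0)}
print(T26((0, 0, 0), X), T6_bar((0, 0, 0), X))   # 1 1
```

An isolated point, by contrast, gives T26 = 0 and is therefore not simple, matching configuration-style arguments such as those of Fig. 2.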
3 P-Simple Points
Let us introduce the notions of P-simple point and P-simple set [23]. In the following, we consider a subset X of ZZ³, a subset P of X, and a point x of P. The point x is P-simple (for X) if for each subset S of P \ {x}, x is 26-simple for X \ S. Let S(P) denote the set of all P-simple points. A subset D
Fig. 3. Points belonging to R, P and X̄ are respectively represented by black discs, black stars and white circles. Only the points x in (a) and (b) are P-simple
of X is P-simple if D ⊆ S(P). We have the remarkable property that any algorithm removing only P-simple subsets (i.e. subsets composed solely of P-simple points) is guaranteed to keep the topology unchanged [23]. We give a local characterization of a P-simple point [25] (see also [26]):

Proposition 1. Let R denote the set X \ P. The point x is P-simple iff:
– T26(x, R) = 1,
– T6(x, X̄) = 1,
– ∀y ∈ N26*(x) ∩ P, ∃z ∈ R such that z is 26-adjacent to x and to y,
– ∀y ∈ N6*(x) ∩ P, ∃z ∈ X̄ and ∃t ∈ X̄ such that {x, y, z, t} is a unit square.

Some examples are given in Fig. 3: only the points x in (a) and (b) are P-simple. Let us consider the subset X depicted in Fig. 3 (c). The subset S = {p, q, r} is a subset of P \ {x}, and x is non-simple for X \ S. Therefore, by definition, the point x cannot be a P-simple point; or, directly with Proposition 1, the first P-simplicity condition is not verified because T26(x, R) = 2.

For each x of ZZ³, we consider a finite family T of pairs of subsets of ZZ³, (B^k(x), W^k(x)) with 1 ≤ k ≤ l, such that B^k(x) ∩ W^k(x) = ∅ and x belongs to B^k(x); T is said to be a family of templates. In the following, we consider a subset X of ZZ³. Let P(T, X) = {x ∈ ZZ³ : ∃k with 1 ≤ k ≤ l such that B^k(x) ⊆ X and W^k(x) ⊆ X̄}. In fact, P(T, X) corresponds to a Hit-or-Miss transform of X by T [27,28]. A thinning algorithm based on the deletion of P-simple points usually considers subsets P which may be characterized by a certain family T of templates. Such an algorithm must decide whether a point x is P(T, X)-simple or not: it must check whether the point x belongs to P(T, X), and, in order to check the four conditions of Proposition 1, it must check whether the points y of N26*(x) belong to P(T, X).
Such an algorithm may operate according to different ways to detect the points belonging to P(T, X) and the points being P(T, X)-simple: it uses either a preliminary labelling step or the examination of an extended neighborhood [23] (see details in [1]). Note that a general strategy has already been proposed to design different thinning schemes and algorithms based on P-simple points [29] (see also [30,31]).
Let us introduce a subset P^x, locally defined for each point x of ZZ³ and from a set P (described as previously by a family T of templates) [1,32]. From this subset, we will derive the notion of a P^x-simple point. For each x of ZZ³, we define a new subset P^x(T, X) of ZZ³, determined by P^x(T, X) = {y ∈ N26(x) : ∃k with 1 ≤ k ≤ l such that [B^k(y) ∩ N26(x)] ⊆ X and [W^k(y) ∩ N26(x)] ⊆ X̄}. In fact, P^x(T, X) consists of the points y of N26(x) ∩ X which "may belong" to P(T, X), judging only from the membership to X or to X̄ of the points of [B^k(y) ∪ W^k(y)] ∩ N26(x). We have P^x(T, X) ⊇ [P(T, X) ∩ N26(x)]. We have proven that a P^x(T, X)-simple point is P(T, X)-simple [32]. This implies that an algorithm deleting in parallel P^x(T, X)-simple points is guaranteed to preserve the topology, because it deletes P(T, X)-simple subsets. In addition, since P^x(T, X) is completely known in N26(x) for each point x, this permits us to propose a new thinning scheme, based on the deletion of P^x(T, X)-simple points x, which needs neither a preliminary labelling step nor the examination of an extended neighborhood, in contrast to the previously proposed thinning algorithms based on P(T, X)-simple points (see [1] for further details). In Sect. 5, we will propose a thinning algorithm deleting P^x(T, X)-simple points.

Notations: In the following, we write P (resp. P^x) instead of P(T, X) (resp. P^x(T, X)), and "x is a P-simple point" (resp. "a P^x-simple point") means "x is a P(T, X)-simple point" (resp. "a P^x(T, X)-simple point").
4 Description of the Used Thinning Algorithms
A thinning scheme consists in the repetition, until stability, of deletion iterations. In the case of 6-subiteration thinning algorithms, an iteration is divided into 6 subiterations, each of them successively corresponding to one of the 6 following directions: Up, Down, North, South, East and West (see Fig. 1 (b)). Let α denote such a direction. Stability is obtained when there is no more deletion during 6 successive subiterations. Such a thinning scheme can be described by X^i = X^(i−1) \ DEL(X^(i−1), α) for the ith deletion subiteration (i > 0), with X^0 = X, and DEL(Y, α) being the set of points to be deleted from Y, according to the direction α corresponding to the ith subiteration. Stability is obtained when X^k = X^(k+6). Palágyi and Kuba have proposed a 6-subiteration thinning algorithm [2], denoted by pk in the following. A set of 3 × 3 × 3 matching templates is given for each direction. For a given direction α, a point is deletable by pk if at least one template (or one of its rotations around the axis along the direction α) in the set of templates matches it. The set of templates used by pk along the direction α is denoted by Tα and is represented in Fig. 5 for the direction α = U; see notations in Fig. 4. The templates for the other directions can be obtained by appropriate rotations and reflections of these templates. Sometimes, we will write that "Tα deletes a point" to mean that pk deletes this point during a subiteration along the direction α. We recall the definition of an end point, adopted in [2], that we will also use in our proposed algorithm: a black point x is an end point if the set N26*(x) contains exactly one black point. We note that end points are prevented
A New 3D 6-Subiteration Thinning Algorithm Based on P -Simple Points
107
A position marked by a matches a black point. A position marked by a matches a white point. At least one position marked by a belongs to X. Every position non marked matches either a black or a white point. A position marked by a matches a black point belonging to a considered set P . Fig. 4. Notations used in the following of the paper
x
α=U
M1
x
M2
x
M3
x
x
x
M4
M5
M6
Fig. 5. The set TU of thinning templates for the direction U , up to rotations around the vertical axis (see notations in Fig. 4)
to be deleted by the templates of Tα . According to the previous general thinning scheme (described in the beginning of this section), DEL(Y, α) is the set of points of Y such that at least one of the templates of Tα matches them, for the direction α corresponding to the deletion subiteration. A 6-subiteration thinning algorithm removing P -simple points, has already been proposed [23]. Now, we give a general scheme for 6-subiteration thinning algorithms deleting P x -simple points. It can be described by the scheme of the beginning of this section with DEL(Y, α) = S(P x ); S(P x ) being the set of P x simple points for Y which are not end points and according to the direction α corresponding to the deletion subiteration. From this scheme, we will propose our algorithm by defining an appropriate P (Sect. 5), in the sense that we investigate P such that our algorithm deletes at least all the points removed by pk. In the following, we write lb to indicate our final algorithm which deletes P x -simple points, while preserving end points. See [1] for details concerning the efficient implementation of such algorithms, with the use of Binary Decision Diagrams [33,34,35]; in fact, pk and lb have the same computational complexity.
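The general subiteration scheme above can be sketched as follows. The deletion test `deletable` is a stand-in for the direction-dependent template matching (the toy rule in the usage note is an assumption for illustration, not Palágyi and Kuba's actual templates); end points, black points with exactly one black 26-neighbor, are always preserved.

```python
from itertools import cycle, product

DIRECTIONS = ["U", "D", "N", "S", "E", "W"]
OFFSETS26 = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]

def n26(x):
    """The 26-neighborhood N26*(x) of a voxel x (x itself excluded)."""
    return [(x[0] + dx, x[1] + dy, x[2] + dz) for dx, dy, dz in OFFSETS26]

def is_end_point(x, X):
    """A black point is an end point if it has exactly one black 26-neighbor."""
    return sum(1 for y in n26(x) if y in X) == 1

def thin(X, deletable):
    r"""X^i = X^(i-1) \ DEL(X^(i-1), alpha); stop when 6 successive
    subiterations delete nothing (X^k = X^(k+6))."""
    X = set(X)
    idle = 0
    for alpha in cycle(DIRECTIONS):
        deleted = {x for x in X
                   if not is_end_point(x, X) and deletable(x, X, alpha)}
        if deleted:
            X -= deleted
            idle = 0
        else:
            idle += 1
            if idle == 6:
                return X
```

For instance, with a (topology-ignoring) toy rule that deletes during the U subiteration any point whose Up-neighbor is white, a 3 × 3 × 1 plate vanishes entirely, while a 1-voxel-wide line is kept, since its extremities are end points.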
C. Lohou and G. Bertrand
Fig. 6. This configuration (a) is not P1^x-simple (b), and is P2^x-simple (c)
5 Our Thinning Algorithm (LB)
In this section, we give the entire reasoning which leads us to propose two successive conditions of membership to a set P. The methodology consists in proposing successive "refinements" of P, until a set P is obtained such that at least all points deleted by pk are P-simple. This is achieved with our second proposal of a set P. We note that the first proposal, detailed in Sect. 5.1, is directly deduced from pk. We first deal with the direction U, until a general comparison of our results. In the following, when we write "a point belongs to P^x", then x is the point p13 of the considered configuration (see Fig. 1 (c)). We write "a configuration is P^x-simple" to mean that the central point x (= p13) of this configuration is P^x-simple. Let y be a point of a configuration, y belonging to {p0, ..., p26} (see Fig. 1 (c)); we write "a point y verifies a template T" to mean that the template T matches the configuration whose central point is y.

5.1 First Membership Condition
We observe that any point of X deleted by TU is such that its U-neighbor belongs to X̄ (see the templates in Fig. 5). Thus, we propose to consider P1 = {x ∈ X : the U-neighbor of x belongs to X̄}. Among all 2^26 possible configurations, we obtain 4 423 259 configurations corresponding to P1^x-simple and non end points, for the direction U. Let us consider the configuration in Fig. 6 (a). The three points p3, p7 and p13 belong to P1^x (Fig. 6 (b)) because they belong to X and each of their U-neighbors belongs to X̄. The first and the third P1^x-simplicity conditions are not verified for the central point p13. Thus, the point p13 is not P1^x-simple. Nevertheless, it is matched by a rotation around the vertical axis of the template M5 of TU. Therefore, it should be deleted by our intended algorithm. Let us examine the behavior of the other points of this configuration with the templates TU (see Fig. 6 (a)). The point p3 may verify a rotation around the vertical axis of M5 or of M6. The point p7 cannot be deleted, because p6 (= W(p7)) belongs to X and p3 (= U(p6)) belongs to X, and the templates are such that, for any point x deleted by TU and for any y belonging to N6*(x) ∩ X, y being neither U(x) nor D(x), the point U(y) must belong to X̄. With these remarks, we can propose a new set P2.

Fig. 7. A point x belongs to P2 iff it verifies at least one of these templates, according to the direction α

5.2 Second Membership Condition
We first introduce some notations. We recall that α denotes one of the six deletion directions; let ᾱ denote the opposite direction. Let N6^α(x) denote the set of the four 6-neighbors of x which belong to the 3 × 3 window perpendicular to the direction α and containing x (in fact, N6^α(x) = N6*(x) \ {α(x), ᾱ(x)}). We propose to consider P2 = {x ∈ X : the α-neighbor of x belongs to X̄, and for any point y belonging to N6^α(x), if y belongs to X then α(y) must belong to X̄}, according to the direction α. With the notations used in Sect. 3, the set P2 can be described by a family composed of 16 pairs of subsets of Z^3, (B^k(x), W^k(x)) with 1 ≤ k ≤ 16, depicted in Fig. 7 for the direction α = U; in fact, there are 6 main templates, up to rotations around the axis (α(x), ᾱ(x)).

Let us consider the non-P1^x-simple configuration in Fig. 6 (b) (see notations in Fig. 6 (c)). The point p13 belongs to P2^x, as it verifies the template in Fig. 7 (a). The point p3 belongs to P2^x, as p3 may verify the templates in Fig. 7 (a), (c), (e), or (g). The point p7 does not belong to P2^x, because there exists a point y (= p6 (= W(p7))) in N6^U(p7) ∩ N26(x) which belongs to X and such that p3 (= U(y)) belongs to X; or, more directly, because p7 verifies no template in Fig. 7. So, this non-P1^x-simple configuration (Fig. 6 (b)) is now P2^x-simple (Fig. 6 (c)).

We obtain 6 129 527 configurations corresponding to P2^x-simple and non end points, for the direction U. The 2 124 283 configurations deleted by TU are also P2^x-simple. The fact that the configurations deletable by pk are P2^x-simple (for each direction, and therefore for the whole algorithm) guarantees that the topology is preserved by pk (as pk deletes subsets of P2^x-simple points, see Sect. 3). For a better comparison between pk and lb, we generated the configurations deleted by these algorithms for each direction: pk deletes 9 916 926 configurations, i.e. there exists at least one deletion direction such that a given configuration among these is deleted for this direction by pk; lb deletes 23 721 982 configurations (139.2% "better"). We recall that there are 25 985 118 simple and non end points amongst the 67 108 864 (= 2^26) possible 3 × 3 × 3 configurations.

Fig. 8. (a) This configuration cannot be deleted by pk whatever the deletion direction, and is P2^x-simple (b); in (c) (obtained from (a)), no point is deleted by pk, whereas x is deleted by lb

The configuration depicted in Fig. 8 (a) cannot be deleted by pk, whatever the deletion direction. This configuration is P2^x-simple (Fig. 8 (b)), with α = U. Indeed, the point p2 belongs to P2^x as p2 may verify the templates in Fig. 7 (a), (b), (c) or (d); p3 belongs to P2^x as p3 may verify the templates in Fig. 7 (a), (c), (e) or (g); p13 belongs to P2^x as it verifies the template in Fig. 7 (a); p5 does not belong to P2^x as p2 (= U(p5)) belongs to X; and p7 does not belong to P2^x as there exists a point y (= p6 (= W(p7))) in N6^U(p7) ∩ N26(x) which belongs to X and such that U(y) (= p3) belongs to X (or, more directly, as p7 verifies no template in Fig. 7). Figure 8 (c) shows an image built from the configuration in Fig. 8 (a) such that each point is either a non simple point (except x) or an end point; no point can be deleted by pk, but the point x is deleted by lb.
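The second membership condition translates directly into a local test. The sketch below is an illustration under an assumed coordinate convention (voxels as integer triples, with U encoded as +z); it is not the template machinery actually used by the algorithm, only the set definition of P2.

```python
U = (0, 0, 1)  # assumed encoding of the Up direction

def add(p, d):
    return (p[0] + d[0], p[1] + d[1], p[2] + d[2])

def n6_perp(x, alpha):
    """The four 6-neighbors of x in the 3x3 window perpendicular to alpha,
    i.e. N6^alpha(x) = N6*(x) minus the alpha- and opposite-neighbors."""
    axis = next(i for i, c in enumerate(alpha) if c != 0)
    offsets = []
    for i in range(3):
        if i == axis:
            continue
        for s in (-1, 1):
            d = [0, 0, 0]
            d[i] = s
            offsets.append(tuple(d))
    return [add(x, d) for d in offsets]

def in_P2(x, X, alpha=U):
    """x belongs to P2 iff x is black, its alpha-neighbor is white, and every
    black point y in N6^alpha(x) has a white alpha-neighbor."""
    if x not in X or add(x, alpha) in X:
        return False
    return all(add(y, alpha) not in X for y in n6_perp(x, alpha) if y in X)
```

For example, a black point with a black lateral 6-neighbor is excluded from P2 as soon as that neighbor's Up-neighbor is also black, mirroring the exclusion of p7 above.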
Fig. 9. Skeletons of a synthetic object and of a vertebra, with pk and lb. Under each figure are given the number of the last subiteration of deletion and the number of deleted points (synthetic object: pk 16 and 2 256, lb 16 and 2 256; vertebra: pk 142 and 170 001, lb 73 and 163 874)
5.3 Results
The skeletons of some images, obtained respectively by pk and lb, are shown in Fig. 9. We observe that the number of deletion subiterations required by lb is less than or equal to that of pk, and that the number of points deleted by lb is less than or equal to that of pk. We recall, however, that it is possible for lb to need more subiterations than pk to obtain a skeleton (see Fig. 8 (c)).
6 Conclusion
We have conceived a new 6-subiteration thinning algorithm, based on the deletion of P^x-simple points, by applying a recent methodology that we proposed in [1]. As it deletes solely P^x-simple points, this algorithm is guaranteed to preserve the topology. Furthermore, we have proposed successive sets P such that our final algorithm deletes at least all the points removed by pk, while preserving the same end points. This also implies that pk is guaranteed to preserve the topology. In addition, our final algorithm also deletes the points removed by Gong and Bertrand's algorithm [16] (in the variant proposed by Rolland et al. [17]), while preserving the same end points. In another study [1,32], we proposed a new 12-subiteration thinning algorithm for 3D binary images, which produces curve or surface skeletons and deletes at least the points removed by another 12-subiteration thinning algorithm [18]. A future work will propose new fully parallel thinning algorithms for 2D and 3D binary images.
References
1. C. Lohou and G. Bertrand. A new 3D 12-subiteration thinning algorithm based on P-simple points. In 8th IWCIA 2001, volume 46 of ENTCS, pages 39–58, 2001.
2. K. Palágyi and A. Kuba. A 3D 6-subiteration thinning algorithm for extracting medial lines. Pattern Recognition Letters, 19:613–627, 1998.
3. C.M. Ma and M. Sonka. A fully parallel 3D thinning algorithm and its applications. Computer Vision and Image Understanding, 64(3):420–433, 1996.
4. D.G. Morgenthaler. Three-dimensional simple points: Serial erosion, parallel thinning, and skeletonization. Technical Report TR-1009, Computer Vision Laboratory, University of Maryland, 1981.
5. T.Y. Kong. A digital fundamental group. Computer and Graphics, 13(2):159–166, 1989.
6. T.Y. Kong and A. Rosenfeld. Digital topology: introduction and survey. Computer Vision, Graphics and Image Processing, 48:357–393, 1989.
7. P.K. Saha, B. Chanda, and D.D. Majumder. Principles and algorithms for 2D and 3D shrinking. Technical Report TR/KBCS/2/91, N.C.K.B.C.S. Library, Indian Statistical Institute, Calcutta, India, 1991.
8. R.W. Hall. Connectivity preserving parallel operators in 2D and 3D images. In Vision Geometry, volume 1832 of SPIE, pages 172–183, 1992.
9. T.Y. Kong. On the problem of determining whether a parallel reduction operator for n-dimensional binary images always preserves topology. In Vision Geometry II, volume 2060 of SPIE, pages 69–77, 1993.
10. G. Bertrand. Simple points, topological numbers and geodesic numbers in cubic grids. Pattern Recognition Letters, 15:1003–1011, 1994.
11. C.M. Ma. On topology preservation in 3D thinning. Computer Vision, Graphics, and Image Processing: Image Understanding, 59(3):328–339, 1994.
12. T.Y. Kong. On topology preservation in 2-D and 3-D thinning. International Journal of Pattern Recognition and Artificial Intelligence, 9(5):813–844, 1995.
13. T.Y. Kong. Topology-preserving deletion of 1's from 2-, 3- and 4-dimensional binary images. In 7th DGCI, volume 1347 of LNCS, pages 3–18, 1997.
14. S. Fourey and R. Malgouyres. A concise characterization of 3D simple points. In 9th DGCI, volume 1953 of LNCS, pages 27–36, 2000.
15. Y.F. Tsao and K.S. Fu. A parallel thinning algorithm for 3D pictures. Computer Graphics and Image Processing, 17:315–331, 1981.
16. W. Gong and G. Bertrand. A simple parallel 3D thinning algorithm. In International Conference on Pattern Recognition, pages 188–190, 1990.
17. F. Rolland, J.-M. Chassery, and A. Montanvert. 3D medial surfaces and 3D skeletons. In Visual Form 1991, pages 443–450, 1991.
18. K. Palágyi and A. Kuba. A parallel 3D 12-subiteration thinning algorithm. Graphical Models and Image Processing, 61:199–221, 1999.
19. G. Bertrand and Z. Aktouf. A three-dimensional thinning algorithm using subfields. In Vision Geometry III, volume 2356 of SPIE, pages 113–124, 1994.
20. K. Palágyi and A. Kuba. A hybrid thinning algorithm for 3D medical images. Journal of Computing and Information Technology, CIT 6, pages 149–164, 1998.
21. C.M. Ma. A 3D fully parallel thinning algorithm for generating medial faces. Pattern Recognition Letters, 16:83–87, 1995.
22. A. Manzanera, T.M. Bernard, F. Prêteux, and B. Longuet. A unified mathematical framework for a compact and fully parallel n-D skeletonization procedure. In Vision Geometry VIII, volume 3811 of SPIE, pages 57–68, 1999.
23. G. Bertrand. On P-simple points. Compte Rendu de l'Académie des Sciences de Paris, t. 321 (Série 1):1077–1084, 1995.
24. G. Malandain and G. Bertrand. Fast characterization of 3D simple points. In IEEE International Conference on Pattern Recognition, pages 232–235, 1992.
25. G. Bertrand. Sufficient conditions for 3D parallel thinning algorithms. In Vision Geometry IV, volume 2573 of SPIE, pages 52–60, 1995.
26. G. Bertrand and R. Malgouyres. Some topological properties of discrete surfaces. In 6th DGCI, volume 1176 of LNCS, pages 325–336, 1996.
27. J. Serra. Image Analysis and Mathematical Morphology. Academic Press, 1982.
28. P.P. Jonker. Morphological operations on 3D and 4D images: From shape primitive detection to skeletonization. In 9th DGCI, volume 1953 of LNCS, pages 371–391, 2000.
29. G. Bertrand. P-simple points: A solution for parallel thinning. In 5th DGCI, pages 233–242, 1995.
30. R. Malgouyres and S. Fourey. Strong surfaces, surface skeletons and image superimposition. In Vision Geometry VII, volume 3454 of SPIE, pages 16–27, 1998.
31. J. Burguet and R. Malgouyres. Strong thinning and polyhedrization of the surface of a voxel object. In 9th DGCI, volume 1953 of LNCS, pages 222–234, 2000.
32. C. Lohou and G. Bertrand. A 3D 12-subiteration thinning algorithm based on P-simple points. Submitted for publication.
33. R.E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers, C-35(8):677–691, 1986.
34. K.S. Brace, R.L. Rudell, and R.E. Bryant. Efficient implementation of a BDD package. In 27th IEEE Design Automation Conference, pages 40–45, 1990.
35. L. Robert and G. Malandain. Fast binary image processing using binary decision diagrams. Computer Vision and Image Understanding, 72(1):1–9, 1998.
Monotonic Tree

Yuqing Song and Aidong Zhang
Department of Computer Science & Engineering, State University of New York at Buffalo, Buffalo, NY 14260, USA
Abstract. Contour trees have been used in geographic information systems (GIS) and medical imaging to display scalar data. Contours are only defined for continuous functions. For an image represented by discrete data, a continuous function is first defined as an interpolation of the data; the contour tree is then defined on this continuous function. In this paper, we introduce a new concept termed monotonic line, which is directly defined on discrete data. All monotonic lines in an image form a tree, called the monotonic tree. As compared with contour trees, monotonic trees avoid the interpolation step and thus can be computed more efficiently. The monotonic tree can also be used as a hierarchical representation of image structures in computer imagery.
1 Introduction
The concept of the contour tree has been developed by Morse [1], Roubal and Peucker [2], and recently by van Kreveld et al. [3]. In geographic information systems (GIS), contour trees are used to display scalar data defined over the plane or the three-dimensional space. For example, the elevation in a landscape can be modeled by scalar data over the plane, where a contour (also called an isoline) is a line along which the elevation function assumes the same value. Contour trees are also used in medical imaging to show scanned data. Contours are only defined for continuous functions. For an image represented by discrete data, a continuous function is first defined as an interpolation of the data; the contour tree is then defined on this continuous function. In this paper, we introduce a new concept termed monotonic line, which is directly defined on discrete data. We observe that for any 2D Morse function f, a curve is a normal contour with value v iff it is a boundary of the set {x ∈ R^2 | f(x) > v}. This is not true for non-Morse functions. However, the equivalent condition is more general, and can be used to define contours for discontinuous or discrete functions. Specifically, an outward-falling/climbing monotonic line of a gray image is a boundary where the image assumes higher/lower values in the pixels adjacent to the boundary from inside than in those from outside (see Figure 1 (a)). The two kinds of monotonic lines correspond to positive and negative contour lines in [1], respectively. To make the boundary of the image domain a monotonic line, we extend the input image to the whole digital plane such that the extended function assumes −∞ outside of the domain. It can be proved that monotonic lines do not cross each other, i.e., if l1 = ∂X and l2 = ∂Y are two monotonic lines, where X and Y are two simply connected regions, then X ⊆ Y, Y ⊆ X or X ∩ Y = Ø. Based on this property, we can define a parent-child relation: monotonic line l1 is the parent of monotonic line l2 if l2 is directly enclosed by l1. Under this parent-child relation, all monotonic lines in an image form a rooted tree, called the monotonic tree; see Figure 1 (b)(c). A monotonic tree can be reduced: a maximal sequence of uniquely enclosing monotonic lines is called a monotonic slope, and all monotonic slopes in an image form the topological monotonic tree (TMT); see Figure 1 (d). Algorithms for computing traditional contour trees can be easily modified to compute monotonic trees or topological monotonic trees. Because the monotonic line is directly defined on pixels, the interpolation step is avoided; thus monotonic trees and topological monotonic trees can be computed more efficiently.

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 114–123, 2002. © Springer-Verlag Berlin Heidelberg 2002
Fig. 1. (a) An outward-falling monotonic line (the solid line in the figure), (b) a set of monotonic lines, (c) the monotonic tree, (d) the topological monotonic tree (TMT).
The monotonic tree can be used as a hierarchical representation of image structures in computer imagery. As compared with other models such as wavelets, the monotonic tree model has the following advantages. (1) The monotonic tree retrieves and represents the structures of an image at all scales. In addition, these structures are organized hierarchically as a tree, which gives us a better way to analyze the relationship between different levels. (2) The monotonic tree retrieves the structures of an image directly and maintains their original shapes. An example is shown in Figure 2. In this example, the tree region and the water wave region are characterized by the shapes and arrangement of the TMT elements in these two regions, which makes it possible to recognize the trees and water waves by classifying and clustering the TMT elements. Based on our monotonic tree model, we made an online demo for scenery analysis, available at "http://monet.cse.buffalo.edu:8888".
Fig. 2. (a) Original image, (b) the elements of the TMT at a smaller scale, and (c) the elements of the TMT at a larger scale. In (b) and (c), the TMT elements are shown by black and white regions on a gray background.
2 Preliminary
In our theoretical discussion, we choose the hexagonal grid for the digital plane. We use the notations of [6], with modifications. The digital plane is a pair (V_h, Π_h), where V_h is the set of pixels:

V_h = {h1 (1, 0) + h2 (−0.5, √0.75) | h1, h2 ∈ Z},   (1)

and Π_h is the edge-adjacency shown in Figure 3 (a). Π_h is a symmetric binary relation on V_h such that V_h is Π_h-connected. Formally,

Π_h = {(p, q) | p, q ∈ V_h and ‖p − q‖ = 1}.   (2)
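These two definitions can be checked numerically; the following sketch generates pixel centers from Eq. (1) and tests the adjacency of Eq. (2), confirming that every pixel has exactly six Π_h-neighbors:

```python
import math

SQ75 = math.sqrt(0.75)

def pixel(h1, h2):
    """Cartesian center of the hexagonal pixel h1*(1, 0) + h2*(-0.5, sqrt(0.75))."""
    return (h1 - 0.5 * h2, h2 * SQ75)

def adjacent(p, q, eps=1e-9):
    """Edge-adjacency Pi_h: p and q are adjacent iff ||p - q|| = 1."""
    return abs(math.dist(p, q) - 1.0) < eps

# On the hexagonal grid, every pixel has exactly six Pi_h-neighbors.
center = pixel(0, 0)
neighbors = [(h1, h2)
             for h1 in range(-2, 3) for h2 in range(-2, 3)
             if (h1, h2) != (0, 0) and adjacent(center, pixel(h1, h2))]
```

In hexagonal coordinates, the six neighbors of (0, 0) turn out to be (1, 0), (1, 1), (0, 1), (−1, 0), (−1, −1) and (0, −1).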
Each pixel P has 6 edges, named e_i(P) for i = 0, 1, ..., 5; see Figure 3 (b). Each e_i(P) is an element of Π_h. We define the function next_P on {e_i(P)} (i = 0, ..., 5) by

next_P(e_i(P)) = e_{(i+1) mod 6}(P).   (3)
The function next_P defines the counter-clockwise direction on the border of P.

Fig. 3. (a) Hexagonal grid, (b) the six edges of pixel P, and (c) the connected boundary of a digital region.
For any subset X of V_h, its border is defined as

∂X = {(p, q) ∈ Π_h | p ∈ X and q ∉ X}.   (4)

We define the function next_X on ∂X such that, for e = (p, q) ∈ ∂X with p ∈ X and q ∉ X,

next_X(e) = next_p(e) if next_p(e) ∈ ∂X; next_q^{-1}(e) otherwise.   (5)

The function next_X defines the counter-clockwise direction on the border of X. For example, in Figure 3 (c), next_X(α) = β and next_X(β) = γ. The following definition defines connected regions.

Definition 1. A region X is called a connected region if (X, Π_h|_X) is a connected graph, i.e., for any p, q ∈ X, there exists a Π_h-connected path in X connecting p and q, where a Π_h-connected path is a sequence of pixels {p_i} (i = 1, ..., n) such that (p_i, p_{i+1}) ∈ Π_h for i = 1, 2, ..., n − 1.

For a connected region with no holes, the border is a connected boundary, which is a boundless list of pixel edges; see Figure 3 (c). Formally, we give two definitions.
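The border and the function next_X can be traversed concretely. The sketch below uses an assumed encoding: each edge e_i(P) is the ordered pair (P, i-th neighbor of P), with the six neighbors listed in counter-clockwise order; the second branch of `next_edge` realizes the effect of next_q^{-1} in Eq. (5) under this encoding.

```python
# Six Pi_h-neighbors in counter-clockwise order, in hexagonal coordinates (h1, h2).
NBR = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]

def nbr(p, i):
    return (p[0] + NBR[i][0], p[1] + NBR[i][1])

def border(X):
    """The border of X (Eq. 4): all edges (p, q) with p in X and q not in X."""
    return {(p, nbr(p, i)) for p in X for i in range(6) if nbr(p, i) not in X}

def next_edge(e, X):
    """The function next_X of Eq. 5, for e = (p, q) with p in X, q not in X."""
    p, q = e
    i = NBR.index((q[0] - p[0], q[1] - p[1]))
    q2 = nbr(p, (i + 1) % 6)
    if q2 not in X:
        return (p, q2)   # next_p(e) is still a border edge
    return (q2, q)       # otherwise turn into pixel q2 (the next_q^{-1} case)

def trace(X):
    """Visit border edges from an arbitrary start, following next_X."""
    bd = border(X)
    start = e = min(bd)
    visited = []
    while True:
        visited.append(e)
        e = next_edge(e, X)
        if e == start:
            return visited
```

For the two-pixel region {(0, 0), (1, 0)}, the traversal closes after visiting all ten border edges, illustrating that the border of a bounded simply connected region is a single circular list.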
Definition 2. A boundless list is a pair (X, next) such that X is a set, and (1) next is a bijective function from X to X; and (2) ∀x, y ∈ X, there exists an integer n ≥ 0 such that either x = next^n(y) or y = next^n(x). A circular list is a boundless list (X, next) such that X is a finite set.

It is easy to see that, for a circular list (X, next) and any x, y ∈ X, there exists an integer n ≥ 0 such that x = next^n(y).

Definition 3. For a region X ⊆ V_h, a connected boundary of X is a subset S of its border ∂X such that next_X(S) = S and (S, next_X|_S) is a boundless list.

About the boundaries of regions, we have the following lemma.

Lemma 1. For any X ⊆ V_h: (1) ∂X = ∂(V_h − X); (2) next_X = (next_{V_h − X})^{-1}; (3) the border of X can be decomposed into a set of connected boundaries; and (4) if X is bounded (i.e., finite), then its connected boundaries are circular lists.

This lemma is obvious. Next we give the definition of simple connection.

Definition 4. A region X ⊆ V_h is called simply connected if both X and V_h − X are connected.

The simple connection defined here differs slightly from the traditional definition on the Euclidean plane. However, if we add an infinite pixel to the hexagonal grid and make it a digital sphere, then our definition fits the definition of simple connection on the Euclidean sphere. About simple connection, we have the following lemma.

Lemma 2. For a bounded region X ⊂ V_h, X is simply connected iff (∂X, next_X) is a circular list. That is to say, for a bounded region X, (∂X, next_X) is a circular list iff both X and V_h − X are connected.

This lemma is an equivalent of Jordan's curve theorem on the hexagonal grid. Due to the limited space, we give our lemmas and theorems in this paper with the proofs omitted. Next, we model gray images on the hexagonal grid.

Definition 5. A gray image is a pair (f, Ω) such that Ω ⊂ V_h is a bounded and simply connected region, and f is a real-valued function defined on Ω.

For any gray image (f, Ω), we extend the function f to the whole plane by

f_E(p) = f(p) if p ∈ Ω; −∞ otherwise.   (6)

f_E is called the extended function of f. In fact, instead of −∞, we could choose any value which is less than all values assumed by f, or greater than all values assumed by f.
3 Monotonic Line
In this and the following sections, let I = (f, Ω) be a fixed gray image, and let f_E be the extended function of f. For a region X ⊆ V_h, we denote the immediate interior [6] of ∂X as IntBorderPixelSet(X), and the immediate exterior as ExtBorderPixelSet(X), i.e.,

IntBorderPixelSet(X) = {x ∈ X | (x, y) ∈ Π_h for some y ∉ X};   (7)
ExtBorderPixelSet(X) = {y ∈ V_h − X | (x, y) ∈ Π_h for some x ∈ X}.   (8)

Now we can define monotonic lines.

Definition 6. A monotonic line of I is a boundary ∂X such that X ⊆ Ω is simply connected and not empty, and there exists some v ∈ R with the property that either of the following is true: (1) ∀x ∈ IntBorderPixelSet(X), f_E(x) > v, and ∀y ∈ ExtBorderPixelSet(X), f_E(y) < v; (2) ∀x ∈ IntBorderPixelSet(X), f_E(x) < v, and ∀y ∈ ExtBorderPixelSet(X), f_E(y) > v. If (1) is true, ∂X is called outward falling; if (2) is true, ∂X is called outward climbing.

We denote the set of all monotonic lines of I as MonotonicLineSet(I). We can prove that monotonic lines do not cross each other, i.e., for any ∂X, ∂Y in MonotonicLineSet(I), one of the following is true: X ⊆ Y, Y ⊆ X or X ∩ Y = Ø.

Theorem 1. ∀∂X, ∂Y ∈ MonotonicLineSet(I), one of the following is true: X ⊆ Y, Y ⊆ X or X ∩ Y = Ø.

The basic idea of the proof is simple. Suppose we have two monotonic lines ∂X and ∂Y crossing each other. Let a, b, c, d be four pixels around a crossing point; see Figure 4. By the definition of a monotonic line, we distinguish four cases: (1) both ∂X and ∂Y are outward falling; (2) ∂X is outward falling and ∂Y is outward climbing; (3) ∂X is outward climbing and ∂Y is outward falling; and (4) both ∂X and ∂Y are outward climbing. In each case we obtain a contradiction. For example, in case (1), there exists v1 such that f_E(a) > v1, f_E(c) > v1, and f_E(b) < v1, f_E(d) < v1; and there exists v2 such that f_E(c) > v2, f_E(d) > v2, and f_E(a) < v2, f_E(b) < v2. Then we get both f_E(a) > v1 > f_E(d) and f_E(d) > v2 > f_E(a), which is a contradiction.
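Definition 6 amounts to comparing extremal values of f_E on the immediate interior and exterior of ∂X: ∂X is outward falling iff the minimum inside exceeds the maximum outside (any v strictly between the two witnesses condition (1)), and symmetrically for outward climbing. A sketch on the hexagonal grid follows; the dictionary encoding of f is an assumption for illustration.

```python
NBR = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]

def neighbors(p):
    return [(p[0] + a, p[1] + b) for a, b in NBR]

def classify(X, f, omega):
    """Return 'falling', 'climbing', or None for the boundary dX of region X
    in the gray image (f, omega); f_E extends f by -infinity outside omega."""
    fE = lambda p: f[p] if p in omega else float("-inf")
    # values of f_E on IntBorderPixelSet(X) and ExtBorderPixelSet(X)
    inner = [fE(p) for p in X if any(q not in X for q in neighbors(p))]
    outer = [fE(q) for p in X for q in neighbors(p) if q not in X]
    if min(inner) > max(outer):
        return "falling"
    if max(inner) < min(outer):
        return "climbing"
    return None
```

On a 7-pixel image whose center is darker than its ring, the boundary of the center pixel is outward climbing; raising the center above the ring makes it outward falling, and ∂Ω itself is always outward falling because f_E is −∞ outside the domain.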
Fig. 4. ∂X crosses ∂Y.
4 Monotonic Tree and Topological Monotonic Tree
We first define some relations on MonotonicLineSet(I).

Definition 7. For any monotonic lines ∂X, ∂Y ∈ MonotonicLineSet(I):
– ∂X encloses ∂Y, denoted Enclose(∂X, ∂Y), if X ⊃ Y;
– ∂X directly encloses ∂Y, denoted DirectEnclose(∂X, ∂Y), if Enclose(∂X, ∂Y) and there is no ∂Z ∈ MonotonicLineSet(I) such that X ⊃ Z ⊃ Y;
– ∂X uniquely directly encloses ∂Y, denoted UniqueDirectEnclose(∂X, ∂Y), if DirectEnclose(∂X, ∂Y) and ∀∂Z ∈ MonotonicLineSet(I), DirectEnclose(∂X, ∂Z) ⇒ ∂Y = ∂Z.

The relation DirectEnclose is a parent-child relation on the set of monotonic lines in the gray image I. Based on Theorem 1, we can easily prove:

Theorem 2. (MonotonicLineSet(I), DirectEnclose) is a rooted tree, and ∂Ω is its root.

The tree (MonotonicLineSet(I), DirectEnclose) is called the monotonic tree of image I, denoted MonotonicTree(I). Next we define monotonic slopes and the topological monotonic tree.

Definition 8. A monotonic slope s is a maximal sequence of monotonic lines s = {l_i} (i = 1, ..., n) with n ≥ 1 such that (1) ∀i = 1, 2, ..., n − 1, UniqueDirectEnclose(l_i, l_{i+1}); and (2) either all l_i are outward-falling, or all l_i are outward-climbing.
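Since Theorem 1 makes the enclosed regions pairwise nested or disjoint, DirectEnclose can be computed by taking, for each region, the smallest region strictly containing it. The sketch below represents each monotonic line by its enclosed pixel set (an assumed encoding for illustration):

```python
def monotonic_tree(regions):
    """Parent map of the relation DirectEnclose over a family of regions
    (frozensets) that are pairwise nested or disjoint (Theorem 1): the parent
    of Y is the smallest region strictly containing Y; the root has parent None."""
    parent = {}
    for y in regions:
        enclosing = [x for x in regions if x > y]  # Enclose(dX, dY): X strictly contains Y
        parent[y] = min(enclosing, key=len) if enclosing else None
    return parent
```

On a nested family, the innermost region is directly enclosed by the smallest region containing it, and the whole domain, having no parent, is the root, in accordance with Theorem 2.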
s is called outward-falling/outward-climbing if all l_i are outward-falling/outward-climbing. The first monotonic line l_1 is called the enclosing line of the slope s. The set of all monotonic slopes is denoted MonotonicSlopeSet(I). We can also define some relations on MonotonicSlopeSet(I).

Definition 9. For any s_a, s_b ∈ MonotonicSlopeSet(I), we define (1) Enclose(s_a, s_b) if ∃l_a ∈ s_a, ∃l_b ∈ s_b, Enclose(l_a, l_b); (2) DirectEnclose(s_a, s_b) if ∃l_a ∈ s_a, ∃l_b ∈ s_b, DirectEnclose(l_a, l_b).

The relation DirectEnclose is a parent-child relation on the set of all monotonic slopes.

Theorem 3. (MonotonicSlopeSet(I), DirectEnclose) is a rooted tree, and the monotonic slope which contains ∂Ω is the root.

The tree (MonotonicSlopeSet(I), DirectEnclose) is called the topological monotonic tree of I, denoted TopologicalMonotonicTree(I).
5 Properties of Monotonic Tree
In this section, we introduce some theorems about the monotonic tree.

Theorem 4 (Inside Intersecting Theorem). For any ∂X, ∂Y ∈ MonotonicLineSet(I), if IntBorderPixelSet(X) ∩ IntBorderPixelSet(Y) ≠ Ø, then ∂X is outward falling ⇔ ∂Y is outward falling.

This theorem is equivalent to the statement that, for a sequence of monotonic lines which intersect from inside, either they are all outward falling, or they are all outward climbing; see Figure 5 (a).
Fig. 5. (a) An inside-intersecting sequence, and (b) two inside-intersecting sequences which intersect from outside.
Theorem 5 (Outside Intersecting Theorem). For any ∂X, ∂Y ∈ MonotonicLineSet(I), if IntBorderPixelSet(X) ∩ ExtBorderPixelSet(Y) ≠ Ø, then ∂X is outward falling ⇔ ∂Y is outward climbing.

This theorem is equivalent to the statement that, for two inside-intersecting sequences which intersect from outside, the monotonic lines in one sequence are all outward falling and the monotonic lines in the other sequence are all outward climbing; see Figure 5 (b).

Theorem 6 (Separation Theorem). Let x, y be two pixels in Ω. (1) If there is a Π_h-connected path P = {w_1 = x, w_2, ..., w_n = y} in Ω such that f is constant along this path, i.e., f(w_1) = f(w_2) = ... = f(w_n), then ∀∂X ∈ MonotonicLineSet(I), x ∈ X ⇔ y ∈ X. (2) If f(x) ≠ f(y), then there is some ∂X ∈ MonotonicLineSet(I) separating x and y, i.e., such that ¬(x ∈ X ⇔ y ∈ X).

This theorem states that, for any two pixels in the image domain, (1) if they are connected by a path along which the function assumes a constant value, then no monotonic line separates them; and (2) if the function assumes different values at the two pixels, then some monotonic line separates them.

Theorem 7 (Private Region Theorem). For any ∂X ∈ MonotonicLineSet(I), let {∂Y_i} (i = 1, ..., n; n ≥ 0) be the set of children of ∂X in the monotonic tree. Then (1) (X − ∪ Y_i) ∩ IntBorderPixelSet(X) is not empty; and (2) ∀x, y ∈ X − ∪ Y_i, f(x) = f(y).

This theorem states that each monotonic line has a nonempty private region and that the function is constant over this region. Based on this theorem, we give the following definition.

Definition 10. For any ∂X ∈ MonotonicLineSet(I), let {∂Y_i} (i = 1, ..., n; n ≥ 0) be the set of children of ∂X in the monotonic tree. We define the private region of ∂X to be PRegion_I(∂X) = X − ∪ Y_i. We define the assumed value of ∂X to be the constant value assumed by f over PRegion_I(∂X). The assumed value of ∂X is denoted Value_I(∂X).

Theorem 8 (Value Jumping Theorem). For any ∂P, ∂C ∈ MonotonicLineSet(I), if ∂P is the parent of ∂C, then (1) Value_I(∂P) ≠ Value_I(∂C); and (2) ∂C is outward falling ⇔ Value_I(∂P) < Value_I(∂C).

Property (1) of this theorem says that there is a value jump (up or down) from a child to its parent. Value jumping is a natural property of contour trees, but it is not straightforward for monotonic trees.
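Theorem 7 and Definition 10 can be sketched directly: subtract the children's regions to obtain the private region, over which f is constant by Theorem 7 (2). The pixel-set and dictionary encodings here are assumptions for illustration.

```python
def private_region(X, children):
    """PRegion_I(dX) = X minus the union of the regions enclosed by the
    children of dX in the monotonic tree (Definition 10)."""
    pr = set(X)
    for y in children:
        pr -= y
    return pr

def assumed_value(X, children, f):
    """Value_I(dX): by Theorem 7 (2), f is constant over the private region,
    so the set of values it takes there is a singleton."""
    values = {f[p] for p in private_region(X, children)}
    assert len(values) == 1
    return values.pop()
```

With X = {0, ..., 9} and children enclosing {0, ..., 4} and {7, 8}, the private region is {5, 6, 9}; a function constant on that region yields a well-defined assumed value.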
6 Conclusion and Discussion
Contour trees are only defined on continuous functions. When applying the contour tree model to computer imagery, we have to make an interpolation of the discrete data, which makes the computation inefficient. We solved this problem by introducing our monotonic tree model. One main difficulty of computer imagery comes from the discrete and noisy nature of images. While we have plenty of powerful theories to analyze the structures of smooth functions, we need more for discrete functions. The monotonic tree can be used as a theoretical tool for discrete functions. The capacity of the monotonic tree model can be extended. The topological structure of the monotonic lines in an image is captured by the topological monotonic tree. We may further define a differential slope and a differential monotonic tree to capture differential information; a differential slope may be defined as a sequence of monotonic lines along which the gradient is smooth.
References
1. S. Morse. Concepts of use in computer map processing. Communications of the ACM, 12(3):147–152, March 1969.
2. J. Roubal and T. K. Peucker. Automated contour labeling and the contour tree. In Proc. AUTO-CARTO 7, pages 472–481, 1985.
3. M. van Kreveld, R. van Oostrum, C. Bajaj, V. Pascucci, and D. Schikore. Contour trees and small seed sets for iso-surface traversal. In Proc. 13th Ann. Sympos. Comput. Geom., pages 212–220, 1997.
4. M. de Berg and M. J. van Kreveld. Trekking in the Alps without freezing or getting tired. In European Symposium on Algorithms, pages 121–132, 1993.
5. M. van Kreveld. Efficient methods for isoline extraction from a TIN. International Journal of GIS, 10:523–540, 1996.
6. G. T. Herman. Geometry of Digital Spaces. Birkhäuser Boston, 1998.
Displaying Image Neighborhood Hypergraphs Line-Graphs

S. Chastel, P. Colantoni, and A. Bretto
LIGIV – Université de Saint-Étienne
3, rue Javelin Pagnon BP 505
F-42007 Saint-Étienne Cedex 01
[email protected]
Abstract. Graph-based structures are commonplace in image processing. Our contribution in this article is to give hints on a new modeling of digital images: image neighborhood hypergraphs. We give some results on the hyperedge coloring of these hypergraphs. We also describe the techniques we used to display image neighborhood hypergraph line-graphs. These techniques form the basis of a tool that allows the exploration of these structures. In addition, this tool can be used to visualize, explore, and describe features of image regions of interest such as object edges or noise.
For nearly fifty years, image processing, image analysis, and computer vision have been intensively studied fields. Many mathematical theories and techniques have been widely involved in these scientific fields: linear algebra, continuous optimization, statistics, and combinatorics, to name a few. One of the main problems in image analysis is the foundation of formal modelings of the digital image. In any scientific field, an efficient modeling should establish structural relationships between the different objects of that domain. A digital image may be considered as a union of elementary spatial and colorimetric units. Therefore, working on the organization of these components may be seen as a combinatorial problem, since combinatorics is the science that studies the organization of discrete objects in mathematically formalized modelings. This is perhaps why graph theory and associated techniques arose in image analysis and have been widely used by many researchers. An attractive approach is to develop a modeling technique based on a generalization of graph theory: hypergraph theory. Hypergraph theory arose from the seminal work of C. Berge in the early seventies [2]. In graph theory, one studies binary relationships between elements. In hypergraph theory, the relationships between basic elements are generalized: elements in relation belong to the same set if they share a common property. We recently proposed a modeling based on hypergraph theory [6]. Our first intention in dealing with this modeling was to try to characterize regions of interest in a picture. That is why we tried to represent the image neighborhood hypergraph structure itself, and this paper presents some results on this.
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 124–135, 2002. © Springer-Verlag Berlin Heidelberg 2002
Trying
Displaying Image Neighborhood Hypergraphs Line-Graphs
125
to represent it in a bidimensional way by simply drawing and coloring the hyperedges, we will show that this may be an inappropriate representation of that modeling. A tridimensional representation of the INH line-graph appeared to be a better way to display them and is the object of our third part.
1 Theoretical Background
Given a set X, we will denote by X(2) the set of unordered pairs of X. A graph G [1,13] is an ordered pair (X, E) such that E is a subset of X(2). The size of G is the cardinality of X. Elements of X are called vertices, those of E edges. If e = {x, y} is an edge of G, x and y are said to be adjacent. We say that a graph G′ = (X′, E′) is a subgraph of a graph G = (X, E) if X′ ⊂ X and E′ ⊂ E. A graph is said to be complete if E = X(2). A complete graph of size n will be denoted by Kn. A maximal (in the sense of inclusion) complete subgraph of a graph G is said to be a clique of G. A clique whose size is maximum is called a maximum clique [5]. To any vertex x of a graph may be associated its neighborhood Γ(x) = {y ∈ X, {x, y} ∈ E}. The degree of a vertex x is the cardinality of the set Γ(x). If every vertex of the graph has the same degree, the graph is said to be regular. A hypergraph H [4] on a set X is an ordered pair (X, E), where E is a set of nonempty subsets of X whose union is X. Elements of X are called vertices, those of E hyperedges. The size of a hypergraph is the cardinality of its set of vertices. The line-graph of a hypergraph is the graph whose vertices are the hyperedges of the hypergraph and in which there is an edge between two vertices if the associated hyperedges have a nonempty intersection. To any graph may be associated its neighborhood hypergraph, defined by (X, ({x} ∪ Γ(x))x∈X). Moreover, we will say that the hyperedge {x} ∪ Γ(x) is generated by the vertex x, and x will be called the source pixel of {x} ∪ Γ(x). A digital image I [11] is a map from a subset X (generally finite) of ℤ² into a subset C of ℤⁿ. Elements of X are called points, those of C colors. A couple (x, I(x)), where x belongs to X, is called a pixel. However, the confusion between a point x and the pixel (x, I(x)) is often made, and we will not depart from it since it keeps its meaningfulness. A tiling (or tessellation) of ℝ² [19] is a partition of ℝ².
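The line-graph construction defined above lends itself to a direct computation. The following is a minimal sketch (our own illustration, not from the paper; the function name is ours), assuming a hypergraph given as a list of vertex sets:

```python
from itertools import combinations

def line_graph(hyperedges):
    """Build the line-graph of a hypergraph given as a list of vertex sets.

    Vertices of the line-graph are hyperedge indices; two indices are
    adjacent iff the corresponding hyperedges have a nonempty intersection.
    """
    edges = set()
    for i, j in combinations(range(len(hyperedges)), 2):
        if hyperedges[i] & hyperedges[j]:  # nonempty intersection
            edges.add((i, j))
    return edges

# Example: E = {{1, 2}, {2, 3}, {4}} -- only the first two hyperedges meet.
E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({4})]
print(line_graph(E))  # {(0, 1)}
```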
The tilings generally studied are constrained to a limited number of geometric configurations called tessels. Given a tiling, choosing an arbitrary point in each tessel and linking two points whenever their tessels share a common side builds a mesh. For regular tilings, where the tessels are regular polygons, the center of gravity of the polygon is often chosen, which leads to a regular mesh. In image processing, three types of meshes are used: hexagonal, triangular, and square ones. Because of current technological devices and the natural data structuring, the last type is the most used, and we will therefore restrict our study to this kind of mesh. If a distance on ℤ² defines an undirected, simple, loopless, and regular graph on a mesh, that distance will be called a grid distance. A grid is then a nonempty subset of ℤ² with an associated grid distance. On square grids, two distances are mainly used: the city block (or diamond) distance, for which a given pixel has four neighbors, and the chessboard (or square) distance, for which a given pixel has eight. This paper will only deal with the chessboard distance.
2 Image Neighborhood Hypergraph Modeling

2.1 Spatiocolorimetric Neighborhood of a Pixel
Let d be a distance on the set of colors C and d′ be a distance that defines a grid on X ⊂ ℤ². Let α and β be two strictly positive reals. A unique neighborhood Γα,β(x) for the digital image I may be associated to any pixel x of X by:
Γα,β(x) = {y ∈ X, y ≠ x, such that d(I(x), I(y)) < α and d′(x, y) ≤ β}
We will call d′ the grid distance and d the colorimetric distance. The definition of colorimetric distances highly correlated with the perception of the human brain is still nowadays an extremely delicate domain of study in neuroscience. However, we will assume that such functions exist, even if the ones we use may not match our sensations at the sight of an image. Looking at the previous definition, it appears that its first part defines a neighborhood in the space C, whereas the second one only involves the spatial domain. We will therefore call α the colorimetric threshold and β the spatial threshold, and we will qualify Γα,β(x) as a spatiocolorimetric neighborhood. That notion allows us to describe some consistency or homogeneity of a pixel with its environment. It is also interesting to see that such a neighborhood has a useful property: it increases, in the sense of inclusion, with both α and β. For instance, if we choose two colorimetric thresholds α and α′ such that α < α′, then for a fixed β, Γα,β(x) ⊆ Γα′,β(x). That is due to the fact that if a pixel y is, from the color point of view, closer to x than α, it remains closer to x than α′. The same argument applies to the spatial thresholds. The main interest of that property is that it implies a certain regularity in the modeling that we now present.

2.2 Image Neighborhood Hypergraph
It is now possible to define an image neighborhood hypergraph (INH) Hα,β on X by Hα,β = (X, ({x} ∪ Γα,β(x))x∈X). It is worth noting that such a definition is correct, since any hyperedge {x} ∪ Γα,β(x) is nonempty: it contains at least the pixel x. Moreover, the union of all the hyperedges is X itself. We will call {x} ∪ Γα,β(x) the hyperedge centered in (or generated by) the pixel x. As it directly inherits from the spatiocolorimetric neighborhood definition, an INH also increases with α and β in the sense of hyperedge inclusion. Figure 1 shows part of an INH and its associated line-graph for a spatial threshold β of 1 with the chessboard distance. For this figure we did not specify the colorimetric distance and its associated threshold, as they are not relevant. However, the first thing to notice is that displaying hypergraph structures in this way leads to a muddled interlacing of patterns of various shapes and colors.
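As a concrete illustration, here is a minimal sketch of the spatiocolorimetric neighborhood and the INH hyperedges (our own code, not from the paper). It assumes a grayscale image given as a 2-D list of intensities, the chessboard grid distance, and absolute intensity difference as the colorimetric distance; the spatial comparison uses d′(x, y) ≤ β, the reading under which β = 1 yields the eight chessboard neighbors:

```python
def neighborhood(img, x, alpha, beta):
    """Spatiocolorimetric neighborhood of pixel x = (row, col):
    pixels y != x with chessboard distance <= beta and |I(x)-I(y)| < alpha."""
    h, w = len(img), len(img[0])
    r, c = x
    out = set()
    for i in range(h):
        for j in range(w):
            if (i, j) == x:
                continue
            if max(abs(i - r), abs(j - c)) <= beta and abs(img[i][j] - img[r][c]) < alpha:
                out.add((i, j))
    return out

def inh(img, alpha, beta):
    """Image neighborhood hypergraph: one hyperedge {x} union Gamma(x) per pixel x."""
    pixels = [(i, j) for i in range(len(img)) for j in range(len(img[0]))]
    return {x: {x} | neighborhood(img, x, alpha, beta) for x in pixels}

img = [[0, 10],
       [0, 100]]
print(inh(img, 5, 1)[(0, 0)])  # {(0, 0), (1, 0)}: only the pixel below is close in color
```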
Fig. 1. A part of an image neighborhood hypergraph and its associated line-graph.
3 First Attempts of Representation
In this section we will describe the difficulties we encountered in our first attempts to represent the image neighborhood hypergraph structure in a simple bidimensional way, as in Figure 1. That simple idea consisted in representing hyperedges by the sets of pixels they contain.

3.1 Coloring Hyperedges
The first idea for representing an INH will probably be to associate a color to each of its hyperedges. It is then natural to color them in such a way that two intersecting hyperedges get different colors. In order to make the representation as simple as possible, it is desirable to use the smallest number of colors. Generally speaking, this problem is known as the hyperedge coloring of a hypergraph H, and the minimum number of colors associated with it is called the chromatic index of H, denoted by q(H). This problem is related to the chromatic number of a graph, as Lemma 1 makes precise. The chromatic number of a graph is the minimum number of colors needed to color the vertices of that graph in such a way that two adjacent vertices have distinct colors [1].
Lemma 1. The chromatic index q(H) of a hypergraph H is the chromatic number χ(L(H)) of its associated line-graph L(H):
q(H) = χ(L(H))
Proof. The proof comes directly from the definitions. In a proper vertex coloring of the line-graph, the vertices associated with two intersecting hyperedges are adjacent and thus receive distinct colors, so such a coloring yields a valid hyperedge coloring; therefore q(H) ≤ χ(L(H)). Conversely, a valid hyperedge coloring gives distinct colors to any two adjacent vertices of the line-graph, and therefore χ(L(H)) ≤ q(H). Hence the two quantities χ(L(H)) and q(H) are equal.
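Computing χ(L(H)) exactly is hard in general, but a greedy coloring of the line-graph gives a quick upper bound on q(H). A small sketch (our own heuristic illustration, not the authors' method), again assuming the hypergraph is given as a list of vertex sets:

```python
def greedy_hyperedge_coloring(hyperedges):
    """Greedily color hyperedges so that intersecting hyperedges get
    distinct colors (i.e. greedily vertex-color the line-graph).
    Returns one color index per hyperedge; the number of colors used is
    only an upper bound on the chromatic index q(H)."""
    colors = []
    for i, e in enumerate(hyperedges):
        # colors already used by earlier hyperedges intersecting e
        used = {colors[j] for j in range(i) if hyperedges[j] & e}
        c = 0
        while c in used:
            c += 1
        colors.append(c)
    return colors

E = [{1, 2}, {2, 3}, {3, 4}, {5}]
print(greedy_hyperedge_coloring(E))  # [0, 1, 0, 0]
```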
Unfortunately, computing the chromatic number of a graph is generally not an easy thing: it is NP-hard [15]. Nevertheless, we can particularize it to our line-graphs. The worst case that may appear is the one in which every hyperedge of the hypergraph has maximal cardinality, that is, when only the spatial threshold is discriminating. In this case, on the 8-connected grid, for a fixed threshold β, a pixel generates a square hyperedge whose cardinality is (2β + 1)² (we will assume that there are no border problems, supposing that the image wraps around on itself). It is easy to see that the resulting line-graph is regular: as the same hyperedge configuration appears over the whole image, the number of intersections is the same for every vertex of the graph. Remark that, in the case of a regular graph, computing the chromatic number is as hard as in the general case [17]. However, it is possible to represent the vertices of the line-graph by the pixels that generated the corresponding hyperedges, and it is clear that any vertex-pixel of a fixed hyperedge e (which we will call the "initial" hyperedge) is the center of a hyperedge e′ that intersects e. This is the case of the pixel we called "member" in Figure 2. But these hyperedges are not the only ones that intersect e; this is also the case, for instance, for the pixel we called "external" in Figure 2. It is however easy to see that no pixel further than 2β from the "initial" pixel generates a hyperedge that intersects the initial hyperedge. Hence we can give precisely the number of intersections that a hyperedge has in the INH: (4β + 1)² − 1 (if we assume that a hyperedge does not intersect itself). The degree of a vertex in the line-graph is thus (4β + 1)² − 1 (see Fig. 3).
Fig. 2. Determination of the worst case for the chromatic index in an image neighborhood hypergraph.
We display here the case β = 2.
Lemma 2. In the line-graph L(Hα,β) of an image neighborhood hypergraph Hα,β, the degree of any vertex is (4β + 1)² − 1.
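The degree formula of Lemma 2 can be checked by brute force. A small sketch (our own verification, not part of the paper), assuming a toroidal n×n image so that border effects vanish, as in the worst-case analysis above; two square hyperedges of radius β intersect iff the chessboard distance between their centers is at most 2β:

```python
def line_graph_degree(beta, n):
    """Worst-case degree of a vertex of L(H_{alpha,beta}) computed by brute
    force on an n x n toroidal grid (n > 4*beta so that wrap-around does
    not double-count): count the pixels y != x whose square hyperedge of
    radius beta intersects that of x, i.e. toroidal chessboard distance
    between centers <= 2*beta."""
    deg = 0
    for i in range(n):
        for j in range(n):
            if (i, j) == (0, 0):
                continue
            di = min(i, n - i)          # toroidal distance components to (0, 0)
            dj = min(j, n - j)
            if max(di, dj) <= 2 * beta:
                deg += 1
    return deg

for beta in (1, 2, 3):
    assert line_graph_degree(beta, 8 * beta) == (4 * beta + 1) ** 2 - 1
print(line_graph_degree(1, 8))  # 24, matching (4*1+1)^2 - 1
```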
Computing the chromatic number of that graph is similar to determining the size of the largest clique in it [1]. As we represent the vertices of the line-graph by pixels on a grid, we can make some geometric claims to determine the size of this clique. Let us take a vertex o (associated to a pixel O) in the line-graph. There is at least one clique C that lies in the neighborhood of o in the graph. Using the representation of the line-graph previously defined, we may see the pixel O as a "central" one (in the sense of geometrical central symmetry) for the clique, and we will say that the clique C is centered in o. Therefore, if we take another vertex p in the line-graph that belongs to the neighborhood of o, and if we consider the pixel P associated with it, we can conclude that the symmetric Q of P about O is associated to a vertex q that also lies in the neighborhood of o and therefore belongs to the same clique. Moreover, we may apply the same argument to any vertex associated with a pixel at the same distance from O as P, thanks to isometric transformations of the discrete plane [25,20]. Then we can assert the following result:
Lemma 3. If a pixel P such that d′(O, P) = β0 is associated to a vertex p that belongs to a clique C centered in o, then the set of pixels {P′ ∈ X such that d′(O, P′) = β0} is associated with vertices of the graph that also belong to C.
Under the previous assumptions and notations, we may also say that any pixel located in the disk of center O and radius β0 is also associated with a vertex belonging to the clique. That fact is simply due to the definition of our worst case, in which all hyperedges have radius β, and to a triangular inequality argument. The idea is now to determine precisely the maximal size of that disk.
Fig. 3. Illustration of some geometric remarks concerning hyperedge coloration: example of a partial line-graph built for β = 2.
Assume that such a disk D exists. Let us call O its center and β0 its radius, and let us take a pixel P at distance β0 from the center. It is clear that P belongs to
D. Its symmetric Q about O also belongs to D. If we want P and Q to be associated with vertices p and q belonging to the clique, then necessarily p and q are linked by an edge in the line-graph, and therefore each lies in the neighborhood of the other in the line-graph. Moreover, the distance between P and Q is simply 2β0 and cannot exceed 2β (as we already proved). Therefore we have to solve:
max{β0 ∈ ℕ : 2β0 ≤ 2β}
whose solution is simply β0 = β. The maximal radius of the disk we sought is then β, and we can give the following lemma:
Lemma 4. The upper bound for the size of the maximum clique of a line-graph associated with an image neighborhood hypergraph Hα,β is (2β + 1)². In the worst case, i.e. when the colorimetric threshold is not discriminating, this size is reached.
So, for instance, for an INH built with β = 2, the representation of its hyperedges already leads to a 25-colored muddle. That quadratic increase of the number of colors is unfortunately not the only problem we faced in representing the hyperedges of an INH.

3.2 Other Representation Problems
Hyperedges Connectivity. On the 8-connected grid, let us consider two hypergraph representations built for α = 5 on the subimage of Figure 4, where the colorimetric distance simply compares the absolute value of the difference between the colorimetric values of the pixels with α. For β = 1, the hyperedge generated by the "central" pixel is represented in blue; for β = 2, in red. In order not to overload the figure, the superimposition of the two hyperedges has not been explicitly shown, but it must be clear that, due to the increasing property of the INH, the blue hyperedge is contained in the red one. What is remarkable in that figure is that such a representation of hyperedges may lead to the appearance of disconnected areas and holes in hyperedges, making the reading harder for a potential user.
Which Curves to Choose? As there are no convexity properties for the hyperedges (see Figure 4), our initial choice of curves of the spline family did not appear very relevant. Moreover, after the coloration and connectivity difficulties, the problem of the choice of the control points of these curves was added. The representation of the hypergraph structure itself then appeared so tricky to us that we chose to go towards another representation of the structure: that of its line-graph.
Fig. 4. In an INH, hyperedges are not necessarily connected in the sense of the underlying grid.
4 Line-Graph Representation of an INH
Our main motivation with this representation was not just to display a particular graph structure: we intended to make it interactively explorable by a potential user. It is certainly of great interest for anyone interested in discovering our modeling to choose his own point of view on the structure and, better, to be allowed to navigate through the structure itself. We developed such a tool, and by investigating the structure we were able to characterize image phenomena such as noise or object edges lying inside an image [10,22].

4.1 Image Correlation
Our first concern was to strongly correlate our representation with the data it came from. That is why we chose to display the original image within the representation. In order not to disturb the visualization of the graph structure, we chose to draw it with a parametric opacity. As any hyperedge is generated by a pixel of the digital image, it is natural to represent it by a point, or better by a polyhedron or a sphere, located vertically above the pixel. The elevation of the center of gravity of that polyhedron above the source pixel is controllable and may moreover be made proportional to the intensity of the source pixel, giving the representation a tridimensional aspect that makes it easier to read. To show the size of the hyperedge, we make the radius of the polyhedron proportional to the cardinality of the hyperedge (the proportionality factor is also controllable).

4.2 Line-Graph Structure Representation
The line-graph is not a planar graph. For β = 1, for instance, we showed that the degree of its vertices may reach 24, and therefore it should not be rare to
Fig. 5. The representation of the line-graph structure may give a bad impression: some vertices may appear linked in the structure whereas they are not. Also shown: the local display of a hyperedge of an image neighborhood hypergraph.
Fig. 6. Sample representation of the line-graph of an INH. The image is an RGB-color picture. For a pixel x, color is coded by (xr, xg, xb). The colorimetric distance used was d(x, y) = (xr − yr)². α was set to 25 and β to 1. Note the weak degree of some vertices in that graph: this allowed us to partially characterize edges of objects in an image [10].
see K5 or K3,3 in that graph [1]. However, we chose to link the vertices of the graph by simple line segments, even if this could give a bad impression of the graph. In Figure 5, for instance, two hyperedges are explicitly drawn. They are generated by the pixels O and P. They intersect, and therefore a link between O and P must be shown. Let us assume that the hyperedge generated by N is reduced to N itself. The user may therefore have the (bad) impression that N is linked with both P and O, as N lies between these two points. Firstly, it must be clear that such cases may be corrected by the user himself. As the elevation
Fig. 7. Sample representation of the line-graph of an INH. The image is an RGB-color picture. For a pixel x, color is coded by (xr, xg, xb). The colorimetric distance used was d(x, y) = max{|xr − yr|, |xg − yg|, |xb − yb|}. α was set to 20 and β to 2. Note that some areas are disconnected from the rest of the graph, and some vertices are completely isolated from it. This allowed us to characterize noise in an image [7,8,9,22].
of the polyhedron representing the hyperedge generated by N is controllable by the user, and since its cardinality is 1, its intensity will probably be different enough from that of its neighbors that its elevation can also be made different enough for the line segment not to cross the polyhedron. Moreover, as the previous argument may not be sufficient, we give the user the possibility to individually select a polyhedron and to display the corresponding hyperedge. Only the pixels belonging to the hyperedge are then drawn and, in order not to give a bad impression of this representation either, parabola arcs are drawn when the pixels are located at more than 2 units (in the sense of the grid distance) from the source pixel (see Fig. 5).

4.3 Current Restrictions
The main problem with this kind of representation is that it is not possible, at the moment, to display very large images. For instance, Figures 6 and 7 only involve 33 × 21 and 55 × 46 color images. The main drawback is the memory cost of the structure and the slowness of current video cards¹. However, given the progress recently made in that technical field, we do not despair of displaying larger structures in the near future. Moreover, as it is possible to deal
¹ For Figures 6 and 7, the construction and display have been made on a 1.2 GHz Athlon with a GeForce2 MX video card. Navigation through the structure is very fluid but begins to be difficult when the size of the image exceeds 50000 pixels.
with sub-images and to parallelize processes, it should be possible to build sub-representations of the image and to handle only the nearest ones when the display is made.

4.4 Characterizing Regions of Interest in an Image
However, at the sight of the pictures resulting from the display, it is interesting to note that regions of interest emerge from the graph representation. For instance, weakly connected vertices appear in the upper left part of Figure 6, characterizing edges of objects or of homogeneous areas in the original image [10]. Totally disconnected vertices, small sets of them, or very weak connections between a set of vertices and larger areas of the graph, as shown in Figure 7, allowed us to give non-statistically-based definitions of noise in an image [7,8,9,22].
References
1. B. Bollobás, Modern Graph Theory, Springer-Verlag, (1998).
2. C. Berge, "Introduction à la théorie des hypergraphes", Les Presses de l'Université de Montréal, (1973).
3. C. Berge, "Graphs", North Holland, (1987).
4. C. Berge, "Hypergraphs", North Holland, (1989).
5. I. Bomze, M. Budinich, P. Pardalos, and M. Pelillo, The maximum clique problem, http://www.dsi.unive.it/~pelillo/papers/tr99-1.ps.gz, (1999).
6. A. Bretto, J. Azema, H. Cherifi, and B. Laget, Combinatorics and image processing, in "Computer Vision, Graphics, and Image Processing", 59 (5), September (1997), 265–277.
7. A. Bretto and H. Cherifi, A noise cancellation algorithm based on hypergraph modeling, in "IEEE Workshop on Digital Signal Processing" (1996), 1–4.
8. A. Bretto and H. Cherifi, Noise detection and cleaning by hypergraph model, in International Symposium on Information Technology: Coding and Computing, IEEE Computer Society, (2000), 416–419.
9. A. Bretto and S. Chastel, A note on valued polyominoes hypergraph image modeling, International Journal of Computational and Numerical Analysis and Applications, to appear.
10. S. Chastel, A. Bretto, and H. Cherifi, Image neighborhood hypergraph model and edge detection, in "JCIS Proceedings" 3, Association for Intelligent Machinery, (1998), 272–275.
11. J. P. Cocquerez and S. Philipp, Analyse d'images : filtrage et segmentation, Enseignement de la physique, Masson, (1995).
12. T. K. Dey, H. Edelsbrunner, and S. Guha, "Computational topology", Technical report, (1998).
13. R. Diestel, "Graph Theory", Graduate Texts in Mathematics, Springer-Verlag, (1997).
14. M. Gondran and M. Minoux, "Graphs and Algorithms", Wiley, Chichester, (1984).
15. I. Holyer, The NP-completeness of edge coloring, SIAM J. Comput., 10: 718–720, (1981).
16. J. Jing-Ying, Multilevel median filter based on fuzzy decision, in "IEEE Trans. Image Processing", 4: 680–682, (1995).
17. D. Leven and Z. Galil, NP-completeness of finding the chromatic index of regular graphs, J. Algorithms, 4: 35–44, (1983).
18. V. A. Kovalevsky, Finite topology as applied to image processing, in "Computer Vision, Graphics, and Image Processing" 46 (1989), 141–161.
19. A. Montanvert and J. M. Chassery, "Géométrie discrète en analyse d'images", Traité des Nouvelles Technologies, Série Images, Hermès, Paris, (1991).
20. J.-P. Réveillès, Géométrie discrète, calculs en nombres entiers et algorithmique, Thèse de doctorat, Université L. Pasteur, Strasbourg, (1991).
21. F. P. Preparata and M. I. Shamos, "Computational Geometry: An Introduction", Springer, New York, (1985).
22. S. Rital, A. Bretto, H. Cherifi, and D. Aboutajdine, Modélisation d'images par hypergraphe. Application au débruitage, ISIVC Proceedings (ed. IAPR), Rabat, (2000), 25–34.
23. A. Rosenfeld, "Digital Image Processing", Academic Press, San Diego, (1982).
24. J. C. Russ, "The Image Processing Handbook", CRC Press, IEEE Press, Springer, Berlin, (1999).
25. K. Voss, Discrete Images, Objects and Functions in ℤⁿ, Springer-Verlag, (1993).
The Reconstruction of a Bicolored Domino Tiling from Two Projections

A. Frosini and G. Simi
Università degli Studi di Siena
Dipartimento di Matematica
Via del Capitano 15, 53100 Siena, Italy
frosini,
[email protected]
Abstract. We tackle the problem of the reconstruction of a bicolored domino tiling of a rectangle from its horizontal and vertical projections. We give two NP-completeness results, after having defined two non-equivalent and very natural notions of projections on a generic bicolored domino tiling. The more general problem of the reconstruction of monochromatic domino tilings is still left open.
Keywords. Domino tiling, Reconstruction Problem, NP-completeness.
1 Introduction
The aim of Discrete Tomography is the reconstruction of a discrete finite set of points in the d-dimensional integer lattice ℤ^d using projections on lower-dimensional subspaces. Important applications of Discrete Tomography are in image processing, in reconstructing structures from data obtained by electron microscopy, in data security and data compression (regarding the projections as an encoding process and reconstruction as a decoding process of a given object), and in computer-aided tomography. Interesting results have been achieved in reconstructing planar sets using projections on one, two, or more one-dimensional subspaces. In [2], Ryser studies the problem of how to reconstruct a binary matrix (which models bidimensional sets) from its projections and finds a P-time algorithm for it. In [1], the authors extend the above result to matrices with a finite number of different entries (which model colored bidimensional sets) and prove NP-completeness when n > 3 (the n-colors problem). These results become relevant when applied to algorithms for the reconstruction of polyatomic crystal structures. In this paper we use standard techniques of Discrete Tomography in order to reconstruct tilings of rectangular subsets of the plane with dominoes of two colors. The horizontal and vertical projections of the tilings are the a priori knowledge. In [5], the author shows that the reconstruction problem of a bicolored domino tiling is at least as hard as the reconstruction problem of a three-entries matrix (the 3-colors problem) and leaves its computational complexity as an open problem.
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 136–144, 2002. © Springer-Verlag Berlin Heidelberg 2002
Furthermore, in [6] a polynomial-time algorithm which reconstructs a bicolored domino tiling from one projection is given. The paper is organized as follows: in Section 2 we give some general definitions about bicolored domino tilings; in Section 3 we determine the computational complexity of the reconstruction problem on this class using two orthogonal projections. This problem arose in [5]. In the last section we consider and solve the same reconstruction problem using the different, non-equivalent notion of horizontal and vertical projections introduced in [6].
2 Definitions
Let us consider an infinite squared surface S composed of cells, and consider two-cell horizontal or vertical bars called dominoes. A rectangular subset of S of dimension m × n has a domino tiling if it can be completely covered with non-overlapping dominoes. Furthermore, a bicolored domino tiling is a domino tiling which uses two different kinds of dominoes: white ones and black ones. Columns are numbered from 1 to n, starting from the leftmost one, while rows are numbered from 1 to m, starting from the topmost one. A vertical domino covering two cells ci,j and ci+1,j is said to start on line i and end on line i + 1, while a horizontal domino covering two cells ci′,j′ and ci′,j′+1 is said to start on column j′ and end on column j′ + 1.
Fig. 1. A bicolored domino tiling of size 6 × 10 and its horizontal and vertical projections.
Let B be a bicolored domino tiling of dimension m × n. We define:
– H = ((w_1, b_1), ..., (w_m, b_m)), the vector of the horizontal projections of B, where w_i is the number of white dominoes which intersect at least one cell of line i and b_i is the number of black dominoes which intersect at least one cell of line i, for each 1 ≤ i ≤ m;
– V = ((w_1, b_1), ..., (w_n, b_n)), the vector of the vertical projections of B, where w_j is the number of white dominoes which intersect at least one cell of column j and b_j is the number of black dominoes which intersect at least one cell of column j, for each 1 ≤ j ≤ n.
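As an illustration (our sketch, not part of the paper), the projections H and V can be computed from an explicit list of dominoes; the representation chosen here, (color, cell, cell) triples with 1-based (row, column) cells, is an assumption of ours:

```python
def projections(tiling, m, n):
    """Return (H, V), where H[i] = (w_i, b_i) counts the white/black dominoes
    intersecting row i+1 and V[j] = (w_j, b_j) does the same for column j+1."""
    H = [[0, 0] for _ in range(m)]
    V = [[0, 0] for _ in range(n)]
    for color, (r1, c1), (r2, c2) in tiling:
        k = 0 if color == "white" else 1
        for r in {r1, r2}:          # a domino is counted once per row it meets
            H[r - 1][k] += 1
        for c in {c1, c2}:          # and once per column it meets
            V[c - 1][k] += 1
    return [tuple(x) for x in H], [tuple(x) for x in V]

# A 2 x 2 board tiled with one white and one black vertical domino:
tiling = [("white", (1, 1), (2, 1)), ("black", (1, 2), (2, 2))]
H, V = projections(tiling, 2, 2)
# Each domino meets both rows, so H = [(1, 1), (1, 1)];
# each column holds a single domino, so V = [(1, 0), (0, 1)].
```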
A. Frosini and G. Simi
The two vectors H and V do not uniquely determine the number of horizontal (or vertical) white dominoes on each row; the same holds for the black dominoes. Throughout the paper we will denote by B a bicolored domino tiling of dimension m × n; the vectors of the horizontal and vertical projections of B will be denoted by H and V in Section 3 and by R and C in Section 4.
3 The Reconstruction of a Bicolored Domino Tiling Consistent with Two Given Projections
We define the two problems:

Consistency_(H,V)(b.d.t.).
Instance: two vectors H ∈ (N × N)^m and V ∈ (N × N)^n.
Question: does there exist a bicolored domino tiling whose horizontal and vertical projections are H and V, respectively?

Reconstruction_(H,V)(b.d.t.).
Instance: two vectors H ∈ (N × N)^m and V ∈ (N × N)^n.
Output: a bicolored domino tiling whose horizontal and vertical projections are H and V.

In [5] the author made a step towards the solution of the 3-colors problem by reducing its instances to instances of Consistency_(H,V)(b.d.t.), so that if this problem allowed a polynomial-time solution, then so would 3-colors. Unfortunately, in this section we prove that Consistency_(H,V)(b.d.t.) is NP-complete: this prevents us from making any conjecture about the complexity of the 3-colors problem. We achieve this result by a reduction involving the NP-complete problem PARTITION (see [3]).

PARTITION.
Instance: a finite sequence of integers A = a_1, ..., a_k.
Question: let J = {1, ..., k}. Is there J′ ⊂ J such that Σ_{j∈J′} a_j = Σ_{j∈J−J′} a_j?
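For concreteness, a brute-force decision procedure for PARTITION can be sketched as follows (the function name and representation are ours; being NP-complete, the problem admits no known polynomial-time algorithm):

```python
from itertools import combinations

def partition_solution(A):
    """Return a set J' of 1-based indices with sum over J' equal to the sum
    over the complement, or None if no such proper subset exists.
    Exponential-time search over all proper nonempty subsets."""
    total = sum(A)
    if total % 2:                       # odd total: no equal split possible
        return None
    idx = range(1, len(A) + 1)
    for r in range(1, len(A)):
        for J1 in combinations(idx, r):
            if sum(A[j - 1] for j in J1) == total // 2:
                return set(J1)
    return None

# The instance A = 5, 3, 2 used later for Figure 2 splits as {5} vs {3, 2}.
```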
Lemma 1. Let B be a bicolored domino tiling and H its horizontal projection. The number of white dominoes in B is lower bounded by (w_1 + ··· + w_m)/2 and upper bounded by w_1 + ··· + w_m.

Proof. The lower bound is attained when B contains only vertical white dominoes, so that each of them is counted in two different entries of H. The upper bound is attained when B contains only horizontal white dominoes, so that each of them is counted in only one entry of H.
Theorem 1. Consistency_(H,V)(b.d.t.) is NP-complete.

Proof. The problem obviously belongs to NP. Let A = a_1, ..., a_k be an instance I of PARTITION with Σ_{i∈J} a_i = 2s. We define in polynomial time k − 1 instances I_1, ..., I_{k−1} of the problem Consistency_(H,V)(b.d.t.) such that a solution for I exists if and only if a solution for at least one of I_1, ..., I_{k−1} exists. The index i of the generic instance I_i represents the cardinality (which is obviously strictly less than k) of the set J′. Let us construct I_i, and let B_i be one of its solutions, if it exists. We define H ∈ (N × N)^{4s} and V ∈ (N × N)^3 as follows:
– H is composed of couples which can be arranged in k blocks, each encoding a different element of A, the j-th block having height 2a_j:
H = ((1,1), (2,1), ..., (2,1), (1,1), ..., (1,1), (2,1), ..., (2,1), (1,1)),
where block 1 has length 2a_1, ..., block k has length 2a_k;
– V is composed of 3 couples: V = ((s, s + i), (2s − k, 2k), (s, s + k − i)) (see Figure 2 a)).
We make some useful remarks. Let us consider the first block of horizontal projections of H, and let B_i^1 be a bicolored domino tiling consistent with it, as shown in Figure 2 b) and c). It holds:
i) by Lemma 1, the minimum number of white dominoes in B_i required by H is Σ_{j=1}^{k} (2a_j − 1) = 4s − k, which is also the maximum number of white dominoes required by V (we compute it by adding the entries w_1 + w_2 + w_3 of V, that is s + (2s − k) + s). It follows that B_i has exactly 4s − k white dominoes;
ii) again by Lemma 1, the minimum number of white dominoes in B_i^1 is (1 + 2 + ··· + 2 + 1)/2 = 2a_1 − 1, the summands being the 2a_1 white entries of the first block. Figure 2 c) shows a solution having a higher number of white dominoes;
iii) since B_i is composed of k blocks, using i) we get that, for each 1 ≤ j ≤ k, B_i^j has exactly 2a_j − 1 white dominoes, all of them vertical;
iv) from the previous remarks it follows that each B_i^j has two horizontal black dominoes, placed on the first and the last row of the block;
v) since the number of black dominoes in the second column of B_i is 2k, and remarks iii) and iv) hold, we get that each B_i^j has exactly two horizontal black dominoes, on the first and on the last row.
Fig. 2. The tiling which leads to a solution of the instance A = 5, 3, 2 of PARTITION with i = 2.
The above remarks imply that B_i can assume only fixed configurations, each of which leads us to a solution of PARTITION. On line 1, B_i has a white vertical domino and a black horizontal domino; if the white domino is placed on column 1, then the tiling of the whole first block B_i^1 is uniquely fixed. We can consider such a tiling as encoding whether or not the index 1 belongs to J′ (see Figure 2 a)). If the white domino is placed on column 3 we get a symmetrical tiling. A similar behavior holds for the remaining k − 1 blocks: by placing the first line of each of them we determine the whole tiling of the block. On column 1, B_i has only the white dominoes belonging to the encoding of i elements of A whose sum is s. The indexes of such elements form the required set J′. On the other hand, let J′ be a solution for I having cardinality i. We proceed by encoding each element having index in J′ by a block which has white dominoes
in column 1, and each element having index in J − J′ by a block which has white dominoes in column 3. The constructed tiling is one of the solutions of the instance I_i of Consistency_(H,V)(b.d.t.).
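The instance construction used in the proof of Theorem 1 is easy to implement; the following sketch (our own code, not the authors') builds the vectors H and V of the instance I_i:

```python
def build_instance(A, i):
    """Build the Consistency_(H,V)(b.d.t.) instance I_i of Theorem 1 from a
    PARTITION instance A with sum(A) = 2s, for a guessed cardinality i of J'."""
    k = len(A)
    assert sum(A) % 2 == 0
    s = sum(A) // 2
    H = []
    for a in A:                       # j-th block: height 2a_j
        H += [(1, 1)] + [(2, 1)] * (2 * a - 2) + [(1, 1)]
    V = [(s, s + i), (2 * s - k, 2 * k), (s, s + k - i)]
    return H, V

H, V = build_instance([5, 3, 2], 2)   # the instance illustrated in Figure 2
# len(H) == 4s == 20 and V == [(5, 7), (7, 6), (5, 6)].
```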
Corollary 1. The problem Reconstruction_(H,V)(b.d.t.) is NP-hard.
4 A New Reconstruction Problem on the Class of Bicolored Domino Tilings
In this section we give a different and very natural definition of projections for a bicolored domino tiling (as introduced in [6]). Throughout the section, the new vectors of horizontal and vertical projections will be denoted by R and C in order to distinguish them from the H and V defined previously. In [6] the author gives a polynomial algorithm which solves Reconstruction_(R)(b.d.t.) and leaves Reconstruction_(R,C)(b.d.t.) as an open problem. Although the two notions of projections are not equivalent, we prove the NP-completeness of Consistency_(R,C)(b.d.t.) by a reduction from PARTITION similar to that of Theorem 1. This result solves a question posed in [5] and [6]. However, a general question about domino tilings remains open: the computational complexity of the reconstruction of a monochromatic domino tiling from two projections.

A new notion of projections. Let B be an m × n bicolored domino tiling. We define R = (r_1, ..., r_m) and C = (c_1, ..., c_n) as the vectors of the horizontal and vertical projections, respectively, where, for each 1 ≤ i ≤ m and 1 ≤ j ≤ n, r_i is the number of cells covered with a white domino on line i and c_j is the number of cells covered with a white domino on column j.

Theorem 2. Let H and R be the two horizontal projections of B. There exists neither a function f : (N × N)^m → N^m which maps H into R nor a function g : N^m → (N × N)^m which maps R into H (that is, the two notions of projections are not equivalent).

Proof. Examples a) and b) of Fig. 3 show that a function f which maps H into R cannot be defined. Examples c) and d) prevent us from defining the function g which maps R into H.
The same result holds if we use the vertical projections V and C. We again give a proof of NP-completeness which involves PARTITION.
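Using the same domino-list representation as before (an assumption of ours, not the paper's), R and C are computed by counting covered cells rather than dominoes:

```python
def cell_projections(tiling, m, n):
    """R[i] (resp. C[j]) = number of cells of row i+1 (resp. column j+1)
    covered by a white domino, i.e. the projections R and C of Section 4."""
    R, C = [0] * m, [0] * n
    for color, *cells in tiling:
        if color == "white":
            for r, c in cells:
                R[r - 1] += 1
                C[c - 1] += 1
    return R, C

# A single horizontal white domino on row 1 covers two cells of that row,
# so R = [2], whereas the domino-counting projection w_1 of Section 3 is 1.
R, C = cell_projections([("white", (1, 1), (1, 2))], 1, 2)
```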
Theorem 3. Consistency_(R,C)(b.d.t.) is NP-complete.
Fig. 3. Examples showing the non-equivalence between H and R.
Proof. The problem obviously belongs to NP. Let A = a_1, ..., a_k be an instance I of PARTITION, with J = {1, ..., k} and Σ_{i∈J} a_i = 2s. We construct in polynomial time an instance I′ of Consistency_(R,C)(b.d.t.) such that a solution of I′ exists if and only if a solution of I exists. We define R ∈ N^{4s} and C ∈ N^3 as follows:
– R is composed of k blocks such that, for each 1 ≤ j ≤ k, the j-th block encodes the element a_j ∈ A and its length is 2a_j:
R = (1, 2, ..., 2, 1, ..., 1, 2, ..., 2, 1),
where block 1 has length 2a_1, ..., block k has length 2a_k;
– C is the following vector: C = (2s, 4s − 2k, 2s).
We prove that if B exists, then it can assume only fixed configurations, each of them leading us to a solution of I. We immediately note that, for each 1 ≤ j ≤ k, the block B^j of B has 2a_j − 1 white dominoes. The reconstruction procedure for B is the following:
– on row 1 we place a vertical white domino on column 1 or 3 and a horizontal black domino covering the two remaining cells. No other tile is possible, otherwise we get stuck with the reconstruction on row 2. We choose to put the white domino on column 1 (a symmetrical behavior arises if we choose column 3) (see Figure 4 a));
– on row 2 we place a vertical white domino and a vertical black domino in order to satisfy the second entry of R;
– on row 3 we have only one free cell, where we place a vertical white domino;
– in an iterative manner we fill all the positions up to row 2a_1, where the entry of R is 1 and the cell on column 1 is covered by an ending vertical white domino, so the two free cells must be covered with one or two black dominoes;
– the above observations extend to each one of the k blocks, which has a black cell on column 2 on its first and last rows. The total number of black cells on such a column is 2k, and so the remaining 4s − 2k cells are covered with white vertical dominoes, as requested by the second entry of C (see Figure 4 b));
Fig. 4. The reconstruction of a bicolored domino tiling associated with the instance A = 5, 2, 3 of PARTITION from R and C.
– now we can tile columns 1 and 3 of each block of B with white or black vertical dominoes according to the entries of C (see Figure 4 c) and d)).
Finally, we observe that, for each 1 ≤ j ≤ k, the block j which encodes the element a_j ∈ A has either the whole column 1 or the whole column 3 covered with a_j vertical white dominoes. Furthermore, in column 1 of B we have s vertical white dominoes which belong to the blocks j_1, ..., j_t and which encode some elements a_{j_1}, ..., a_{j_t} whose sum is s. The set J′ = {j_1, ..., j_t} is the desired solution of the instance I. On the other hand, if a solution of I exists, it is easy to construct a solution for the corresponding instance I′.
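The reduction of Theorem 3, like that of Theorem 1, is straightforward to implement; this sketch (our code) builds R and C from a PARTITION instance:

```python
def build_rc_instance(A):
    """Build the Consistency_(R,C)(b.d.t.) instance of Theorem 3 from a
    PARTITION instance A with sum(A) = 2s."""
    k, s = len(A), sum(A) // 2
    R = []
    for a in A:                     # j-th block: 1, 2, ..., 2, 1 of length 2a_j
        R += [1] + [2] * (2 * a - 2) + [1]
    C = [2 * s, 4 * s - 2 * k, 2 * s]
    return R, C

R, C = build_rc_instance([5, 2, 3])   # the instance of Figure 4
# C == [10, 14, 10]; both sum(R) and sum(C) count the white cells of the
# whole tiling, so they must agree.
```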
Corollary 2. The problem Reconstruction_(R,C)(b.d.t.) is NP-hard.
5 Conclusions
In this paper we have determined the computational complexity of the reconstruction problem on the class of bicolored domino tilings using two different pairs of orthogonal projections. These two problems arose while searching
for a solution to the complexity of the 3-colors problem and to the reconstruction of a monochromatic domino tiling from two projections. Both of these classical challenges remain open.
References
1. M. Chrobak, C. Dürr, Reconstructing Polyatomic Structures from Discrete X-Rays: NP-Completeness Proof for Three Atoms, Theoretical Computer Science, 259:81–98 (2001).
2. H. Ryser, Combinatorial Mathematics, Mathematical Association of America and Quinn & Boden, Rahway, New Jersey (1963).
3. M. R. Garey, D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, New York (1979).
4. C. Dürr, E. Goles, I. Rapaport, E. Rémila, Tiling with bars under tomographic constraints, Theoretical Computer Science, to appear.
5. C. Picouleau, Reconstruction of domino tiling from its two orthogonal projections, Theoretical Computer Science, 255:437–447 (2001).
6. C. Picouleau, Reconstruction of a coloured domino tiling from its projections, Theoretical Computer Science, submitted.
7. G. J. Woeginger, The reconstruction of polyominoes from their orthogonal projections, Information Processing Letters, 77:225–229 (2001).
Digital Geometry for Image-Based Metrology

Alfred M. Bruckstein
Ollendorff Professor of Science
Computer Science Department
Technion, IIT, 32000 Haifa, Israel
[email protected]

Abstract. This paper surveys several interesting issues arising in digital geometry due to the need to perform accurate automated measurements on objects that are seen through the eyes of various types of imaging devices. These devices are typically regular arrays of (light) sensors and provide us with matrices of quantized probings of the objects being looked at. In this setting, the natural questions are: how accurately can we locate and recognize these objects from classes of possible objects, and how precisely can we measure various geometric properties of the objects of interest, given the limitations imposed upon us by the geometry of the sensor lattices and by the quantization and noise omnipresent in the sensor device output? Yet another exciting area of investigation is the design of (classes of) objects that enable optimal exploitation of the imaging device capabilities, in the sense of yielding the most accurate measurements possible.
1 Introduction
Scanned character recognition systems work quite well by now, several companies have grown based on the need to do image-based inspection for quality control in the semiconductor industry, and, in general, automated visual inspection is by now widely used in many areas of manufacturing. In these important applications one needs to perform a series of precise geometric measurements based on images of various types of (planar) objects or shapes. The images of these shapes are provided by sensors with limited capabilities. These sensors are spatially arranged in (regular) planar arrays providing (matrices of) quantized (pixel-)values that need to be processed by automated metrology systems to extract information on the location, identity, size and orientation, texture and color of the objects being looked at. The geometry of the sensor array is a crucial factor in the measurement performances that are possible. When sensor arrays are regular planar grids, we have to deal with a wealth of issues involving geometry on the integer grid. This is how digital geometry enters the picture in industrial metrology tasks, in very fundamental ways.
2 The Digitization Model and the Metrology Tasks
We shall here assume that the planar shapes, the objects we are interested in locating, measuring and recognizing, are binary (black on a white background) and live

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 145–154, 2002.
© Springer-Verlag Berlin Heidelberg 2002
in the real plane, R². Hence their full description can be given via an indicator function ξ(x, y) which is 1 (black) on the shape and 0 (white) on the background. The digitization process will be point sampling on the integer grid, Z², hence the result of digitization will be a discrete indicator function on the integer grid: a discrete binary image, or matrix of picture elements, or pixels. The generic problem we shall deal with is: given the discretized shape, recover as much information as possible on the "pre-image", i.e. on the original binary shape that lives on the continuous real plane. The necessary information on the pre-image shape might be its location, orientation, area, perimeter, etc. In order to solve the particular problem at hand we shall also exploit whatever prior information we may have on the continuous pre-images. This prior information sometimes defines the objects or shapes we digitize as members of parameterized sets of possible pre-images. For example, we might know that the shapes we are called upon to measure are circular, with varying locations and sizes. In this case the parameter defining the particular object instance being analyzed from its digitization is a vector comprising three numbers: two coordinates pointing out the center of the disk and a positive number providing its radius.
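The point-sampling digitization model can be written down in a few lines; the disk example and all names below are ours, a minimal sketch of the model just described:

```python
def digitize(indicator, xs, ys):
    """Sample a {0,1}-valued indicator function xi(x, y) at the integer grid
    points (x, y), x in xs, y in ys, giving a binary pixel matrix image[y][x]."""
    return [[indicator(x, y) for x in xs] for y in ys]

def disk(cx, cy, r):
    """Indicator of a disk: the three-parameter pre-image class (cx, cy, r)."""
    return lambda x, y: 1 if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 else 0

# Digitize a disk of radius 1.6 centered at (2.5, 2.5) on a 6 x 6 grid patch:
image = digitize(disk(2.5, 2.5, 1.6), range(6), range(6))
```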
3 The Wonders and Uses of Digital Lines
Digital lines, or digital straight segments, result from sampling half-plane pre-images or planar polygonal shapes on the integer grid. More is known about this topic than anyone can possibly know, but I would dare to say that the basic facts are both simple and beautiful. Half-planes are not very interesting or practically useful objects; however, they already pose the following metrology problem: given the digital image of a half-plane, locate it as precisely as possible. Of course, we must ask ourselves whether and how our location estimation improves as we see more and more of the digitized boundary. Well, it turns out that we can think about the location estimation problem as the problem of determining the half-plane pre-images that satisfy all the constraints that the digitized image provides. Indeed, every grid-point pixel that is 0 (white) tells us that the half-plane does not cover that location, while every black (1) pixel indicates that the half-plane covers its position. It should come as no surprise that the boundary pixels, i.e. the locations where white pixels neighbor black ones, carry all the information. The constraint that a certain location in the plane belongs, or does not belong, to the half-plane being probed translates into the condition that the boundary line has a slope and intercept pair in a half-plane defined in the dual representation space (called, in pattern recognition circles, the Hough parameter plane). Therefore, as we collect progressively more data in the image plane, we have to intersect more and more half-planes in the Hough plane to get the so-called "locale", the uncertainty region in parameter space where the boundary line parameters lie, see [8].
Looking at the grid geometry and analyzing the lines that correspond to grid-points in the dual plane one quickly realizes that only the boundary points contribute to setting the limits of the locale of interest, and a careful analysis reveals that, due to the regularity of the sampling grid,
Digital Geometry for Image-Based Metrology
147
the locales are always polygons of at most four sides, see [6]. Hence, as more and more consecutive boundary points are added to the pool of information on the digitized half-plane, we have to perform half-plane intersections with at most four-sided polygonal locales to update them. Clearly the locales generally decrease in size as the number of points increases, and we can get exact estimates on the uncertainty behavior as the jagged boundary is progressively revealed. This idea, combining Leo Dorst's discovery on the geometry of locales for digital straight lines with the process of successively performing the half-plane intersections for each new data point while walking along the jagged digitized boundary, led to the simplest, and only recursive, O(length) algorithm for detecting straight edge segments. For a complete description of this algorithm see [13]. The jagged edges that result from discretizing half-planes have a beautiful, self-similar structure, intimately related to the structure of the real number that defines the slope of their boundary line. One can readily see that at various sampling resolutions the boundary maintains its jaggedness in a fractal manner, but here we mean a different type of self-similarity, inherent in the jagged boundaries at any given resolution! The paper [3] summarizes the wealth of interesting and beautiful properties that were described over many years of research on digital straight lines using a very simple unifying principle: invariance under re-encoding with respect to regular grids embedded into the integer lattice. Not only does this principle help in deriving in a very straightforward manner digital straight edge properties that were discovered and proved in sinuous ways, but it also points out all the self-similarity-type properties that are possible, making nice connections to number-theoretic issues that arise in this context and to the general linear group GL(2, Z) that describes all integer lattice isomorphisms.
Using the wonderful properties of digital straight lines, we can not only solve the above-mentioned, somewhat theoretical issue of locating a half-plane object of infinite extent, but we can also address some very practical issues like measuring the perimeters of general planar shapes from their versions digitized on regular grids of pixels. Indeed, analyzing the properties of digitized lines made possible the rational design of some very simple and accurate perimeter estimators, based on classifications of the boundary pixels into different classes according to the jaggedness of their neighborhoods. Building upon earlier work of Proffit and Rosen [16], Koplowitz and Bruckstein proposed a general methodology for the design of simple and accurate perimeter estimation algorithms, based on minimizing the maximum error that would be incurred for digitized straight edges over all orientations [11]. This methodology enables predictions of the expected performance for shapes having arbitrary, but bounded-curvature, boundaries. Note also that using the recursive O(1)-per-boundary-pixel algorithm of Lindenbaum and Bruckstein [13] for detecting digital straightness, one could parse general, curved object boundaries into digitally straight segments and then estimate the pre-image object's perimeter as the sum of the lengths of the line segments so detected. In terms of the methodology presented in [11], this algorithm yields zero error for digital straight edges of infinite extent at all orientations, and hence should be the best perimeter estimator ever obtainable!
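To give a flavor of such estimators (this is not the minimax design of [11] itself), here is a chain-code-based perimeter estimate; as correction factor we use Kulpa's classical π(1+√2)/8 ≈ 0.948, chosen by us as a stand-in for the paper's minimax-optimized weights:

```python
import math

def perimeter_estimates(chain):
    """chain: Freeman 8-direction chain code (integers 0..7) of a digitized
    boundary. Returns (naive, corrected) length estimates: the naive sum of
    unit and sqrt(2) steps, and the same sum scaled by pi*(1+sqrt(2))/8,
    which compensates the average overestimate over edge orientations."""
    n_even = sum(1 for d in chain if d % 2 == 0)   # axis-parallel moves
    n_odd = len(chain) - n_even                    # diagonal moves
    naive = n_even + math.sqrt(2) * n_odd
    corrected = math.pi * (1 + math.sqrt(2)) / 8 * naive
    return naive, corrected

# Boundary of a small square of pixels: 8 axis-parallel moves, no diagonals.
naive, corrected = perimeter_estimates([0, 0, 6, 6, 4, 4, 2, 2])
```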
4 Digital Disks, Convex and Star-Shaped Objects
From the realm of half-plane objects we can move either to infinite-extent regions that have more complex boundaries (say parabolas, hyperbolae, or some periodic functions along a principal direction) or to the analysis of finite-extent objects like polygons, disks and other interesting shapes. Some work has indeed been done on detecting polygonal preimages from their digitized versions, and in fact a good algorithm for parsing a jagged boundary into digital straight segments turns out to be a crucial ingredient in solving various issues regarding the metrology of such objects. Suppose next that we have the prior information that the objects discretized are disks of various locations and sizes. Then the metrology question arising naturally is: how precisely can we determine the location of a disk and its radius? Considering the digitization model by point sampling, as discussed above, given a digitized image of black and white pixels, we know that if a certain point in the plane is the center of a disk of unknown radius, this point will necessarily be closer to all black grid points than to any white grid point. Hence the locus of all points in the plane closer to all black points than to any white point is the locale of possible disk centers, and its size quantifies our uncertainty in locating the object in the preimage plane. It is interesting to note that this locale can be found without knowledge of the radius, which will still need to be estimated. The locale as defined above is a well-known concept in computational geometry, and it is known to be a convex region in the plane. Efrat and Gotsman have done a careful analysis of the problem and produced an O(R log R) algorithm to determine the locale, where R is the radius of the disk. We refer the interested reader to the paper [7] for details. Note again that the locale we are talking about is independent of the radius parameter.
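A brute-force membership test for this locale is immediate (our sketch; it simply applies the definition and does not reproduce the efficient O(R log R) construction of [7]):

```python
def in_disk_locale(p, black, white):
    """True iff point p is closer to every black grid point than to any white
    grid point, i.e. p lies in the locale of possible disk centers."""
    d2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return max(d2(p, q) for q in black) < min(d2(p, q) for q in white)

# Grid points covered / not covered by a disk centered near the origin:
black = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
white = [(1, 1), (1, -1), (-1, 1), (-1, -1), (2, 0), (-2, 0), (0, 2), (0, -2)]
# The true center lies in the locale; a point far off-center does not.
```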
Had we prior knowledge of the exact radius R, the location of the disk center could be determined by intersecting all disks of radius R around the black grid points with all the complements of disks of radius R around the white (uncovered) grid points. The resulting intersection locale is generally not a convex shape, due to the precise knowledge of the radius. For general convex shapes the question of determining the location, area and perimeter cannot be addressed in full generality. The digitized version of a convex shape is a set of black grid points on a background of white ones. As a union of square pixels, the digitized shape will not be convex. Hence much work was done addressing the question whether there is a good definition of convexity for discrete objects [18]. A variety of proposals were made and can be found in the literature. The metrology questions, however, in all cases remain: determine with the best precision the location (first-order moments), orientation (second-order moments) and other metric properties, like area (zeroth-order moment) and perimeter of the shape. These questions, too, have received some attention. It turns out that computing the moments of the black grid points yields good estimates for the corresponding continuous quantities, and more refined, boundary-based estimation procedures (say, based on polygonalization of the jagged boundary via an efficient digital straight segment detection, as discussed
above) do indeed provide improved estimates, but the improvement needs to be carefully weighed against the increased complexity involved. Among the many procedures that propose polygonal approximations to preimages based on the discrete grid points that were covered by the shape, and also on the ones that were not covered, one stands out in elegance and usefulness: the minimum perimeter polygon that encloses all black (covered) points and excludes all white (uncovered) ones. This minimum perimeter polygon turns out to be the relative convex hull of the black points with respect to the white ones. It can be computed easily and may serve as a good approximation of preimages for all metrology purposes. So far we have talked about disks and convex objects. The next level of complexity in planar shapes is the so-called star-shaped objects. These are defined as the shapes that have a "kernel region" inside them such that from any point of the kernel the entire boundary of the shape can be "seen", i.e. a line from the chosen point to any boundary point lies entirely inside the shape. It is easy to see that this definition generalizes convexity in a rather natural way and that the kernels must be convex regions. Determining star-shapedness of a planar shape is not too difficult a task for polygons and for spline-gons, and the algorithms for doing this rely on locating and using the inflection points on the boundary and intersecting the regions in the plane from which the convex boundary regions are seen, see [1]. As with the notion of convexity, determining digital star-shapedness posed a series of special problems that needed careful analysis. This was the topic of a paper by Shaked, Koplowitz and Bruckstein, where it was shown that the relative convex hull, or minimal perimeter polygon, of the grid points covered by the shape with respect to the ones that remained uncovered provides a convenient computational way to determine digital star-shapedness, see [17].
5 Shape Designs for Good Metrology
Up to this point we have discussed ways to analyze and measure planar shapes when seen through the looking glass of grid probing, or point-sampling discretization. The classes of shapes were assumed given in some, perhaps parameterized, form, and we dealt with questions about recovering their various features and parameters, or about measuring their size and perimeter and determining their location with the highest precision possible. When considering such issues, a further question can be posed: design planar shapes, or collections of shapes, that will interact with the discretization process in such a way that the quantities we need to measure can be very easily read out of the discretized images we get. Could we design an object in the plane (possibly a union of continuous binary shapes) so that digitization of this object, translated to various locations, will yield black and white patterns on the (discretization) grid that clearly exhibit, say in a binary representation, the X and Y translation values up to a certain desired precision? Interestingly, a new pen-like device was recently invented and advertised that has the following feature: it automatically computes with very high precision the
location of its tip on any of the pages of a paper pad by looking at a faint pattern of dots printed on these sheets of paper. The pattern of dots is so designed that the image of any small region, as seen by the pen near its tip (with the help of a tiny light detector array), uniquely and easily locates the pen-tip's position on any of the pages of the pad, see [19]. This example shows that it is good engineering to think about designing shapes to have such "self-advertising" properties, and this approach could provide us with surprisingly efficient and precise metrology devices. This problem was posed by Bruckstein, O'Gorman and Orlitsky, at Bell Laboratories, already in 1989, with the aim of designing planar patterns that would serve as location marks, or fiducials, on printed circuit boards. The need for location or registration fiducials in printing circuit boards and in processing VLSI devices is quite obvious: when layers of printing and processing are needed in the manufacturing operation, precision in performing the desired processes in perfect registration with previously processed layers is imperative. The work of [2] showed that there exists an information-theoretic bound that limits the location precision for any shape that has a spatial extent of, say, A×A in pixel size. Such a shape, when digitized, will provide about A² meaningful bits of information, via the pattern of black and white pixels in the digitized image. This number of bits can only effectively encode 2^(A²−1) different locales, and hence each locale into which we can refine a region one pixel-square in size has an area that must exceed 1/2^(A²−1). If we want balanced X- and Y-axis precision, we can only locate the pattern to a subpixel precision of 1/2^((A²−1)/2). This is the best precision possible, assuming optimal exploitation of the real estate of area A×A assigned to the location mark.
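The counting argument is easy to check numerically (a sketch; the function name is ours):

```python
# An AxA fiducial gives A^2 binary pixels; one bit fixes the "rough" pixel,
# leaving 2^(A^2 - 1) distinguishable locales inside it, hence a balanced
# subpixel precision of 1 / 2^((A^2 - 1) / 2) along each axis.

def fiducial_precision(A):
    locales = 2 ** (A * A - 1)
    per_axis = 1.0 / 2 ** ((A * A - 1) / 2)
    return locales, per_axis

locales, step = fiducial_precision(3)   # the 3x3 fiducial of Figure 1
# 2^8 = 256 locales; per-axis precision 1/2^4 = 1/16 of a pixel.
```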
The important issue further settled in [2] is the existence of a fiducial pattern that indeed achieves this precision. The pattern is so cute that we exhibit it in Fig. 1. Looking at this fiducial pattern, it becomes obvious what it does. It is indeed a continuous 2D (analog) input that exploits the point-sampling discretization process to compute its X and Y displacement, by providing a binary read-out of the subpixel location of the fiducial within the one pixel to which it can readily be located using the lowest-left grid-point (the "rough location" mark) covered by the shape. This leftmost bit of information is also the reason we can only use A² − 1 bits for subpixel precision, i.e. for cutting the one-pixel precision (provided by the "rough location" bit) into locale slices. This process turns the fiducial and the discretization process into a nice analog computer that yields the displacements in the X and Y directions easily, and achieves the highest precision possible in this task based on the available data. The analysis provided in [2] goes even further. The optimal fiducials turn out to require highly precise etchings on the VLSI or circuit board devices and hence might be difficult to realize in practice. There is therefore a need to analyze other types of fiducial shapes that achieve suboptimal exploitation of the area but can provide good location accuracies. For rotational invariance, circularly symmetric shapes turn out to be necessary, and therefore bull's-eye fiducials were also proposed in [2] and further analyzed in [7].
Digital Geometry for Image-Based Metrology
151
Fig. 1. An optimal 2D fiducial of area 3 × 3
The most interesting question that remains to be addressed here is the following: can we invent shapes that provide other metrological measures as easily as the above discussed example advertised its location?
6
The Importance of Being Gray
So far we have discussed the case of binary continuous images being point-sampled into matrices of zeros and ones, or black and white pixels. However, the real world is far richer in possibilities and complications. First of all, point sampling is not a good model of the imaging process as performed by real-life cameras. Those carry out, at each sensor, a weighted integration of the incoming light from the continuous input pattern. This integration happens around each grid point, and the pixel influence region may be assumed circular. The integration yields, at each grid point, values that continuously vary from a lowest value, for white (no object) input over the pixel influence region, to a highest value that corresponds to having the input object cover the entire area of integration. The result of this integration is then transformed into a discrete value encoded by several bits, via quantization. Therefore, even for binary pre-images, we get at each grid point a pixel value that is the quantization of a continuous variable proportional to the fraction of the pixel influence region that is covered by the input object. Furthermore, we may also consider the advantages of using non-binary, grayscale or color pre-images. The combination of more realistic sampling and quantization processes with the use of grey levels in pre-images opens for us a great
152
A.M. Bruckstein
variety of further possibilities. As an example, Kiryati and Bruckstein have analyzed, following a question of Professor Pavlidis, the trade-off between spatial resolution and the number of grey levels when the aim is to get as much information as possible on a class of binary pre-images comprising polygonal shapes. The conclusion of this research was that "Gray Levels Can Improve the Performance of Binary Image Digitizers", see [10]. The paper introduces a measure of digitization-induced ambiguity in recovering the binary pre-image, hence it is quite relevant to metrology under such sampling conditions. It is then shown that, if the sampling grid is sufficiently dense (i.e. the sampling rate is high!) and if the pixels were to provide exact grey-levels rather than quantized values, then error-free reconstruction of the binary pre-image becomes possible. This is not too surprising; however, when the total bit budget for the digitized image representation is limited (i.e. the sampling rate and the quantization depth are related, both being finite), the bit allocation problem that arises shows that the best resource allocation policy is to increase the grey-level quantization accuracy as much as possible, once a sufficiently dense spatial sampling resolution has been reached. Therefore, once we have a grid dense enough to ensure that all linear borders of the binary input image's polygonal shapes can adequately be "seen" in the sampled image, all the remaining bit resources should go towards finer grey-level quantization. Professor Pavlidis' question, which prompted this research, asked to explain why grey-level fax machines at low resolution yield nicer images than binary fax machines at higher resolution, even for binary document images. It was clear that some sort of anti-aliasing effect is at work; however, [10] showed quantitatively that, even in terms of a well-defined metrology error measure, grey levels help considerably more than increased spatial resolution.
Imagine next that we allow grey-level input images too. In this case we shall certainly have, in conjunction with multilevel quantizations at each pixel, much more information for location and various other measurements. A gradual boundary in the input image, or equivalently an area integration sensor providing a quantized multilevel pixel value at each grid-point, will transform the issue of locating a half-plane into a problem of precisely locating several parallel digital straight edges that are simultaneously sampled. Such richness of detail will certainly dramatically reduce the size of the uncertainty locales, and enable us to design a wealth of improved location and orientation fiducials in the future. For a beginning of work in this direction see the recently completed thesis of Barak Hermesh, titled "Fiducials for Precise Location Estimation" [9]. The conclusion therefore is that gray levels matter; they are good for us! And the last word on these issues certainly has not been said yet.
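A toy one-dimensional model (ours, not the analysis of [10]) illustrates why an area-integrating pixel with quantized grey levels carries sub-pixel edge information:

```python
# Toy 1D model of area-integration sampling: a pixel covering [0, 1) sees a
# dark object occupying [x0, 1); its response is the covered fraction 1 - x0,
# quantized to q grey levels. Inverting the quantized value recovers the edge
# offset x0 to within about half a quantization step, far better than the
# single bit a binary (q = 2) digitizer would give.

def pixel_value(x0, q):
    """Quantized response of one integrating pixel to an edge at offset x0."""
    coverage = 1.0 - x0
    return round(coverage * (q - 1))

def estimate_offset(value, q):
    """Invert the quantized response back to an edge-offset estimate."""
    return 1.0 - value / (q - 1)

q = 16
for x0 in (0.1, 0.37, 0.8):
    err = abs(estimate_offset(pixel_value(x0, q), q) - x0)
    assert err <= 0.5 / (q - 1) + 1e-9
```

With q = 16 grey levels a single pixel localizes the edge to roughly a thirtieth of a pixel, which is the anti-aliasing effect mentioned above in quantitative form.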
7
Concluding Remarks
Somebody said that every paper and book is autobiographical. This is certainly true of this short paper: it surveys my own research dealing with digital geometry and metrology issues. As is clear from the list of papers below, my excitement with the interesting topics of research that arise at the interface
between continuous and discrete geometry is permanent. And I am sure that the attendees of the Discrete Geometry for Computer Imagery International Conferences all share my excitement with these topics. More on the vast subject of discrete geometry can be found in several books [4,5,12,14,15]. Acknowledgements. Thanks to my past and present students who over the years have taught me so many things about digital geometry and other topics. Thanks to Professors Theo Pavlidis and Azriel Rosenfeld for their leadership in digital geometry and for their friendship. Thanks to Yana Katz who made the speedy preparation of this paper possible.
References
1. R. Bornstein and A. M. Bruckstein, "Finding the Kernel of Planar Shapes", Pattern Recognition, Vol. 24, No. 11, 1019–1035, 1991.
2. A. M. Bruckstein, L. O'Gorman, and A. Orlitsky, "Design of Shapes for Precise Image Registration", IEEE Transactions on Information Theory, Vol. IT-44/7, 3156–3162, 1998 (AT&T Bell Laboratories Technical Memorandum).
3. A. M. Bruckstein, "Self-Similarity Properties of Digitized Straight Lines", Contemporary Mathematics, Vol. 119, 1–20, 1991.
4. J.-M. Chassery and A. Montanvert, Géométrie Discrète en Analyse d'Images, Hermes, Paris, 1991.
5. L. S. Davis, editor, The Azriel Rosenfeld Book: "Foundations of Image Understanding", University of Maryland, U.S.A., Kluwer, 2001.
6. L. Dorst, "Discrete Straight Line Segments: Parameters, Primitives and Properties", Ph.D. Thesis, Technological University Delft, 1986.
7. A. Efrat and C. Gotsman, "Subpixel Image Registration Using Circular Fiducials", International Journal of Computational Geometry and Applications, Vol. 4, 403–422, 1994.
8. D. I. Havelock, "The Topology of Locales and Its Effects on Position Uncertainty", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 4, 380–386, 1991.
9. B. Hermesh, "Fiducials for Precise Location Estimation", Research Thesis, Technion, IIT, Haifa, Israel, 2001.
10. N. Kiryati and A. M. Bruckstein, "Gray Levels Can Improve the Performance of Binary Image Digitizers", CVGIP: Graphical Models and Image Processing, Vol. 53, No. 1, 31–39, 1991.
11. J. Koplowitz and A. M. Bruckstein, "Design of Perimeter Estimators for Digitized Planar Shapes", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-11/6, 611–622, 1989.
12. L. J. Latecki, Discrete Representation of Spatial Objects in Computer Vision, Vol. 11, Kluwer, 1998.
13. M. Lindenbaum and A. M. Bruckstein, "On Recursive, O(N) Partitioning of a Digitized Curve into Digital Straight Segments", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 9, 949–953, 1993.
14. M. D. McIlroy, "Number Theory in Computer Graphics", The Unreasonable Effectiveness of Number Theory, Proceedings of Symposia in Applied Mathematics, Vol. 46, 105–121, 1992.
15. R. A. Melter, A. Rosenfeld, and P. Bhattacharya, editors, "Vision Geometry", Contemporary Mathematics, Vol. 119, AMS, Rhode Island, 1991.
16. D. Proffit and D. Rosen, "Metrication Errors and Coding Efficiency of Chain Coding Schemes for the Representation of Lines and Edges", Computer Graphics and Image Processing, Vol. 10, 318–332, 1979.
17. D. Shaked, J. Koplowitz, and A. M. Bruckstein, "Star-Shapedness of Digitized Planar Shapes", Contemporary Mathematics, Vol. 119, 137–158, 1991.
18. K. Voss, Discrete Images, Objects, and Functions in Z^n, Springer-Verlag, 1991.
19. http://www.anoto.com/technology/anotopen/
Topological Reconstruction of Occluded Objects in Video Sequences Vincent Agnus and Christian Ronse Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection (UMR 7005 CNRS–ULP), 67400 Illkirch, France {agnus,ronse}@dpt-info.u-strasbg.fr http://lsiit.u-strasbg.fr/
Abstract. In [1,2] we have introduced a new approach for the spatio-temporal segmentation of image sequences. Here a 2D+t sequence is considered as a 3D image, and 2D objects moving in time (or following a given motion model) are segmented as 3D objects with the use of connected morphological filters, and are represented as spatio-temporal flat zones. However, when an object undergoes occlusion by another in the sequence, their 3D trajectories intersect, and the spatio-temporal segmentation fuses the two objects into a single flat zone. In this paper we introduce a method for separating occluded objects in spatio-temporal segmentation. It is based on a study of the changes of topology of the temporal sections of a flat zone. A topologically constrained watershed algorithm makes it possible to separate the objects involved in the occlusion.
1
Introduction
The goal of segmentation is to partition an image into regions having the same attributes (color, contrast, texture, etc.). In the case of 2D+t image sequences, the attributes for segmentation are generally contrast and/or motion. In [1,2] we introduced a new morphological approach enabling the segmentation of 2D+t regions following a given (local) spatio-temporal translation model. The method also provides a natural framework for tracking objects and, using an extension of the method, for classifying motions. Other morphological approaches [4,5] produce a 2D spatial segmentation and try to preserve space-time consistency by propagating markers into the future and applying a new spatial segmentation (see [1,2] for a detailed discussion of other methods, morphological or based on optical flow). Our 2D+t morphological approach avoids the computation of the optical flow and the control of marker propagation in the future. It naturally handles the case of new objects emerging in the sequence or objects leaving the scene. However our method encounters a problem: the occlusion between objects produces a unique flat zone covering many different objects. In this paper we present a new method of topological reconstruction in order to split these flat zones and separate the different objects. A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 155–164, 2002. © Springer-Verlag Berlin Heidelberg 2002
In Section 2 we recall briefly the spatio-temporal segmentation method of [1,2]. Section 3 introduces our topological reconstruction approach for dealing with occlusions. First we explain how occlusions lead to the fusion of the flat zones corresponding to the different objects. Then we outline the decomposition of spatio-temporal objects into connected blocks, which fall into three classes: single-objects, junctions and disjunctions. Single-objects become markers for a modified 3D watershed algorithm, which takes into account the topological relations between 2D time sections, in order to separate the different objects involved in an occlusion. Finally we discuss the oversegmentation and the ways to reduce it. Experimental results are presented in Section 4, where we give the conclusion.
2
Segmentation Using Motion Information
Here we summarize our spatio-temporal segmentation method given in [1,2]. Considering a sequence as a single 3-dimensional image, an erosion in the temporal direction modifies the original sequence only at motion boundaries. We can use this information to find the moving objects. From these boundaries we reconstruct geodesically [7] the original signal, and this produces flat zones (maximal connected regions where the grey level is constant) corresponding to moving objects. With this method we avoid choosing a structuring element size to create flat zones, which was required in [6]. This reconstruction is performed both for dark and light objects in the scene, using duality in mathematical morphology. From these two reconstructed images, we allocate a region marker for each flat zone, keeping the one which is farthest from the original signal. This processing can produce false detections in some cases; they are removed by our motion filters. We now describe briefly the construction of flat zones for moving objects. Objects undergoing a typical motion are reconstructed from their motion boundaries. These are computed by taking the difference between the original sequence S and its erosion in the temporal direction. The shape of the temporal structuring element allows a given motion model to be detected. This difference sequence is filtered by grey-level thresholding and area opening on connected components, to obtain a binary sequence Smb which represents only significant motion borders. We notice that these connected components are inside light moving objects. From these components, a new sequence Sm is built, in which each connected component of Smb is given the minimum of its grey-levels from the original sequence. The geodesic reconstruction by dilation of the original sequence from the Sm sequence produces flat zones at moving object locations. These large flat zones are used to determine moving objects.
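The construction just described can be sketched as follows (a simplified re-implementation of ours, assuming the sequence is a NumPy array with time as the last axis, using a naive iterative geodesic reconstruction in the spirit of [7], and omitting the area opening, the dark-object dual and the election stage):

```python
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(seed, mask):
    """Naive grayscale geodesic reconstruction by dilation (seed <= mask)."""
    prev = seed
    while True:
        cur = np.minimum(ndimage.grey_dilation(prev, size=(3, 3, 3)), mask)
        if np.array_equal(cur, prev):
            return cur
        prev = cur

def motion_flat_zones(S, thresh):
    """Flat zones of bright moving objects in a (y, x, t) sequence S."""
    eroded = ndimage.grey_erosion(S, size=(1, 1, 3))   # temporal erosion
    borders = (S - eroded) > thresh                    # motion borders S_mb
    labels, n = ndimage.label(borders)
    Sm = np.zeros_like(S)
    for lab in range(1, n + 1):
        comp = labels == lab
        Sm[comp] = S[comp].min()    # seed each border component with its min
    return reconstruct_by_dilation(Sm, S)
```

A production implementation would use a queue-based reconstruction that touches each pixel a bounded number of times rather than this fixed-point iteration.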
The generation of flat zones is applied for both bright and dark moving objects, using the principle of morphological duality. An election stage is performed in order to keep the most significant flat zones among bright and dark objects. This new sequence is then filtered by studying the shape of flat zones in space-time: we remove those that do not undergo a given motion model. The remaining ones receive a unique label to differentiate the objects in the scene (the object sequence, So). The flat zones
produced belong to space-time, so we obtain object tracking across time and a boundary localization at each time. However our method from [1,2] does not deal with occlusions: occluded objects are fused with occluding ones when both are lighter (or darker) than the background. We solve this problem in the next section.
3
Topological Reconstruction
In some cases, occlusions cause the creation of the same flat zone for different objects in the scene. This drawback comes from our construction method. We name these flat zones multi-objects. The real objects (named single-objects) belonging to the same flat zone can be found by studying the topological structure of the multi-objects in space-time. We first describe when the creation of multi-objects occurs, and then give our method of topological reconstruction. It can determine a single-object in each branch of a multi-object. Branches which do not contain a single-object are filled with the labels of the other single-objects. This operation is performed using a spatio-temporal watershed algorithm with both grey-level and topological changes as hierarchical propagation priorities. 3.1
Generation of Multi-objects
The sequence Sm, from which flat zones are reconstructed, is computed by keeping the minima of the grey-level of the original sequence in each connected component of the filtered difference sequence Sd. So a flat zone recovers only the darkest object at occlusion places. Fig.1 shows the flat zones generated by our method in the case of two crossing bright objects. The sequence Sm, shown as two dark rectangles in Fig.1.c, is computed by keeping the minima of each connected component of the motion border. So only one flat zone is constructed for the two bright objects. This flat zone L2 is a multi-object.
Fig. 1. Generation of one flat zone for several objects: a) grey-level sequence, b) spatiotemporal representation along cross section C, c) grey level at point p, d) flat zones produced
To differentiate the objects in this flat zone, we suppose that objects cannot be ubiquitous, so the multi-object shape allows us to determine the number
of single-objects within it. We apply our topological reconstruction method, see Fig.2. Starting from the object sequence So, we decompose each spatio-temporal flat zone into regions where topology is preserved in some sense (deepened below). The study of adjacency between these regions allows us to classify them as representing an object or an occlusion place. The latter are filled with the labels from regions classified as single-objects. So we can allocate a label for each different object contained in a unique flat zone of the object sequence. We first describe the framework of the detection of objects in flat zones; this step is named connected block decomposition, and we give an algorithm to build the blocks. The propagation of object labels at occlusion places is described in Section 3.4, where we use a modified version of Salembier's region growing algorithm [6]. To overcome the oversegmentation due to the generation of labels, we define a region adjacency graph contraction to reduce the number of regions.
Fig. 2. Topological reconstruction steps: a) the object sequence So, b) topologically invariant region decomposition, c) removal of occlusion zones, d) relabeling using the watershed
3.2
Connected Block Decomposition
The study of topological changes (i.e., of neighbourhoods of spatial connected components in a multi-object) can give information about the number of single-objects in the multi-object. For example, consider a multi-object with a "Y" shape (Fig.2); the temporal dimension is along the vertical axis. We can determine that this object contains at least two different objects: indeed, the two upper branches point to the presence of two different objects, and the base of the "Y" is the place of the occlusion between these objects. We now develop a categorization of multi-object branches. For this we decompose the multi-object into connected blocks. They are connected components which do not contain a local topological change in the multi-object. A distinction is made within these blocks: those which contain a single-object and the other ones. Blocks of a same object which have two or more adjacent connected blocks in the past or future are considered as blocks that do not contain a single-object; these are the places where objects move away from or approach each other. We give a more formal definition of connected blocks and a method to build them.

Definition 1. The temporal section Rt of region R ⊂ E × T at time t is the set Rt = R ∩ (E × {t}).
Definition 2. A subset X of region R ⊂ E × T is called a connected block of R iff it is a maximal subset of R such that: a) X ≠ ∅ and X is connected, b) for all times, the temporal sections Xt are connected (spatially, because they belong to E), and c) a temporal section Xt−1 or Xt+1 of X which is adjacent to Xt may not be adjacent to Rt \ Xt.
Fig. 3. Examples of connected blocks on spatio-temporal objects: a) and b) two counter-examples, c) connected blocks for many configurations, where bright letters represent single-objects
There is some similarity between our decomposition and Reeb Graphs [3]. Given a smooth function f defined on a smooth surface M, the Reeb Graph of f associates to every connected component of a level set f−1(t) a point of height t. Smoothly evolving level curves then give rise to arcs in the graph, and critical points of the surface produce junction nodes in the graph. In the case of image sequences, the surface M would consist of the spatio-temporal contours of the sequence, and the function f would be time. Then the Reeb Graph would code the evolution of object spatial contours at different times. Our representation is simpler, in that we only consider connected components of objects, while we ignore inner contours and holes. 3.3
Connected Block Decomposition Algorithm
To obtain this decomposition, we build a label sequence named Stopo from the beginning to the end of the sequence, in a recursive scheme. We label the spatial connected components along time using the labels of connected components already labeled in the past. We use three arrays of labels:
– ARRNSO, which contains the labels whose connected component descended from a junction (two spatial connected components in the past) or fathered a disjunction (two in the future). This array permits us to know the connected blocks to remove, i.e. the labels not representing a single-object.
– ARRP[Lt], which contains the set of spatial connected component labels at time t−1 which are contiguous to the connected component at time t labeled Lt.
– ARRF[Lt−1], which contains the set of spatial connected component labels at time t which are contiguous to the connected component at time t−1 labeled Lt−1.
We have: L ∈ ARRP[L′] iff L′ ∈ ARRF[L]. The labeling is built as follows. At t = 0, a spatial labeling is performed in the time section t = 0. To go to step t + 1, a temporary labeling of the spatial connected components of So in time section t + 1 is carried out. The new labels are partially absorbed by the labels of previous time sections already processed: we keep a new label only if a birth or a topological change occurs. For this we use the arrays ARRP[] and ARRF[]. For each new label L′ (at time t + 1), we have to deal with several cases:
– if ARRP[L′] is empty, a new object appears at time t + 1; it is a birth. This new label is preserved for the next time section.
– if ARRP[L′] contains at least two labels, it is a junction. The label L′ is inserted in ARRNSO because this label indicates a topological change. This label represents the birth of a new connected block.
– if ARRP[L′] = {L}, we have to deal with two cases:
• if ARRF[L] = {L′}, the two connected components are neighbours in time. They belong to the same branch of the multi-object (i.e. to the same connected block). The new label L′ takes the value of the label L.
• if ARRF[L] contains at least two labels, there is a disjunction at time t. The label L′ is preserved, due to the topological change and the creation of a new connected block, and the label L is inserted in ARRNSO.
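The case analysis above can be sketched as follows (restricted, for brevity, to one spatial dimension so that spatial connected components are runs of 1s; the function names are ours):

```python
# Sketch of the connected block decomposition for a binary 1D+t sequence
# (rows = time sections). A run at time t keeps the label of its unique
# predecessor only if that predecessor has a unique successor; births,
# junctions and disjunctions start new blocks, and labels involved in
# junctions/disjunctions are recorded in ARRNSO.

def runs(row):
    """Spatial connected components of a binary row, as half-open runs."""
    out, start = [], None
    for i, v in enumerate(list(row) + [0]):
        if v and start is None:
            start = i
        elif not v and start is not None:
            out.append((start, i))
            start = None
    return out

def connected_blocks(seq):
    next_label, ARRNSO, prev, labeling = 1, set(), [], []
    for row in seq:
        row_runs = runs(row)
        # ARRP: labels at t-1 spatially adjacent (by overlap) to each run
        ARRP = [[lab for (s, e), lab in prev if s < b and a < e]
                for (a, b) in row_runs]
        ARRF = {}                       # current runs per previous label
        for i, parents in enumerate(ARRP):
            for lab in parents:
                ARRF.setdefault(lab, []).append(i)
        cur = []
        for run, parents in zip(row_runs, ARRP):
            if len(parents) == 1 and len(ARRF[parents[0]]) == 1:
                lab = parents[0]        # same connected block continues
            else:
                lab = next_label        # birth, junction, or disjunction
                next_label += 1
                if len(parents) >= 2:   # junction: merged block not single
                    ARRNSO.add(lab)
                elif len(parents) == 1: # disjunction: parent not single
                    ARRNSO.add(parents[0])
            cur.append((run, lab))
        prev = cur
        labeling.append(cur)
    return labeling, ARRNSO
```

On a "Y"-shaped multi-object the two upper branches keep their own labels while the merged block below is registered in ARRNSO, so the single-objects are exactly the blocks whose labels avoid ARRNSO.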
Fig. 4. Construction of Stopo and Sso: a) object sequence So, b) birth of a new connected block, c) construction of the connected block, d) topological change (disjunction, two new connected blocks), e) a new connected block is born, f) topological change (junction), g) sequence Stopo, h) sequence Sso (connected blocks of Stopo containing only single-objects)
Fig.4 represents the application of this algorithm in a general scheme, and Fig.5 illustrates the local count of neighbouring spatial connected components between two successive time sections. Now we have a marker for each single-object in each multi-object. We start from the object sequence So; it is decomposed into connected blocks (sequence Stopo) as described above. Connected blocks that neighbour a junction in the past or a disjunction in the future are removed, since they are not single-objects (sequence Sso). Single-object labels are propagated in their respective multi-objects to fill the occlusion sections. The propagation is based on an extension of the watershed using grey-level and topological information, described below.

Fig. 5. Connected component count: birth, same connected component, disjunction, junction

Fig. 6. Single vs. intensity and topological hierarchy: a) start of the flooding of a multi-object with two labels, b) grey-level propagation, c) topological and grey-level propagation

3.4
Topological Watershed Algorithm
The occlusion regions have been removed during our topological labeling. To reconstruct the segmentation at these places, we use a modified version of the watershed region growing defined by Salembier [6]. We use the grey-level information of the source sequence to allow the tracking of objects across time, and the topological changes in multi-objects to synchronize the propagation stage. We must use a double hierarchy (grey-level and topology) to ensure the correct propagation of labels in space-time. Indeed, consider Fig.6: we have a multi-object with two labels L1 and L2 of single-objects. The number in each spatial connected component corresponds to the time step in the flooding process. If we use only grey-level information, the label L2 reaches the disjunction before L1, so at this time section the spatial flooding is performed without taking the label L1 into account. To avoid this drawback, before flooding a time section we synchronize the labels: the labels are propagated normally if and only if they stay within the same connected block. Labels which cross the border of a new connected block are temporarily stopped. When all labels are stopped (i.e. they all lie on a connected block border) the flooding process restarts normally. This approach allows the labels to wait for the others before flooding a time section in space.
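The synchronization mechanism can be sketched as follows (a schematic model of ours that floods a graph of spatial connected components rather than pixels; the two ordered queues correspond to the OQG and OQT of the actual implementation):

```python
import heapq

# Schematic double-hierarchy flooding on a node graph: each node is one
# spatial connected component with a grey level and a connected-block id.
# Labels flood neighbours in order of grey-level similarity (queue OQG); a
# label about to enter a different connected block is parked in OQT, and
# only when OQG runs empty are the queues swapped, so all labels cross
# block borders together.

def topological_flood(nodes, edges, grey, block, seeds):
    label = dict(seeds)                       # node -> single-object label
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    OQG, OQT = [], []
    for n, lab in seeds.items():
        for m in adj[n]:
            heapq.heappush(OQG, (abs(grey[n] - grey[m]), m, lab, block[n]))
    while OQG or OQT:
        if not OQG:                           # every label is stopped:
            OQG, OQT = OQT, OQG               # synchronize and restart
        prio, n, lab, from_block = heapq.heappop(OQG)
        if n in label:
            continue
        if block[n] != from_block:            # crossing a block border: wait
            heapq.heappush(OQT, (prio, n, lab, block[n]))
            continue
        label[n] = lab
        for m in adj[n]:
            if m not in label:
                heapq.heappush(OQG, (abs(grey[n] - grey[m]), m, lab, block[n]))
    return label
```

In a Fig.6-like configuration the label that is closer in grey level to the occlusion block is parked until the other label reaches its border too, after which the occlusion block is shared between them.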
To implement this propagation scheme, the use of ordered queues (OQ) is natural to manage this double hierarchy. We use two OQs, OQG and OQT: the first one is used to flood the time sections, and the other one is used to store labels which are stopped. The hierarchy within these OQs is the similarity between grey-levels of neighbouring pixels. When OQG is empty (i.e., all pixels are stopped) and not all occlusion places are flooded yet, we swap these two OQs, so that the flooding process can continue with the labels synchronized. For instance, in Fig.6 the labels are propagated using OQG, and when L2 reaches the disjunction time section (step 1), it is stopped and stored in OQT. The label L1 progresses via OQG until it reaches the border between connected blocks (step 4); it is stopped and stored in OQT. The queue OQG is now empty: all labels belong to a border of topological change. We swap the two OQs and the flooding stage restarts (steps 5 and 6). We have described how to find objects in a multi-object and how to recover their shape in space-time, but some drawbacks remain. The search for an object, in some cases, generates an over-labeling. We describe in the next section when these cases occur, and how to deal with them. 3.5
Reduction of Oversegmentation
The construction of connected blocks strongly depends on the topology of the multi-object. In two cases, too many connected blocks are constructed. The first one is due to topological changes arising from small noisy objects; the second one depends on our connected block decomposition. For instance, when two objects cross each other, their trajectories together form an "X" shape; our block decomposition gives four labels B1 to B4 and an occlusion place C, corresponding to the four branches and the crossing of the "X" shape. We must keep all B labels because we do not know whether they represent the same object before and after the occlusion; indeed, some objects can hide other ones which can appear after the occlusion. To avoid the influence of topological changes due to small deformations, we pre-filter the object sequence So by removing, with an area opening, all small spatial connected components in each time section. Connected block decomposition is performed on the filtered sequence, and the watershed labeling stage is processed on the object sequence using the markers obtained from the filtered object sequence. To overcome the over-labeling inherent to some multi-object shapes in space-time, we have to use more global information; we use the contraction of a region adjacency graph. We define a model M(R) for each region R, which summarizes information about it. We use the mean grey-level M.g, the presence time (birth M.ts and death M.te), the multi-object membership M.object, and the size (i.e., the volume in space-time M.N) to characterise regions. A merging order O(R1, R2) is computed to determine the priority of merging regions. The region merging stage runs as follows: the two regions of the pair with the lowest merging order are merged together, the model M of the union is re-estimated, and the merging order is recomputed with the neighbouring regions.
Fig. 7. Application example: column a) original grey-level sequence, b) object sequence, c) connected blocks, d) single-objects, e) topological watershed, f) regions merged
When regions R1 and R2 merge, the new model M(R1 ∪ R2) is:
— M.g ← (N1 g1 + N2 g2)/(N1 + N2), M.N ← N1 + N2,
— M.ts ← min(M(R1).ts, M(R2).ts) and M.te ← max(M(R1).te, M(R2).te),
— M.object ← M(R1).object (label of So),
where Ni = M(Ri).N and gi = M(Ri).g. The merging order is defined as follows:

O(R1, R2) = [N1 (g1 − ḡ)² + N2 (g2 − ḡ)²] × [(N1 Δt(R1 ∪ R2)/Δt(R1) − N1 − N2)² + (N2 Δt(R1 ∪ R2)/Δt(R2) − N1 − N2)²],

where ḡ = M(R1 ∪ R2).g, and Δt(Ri) = M(Ri).te − M(Ri).ts + 1 is the region presence time. This term measures the error when two regions merge. It is the product of two factors. The
first one expresses the grey-level difference between the regions, and the second one compares the regions' mean spatial size. In some cases, we do not allow fusion (by setting the merging order to infinity): regions which belong to different multi-objects are not merged, and, in the same way, regions which share a common presence time cannot be merged, because objects cannot be ubiquitous.
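The model update and merging order can be transcribed directly (a sketch; the Region class and field names are our own packaging of the model M):

```python
INF = float('inf')

# Sketch of the region model M and the merging order O(R1, R2) defined
# above: N is the space-time volume, g the mean grey level, ts/te the
# birth and death times, obj the multi-object membership label.

class Region:
    def __init__(self, N, g, ts, te, obj):
        self.N, self.g, self.ts, self.te, self.obj = N, g, ts, te, obj

    def dt(self):
        return self.te - self.ts + 1          # presence time

def merge(r1, r2):
    """Re-estimated model of the union of R1 and R2."""
    N = r1.N + r2.N
    g = (r1.N * r1.g + r2.N * r2.g) / N
    return Region(N, g, min(r1.ts, r2.ts), max(r1.te, r2.te), r1.obj)

def merging_order(r1, r2):
    if r1.obj != r2.obj:                      # different multi-objects
        return INF
    if r1.ts <= r2.te and r2.ts <= r1.te:     # shared presence time
        return INF
    u = merge(r1, r2)
    grey = r1.N * (r1.g - u.g) ** 2 + r2.N * (r2.g - u.g) ** 2
    size = ((r1.N * u.dt() / r1.dt() - u.N) ** 2
            + (r2.N * u.dt() / r2.dt() - u.N) ** 2)
    return grey * size
```

Two regions with equal mean spatial size per time section give a zero size factor, so they are merged first regardless of their grey-level difference.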
4
Results and Conclusion
Fig.7 shows the steps of our topological reconstruction. From the original sequence (Fig.7.a), flat zones are produced by our approach [1,2] (Fig.7.b); they contain three multi-objects. Fig.7.c represents the decomposition into connected blocks, and single-object blocks are shown in Fig.7.d. The labels are propagated in their respective multi-objects (Fig.7.e) using our double hierarchical watershed algorithm. The over-labeling is reduced with our region adjacency graph contraction method (Fig.7.f). An animated version of Fig.7, as well as other practical applications (in particular, results on standard motion sequences), can be found at the URL http://arthur.u-strasbg.fr/∼agnus/DGCI2002/. The flat zone generation and topological reconstruction are obtained with low-level morphological operators that are well suited for fast implementations, thanks to optimal implementations which access each pixel only once. This contrasts with optical flow based methods, which require repeated iterations.
References
1. V. Agnus, C. Ronse, and F. Heitz. Segmentation spatio-temporelle de séquences d'images. In 12ème Congrès Francophone Reconnaissance des Formes et Intelligence Artificielle, volume 1, pp. 619–627, Paris, France, Feb. 2000.
2. V. Agnus, C. Ronse, and F. Heitz. Spatio-temporal segmentation using 3D morphological tools. In 15th International Conference on Pattern Recognition, volume 3, pp. 885–888, Barcelona, Spain, Sep. 2000.
3. S. Biasotti, B. Falcidieno, and M. Spagnuolo. Extended Reeb graphs for surface understanding and description. In 9th Discrete Geometry for Computer Imagery Conference, LNCS, Springer-Verlag, pp. 185–197, Uppsala, 2000.
4. B. Marcotegui and F. Meyer. Morphological segmentation of image sequences. In Mathematical Morphology and its Applications to Image Processing, J. Serra and P. Soille, Eds., pp. 101–108, 1994.
5. F. Marqués. Temporal stability in sequence segmentation using the watershed algorithm. In Mathematical Morphology and its Applications to Image Processing, P. Maragos, R. W. Schafer, and M. A. Butt, Eds., pp. 321–328, 1996.
6. P. Salembier, P. Brigger, J. R. Casas, and M. Pardàs. Morphological operators for image and video compression. IEEE Trans. on Image Proc., 5(6): pp. 881–898, June 1996.
7. L. Vincent. Morphological grayscale reconstruction in image analysis: applications and efficient algorithms. IEEE Trans. on Image Proc., vol. 2, pp. 176–201, Apr. 1993.
On the Strong Property of Connected Open-Close and Close-Open Filters

José Crespo1, Víctor Maojo1, José A. Sanandrés1, Holger Billhardt1, and Alberto Muñoz2

1
Laboratorio de Inteligencia Artificial, Facultad de Informática, Universidad Politécnica de Madrid, 28660 Boadilla del Monte (Madrid), Spain
[email protected]
2 Departamento de Radiodiagnóstico, Hospital 12 de Octubre, Ctra. Andalucía Km. 5'400, 28041 Madrid
Abstract. This paper studies connectivity aspects that arise in image operators that process connected components of an input image. The focus is on morphological image analysis (i.e., on increasing image operators) and, in particular, on a robustness property satisfied by certain morphological filters known as the strong property. The behavior of alternating compositions of openings and closings will be investigated under certain assumptions, especially using a connected-component preserving equation. A significant result is the finding that such an equation cannot guarantee the strong property of certain connected alternating filters. The class of openings and closings by reconstruction should therefore be defined in a way that avoids such situations.
1
Introduction
This paper studies the strong property of morphological filters that satisfy certain conditions regarding connectivity. The strong property is a robustness property introduced in the morphological filtering framework [1] [2]. Morphological filters that satisfy this interesting property are more robust, within certain limits, to small variations of the input image such as noise. The strong property is somewhat related to connectivity issues in the sense that some morphological filter types that do not satisfy the strong-property condition in general are in fact strong when they are connected. In this work we will study this property for connected filters that satisfy different requirements. A central point will be to study connected filters that satisfy a well-known additional condition requiring that not only the filter output but also each of its connected components be invariant. In this paper we will obtain a significant result concerning the strong property of alternating filters when such a condition is used to define them, in the sense
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 165–174, 2002. © Springer-Verlag Berlin Heidelberg 2002
that the fact that an alternating filter is strong does not imply that the dual family under that definition is strong as well. Of course, this result does not contradict the morphological duality principle; instead, the issue is that the condition does not treat openings and closings symmetrically (and dually). We therefore define the class of openings and closings by reconstruction in a way that avoids such a problem. Those definitions have been used previously by the authors of this paper, and this work strongly supports that choice. The outline of the paper is as follows. Section 2 provides some background on mathematical morphology, in which some concepts concerning connectivity are discussed. Then, Section 3 studies the strong property and the behavior of connected openings, closings, and alternating filters that satisfy certain conditions. An important result showing certain problems motivates the definition of openings and closings by reconstruction that avoids them. Finally, a conclusion section ends the paper.
2

Some Concepts and Definitions

2.1

Basic Notions
Mathematical morphology concerns the application of set theory concepts to image analysis. General references are [3] [1] [2] [4]. A basic set of notions on mathematical morphology is the following:
– Mathematical morphology deals with increasing mappings defined on a complete lattice [5] [2]. In a complete lattice there exists an ordering relation, and two basic operations called infimum and supremum (denoted by ∧ and ∨, respectively).
– A transformation ψ is increasing if and only if it preserves ordering.
– A transformation ψ is idempotent if and only if ψψ = ψ.
– A transformation ψ is a morphological filter if and only if it is increasing and idempotent.
– An opening (often denoted by γ) is an antiextensive morphological filter.
– A closing (often denoted by ϕ) is an extensive morphological filter.
In all theoretical expressions in this paper, we will be working on the lattice P(E), where E is a given set of points called the space and P(E) denotes the set of all subsets of E (i.e., P(E) = {A : A ⊆ E}). In other words, inputs and outputs will be supposed to be sets or, equivalently, binary functions. In this lattice, the sup and the inf operations are the set union and the set intersection operations, while the order relation is the set inclusion relation ⊆. Even though we will work on the lattice P(E), results are extendable to grey-level functions by means of the so-called flat operators [6] [1].

2.2

The Point Opening γx: The Connected Component Extraction Operator
Let us assume that the space E is provided with a definition of connectivity. For all pairs of points x, y in E, it is possible to establish whether they are connected
or not. For example, when the space of points E is R2 or Z2 (associated with the usual connectivity), a pair of points x, y in a set A is said to be connected if there exists a path linking x and y that is also included in A. Connectivity is established more generally in [2] by means of the connected class concept. A connected class C in P(E) is a subset of P(E) such that (a) ∅ ∈ C and, for all x ∈ E, {x} ∈ C; and (b) for each family {Ci} in C, ⋂i Ci ≠ ∅ implies ⋃i Ci ∈ C. No definition of neighborhood relationships (i.e., no particular topology) has been assumed for E in the definition of the connected class C. The subclass Cx that has all members of C that contain x (i.e., Cx = {C ∈ C : x ∈ C}) defines an opening called the point opening [2]. The point opening of a point x, denoted by γx, has as invariant class (i.e., the class formed by those sets that are left unchanged by γx) the class Cx ∪ {∅}. For all x ∈ E, A ∈ P(E)
γx(A) = ⋃ {C : C ∈ Cx, C ≤ A}.  (1)
The operation γx is therefore idempotent (i.e., γx(γx(A)) = γx(A) or, equivalently, γxγx = γx) and antiextensive (i.e., γx(A) ≤ A or, equivalently, γx ≤ I). When we associate, for example, the operation γx with the usual connectivity in Z2, the opening γx(A), A ∈ P(Z2), can be defined as the union of all paths that contain x and that are included in A. Figure 1 shows an example of the result of γx(A), where the set A comprises the black regions (two connected components or grains) and x belongs to a connected component of A. When a space E is equipped with the opening γx, connectivity issues in E can be expressed using γx. We can establish, for example, whether or not a set A ∈ P(E) is connected (a set A is connected if and only if A = γx(A), x ∈ A), and whether or not a pair of points x, y belong to the same connected component in A (x, y belong to the same connected component in A if and only if x ∈ γy(A) or, equivalently, if and only if γx(A) = γy(A) ≠ ∅).
Fig. 1. Connected component extraction. The opening γx(A) extracts the connected component of A to which x belongs. Panels: (a) input set A (in black), with marked point x; (b) γx(A).
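On a discrete grid, the point opening and the connectivity predicates built from it can be sketched as a small flood fill. This is an illustrative sketch assuming 4-connectivity and pixels represented as (row, col) pairs; all names are ours:

```python
from collections import deque

def point_opening(A, x):
    """γx(A): the connected component of A containing x (4-connectivity),
    or the empty set if x is not in A.  A is a set of (row, col) pixels."""
    if x not in A:
        return set()
    comp, queue = {x}, deque([x])
    while queue:
        r, c = queue.popleft()
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nb in A and nb not in comp:
                comp.add(nb)
                queue.append(nb)
    return comp

def is_connected(A):
    """A is connected iff A = γx(A) for some (any) x in A."""
    return not A or point_opening(A, next(iter(A))) == A

def same_component(A, x, y):
    """x and y belong to the same connected component of A iff x ∈ γy(A)."""
    return x in point_opening(A, y)
```

The two predicates mirror the characterizations given in the text: connectedness via A = γx(A), and membership in the same grain via x ∈ γy(A).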
The dual operation of γx is the closing ϕx, which is equal to ∁γx∁, i.e., ϕx(A) = E \ γx(E \ A) for all A ∈ P(E), where \ denotes set difference and ∁ symbolizes the set complementation operator.

2.3

Connected Operators and Connected Component Locality
After establishing the connected class concept in the previous section, let us define the concept of connected operator [7] [8] [9] [10].
– A connected operator only extends the input image flat zones (or piecewise-constant regions).
It should be noticed that, from its definition, a connected operator cannot introduce discontinuities and, therefore, preserves shapes. In a binary framework, connected operators are grain-removing and pore-filling operations. Connected filters are connected operators that are idempotent. The fact that connected filters preserve shapes makes them useful tools for image simplification and segmentation purposes [11] [12]. The concept of connected-component (c.c.) local operator, which is defined next, embraces both increasing and non-increasing operators that treat each grain and pore independently of the rest of the input [13].

Definition 1 Let E be a space equipped with γx. An operator ψ : P(E) → P(E) is said to be connected-component local (or c.c. local) if and only if, ∀A ∈ P(E), ∀x ∈ E,
(a) γx(A) ≠ ∅, γxψ(A) = ∅ ⇒ ∀B ∈ P(E), γx(A) = γx(B) : γxψ(B) = ∅;
(b) γx(A) ≠ ∅, γxψ(A) ≠ ∅ ⇒ ∀B ∈ P(E), γx(A) = γx(B) : γxψ(B) ≠ ∅.
3
Connected Filters and the Strong Property
In this section we will study the strong property and its relationship with openings and closings, and in particular with their alternating sequential compositions. The strong property [14] [2] is a robustness property satisfied by certain morphological filters.

Definition 2 A filter Ψ is strong if and only if Ψ = Ψ(I ∨ Ψ) = Ψ(I ∧ Ψ).

If a filter is strong, then certain variations (such as noise) of the input do not cause variations in the output. Figure 2 illustrates this concept. Let A be an input set. If Ψ is strong, then for all sets B such that A ∧ Ψ(A) ≤ B ≤ A ∨ Ψ(A), we have Ψ(B) = Ψ(A). Another way to state this is to say that Ψ is strong if and only if Ψ is both a ∨-filter and a ∧-filter (see [14] [2] for those related concepts). Openings and closings are, respectively, anti-extensive and extensive filters that satisfy the strong property. Neither connectivity nor γx appears in the definition of this important property. Nevertheless, in practice the strong property
Fig. 2. Strong filter example. (a) Sets A, B, and Ψ(A). If Ψ is strong, then Ψ(B) = Ψ(A).
is in some way quite related to connectivity and, in particular, to connected operators. Some types of filters are not strong when they are not connected but, on the contrary, satisfy this property if they are connected. This can be the case, for example, of the connected alternating filters ϕγ and γϕ. In fact, the central point of this section will be to study the strong property under certain conditions for ϕ and γ, and to use it to characterize the class of filters by reconstruction. In some theoretical studies in mathematical morphology [2] [7] [15], the following equation, where ψ is an operator, has served to define some types of filters, especially connected filters:

ψγxψ = γxψ.  (2)

This interesting equation will be studied in detail in the following sections.

3.1
Opening Case
If ψ is an opening γ in equation (2), we have the following property:

Property 1 γγxγ = γxγ =⇒ γ is c.c. local.

Therefore, such an opening removes grains and treats each one independently from the rest. In fact, if an opening γ is c.c. local and connected, then we can define γ using the useful concept of trivial opening:

γ = ⋁x γ◦ γx,  (3)
where γ◦ is a trivial opening. Definition 3 An opening γ◦ is a trivial opening [2] if
γ◦(A) = A, if A satisfies an (increasing) criterion; γ◦(A) = ∅, otherwise.
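Expression (3) says that a connected, c.c. local opening applies a trivial opening to each grain and takes the supremum of the results. A minimal self-contained sketch of this construction, under assumptions of ours (4-connectivity, pixels as (row, col) pairs, and an area criterion as an example of an increasing criterion):

```python
from collections import deque

def components(A):
    """Decompose a set of (row, col) pixels into 4-connected components."""
    A, comps = set(A), []
    while A:
        seed = next(iter(A))
        comp, queue = {seed}, deque([seed])
        while queue:
            r, c = queue.popleft()
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in A and nb not in comp:
                    comp.add(nb)
                    queue.append(nb)
        A -= comp
        comps.append(comp)
    return comps

def trivial_opening(C, crit):
    """γ◦: C itself when C satisfies the increasing criterion, else ∅."""
    return set(C) if crit(C) else set()

def opening_from_trivial(A, crit):
    """Expression (3): γ(A) = ∨x γ◦ γx(A).  Since γx(A) ranges over the
    grains of A, apply γ◦ to every grain and take the union."""
    out = set()
    for grain in components(A):
        out |= trivial_opening(grain, crit)
    return out
```

With `crit = lambda C: len(C) >= λ` this yields an area opening, a standard example of a connected, c.c. local opening; idempotence follows because surviving grains satisfy the criterion unchanged.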
Notice that the trivial opening definition does not imply c.c. locality. In fact, an opening by reconstruction γ can be computed using a non-local trivial opening (and, nevertheless, the resulting opening by reconstruction would of course be local). Thus, for connected openings, it can be said that equation (2) is equivalent to expression (3). 3.2
Closing Case
A different situation arises using equation (2) when ψ is a closing ϕ. We introduce the following property:

Property 2 A closing ϕ is connected =⇒ ϕγxϕ = γxϕ.

Property 2 states that any connected closing satisfies equation (2), i.e., that grains of the closing output are invariant under the closing. A proof is relatively straightforward. Let E be the space, let ψ be a closing, and let G be a connected component of the output ψ(A), where A is an input set. Then ψ(G) must be equal to G if ψ is idempotent, extensive, and increasing. Since ψ is connected, the possibilities are (assuming the space E is connected): (a) G, (b) ∅, or (c) E. However, (b) is not possible (ψ is extensive), and (c) is not possible either (ψ is increasing and idempotent). Therefore, ψ(G) = G.

An important aspect is that equation (2), when ψ is a closing ϕ, does not imply that the closing ϕ is c.c. local. Therefore, a connected closing that satisfies equation (2) (in fact, from Property 2, all of them do) cannot always be expressed by the dual of expression (3), which would be

ϕ = ⋀x ϕ◦ ϕx,  (4)
where ϕ◦ is a trivial closing. The definition of the trivial closing follows (it is the dual of that of the trivial opening).

Definition 4 A closing ϕ◦ is a trivial closing [2] if ϕ◦(A) = E, if A satisfies an increasing criterion; ϕ◦(A) = A, otherwise.

3.3
Alternating Filters ϕγ and γϕ
In this section we will obtain an important result by investigating the strong property of alternating filters using equation (2). Later, we will define the filter by reconstruction class in a way that avoids this type of situation.
In [7] [15], it has been shown that if a connected opening γ and a connected closing ϕ satisfy equation (2) ψγxψ = γxψ (substituting γ for ψ, and similarly for ϕ), then the alternating filter ϕγ has the desirable strong property. From Property 2, we can also state this as follows.

Property 3 Let γ be a connected opening that satisfies γγxγ = γxγ, and let ϕ be a connected closing (i.e., γ and ϕ satisfy equation (2)). Then, the alternating filter ϕγ is strong.

However, there is a problem in using equation (2) to characterize connected openings and closings: it is not symmetrical, i.e., it does not treat openings and closings in the same way. We have seen that, for openings, equation (2) means that we can express the opening as in expression (3). However, for closings, equation (2) does not imply that we can express the closing as in expression (4) (which is the dual of expression (3)). Therefore, the following important (but somewhat undesirable) result should not surprise us excessively:

Property 4 Let γ and ϕ be, respectively, a connected opening and a connected closing that satisfy equation (2). Then, the alternating filter γϕ is not necessarily strong.

Notice that Property 4 does not contradict the morphological duality principle. Let us show and study an example of a case in which γϕ is not strong, under the assumptions of Property 4. Let us first consider the opening and closing used in the example in Figure 3, which are computed employing reconstruction algorithms [16]. The opening by reconstruction will use as marker the result of the erosion εB, where B is the structuring element that can be seen in Fig. 3(a), along with the input set. The opening γ removes vertical grains of width smaller than that of B, and therefore γ eliminates the small vertical grains at both sides, as shown in Fig. 3(b). The closing ϕ uses as marker the dilation δC, where the structuring element C is composed of two particles, as can be seen in Fig. 3(c).
The closing ϕ fills vertical pores of width 1 if there are vertical grains at left and right at a certain distance determined by C (the distance between the central pore and the grains at each side). In Fig. 3(d), the central pore has been filled. Both γ and ϕ satisfy equation (2), but γ is c.c. local whereas ϕ is not. The example in Figure 4 illustrates Property 4. Fig. 4(a) shows the input set (which is the same as that in Fig. 3(a)). The closing first fills the central pore in Fig. 4(b), and the subsequent opening removes the grains at the sides in Fig. 4(c). The result of A ∨ γϕ(A) is shown in Fig. 4(d), where we can notice that the central pore is not filled, and the grains at both sides are missing. Because the grains at both sides are missing, in Fig. 4(e) the result of the subsequent closing does not fill the central pore. Then, in Fig. 4(f) the last opening leaves the image unchanged, and the final result γϕ(A ∨ γϕ(A)) is shown. We see clearly that γϕ(A ∨ γϕ(A)) ≠ γϕ(A), and that, therefore, γϕ is not strong. The reason for this result, i.e., that ϕγ is strong whereas γϕ is not, when γ and ϕ are connected and satisfy equation (2), is the non-symmetrical nature of equation (2).
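A strong-property check of the kind carried out graphically in this example can also be run mechanically on pixel sets. The sketch below is ours, not the paper's construction: it tests Definition 2 on one input, using a simple structural opening (erosion then dilation by a horizontal pair) as the filter under test:

```python
def erode(A, B):
    """Binary erosion of A by structuring element B (origin in B assumed),
    on sets of (row, col) pixels."""
    return {p for p in A
            if all((p[0] + b[0], p[1] + b[1]) in A for b in B)}

def dilate(A, B):
    """Binary dilation of A by B."""
    return {(p[0] + b[0], p[1] + b[1]) for p in A for b in B}

def opening(A, B=((0, 0), (0, 1))):
    """Structural opening γ = δB εB: an antiextensive, idempotent,
    increasing filter, hence strong."""
    return dilate(erode(A, set(B)), set(B))

def is_strong_on(psi, A):
    """One-input check of Definition 2 on the lattice P(E):
    Ψ(A ∨ Ψ(A)) = Ψ(A) = Ψ(A ∧ Ψ(A)), with ∨ = union and ∧ = intersection."""
    pA = psi(A)
    return psi(A | pA) == pA and psi(A & pA) == pA
```

For an opening the check holds trivially (antiextensivity gives A ∨ γ(A) = A and A ∧ γ(A) = γ(A)); the interest of such a test is for compositions like γϕ, where a single counterexample input suffices to show the filter is not strong.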
Fig. 3. Example of results of opening γ and closing ϕ. Panels: (a) input set A (in black) and structuring element B underneath; (b) γ(A); (c) input set A (in black) and structuring element C underneath; (d) ϕ(A). Note: the structuring elements B and C are both centered.
3.4
Openings and Closings by Reconstruction, and the Filter by Reconstruction Class
The "negative" result of Property 4 strongly supports that classes of connected openings and closings be defined by dual expressions. An important class is that defined by expressions (3) and (4). The authors of this paper have used in other works [9] [13] [17] expressions (3) and (4) to define the important class of openings and closings by reconstruction. Since those expressions are dual of each other, openings and closings are not considered differently.

Definition 5 An opening γ is an opening by reconstruction if and only if γ = ⋁x γ◦ γx, where γ◦ is a trivial opening.

Definition 6 A closing ϕ is a closing by reconstruction if and only if ϕ = ⋀x ϕ◦ ϕx, where ϕ◦ is a trivial closing.

The duality of the previous definitions eliminates problems such as that commented on in Property 4, and we have that: if γ and ϕ satisfy, respectively, Definitions 5 and 6, then ϕγ is strong and, by duality, γϕ is strong as well. We then normally use the term filters by reconstruction to denote those filters composed of openings and closings by reconstruction. Sometimes filters by reconstruction (in particular, openings and closings by reconstruction) are defined in terms of
Fig. 4. Strong property not satisfied. Panels: (a) input set A (in black); (b) ϕ(A); (c) γϕ(A); (d) A ∨ γϕ(A); (e) ϕ(A ∨ γϕ(A)); (f) γϕ(A ∨ γϕ(A)). Since γϕ(A) (in (c)) is different from γϕ(A ∨ γϕ(A)) (in (f)), γϕ is not strong. Note: the opening γ and closing ϕ used are those described in Fig. 3.
algorithms by reconstruction, which is an alternative and valid way to define them if the definition indicates what the markers used in the reconstruction are (in general, algorithms by reconstruction just guarantee the connectedness of the resulting operator). Nevertheless, we prefer definitions 5 and 6 since they are not linked to any particular implementation.
4
Conclusions
This paper has investigated the strong property of connected morphological filters, in particular of connected openings, closings, and alternating filters. We have first studied a well-known condition that guarantees that the connected components of a filter output are also invariant. This condition does not treat openings and closings symmetrically, and we have obtained a result in which a property satisfied by a filter is not satisfied by the dual family under that condition (a result that of course does not stand in contradiction to the morphological duality principle). This result supports that the class of openings and closings by reconstruction should be defined by dual formulae, which prevents results such as the one discussed in this paper.
Acknowledgements. This work has been supported in part by "Fondo de Investigación Sanitaria" (Spanish Ministry of Health) and by "Sociedad Española de Radiología Médica".
References
1. Serra, J.: Mathematical Morphology. Volume I. London: Academic Press (1982)
2. Serra, J., ed.: Mathematical Morphology. Volume II: Theoretical Advances. London: Academic Press (1988)
3. Matheron, G.: Random Sets and Integral Geometry. New York: Wiley (1975)
4. Heijmans, H.: Morphological Image Operators (Advances in Electronics and Electron Physics; Series Editor: P. Hawkes). Boston: Academic Press (1994)
5. Birkhoff, G.: Lattice Theory. American Mathematical Society, Providence (1984)
6. Maragos, P., Schafer, R.: Morphological filters — part I: Their set-theoretic analysis and relations to linear shift-invariant filters. IEEE Trans. Acoust. Speech Signal Processing 35 (1987) 1153–1169
7. Serra, J., Salembier, P.: Connected operators and pyramids. In: Proceedings of SPIE, Non-Linear Algebra and Morphological Image Processing, San Diego. Volume 2030. (1993) 65–76
8. Crespo, J., Serra, J., Schafer, R.: Image segmentation using connected filters. In Serra, J., Salembier, P., eds.: Workshop on Mathematical Morphology. (1993) 52–57
9. Crespo, J., Serra, J., Schafer, R.: Theoretical aspects of morphological filters by reconstruction. Signal Processing 47 (1995) 201–225
10. Heijmans, H.: Connected morphological operators for binary images. Computer Vision and Image Understanding 73 (1999) 99–120
11. Crespo, J., Schafer, R., Serra, J., Gratin, C., Meyer, F.: The flat zone approach: A general low-level region merging segmentation method. Signal Processing 62 (1997) 37–60
12. Crespo, J., Maojo, V.: Shape preservation in morphological filtering and segmentation. In: XII Brazilian Symposium on Computer Graphics and Image Processing, SIBGRAPI 99, IEEE Computer Society Press (1999) 247–256
13. Crespo, J., Schafer, R.: Locality and adjacency stability constraints for morphological connected operators. Journal of Mathematical Imaging and Vision 7 (1997) 85–102
14. Matheron, G.: Filters and lattices. In Serra, J., ed.: Mathematical Morphology. Volume II: Theoretical Advances. London: Academic Press (1988) 115–140
15. Salembier, P., Serra, J.: Flat zones filtering, connected operators, and filters by reconstruction. IEEE Transactions on Image Processing 4 (1995) 1153–1160
16. Soille, P.: Morphological Image Analysis: Principles and Applications. Springer-Verlag, Berlin, Heidelberg, New York (1999)
17. Crespo, J., Maojo, V.: New results on the theory of morphological filters by reconstruction. Pattern Recognition 31 (1998) 419–429
Advances in the Analysis of Topographic Features on Discrete Images Pierre Soille EC Joint Research Centre Institute for Environment and Sustainability TP 262, I-21020 Ispra, Italy
[email protected] Abstract. By viewing the grey scale values of 2-dimensional (2-D) images as elevation values above the image definition domain, geomorphological terms such as crest lines, watersheds, catchment basins, valleys, and plateaus have long been used in digital image processing for referring to image features useful for image analysis tasks. Because mathematical morphology relies on a topographic representation of 2-D images allowing for grey scale images to be viewed as 3-D sets, it naturally offers a wide variety of transformations for extracting topographic features. This paper presents some advances related to the imposition of minima, the lower complete transformation, the hit-or-miss transform, and the extraction of crest lines by a skeletonisation procedure. Keywords. Mathematical morphology, minima imposition, plateau, lower complete transformation, grey scale hit-or-miss, crest lines, grey scale skeletonisation, watersheds
1
Introduction
The topographic representation of grey scale images has long been considered in digital image processing for extracting features such as ridges and valleys [11], watersheds [8], and plateaus. In contrast to approaches based on the analysis of the discrete derivatives of the image, mathematical morphology [20] offers a set theoretic framework which can be considered as a suitable alternative approach in some applications. In this paper, we adopt the morphological framework and revisit several transformations dealing with the processing of topographic features of digital 2-dimensional (2-D) grey scale images. Background definitions and notations used in this paper are described in [26]. In section 2, we introduce a minima imposition technique based on a carving procedure rather than the usual reconstruction based technique. This approach is a suitable alternative in situations where the creation of flat regions by the reconstruction would impede further processing. We then show in section 3 that the suppression of image plateaus can be achieved by an interpolation technique taking into account the morphology of both the descending and ascending plateau
This work was supported by the EC-JRC EuroLandscape Project.
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 175–186, 2002. c Springer-Verlag Berlin Heidelberg 2002
borders. This is enabled by a grey weighted geodesic distance called the geodesic time function. The proposed approach leads to an enhanced algorithm for the lower complete transformation, which is a necessary preprocessing step for computing the steepest slope path linking an arbitrary pixel to the regional minimum of its corresponding catchment basin. In section 4, we extend the hit-or-miss transform to the processing of grey scale images by introducing the notion of constrained and unconstrained hit-or-miss. These operations are useful for extracting specific grey scale neighbourhood configurations. Before concluding, we show in section 5 that crest lines can be obtained by order independent homotopic thinnings.
2
Minima Imposition by Carving
The minima imposition is an image transformation which modifies an input grey scale image in such a way that the only remaining minima are those given by an additional input image called the marker image. Note that we refer to an image minimum in the sense of a regional minimum, i.e., a connected component of pixels having the same value and whose external boundary pixels all have a greater value. Denoting by f the input grey scale image, the marker image fm is defined as follows: fm(x) = 0 if x belongs to a marker, tmax otherwise. The minima imposition is usually achieved by performing the following morphological reconstruction by erosion Rε [2, 14]: Rε_{(f+1)∧fm}(fm). This procedure is a connected1 algebraic closing which may create large flat regions because the catchment basins of all unmarked minima are partially filled in. This is not a problem if one is interested in computing the watersheds of the filtered image. However, there are applications where markers of thin nets can be detected while a watershed could not be used for extracting the nets due to gaps occurring along the net. Indeed, in this situation, the reconstruction based minima imposition removes all parts of the net which do not contain a marker. In addition, if one would like to simulate a flow of water on the topographic representation of the image, the reconstruction based minima imposition has the drawback of creating potentially large flat regions which in turn pose a problem for the determination of flow directions from a local neighbourhood. These problems motivated us to look for an alternative way to impose the minima of an image. Rather than filling the unmarked regional minima, we propose to carve the image in such a way that these regional minima flow further down. The carving procedure also relies on a flooding simulation but, rather than filling in unmarked minima, a non-ascending path linking them to their nearest marked minimum is created.
The pseudo-metric used for calculating the distance between an unmarked minimum and a marked minimum is based on the flooding paths. The procedure is detailed hereafter. First, all regional minima not marked by the marker image are stored in a binary image. The flooding simulation then starts by inserting into a priority 1
An operator is connected [21, 19] if and only if it coarsens the partition of any given input image, the partitioning being taken in the sense of that induced by the flat zones of the image.
queue the external boundary pixels of the marker minima, the priority being inversely proportional to the intensity value of the considered pixel. Pixels are then iteratively retrieved from the non-empty queue with the highest priority (i.e., lowest elevation) while inserting their unprocessed neighbours in the priority queue (again with a priority inversely proportional to their elevation). An additional image is used to store the direction of propagation of the flood at each pixel. Before inserting a pixel in the queue, we check whether it belongs to an irrelevant minimum. If so, the stored directions are used to backtrack the flooding path until we reach a pixel of elevation equal to that of the reached minimum, and all pixels along the path are set to this elevation. The reached minimum is then discarded from the binary mask of irrelevant minima while inserting all its unprocessed external boundary pixels in the priority queue. The process terminates when the priority queue is empty. By construction, all irrelevant minima are removed by the procedure because each of them can be linked to a marker by a non-ascending path. A comparison between the minima imposition by reconstruction and by carving is performed in Fig. 1 on two samples of a Pan-European digital elevation model (DEM). Given the scale of the input DEM (250 m), a valid assumption is to consider that all minima occurring inside the DEM definition domain are irrelevant because almost all European catchment basins are connected to the sea. Therefore the fillhole transformation generalised to grey scale images by a reconstruction by erosion can be used to suppress all interior minima as originally proposed in [29]: FILL(f) = Rε_f(fm), where fm(x) = f(x) if x lies on the border of f, max(f) otherwise.
Figures 1(a)–(b) illustrate that the reconstruction based minima imposition fills most of the fragments of the stream, so that any subsequent flow simulation process will be unable to follow the stream path indicated by lower elevation values in the initial data. Figures 1(d)–(e) illustrate another problem occurring when a stream goes through a very narrow valley. Indeed, if it happens that the DEM resolution is too coarse to resolve a narrow valley, the fillhole procedure will fill it upstream, thereby creating a large flat region suppressing relevant topography in the valley bottom. The images appearing at the bottom of Fig. 1 show that both problems are solved by the carving procedure. Contrary to the minima imposition by reconstruction, the carving procedure is not a connected algebraic closing but merely an anti-extensive and idempotent operation. In contrast to the flooding simulation introduced in [5, 34] for the computation of watershed boundaries and [12] for the computation of drainage networks, the proposed carving algorithm does not require the prior sorting of the image pixels in increasing order of elevation. Indeed, the sorting is achieved by the priority queue, which is initialized by the relevant image minima. A similar approach has already been used for the computation of watersheds [3] and contributing drainage areas [30]. As concerns the computation of the binary mask of all regional minima, it is readily obtained thanks to the fast algorithm proposed by Breen and Jones [4], see also [26, p. 169].
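The carving procedure described above can be condensed into a short priority-queue flood. The sketch below is a simplified reading of ours, not the paper's exact algorithm: pixels and neighbourhoods are abstract, the "irrelevant minima" are passed in precomputed, and stale heap priorities after a carve are simply ignored:

```python
import heapq

def carve(f, marker_minima, irrelevant_minima, neighbours):
    """Minima imposition by carving (simplified sketch).
    f: dict pixel -> elevation, modified in place and returned.
    marker_minima: set of pixels belonging to the marked minima.
    irrelevant_minima: dict pixel -> id of the unmarked regional minimum
        it belongs to (pixels outside any such minimum are absent).
    neighbours(p): iterable of the neighbours of pixel p."""
    heap, done, parent = [], set(marker_minima), {}
    for p in marker_minima:              # seed: external boundary pixels
        for q in neighbours(p):
            if q in f and q not in done:
                done.add(q)
                parent[q] = p
                heapq.heappush(heap, (f[q], q))
    alive = set(irrelevant_minima.values())
    while heap:
        _, p = heapq.heappop(heap)       # lowest elevation first
        mid = irrelevant_minima.get(p)
        if mid in alive:                 # reached an irrelevant minimum:
            alive.discard(mid)           # backtrack the flooding path and
            r, bottom = p, f[p]          # carve it down to this elevation
            while r in parent and f[parent[r]] > bottom:
                r = parent[r]
                f[r] = bottom
        for q in neighbours(p):          # continue the flood
            if q in f and q not in done:
                done.add(q)
                parent[q] = p
                heapq.heappush(heap, (f[q], q))
    return f
```

On a 1-D profile [0, 3, 1, 4] with the left minimum marked, the barrier of elevation 3 is carved down to 1, giving the unmarked minimum a non-ascending path to the marker.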
P. Soille
Fig. 1. Minima imposition: Comparison between procedures based on morphological reconstruction by erosion and the proposed carving. The displayed samples were extracted from a Pan-European DEM at the resolution of 250 m. The markers were defined for both procedures as the regional minima of the input DEM connected to its border, according to [29].
Advances in the Analysis of Topographic Features on Discrete Images
3 Lower Complete Transformation
In some applications, it may be useful to transform an input image into a lower complete image, i.e., an image where the only pixels having no neighbour with a strictly lower intensity are those belonging to the regional minima of the image [13]: f is lower complete if and only if ∀p ∉ RMIN(f), ∃p′ ∈ N_G(p) | f(p′) < f(p). For example, a lower complete image is necessary for calculating the steepest slope path linking a pixel to a regional minimum. Indeed, steepest slope directions can be calculated for each pixel of a lower complete image. Another potential application is the watershed transformation because, on a lower complete image, the calculation of the distance function on plateaus advocated in [34] can be skipped. An image can be transformed into a lower complete image by adding an auxiliary relief to the plateaus. This relief is defined as the geodesic distance function calculated from the descending border of the plateau (reference set) using the plateau as geodesic mask. A fast implementation based on priority queues is detailed in [30]. However, this procedure tends to create parallel flow directions. Garbrecht and Martz [9] proposed an alternative procedure ensuring better flow convergence, whereby the topography on the plateau is created by adding the inverse geodesic distance from higher terrain to the geodesic distance from lower terrain. However, as noted by these authors, the topography created on the plateau may itself contain a plateau. A solution to this problem is to define the topography on the plateau as the geodesic time function [23, 25] using the descending border of the plateau as marker and the inverse of the geodesic distance from higher terrain as grey scale geodesic mask (see also [24]). This procedure is illustrated in Fig. 2 for a 7 × 7 image containing a plateau.
Figure 2b shows the inverse geodesic distance function computed on the plateau pixels having no lower neighbour, starting from the neighbouring pixels having a higher elevation. However, contrary to the methodology described by Garbrecht and Martz [9], we do not compute the geodesic distance away from lower elevations but the geodesic time function [25] using the inverse of the geodesic distance (Fig. 2b) as geodesic mask and the descending border as marker image. The resulting geodesic time function is displayed in Fig. 2c. By construction, a flow direction is directly defined for all plateau pixels, as illustrated in Fig. 2d. The flow direction of a given point is set as the direction of its 8-nearest neighbour point producing the steepest downward slope. As for the computational load, fast algorithms for geodesic distance functions are described in [33] for 8-connected and in [22] for Euclidean distance calculations. A fast geodesic time function algorithm based on priority queues is described in [26, p. 202].
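The plain geodesic-distance variant of the lower complete transformation, i.e. the BFS-from-the-descending-border approach the text attributes to [30] (not the geodesic-time refinement), can be sketched as follows. Function name and the scaling of the output are assumptions of this sketch.

```python
from collections import deque

def lower_complete(f):
    """Add a geodesic-distance relief to every non-minimum plateau so that
    each pixel outside a regional minimum gets a strictly lower neighbour."""
    rows, cols = len(f), len(f[0])
    n4 = ((-1, 0), (1, 0), (0, -1), (0, 1))
    dist = [[0] * cols for _ in range(rows)]
    q = deque()
    # seed: pixels that already have a lower neighbour (descending borders)
    for r in range(rows):
        for c in range(cols):
            if any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and f[r + dr][c + dc] < f[r][c] for dr, dc in n4):
                dist[r][c] = 1
                q.append((r, c))
    dmax = 1
    while q:  # geodesic BFS across equal-valued plateau pixels
        r, c = q.popleft()
        for dr, dc in n4:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == 0
                    and f[nr][nc] == f[r][c]):
                dist[nr][nc] = dist[r][c] + 1
                dmax = max(dmax, dist[nr][nc])
                q.append((nr, nc))
    # scale the elevations so the added relief cannot change their order
    return [[f[r][c] * (dmax + 1) + max(dist[r][c] - 1, 0)
             for c in range(cols)] for r in range(rows)]
```

Pixels belonging to regional minima are never reached by the BFS (dist stays 0), so they keep the lowest value of their scaled plateau.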
4 Extraction of Surface Specific Neighbourhoods
On a binary image, the extraction of specific neighbourhood configurations is based on the hit-or-miss transform: the hit-or-miss transformation, HMT, of a
9 9 8 8 7 7 7
9 6 6 6 6 6 7
9 6 6 6 6 6 5
9 6 6 6 6 6 7
9 6 6 6 6 6 7
9 6 6 6 6 6 8
9 9 9 9 8 8 8
(a) Input image with plateau at elevation 6.
(b) Inverse of geodesic distance away from higher elevations.
(c) Created topography using geodesic time function.
(d) Resulting flow directions.
Fig. 2. Determination of flow directions on a plateau from an artificial topography created by computing the geodesic time function from the descending border of the plateau (plateau pixels in bold) using the inverse of the geodesic distance away from higher elevations as geodesic mask. In the image of flow directions (d), the flow direction of pixels marked by the sign is set according to the original values of the image.
set X by a composite structuring element B = (B_FG, B_BG) is the set of points, x, such that, when the origin of B coincides with x, B_FG fits X and B_BG fits X^c:

HMT_B(X) = {x | (B_FG)_x ⊆ X, (B_BG)_x ⊆ X^c} = ε_{B_FG}(X) ∩ ε_{B_BG}(X^c). (1)

This equation could be extended to grey scale images, but in this latter case we would combine the erosion of a grey scale image with an erosion of the complement of this image. In addition, due to the non-increasingness of the HMT, the stacking of the hit-or-miss transformations of the successive cross-sections of a grey scale image does not define the subgraph of a grey scale image. However, as illustrated in Fig. 3 for a 1-D signal, when we position B at a given pixel x, B_FG matches the cross-sections of f from the level 0 up to a given level, which we denote by t_FG, while B_BG matches the complement of the cross-sections of f from the level t_max + 1 down to a given level, which we denote by t_BG. For example, in Fig. 3a at position x = 8, B_FG matches the cross-sections of f from the level 0 up to the level t_FG = 5, while B_BG matches the complement of the cross-sections of f from the level t_max + 1 down to the level t_BG = 3. Depending on whether we constrain the SE component containing the origin to match either the foreground (if O ∈ B_FG) or the background (if O ∈ B_BG) of x, we will obtain two different definitions of the grey tone hit-or-miss transform. The first will be referred to as the unconstrained hit-or-miss. Accordingly, the previously mentioned constraint is at the basis of the constrained hit-or-miss.

Unconstrained hit-or-miss. The output of the unconstrained hit-or-miss, denoted by UHMT, of a grey scale image f by a composite SE B at a position x is defined as the number of cross-sections CS^t(f) such that B_FG at x matches CS^t(f) while, simultaneously, B_BG at x matches the complement of CS^t(f):

[UHMT_B(f)](x) = card{t | (B_FG)_x ⊆ CS^t(f), (B_BG)_x ⊆ [CS^t(f)]^c}. (2)
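The counting definition of the unconstrained hit-or-miss can be sketched directly. This is an illustrative brute-force implementation assuming cross-sections CS^t(f) = {x | f(x) ≥ t}, offset lists for the two SE components, and output 0 where the SE sticks out of the image; names are not from the source.

```python
def uhmt(f, bfg, bbg):
    """Unconstrained grey-scale hit-or-miss: for each pixel, count the
    thresholds t at which B_FG fits the cross-section CS_t(f) while B_BG
    fits its complement.  bfg / bbg are lists of (dr, dc) offsets."""
    rows, cols = len(f), len(f[0])
    tmax = max(max(row) for row in f)
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            offs = bfg + bbg
            if any(not (0 <= r + dr < rows and 0 <= c + dc < cols)
                   for dr, dc in offs):
                continue                 # SE sticks out of the image: leave 0
            count = 0
            for t in range(1, tmax + 1):
                fits_fg = all(f[r + dr][c + dc] >= t for dr, dc in bfg)
                fits_bg = all(f[r + dr][c + dc] < t for dr, dc in bbg)
                if fits_fg and fits_bg:
                    count += 1
            out[r][c] = count
    return out
```

Equivalently, the count equals max(0, ε_{B_FG}(f)(x) − δ_{B_BG}(f)(x)), which is the conditional-test-on-erosion-and-dilation formulation mentioned at the end of this section.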
(a) SE B and input signal f with its subgraph highlighted in grey.
(b) SE B^c and input signal f with its subgraph highlighted in grey.
Fig. 3. On the extension of the hit-or-miss operator to grey tone images: two case studies depending on whether the origin of the composite SE belongs to B_FG or B_BG. In both diagrams, the centres of the pixels of each cross-section (or its complement) matched by the considered composite SE are marked with a bullet.
The unconstrained hit-or-miss transform is illustrated on a 2-D image in Fig. 4.

Constrained hit-or-miss. The definition of the constrained hit-or-miss, denoted by CHMT, involves an additional constraint, considered for each pixel position x. Namely, the SE component containing the origin O must match the foreground FG(x) if O ∈ B_FG, or the background BG(x) if O ∈ B_BG (where FG(x) = T_{t ≥ f(x)}(f) and BG(x) = T_{t ≤ f(x)}(f)):

(B_FG)_x ⊆ FG(x), if O ∈ B_FG;
(B_BG)_x ⊆ BG(x), if O ∈ B_BG.

As for a graphical representation, and looking back at Fig. 3, we only consider the t-connected components² of pixels marked by a bullet which have a non-empty intersection with the t-boundary of the subgraph of f. This happens for x = 13 in Fig. 3a and x = 10 in Fig. 3b. In terms of morphological transformations, when O ∈ B_FG, (B_FG)_x ⊆ FG(x) iff f(x) = [ε_{B_FG}(f)](x). Accordingly, when O ∈ B_BG, (B_BG)_x ⊆ BG(x) iff f(x) = [δ_{B_BG}(f)](x). Both definitions of the grey tone hit-or-miss are equivalent in the binary case and come down to Eq. 1. In the grey scale case, they are equivalent only in situations where the component of B containing the origin is restricted to a single pixel. The constrained hit-or-miss is by definition more restrictive than the unconstrained one, as highlighted by the following ordering relationship: CHMT_B ≤ UHMT_B. The hit-or-miss (whether unconstrained or constrained) by B is complementary to that by B^c: HMT_B = HMT_{B^c}. Both types of HMT are non-increasing transformations. From a computational point of view, it can be shown that the proposed HMTs can be defined by performing simple conditional tests on the grey scale erosion by B_FG and dilation by B_BG. In addition,

² We call t-connectivity the 1-D connectivity defined along the discrete t-axis for a fixed value of x, i.e., (x1, t1) is t-connected to (x2, t2) iff x1 = x2 and |t1 − t2| = 1.
(a) 384 × 256 image f of a honeycomb illustrating the hexagonal tessellation.
(b) Composite SE B with pixels of B_FG at 1 and B_BG at 2. The origin is the bold 1 pixel.
(c) UHMT_B(f).
(d) Threshold of UHMT_B(f) for all non-zero values: T_{t>0}[UHMT_B(f)].
Fig. 4. Grey scale unconstrained hit-or-miss transform extracting the upper corner of each hexagonal cell appearing in the input image. The threshold of the UHMT image for all non-zero values highlights the performance of the detection of the considered surface specific points.
these latter definitions are directly suited to the extension to rank operations, similarly to other morphological transformations [27].
5 Crest Lines by Order Independent Thinning
Recent developments about binary order independent homotopic thinnings [16] can be used for extracting crest lines in grey scale images. The principle of anchored grey scale skeletons is detailed in [17] and briefly summarised hereafter. Let us first present a generic definition of an order dependent grey tone thinning algorithm (where N8<(p) denotes the set of 8-neighbours of p whose values are lower than f(p)):

for each pixel p of f
    if p is simple in f
        add p to the set SimpleSet of simple pixels
for each pixel p ∈ SimpleSet (arbitrary sequential scanning order)
    if p is simple in f
        f(p) ← max{f(p′) | p′ ∈ N8<(p)}
This algorithm will be referred to as Proc1. At the end of Proc1, the input image f contains a thinned image which is homotopic to the original image. Note also that, in accordance with the anti-extensivity property of any thinning, the output image is always less than or equal to the input image. Similarly to binary sequential homotopic thinnings, the resulting image depends on the scanning order of the set SimpleSet of simple pixels. This also happens for all algorithms based on grey tone sequential homotopic thinning [10], [26, p. 142] or topological operators [1, 7]. An order independent thinning can be obtained as follows [17]: first, apply Proc1 with all possible sequential scanning orders; second, set the value of each pixel p to the maximum output value at p. In other words, the output image equals the point-wise maximum of all possible outputs of Proc1, i.e., for all possible sequential scanning orders of the set SimpleSet of simple pixels. In practice, the notion of order independent thinning allows us to avoid testing all possible configurations by directly testing whether a given simple pixel would be removed by all possible sequential scanning orders. Skeletons are obtained by thinning the image until idempotence. By considering a set of predefined points called anchor points, we obtain anchored skeletons. The algorithm is much faster if each level set of the grey scale image contains at least one anchor point, because Ronse's 8-deletability test of a connected component of simple pixels [18] can be skipped. The extraction of crest lines on a digital elevation model using the regional maxima as anchor points is illustrated in Fig. 5. Note that the resulting skeleton is not one pixel thick and may even contain very thick regions, similarly to the thick watersheds described in [34]. This is due to particular configurations of the grey scale values such as those illustrated in Fig. 6, which have been sampled from Fig.
5b in a region where thick skeletal lines occur. The order independent thinning has set a thick region to the elevation 307. However, this thick region, although mostly surrounded by crest lines, cannot be further thinned (i.e., set to either 1 or 3 in this example) because it itself leads to a crest line. That is, the pixel appearing in a shaded box is not simple. Thick watershed zones (in the sense that one cannot decide whether the thick region drains to one catchment basin or another) always correspond to thick regions of the grey tone skeleton. Finally, it is worth mentioning that recent work on distance ordered homotopic thinning [32, 15, 31] could be made independent of the order used for processing the pixels at the current distance value, simply by using order independent thinning of each distance level.
6 Conclusion and Perspectives
New developments related to the minima imposition, the lower complete transformation, the hit-or-miss transform, and the extraction of crest lines have been proposed. Beyond practical applications to the processing of Pan-European digital elevation models, which are currently being investigated [28], we believe that these developments will be of interest to other image analysis problems where the sought features correspond to topographic features. Note that the minima
Fig. 5. Crest lines extracted by order independent homotopic thinning. Left: Shaded view of a 1 km input digital elevation model of the Pyrenees. Right: Support of the skeleton of the DEM using the regional maxima as anchor points.
Fig. 6. Thick skeletal lines extracted from Fig. 5b. The pixels at intensity 1, 2, and 3 belong to the regional minima of the grey tone skeleton, all other pixels belonging to the skeletal lines. The two pixels marked in bold clearly belong to the crest lines, impeding thereby further order independent homotopic thinning of the thick region.
imposition technique based on carving is not incompatible with that based on reconstruction by erosion. Indeed, it is enough to define two sets of markers, the first containing markers leading to a carving and the second markers leading to a reconstruction by erosion, the priority queues ensuring an ordered propagation. Finally, it would be of interest to extend the developments of order independent homotopic thinnings to the computation of watersheds and to compare this approach with those based on flooding simulations [34], flow simulations [30], and orders [6]. Extensions of the proposed carving, lower complete, and hit-or-miss transformations to 3-D grey scale images are straightforward. However, additional research will be required to extend the notion of order independent homotopic thinnings to the production of 3-D grey scale skeletons and watersheds.
References
[1] G. Bertrand, J.-C. Everat, and M. Couprie. Image segmentation through operators based upon topology. Journal of Electronic Imaging, 6(4):395–405, 1997.
[2] S. Beucher. Segmentation d'images et morphologie mathématique. PhD thesis, Ecole des Mines de Paris, June 1990.
[3] S. Beucher and F. Meyer. The morphological approach to segmentation: The watershed transformation. In E. Dougherty, editor, Mathematical morphology in image processing, pages 433–481. Marcel Dekker, New York, 1993.
[4] E. Breen and R. Jones. Attribute openings, thinnings, and granulometries. Computer Vision and Image Understanding, 64(3):377–389, 1996.
[5] S. Collins. Terrain parameters directly from a digital elevation model. The Canadian Surveyor, 29(5):507–518, December 1975.
[6] M. Couprie and G. Bertrand. Tesselations by connection in orders. In Proc. of Discrete Geometry for Computer Imagery 2000, Uppsala, volume 1953 of Lecture Notes in Computer Science, pages 15–26. Springer-Verlag, 2000.
[7] M. Couprie, F. Nivando Bezerra, and G. Bertrand. Grayscale image processing using topological operators. In L. Latecki, R. Melter, D. Mount, and A. Wu, editors, Vision Geometry VIII, volume SPIE-3811, pages 261–272, 1999.
[8] H. Digabel and C. Lantuéjoul. Iterative algorithms. In J.-L. Chermant, editor, Quantitative analysis of microstructures in materials sciences, biology and medicine, pages 85–99, Stuttgart, 1978. Dr. Riederer-Verlag GmbH.
[9] J. Garbrecht and L. Martz. The assignment of drainage direction over flat surfaces in raster digital elevation models. Journal of Hydrology, 193:204–213, 1997.
[10] V. Goetcherian. From binary to grey tone image processing using fuzzy logic concepts. Pattern Recognition, 12:7–15, 1980.
[11] R. Haralick. Ridges and valleys on digital images. Computer Vision, Graphics, and Image Processing, 22:28–38, 1983.
[12] D. Mark. Automated detection of drainage networks from digital elevation models. Cartographica, 21:168–178, 1984.
[13] F. Meyer. Skeletons and perceptual graphs. Signal Processing, 16:335–363, 1989.
[14] F. Meyer and S. Beucher. Morphological segmentation. Journal of Visual Communication and Image Representation, 1(1):21–46, September 1990.
[15] C. Pudney. Distance-ordered homotopic thinning: A skeletonization algorithm for 3D digital images. Computer Vision and Image Understanding, 72(3):404–413, December 1998.
[16] V. Ranwez and P. Soille. Order independent homotopic thinning. In G. Bertrand, M. Couprie, and L. Perroton, editors, Proc. of Discrete Geometry for Computer Imagery'99, volume 1568 of Lecture Notes in Computer Science, pages 337–346. Springer-Verlag, 1999.
[17] V. Ranwez and P. Soille. Order independent homotopic thinning for binary and grey tone anchored skeletons. Pattern Recognition Letters, 2002. Publication pending.
[18] C. Ronse. A topological characterization of thinning. Theoretical Computer Science, 43:31–41, 1986.
[19] P. Salembier and J. Serra. Flat zones filtering, connected operators, and filters by reconstruction. IEEE Transactions on Image Processing, 4(8):1153–1160, August 1995.
[20] J. Serra. Image analysis and mathematical morphology. Academic Press, London, 1982.
[21] J. Serra and P. Salembier, editors. Mathematical morphology and its applications to signal processing, 1993. Universitat Politècnica de Catalunya, Barcelona.
[22] P. Soille. Spatial distributions from contour lines: An efficient methodology based on distance transformations. Journal of Visual Communication and Image Representation, 2(2):138–150, June 1991.
[23] P. Soille. Morphologie mathématique : du relief à la dimensionalité — Algorithmes et méthodes. PhD thesis, Université catholique de Louvain, en collaboration avec l'Ecole des Mines de Paris, February 1992.
[24] P. Soille. Generalized geodesic distances applied to interpolation and shape description. In J. Serra and P. Soille, editors, Mathematical Morphology and its Applications to Image Processing, pages 193–200. Kluwer Academic Publishers, 1994.
[25] P. Soille. Generalized geodesy via geodesic time. Pattern Recognition Letters, 1235–1240, December 1994.
[26] P. Soille. Morphological image analysis: Principles and applications. Springer-Verlag, Berlin, New York, 1999. Second extended edition to appear in 2002, see also http://ams.egeo.sai.jrc.it/soille
[27] P. Soille. On morphological operators based on rank filters. Pattern Recognition, 2002.
[28] P. Soille. Carving and adaptive drainage enforcement of grid digital elevation models. Water Resources Research, Submitted.
[29] P. Soille and M. Ansoult. Automated basin delineation from digital elevation models using mathematical morphology. Signal Processing, 20:171–182, June 1990.
[30] P. Soille and C. Gratin. An efficient algorithm for drainage networks extraction on DEMs. Journal of Visual Communication and Image Representation, 5(2):181–189, June 1994.
[31] S. Svensson, G. Borgefors, and I. Nyström. On reversible skeletonization using anchor-points from distance transforms. Journal of Visual Communication and Image Representation, 10:379–397, 1999.
[32] H. Talbot and L. Vincent. Euclidean skeletons and conditional bisectors. In P. Maragos, editor, Visual Communications and Image Processing, volume SPIE-1818, pages 862–876, 1992.
[33] B. Verwer, P. Verbeek, and S. Dekker. An efficient uniform cost algorithm applied to distance transforms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(4):425–429, April 1989.
[34] L. Vincent and P. Soille. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(6):583–598, June 1991.
Morphological Operations in Recursive Neighbourhoods

Pieter P. Jonker

Pattern Recognition Group, Faculty of Applied Sciences, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands
[email protected] Abstract. This paper discusses the use of Recursive Neighbourhoods in Mathematical Morphology. Its two notable applications are the recursive erosion / dilation, as well as the detection of foreground-background changes to be used in skeletonization. The benefit of the latter over an extension of the neighbourhood or the use of sub-cycles is emphasized. Two applications are presented that use the recursive neighbourhood in a 3D surface and a 3D curve anchor-skeleton variant.
1 Introduction

In previous papers [3], [4], [5], we have described a general principle for morphological operations on cubic tessellated binary images X^N. For the sake of clarity we briefly repeat this in sections 1 and 2, using images of dimensions 2 and 3 in the examples. In section 3 we elaborate on the principle of recursive neighbourhoods. The use of recursive neighbourhoods is not new; however, its working principle and its possibilities are often not well understood. In section 3 we show that, of the four alternatives to circumvent the problem of detecting the topology change of a two pixel/voxel thick structure in a 3^N neighbourhood, as is necessary in skeletonization, the recursive neighbourhood approach is generally the fastest. The recursive neighbourhood detects change, either foreground changed to background or background changed to foreground. This change detection can be fruitfully used in a number of applications, but can also give rise to problems, e.g., excessive protrusions in skeletons, when the recursive neighbourhood is not correctly used. In section 4 we present two examples of its use. The essence of both examples is the use of a surface and a curve skeleton, forced through anchor points, whose surface and curve protrusions are recursively eroded due to the use of the recursive neighbourhood.

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 187–196, 2002. © Springer-Verlag Berlin Heidelberg 2002

From [3], [4], [5], we derived that using a recursive neighbourhood is based on scanning an image with a set of masks that comprises a structuring element S. The masks contain foreground, background and don't-cares. On each element of the image X an inexact match (≅) between all masks of the set and the neighbourhood extracted
P.P. Jonker
from the image can be performed, where the result written to the output image Y is the union of all matches. A second input image Z can be used in the same match to perform dyadic operations. Dyadic operations can be used to locally enable or disable the morphological operation, using the points from a second input image. To elaborate on this: let x, y, and s be the elements of images X, Y, and S. Let S, e.g., be a 3 × 3 structuring element and M_k an equivalent 3 × 3 neighbourhood around pixel x_k. For binary morphology, the transformation Y ← X ≅ S is, informally: if for any pixel x_k in an input image X its neighbourhood M_k matches inexactly with a structuring element S, the pixel y_k of output image Y is set to one. If M_k does not match S, y_k is set to zero. In the inexact neighbourhood match, the foreground pixels in S should match with foreground pixels in M_k at the same positions AND the background pixels in S should match with background pixels in M_k at the same positions, whereas in the don't care positions of S a match is not required. This inexact match can be extended to: if for any pixel x_k in an input image X its neighbourhood M_k matches one mask S_i^S from a set of masks S^S, the pixel y_k of output image Y is set to one, else to zero. Meaning that the union of all mask matches is taken. A further extension is: if for any pixel x_k in an input image X its neighbourhood M_k matches any mask S_i^S from a given set of masks S^S, the pixel y_k of output image Y is set to one, else set to zero, OR if its neighbourhood matches any mask S_i^R from a given set of masks S^R, the pixel y_k is set to zero, else to one. The image transformation Y ← X ≅ S is now implemented with a set of masks S consisting of a subset S^S (the SET-masks) and a subset S^R (the RESET-masks), one of which may be empty.
This means that either a pixel y_k is set to zero, if one of the RESET masks fits, or the pixel is set to one, if one of the SET masks fits, where the SET masks are chosen to dominate over the RESET masks. A second input image Z can be used to locally enable/disable the transformation, yielding a dyadic operation. If a "mask-bit" z of a mask of the set S is set to don't care, the transformation is enabled. If z is a do-care, then the operation is disabled if the pixel z_k of Z matches z, else enabled. Operations that use Z to locally mask off, or to insert seeds, are the propagation operation and the anchor-skeleton. Finally, an operation on an image can be done by performing a transformation with a mask-set once, twice or more, or until the image is idempotent (does not change) under the operation. In many cases practical use can be made of the intermediate results in the output image. This is called (spatial) recursion. For instance, if the transformation is performed by a raster scan over the image, i.e., from top-left to bottom-right, the fact that some neighbours of pixel y_k have already obtained a new value can be utilized. For this purpose, the neighbourhood M_k of pixel x_k as well as the structuring element S is extended. Both the Local Neighbourhood (LN) and the Recursive Neighbourhood
(RN) can be used in a neighbourhood matching procedure, referred to as local neighbourhood operations (LNO) and recursive neighbourhood operations (RNO).
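The inexact match with SET and RESET masks can be sketched as follows. This is a minimal illustration with assumptions not in the source: masks are encoded as strings with '1' for foreground, '0' for background and '.' for don't care, the image is a dict of foreground pixels, and pixels matched by neither mask set keep their input value.

```python
def matches(img, r, c, mask):
    """Inexact 3x3 match at (r, c): every '1' in the mask must sit on
    foreground, every '0' on background; '.' positions are don't cares."""
    for i, row in enumerate(mask):
        for j, ch in enumerate(row):
            if ch == '.':
                continue
            if img.get((r + i - 1, c + j - 1), 0) != int(ch):
                return False
    return True

def transform(img, rows, cols, set_masks, reset_masks):
    """Y <- X with SET masks dominating RESET masks (local neighbourhood
    only; unmatched pixels copy the input value -- an assumption here)."""
    out = {}
    for r in range(rows):
        for c in range(cols):
            if any(matches(img, r, c, m) for m in set_masks):
                out[(r, c)] = 1
            elif any(matches(img, r, c, m) for m in reset_masks):
                out[(r, c)] = 0
            else:
                out[(r, c)] = img.get((r, c), 0)
    return out
```

With a SET mask for isolated foreground pixels and a RESET mask hitting any foreground pixel, the transform keeps only the isolated points, illustrating how SET dominance over RESET works.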
Fig. 1. Drawing conventions for a neighbourhood match, here shown for 2D images and 3 × 3 neighbourhoods. The LN is taken from input image X, the RN from output image Y. For dyadic operations a second input image Z is used.
Figure 1 shows the matching process with a single mask from a set S, in which the neighbourhood M_k is extracted from the three different images. Note that when using a software raster scan over the image, it is beneficial for RNOs to scan in all odd scans from top-left to bottom-right and in all even scans from bottom-right to top-left. In this case, the RN should be transposed to match the scan direction.
Fig. 2. Structuring elements for simple operations in X^3 (Dilate26cct, Detect26cct, Erode26cct, Erode18cct, Erode6cct). Opaque elements are don't cares, light grey are background elements, and dark grey are foreground elements.
Figure 2 shows some structuring elements or mask-sets for simple Local Neighbourhood Operations in X^3, while Figure 3 shows an example of a dyadic Recursive Neighbourhood Operation: the propagation operation, a recursive conditional dilation. Objects in an image are recursively dilated (the first mask) wherever foreground in image Z and background in X was found (the second mask).
2 Skeletonization, Shape Primitives
Erosions can be described as a match on foreground area in X^2 and on foreground volume in X^3. Skeletonization can be seen as conditional erosion [3]. Figure 4 shows that in X^3 a volume is eroded to a curved surface (the surface skeleton), after which the surface is eroded to a space curve (the curve skeleton). The condition for the erosion is that surfaces, or curves, should not be eroded. Those conditions are also known as shape primitives in X^N. The sets of shape primitives and how they can be found are described in [5]. In 3D, one set is called Surf26, which represents the surface primitives: for all possibilities of a curved surface to intersect a 3^3 neighbourhood, one of these masks (or a rotated and/or mirrored version) will hit. The set Curv26 likewise represents the space curve primitives: for all possibilities of a space curve to intersect a 3^3 neighbourhood, one of these masks (or a rotated and/or mirrored version) will hit. Iterating over an image X^3 with the erosion mask Erode26cct of Figure 2, a set of surface primitives (Surf26), and the set of curve primitives (Curv26) yields a skeleton. Erode26cct erodes surfaces from volumes (it hits only on the core of the volumes, not on their boundaries). The set Surf26 detects surfaces and thus may be used to prevent the erosion of surfaces (the masks only hit on the core of the surface, not on the surface boundaries). The set Curv26 detects curves and can be used to prevent the erosion of curves (the masks only hit on the kernel of the curves). Consequently, as volumes, surfaces and curves cannot be eroded from their kernels, they are eroded from their boundaries. So only closed surfaces and curves will remain.

Fig. 3. A dyadic RNO in X^3 (Propagation26cct): the propagation operation.
Fig. 4. (a) Original¹, (b) surface skeleton, (c) curve skeleton, and (d) the topological kernel.
To prevent the erosion of boundaries, from the set Surf26 a set Surf26e can be derived that contains all surface edge situations. Similarly, a set Curv26e can be made from the set Curv26 containing all curve end situations [4], [5].

¹ Courtesy Dr. K. Katada, Fujita Health University, Japan.
3 On the Necessity and Use of Recursive Neighbourhoods
A recursive neighbourhood can be used for a number of reasons. First, it can be used to erode or dilate objects or background in one or two passes over the image. For instance, the masks of Figure 3 can be used to quickly select objects. With objects stored in image X and seed voxels stored in image Z, starting from the seed voxel, a wave-front dilation starts over objects that are connected to the seed. Figure 5 shows how such a dilation evolves in a 2D situation. The dilation mask (5a) is assumed to make a raster scan over the image from top-left to bottom-right. A part of an image X is shown in 5b…5e. In 5b and 5c the mask does not fit and hence the pixels are reset to foreground in the output image Y. When the scan runs over the image at one row lower, shown in Figures 5d and 5e, in situation 5d the pixel swaps value from background to foreground, but the swap in 5e depends on which neighbourhood is used. Using the recursive neighbourhood (partially using output image Y), the pixel swaps value, as the mask does not match, but when using the Local Neighbourhood (only using input image X) the mask fits and the pixel remains background. Of course, an unconditioned recursive dilation in a downward scan followed by one in an upward scan over the image would fill the whole image. Likewise, a recursive erosion would erode a convex object in two such scans over the image. Hence its application is found in conditional erosions and dilations.
Fig. 5. Recursive dilation.
Secondly, the recursive neighbourhood can be used to detect change within a skeletonization procedure².
Fig. 6. Topology check using the local (b, c, d) or recursive (b, c, e) neighbourhood.
The problem with a topology check (e.g., a mask) that works on a 3^N neighbourhood is that it cannot detect two-element (pixel, voxel, ...) thick structures. Figure 6 shows a situation in which the mask of Figure 6a is used to detect a straight single-pixel-thick
² The first one known to me to use it for this purpose was Hilditch (1969) [2].
P.P. Jonker
vertical line. This curve primitive (6a) is one of the masks that are used as a topology check for a 2D skeleton. A raster scan is assumed. The sequence 6b, c, d shows what happens if the local neighbourhood is used for the topology check; 6b, c, e shows what happens if the recursive neighbourhood is used. In situation 6d the topology is broken; in 6e it is preserved, as now the output image is partly used in the mask match. An alternative to the use of the recursive neighbourhood is to use only the local neighbourhood, but to erode from one direction at a time, explicitly or implicitly [1]. This technique is called sub-cycling. In the example of Figure 6, the tests of 6c and 6d would then have been done in subsequent cycles through the image: first an erosion from the left, and secondly an erosion from the right. The same effect can be obtained by first processing all odd lines of the image followed by the processing of all even lines; this is often done on parallel hardware such as massively parallel processor arrays [7]. In the literature [10], the terminology sequential algorithm versus parallel algorithm is often used for the use of the recursive neighbourhood versus the local neighbourhood with sub-cycles. However, both approaches can be implemented on sequential as well as parallel machines. A third method is the extension of the neighbourhood (for instance in 2D to 3 × 4) to detect the two-element thick structures [8]. This makes the number of masks (or topology tests), as well as the number of image elements to be tested, grow considerably. When a recursive neighbourhood can be addressed, it is the fastest method, as it diminishes the number of tests within the neighbourhood, the number of topology tests, as well as the total number of cycles through the image. When the skeleton is made, the topology checks can be split into checks on the interior of the object and checks on the boundary.
In 3D, by playing with the subsets (the erosion mask Erode26cct, the surface detection mask-set Surf26, the curve detection mask-set Curv26, and the mask-sets for surface ends, Surf26e, and curve ends, Curv26e), skeleton variants can be made. The set {Erode26cct, Surf26, Surf26e, Curv26} is used for the surface skeleton. The set {Erode26cct, Surf26, Curv26, Curv26e} was used to obtain the curve skeleton. The set {Erode26cct, Surf26, Curv26} was used to obtain the last skeleton of Figure 4. The recursive neighbourhood should be used only for the Surf26 and Curv26 sets, and not for Erode26cct and the Surf26e and Curv26e sets. If the recursive neighbourhood is used for the erosion mask, the erosion will not be performed boundary by boundary, like peeling an onion skin, but will immediately propagate over the object. The final skeleton will then not be on the medial axis, but will lie on the bottoms of the objects, when raster scanning. If the recursive neighbourhood and/or the sub-cycling method is also used for the object boundary conditions, i.e., Surf26e and Curv26e, this will lead to the sprouting of spurious protrusions. This can best be explained using the sub-cycle technique. Due to the first sub-cycle, locally a noisy boundary may be formed, which may be cancelled out in the subsequent sub-cycle by an erosion from another direction. However, a boundary detection condition mask may decide that this is the beginning of a protrusion and decide to keep it. So object core conditions should be treated differently from object boundary conditions, to avoid excess sprouting of protrusions. But even then, in the object core conditions, i.e., Surf26 and Curv26, the recursive neighbourhood should
only be checked to verify whether a foreground is changed into background where a background is expected in the mask. This is shown in Figure 7.
Fig. 7. Using the recursive neighbourhood for foreground
Suppose the mask of Figure 7a is used to detect a skeleton curve in 2D. The pixel in 7b is reset to background as the mask does not fit. The pixel in 7c is correctly set to foreground; the mask fits when the foreground in the mask (the North pixel) is checked in the input image X and the background in the mask (the West pixel) is verified using the output image Y. 7d shows what happens if both are verified using the output image Y: the topology is broken. The use of the full recursive neighbourhood in the sense of Figure 7b can be profitable when a skeleton without end conditions is made. For instance, in a skeleton without end-voxel conditions, yielding the topological kernel as shown in Figure 4d, the space curve conditions are verified in the full recursive neighbourhood. The benefit of this is that the non-closed curves are eroded recursively, i.e., in a few passes through the image, as the erosion propagates over the skeleton branches within a single cycle through the image. This makes this skeleton fast.
4 Advanced Examples
In this section two examples are presented of applications that make use of 3D morphological operations, most notably variants of the 3D skeleton operator and the propagation operation. Both use the recursive neighbourhood in various ways. Consider Figure 8. The original image³ is taken by an acquisition system, based on multiple cameras, that produces a stream of 3D object data [11]. As the system produces imperfect data, each image of the stream has to be processed before rendering texture and colour onto it. The lower images in Figure 8 are cross sections of the legs. The procedure used to clean up the original is the following sequence:
1) dilate the object with a 26-connected contour
2) dilate the object with a 6-connected contour, twice
3) erode a 26-connected contour from the object
4) erode a 6-connected contour from the object
³ Courtesy Prof. Matsuyama, Kyoto University, Japan. This is an old stream; currently his 3D streams have a far better quality [11].
Fig. 8. Aerobic girl original and Aerobic girl processed.
5) iteratively erode a 6-connected contour from the object (Erode6cct), but also use the mask-sets Surf26 and Surf26e, thus performing a few cycles of the surface skeleton. Anchor the original object into the skeleton: with a logical OR operation, after each skeleton cycle the original data points are re-inserted, thus forcing the skeleton through the original data points.
Steps 1 through 4 perform a 3D closing operation. Due to step 5 the result is a closed single-voxel-thick surface that goes through the original data points. Like alternating 4- and 8-connected in 2D, the alternation of the 26- and 6-connected erosions/dilations in 3D leads to a better erosion/dilation metric [3].
6) propagate the edge of the image into the image (Propagation26cct), stop at the object boundary, and invert the result. This makes the object solid.
7) extract the contour of the solid object. Steps 6 and 7 remove all inner data points of the object.
8) take a surface skeleton without boundary conditions using {Erode26cct, Surf26}. This finds the closed surface contour only, i.e., it removes sprouting surfaces. The full recursive neighbourhood is used in Surf26 and consequently the boundary surfaces are eroded recursively, and two skeleton cycles are sufficient.
In Robot Soccer [6], [9] two teams of 4 autonomous robots each play soccer in a field of 5 × 10 meters. One of the issues is to quickly plan collision-free paths for the robot. Moreover, if a robot should play together with a teammate, its presence is required at a certain point in space AND time, e.g., to intercept a passed ball. A 3D image X³ represents the universe of the robot with two space dimensions (x, y) and one time dimension (t). Note that if the orientation of the robot should also be taken into account, the problem should be solved in X⁴ (x, y, θ, t). Figure 9 shows 12 pictures of a soccer field. The soccer field is in this example divided into cubic voxels of 0.3 × 0.3 × 0.6 [m·m·s].
The robot speed is {0, 1, 2} · [0.5 m/s]. The image size [x, y, t] is
17 × 35 × 35 voxels. The origin of the field is in the centre of the image at t = 0. Assume an attacker robot with ball wants to score, while the goal is defended by two defender robots. The attacker is at point (-6, -6, 0) and tries to get to point (6, -2, 34) to score; the defenders are at points (15, 7, 0) and (10, -6, 0) and are perceived by the attacker robot to head for the points (-10, 6, 34) and (0, 0, 34). The path planning procedure of the attacker robot consists of the following steps.
Fig. 9. Search space, moving objects, reduced search space, and collision-free path.
1) Recursively dilate the start point towards the positive time direction. The resulting cone indicates the points in the field that can be reached by the robot in the positive time direction, when using speed 1. See Figure 10a.
Fig. 10. a) Fully recursive neighbourhood (reset) mask to propagate a cone in space-time and b) a fully recursive neighbourhood (set) mask to erode objects in space-time.
2) Recursively dilate the goal point towards the negative time direction.
3) AND the results of 1) and 2). The top row, left column of Figure 9 shows the result, the initial search space.
4) From the two defender robots a speed and heading is perceived by the vision system of the attacker robot. It assumes that for the duration of the path those robots will linearly continue their path. The top row, second column in Figure 9 shows two objects, the linear paths of the defender robots in space-time, dilated one step to introduce a safety margin.
5) The EXCLUSIVE OR of the search space and the moving object images yields the reduced search space of the top row, third column of Figure 9.
6) The curve skeleton without end-curve conditions {Surf26, Curv26}, with start and end point of the attacker robot anchored into the skeleton, yields the final collision-free path. Note that both Surf26 and Curv26 are used in the full recursive neighbourhood. This enables the fast erosion of sprouting surfaces and space curves. A problem in the planning is that the path to be found may not
run backwards in time. To prevent this, first, all skeleton scans over the image are only applied in the forward time direction. Secondly, the erosion mask is adapted so that it does not erode backward in time (see Figure 10b). Thirdly, masks in Curv26 that have configurations that allow connectivities backward in time are omitted. Thus paths that run backward in time are never found, as their topology is not preserved.
7) As the curve skeleton finds all possibilities to go from start to goal, a simple algorithm is: go left at a branch point if y > 0, right if y < 0, leading the attack over the wings. Alternatively, the distance can be propagated over the branches, to find the shortest of the branches.
8) If no fixed end-time t_end is assumed for the attacker robot, t_end can be iteratively reduced until no path is found anymore. The three rows of Figure 9 show the paths at end-times t_end = 34, t_end = 22, and finally t_end = 16, the minimum time for a collision-free path. A start value for the end-time is t_end = x_end - x_start + y_end - y_start.
9) For t_end = 34, a path is found in 84 msec on a PIII, 600 MHz.
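Steps 1-3 of the planning procedure above can be sketched with per-slice dilations: the reachable set at each time step is the previous slice dilated by a speed ball, and the initial search space is the AND of the forward cone from the start and the backward cone from the goal. A numpy sketch with hypothetical names and a Chebyshev speed ball, not the paper's recursive Propagation26cct masks:

```python
import numpy as np

def reach_cone(field_shape, start, T, speed=1, forward=True):
    """Reachability cone in space-time: each time slice is the previous
    slice dilated by a Chebyshev ball of radius `speed`. Names and the
    speed model are ours, for illustration only."""
    h, w = field_shape
    cone = np.zeros((T, h, w), dtype=bool)
    times = range(T) if forward else range(T - 1, -1, -1)
    first = True
    for t in times:
        if first:
            cone[t][start] = True
            first = False
        else:
            prev = cone[t + (-1 if forward else 1)]
            for y, x in zip(*np.nonzero(prev)):
                cone[t, max(y - speed, 0):y + speed + 1,
                        max(x - speed, 0):x + speed + 1] = True
    return cone

# Step 3: AND of the forward cone from the start and the backward cone
# from the goal gives the initial search space.
fwd = reach_cone((9, 9), (0, 0), T=9)
bwd = reach_cone((9, 9), (8, 8), T=9, forward=False)
search_space = fwd & bwd
```

The intersection is pinched to the start point at t = 0 and the goal point at the end time, exactly as in the top-left picture of Figure 9; subtracting the dilated defender trajectories (step 5) would then give the reduced search space.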
References
1. Arcelli C., Cordella L., Levialdi S. (1975) Parallel thinning of binary pictures. Electronics Letters 11: 140-149
2. Hilditch C.J. (1969) Linear Skeletons from Square Cupboards. In: B. Meltzer and D. Michie (eds.), Machine Intelligence, Vol. 4, Edinburgh University Press, 404-420
3. Jonker P.P. (1992) Morphological Image Processing: Architecture and VLSI design. Kluwer, Dordrecht, ISBN 90-2012766-7
4. Jonker P.P. (2000) Morphological Operations on 3D and 4D Images: From Shape Primitive Detection to Skeletonization. Proc. 9th Int. Conf. DGCI 2000 (Uppsala, Dec. 13-15), Lecture Notes in Computer Science, Vol. 1953, Springer, Berlin, 371-391
5. Jonker P.P. (2002) Skeletons in N dimensions using Shape Primitives. Pattern Recognition Letters, April 2002
6. Jonker P.P., Caarls J., Bokhove W. (2001) Fast and Accurate Robot Vision for Vision-based Motion. Proc. 4th Int. Workshop on RoboCup (Melbourne, Australia, Aug. 31-Sep. 1), Springer, Berlin, 149-158
7. Kyo S., Okazaki S., Fujita Y., Yamashita N. (1997) A Parallelizing Method for Implementing Image Processing Tasks on SIMD Linear Processor Arrays. Proceedings of the IEEE Workshop on Computer Architecture for Machine Perception (CAMP 97): 180-184
8. Ma C.M. (1994) On Topology Preservation in 3D Thinning. CVGIP: Image Understanding, 59(3), 328-339
9. www.robocup.org, www.robocup.nl
10. Rosenfeld A., Pfaltz J.L. (1966) Sequential operations in digital picture processing. Journal of the ACM, 13(4): 471-494
11. Wu X., Wada T., Matsuyama T. (2001) Real-time Active 3D Object Shape Reconstruction for 3D Video. Proc. of the 4th Int. Workshop on Cooperative Distributed Vision, March 22-24, Kyoto, Japan: 455-474
Computing the Diameter of a Point Set

Grégoire Malandain and Jean-Daniel Boissonnat

INRIA, 2004 route des lucioles, BP 93, 06 902 Sophia-Antipolis Cedex, France
{gregoire.malandain,jean-daniel.boissonnat}@sophia.inria.fr
Abstract. Given a finite set of points P in R^d, the diameter of P is defined as the maximum distance between two points of P. We propose a very simple algorithm to compute the diameter of a finite set of points. Although the algorithm is not worst-case optimal, it appears to be extremely fast for a large variety of point distributions.
1 Introduction
Given a set P of n points in R^d, the diameter of P is the maximum Euclidean distance between any two points of P. Computing the diameter of a point set has a long history. By reduction to set disjointness, it can be shown that computing the diameter of n points in R^d requires Ω(n log n) operations in the algebraic computation-tree model [PS90]. A trivial O(n²) upper bound is provided by the brute-force algorithm that compares the distances between all pairs of points. In dimensions 2 and 3, better solutions are known. In the plane, it is easy to solve the problem optimally in O(n log n) time. The problem becomes much harder in R³. Clarkson and Shor gave a randomized O(n log n) algorithm [CS89]. This algorithm involves the computation of the intersection of n balls (of the same radius) in R³ and the fast location of points with respect to this intersection. This makes the algorithm less efficient in practice than the brute-force algorithm for almost any data set. Moreover this algorithm is not efficient in higher dimensions, since the intersection of n balls of the same radius has size Θ(n^⌈d/2⌉). Recent attempts to solve the 3-dimensional diameter problem led to O(n log³ n) [AGR94,Ram97b] and O(n log² n) deterministic algorithms [Ram97a,Bes98]. Finally Ramos found an optimal O(n log n) deterministic algorithm [Ram00]. All these algorithms use complex data structures and algorithmic techniques such as 3-dimensional convex hulls, intersection of balls, furthest-point Voronoi diagrams, point location search structures or parametric search. We are not aware of any implementation of these algorithms. We suspect that they are very slow in practice compared to the brute-force algorithm, even for large data sets. Some of these algorithms could be extended to higher dimensions. However, this is not worth trying since the data structures they use have sizes that depend exponentially on the dimension: e.g.
the size of the convex hull of n points of R^d can be as large as Ω(n^⌊d/2⌋). Our algorithm works in any dimension. Moreover, it does not construct any complicated data structure; in particular, it does not require that the points are

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 197–208, 2002. © Springer-Verlag Berlin Heidelberg 2002
in convex position, and therefore does not require computing the convex hull of the points. The only numerical computations are dot product computations, as in the brute-force algorithm. The algorithm is not worst-case optimal but appears to be extremely fast under most circumstances, the most noticeable exception occurring when the points are distributed on a domain of constant width, e.g. a sphere. We also propose an approximate algorithm. Independently, Har-Peled has designed an algorithm which is similar in spirit to ours [Har01]. We compare both methods and also show that they can be combined so as to take advantage of both.
2 Definitions, Notations, and Geometric Preliminaries
We denote by n the number of points of P, by h the number of vertices of the convex hull of P, and by D the diameter of P. δ(·, ·) denotes the Euclidean distance, and δ²(·, ·) the squared Euclidean distance. A pair of points of P is called a segment. The length of a segment pq is the Euclidean distance δ(p, q) between p and q. A segment of length D is called maximal. For p ∈ P, F P (p) denotes the subset of the points of P that are at maximal distance from p. The segment joining two points p and q is called a double normal if p ∈ F P (q) and q ∈ F P (p). If pq is a maximal segment, pq is a double normal. The converse is not necessarily true. Observe that the endpoints of a maximal segment or of a double normal belong to the convex hull of P. Observe also that, if the points are in general position, i.e. if no two pairs of points are at the same distance, the number of double normals is at most h/2. B(p, r) denotes the ball of radius r centered at p, and Σ(p, r) its bounding sphere. The ball with diameter pq is denoted by B[pq] and its boundary by Σ[pq]. Since the distance between any two points in B[pq] is at most δ(p, q), we have:

Lemma 1. If p, q ∈ P and pq is not a maximal segment, any maximal segment must have at least one endpoint outside B[pq].

As a corollary, we have:

Lemma 2. If p, q ∈ P and P \ B[pq] = ∅, then pq is a maximal segment of P and δ(p, q) is the diameter of P.
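Membership in B[pq] reduces to the sign of a single dot product of differences of points, which is the kind of computation Lemma 6 later refers to; a small sketch (the function name is ours):

```python
def outside_ball(x, p, q):
    """True iff point x lies strictly outside the ball B[pq] with diameter
    pq. Uses the identity |x - c|^2 - (|p - q|/2)^2 = (x - p).(x - q),
    where c is the midpoint of pq, so a single dot product of differences
    of points suffices."""
    return sum((xi - pi) * (xi - qi)
               for xi, pi, qi in zip(x, p, q)) > 0

# Midpoint of pq is inside B[pq]; a point of Sigma[pq] is not outside;
# a far-away point is outside.
inside = outside_ball((0.5, 0.0), (0.0, 0.0), (1.0, 0.0))
on_sphere = outside_ball((0.5, 0.5), (0.0, 0.0), (1.0, 0.0))
far = outside_ball((5.0, 0.0), (0.0, 0.0), (1.0, 0.0))
```

The identity follows by expanding both sides: (x − p) · (x − q) = |x|² − x · (p + q) + p · q, which equals |x − c|² minus the squared radius of B[pq].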
3 Computation of a Double Normal
Algorithm 1 below repeatedly computes a furthest neighbour of a point of P until a double normal DN is found. To find a furthest neighbour of p ∈ P, we simply compare the distances between p and all the other points in P (an F P scan). Point p is then removed from P and will not be considered in further computations.
procedure DoubleNormal(p, P) // p is a point of P
 1: ∆²₀ ← 0
 2: i ← 0
 3: repeat // F P scan
 4:   increment i
 5:   ∆²ᵢ ← ∆²ᵢ₋₁
 6:   P ← P \ {p} // remove p from P and from any further computation
 7:   find q ∈ F P (p), i.e. one of the furthest neighbours of p
 8:   if δ²(p, q) > ∆²ᵢ then
 9:     ∆²ᵢ ← δ²(p, q) and DN ← pq
10:   p ← q
11: until ∆²ᵢ = ∆²ᵢ₋₁
12: return DN

Algorithm 1: Computes a double normal.
Lemma 3. Algorithm 1 terminates and returns a double normal.

Proof. ∆ᵢ can only take a finite number of different values and strictly increases: this ensures that the algorithm terminates. After termination (after I iterations) we have q ∈ F P (p) and all the points of P belong to B(p, δ(p, q)). Since ∆_{I−1} = δ(p, q), all the points of P also belong to B(q, δ(p, q)) and therefore p ∈ F P (q). ✷

After termination of Algorithm 1, the original set P has been replaced by a strict subset P, since some points have been removed from P (line 6 of Algorithm 1). By construction, the returned segment pq is a double normal of the reduced set P (Lemma 3), and it is also a double normal of the original set P.

Lemma 4. The only numerical operations involved in Algorithm 1 are comparisons of squared distances.

Lemma 5. Algorithm 1 performs at most h F P scans and takes Θ(nh) time.

Proof. The upper bound is trivial since all the points q that are considered by Algorithm 1 belong to the convex hull of P, and all points q are distinct. As for the lower bound, we give an example in the plane, which is sufficient to prove the bound. Consider a set of 2n + 1 points p0, . . . , p2n placed at the vertices of a regular polygon P (in counterclockwise order). For i > 0, we slightly move pi outside P along the ray Opi by a distance ε^i for some small ε < 1. Let p′i be the perturbed points. It is easy to see that the farthest point from p′i is always p′_{i+n mod (2n+1)}, except for p′_{n+1}. Therefore, the algorithm will perform F P scans starting successively at p′σ0, . . . , p′σ2n+1, where σi = i × n (modulo 2n + 1). ✷

Although tight in the worst case, the bound in Lemma 5 is very pessimistic for many point distributions. This will be corroborated by experimental results.
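Algorithm 1 can be sketched directly in a few lines of Python (names are ours; points are tuples):

```python
def double_normal(p, pts):
    """Sketch of Algorithm 1: repeatedly jump to a furthest neighbour of
    the current point; stop when the squared length no longer increases.
    Returns the double normal and its squared length."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    pts = list(pts)
    best, dn = -1.0, None
    while True:
        pts.remove(p)              # p takes no part in further F P scans
        if not pts:
            return dn, best
        q = max(pts, key=lambda x: d2(p, x))   # F P scan
        if d2(p, q) > best:
            best, dn = d2(p, q), (p, q)
            p = q
        else:
            return dn, best        # squared length did not increase

dn, b = double_normal((1, 1), [(0, 0), (3, 0), (1, 1), (0, 1)])
```

In this small example the walk goes (1, 1) → (3, 0) → (0, 1), and the returned segment (3, 0)-(0, 1) is indeed a double normal: each endpoint is the furthest point from the other.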
4 Iterative Computation of Double Normals
Assume that Algorithm 1 has been run and let Q = P \ B[pq]. If Q = ∅, pq is a maximal segment and δ(p, q) is the diameter of P (Lemma 2). Otherwise, we have to determine whether pq is a maximal segment or not. Towards this goal, we try to find a better (i.e. longer) double normal by running Algorithm 1 again, starting at a point in Q rather than in P, which is sufficient by Lemma 1. Although any point in Q will do, experimental evidence has shown that choosing the furthest point from Σ[pq]¹ is usually better. Algorithm 2 below repeats this process until either Q becomes empty or the current maximal distance ∆ does not increase.
 1: ∆² ← 0
 2: stop ← 0
 3: pick a point m ∈ P
 4: repeat // DN scan
 5:   DoubleNormal(m, P) // yields a double normal pq of length δ(p, q)
 6:   if δ²(p, q) > ∆² then
 7:     ∆² ← δ²(p, q) and DN ← pq
 8:     Q ← P \ B[pq]
 9:     if Q ≠ ∅ then
10:       find m ∈ Q, a furthest point from Σ[pq]
11:   else stop ← 1 // terminates with Q ≠ ∅
12: until Q = ∅ or stop = 1
13: return DN ← pq, ∆² ← δ²(p, q)

Algorithm 2: Iterated search for double normals.
Lemma 6. Algorithm 2 can be implemented so that the only numerical computations are comparisons of dot products of differences of points.

Lemma 7. Algorithm 2 performs O(h) DN scans. Its overall time-complexity is O(nh).

Proof. The first part of the lemma comes from the fact that the algorithm enumerates (possibly all) double normals by strictly increasing lengths. Let us now prove the second part of the lemma. Each time Algorithm 1 performs an F P scan starting at a point p (loop 3-11), p is removed from further consideration (line 6). Moreover, except for the first point p to be considered, all these points belong to the convex hull of P. It follows that the total number of F P scans is at most h + 1. Since each F P scan takes O(n) time, we have proved the lemma. ✷
¹ This point is the furthest point from (p + q)/2 outside B[pq].
5 Diameter Computation
Assume that Algorithm 2 terminates after I iterations. Since, at each iteration, a new double normal is computed, the algorithm has computed I double normals, denoted piqi, i = 1, . . . , I, and we have δ(p1, q1) < . . . < δ(pI−1, qI−1). Each time Algorithm 1 is called, some points are removed from the original data set. We rename the original data set P(0) and denote by P(j) the set of points that remain after the j-th iteration, i.e. the one that computes pjqj. Hence the set P(i) is strictly included in P(i−1). Moreover, each segment piqi is a double normal for all the sets P(j), j = i − 1, . . . , I². It is easily seen that, at each iteration j, the length of the computed double normal pjqj is strictly greater than the distances δ(x, F P (x)) computed so far, or equivalently, than the lengths of all the segments in (P \ P(j)) × P, since Algorithm 1 removed the corresponding x from P. When Algorithm 2 terminates, we are in one of the two following cases:

Case 1: δ(pI, qI) > δ(pI−1, qI−1) and Q = P(I) \ B[pIqI] = ∅. In this case, pIqI is a maximal segment of P: by Lemma 2, it is a maximal segment of P(I), and, as mentioned above, no segment with an endpoint in P \ P(I) can be longer.

Case 2: δ(pI, qI) ≤ δ(pI−1, qI−1). In this case, P(I−1) \ B[pI−1qI−1] was not empty before the computation of pIqI. We have to determine whether pI−1qI−1 is a maximal segment or not. Thanks to Lemma 1, if a longer double normal exists, one of its endpoints lies in P(I) \ B[pI−1qI−1]. If this last set is empty, which is checked by Algorithm 3, pI−1qI−1 is a maximal segment of P.

Required: P(I) and pI−1qI−1 (provided by Algorithm 2)
1: Q ← P(I) \ B[pI−1qI−1]
2: if Q = ∅ then
3:   pI−1qI−1 is a maximal segment of P

Algorithm 3: Checks whether Q = P(I) \ B[pI−1qI−1] = ∅.
If Q = P(I) \ B[pI−1qI−1] ≠ ∅, we have to check whether there exists a maximal segment with an endpoint in this set. To search for such maximal segments, we propose two methods. For clarity, we will write P instead of P(I) in the following.

5.1 Exhaustive Search over Q × P
The first method (Algorithm 4) simply considers all segments in Q × P.
² Strictly speaking, as pi and qi do not belong to P(j), we should say that piqi is a double normal for all the sets P(j) ∪ {pi, qi}, j = i − 1, . . . , I.
Required: ∆² (provided by Algorithm 2) and Q (provided by Algorithm 3)
1: if Q ≠ ∅ then // Exhaustive search with an endpoint in Q
2:   for all points pi ∈ Q do
3:     for all points pj ∈ P do
4:       if δ²(pi, pj) > ∆² then
5:         ∆² ← δ²(pi, pj)
6: return ∆²

Algorithm 4: Exhaustive search over Q × P.
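Algorithms 1, 2 and 4 combine into a short program. The following is a sketch of the overall method, without the reduction of Q of Algorithm 5 below; all names are ours, and the "furthest from Σ[pq]" restart uses the midpoint distance of the footnote:

```python
def diameter_sq(pts):
    """Squared diameter of a point set (tuples of equal dimension):
    iterated double normals (Algorithms 1 and 2), then, if the last double
    normal does not improve, the exhaustive search over Q x P of
    Algorithm 4. A sketch, not the authors' exact bookkeeping."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def outside(P, p, q):          # points of P strictly outside B[pq]
        return [x for x in P
                if sum((xi - pi) * (xi - qi)
                       for xi, pi, qi in zip(x, p, q)) > 0]

    P = list(pts)                  # shrinking working set
    m, best, bestpq = P[0], 0.0, None
    while True:
        p, dn, pq = m, 0.0, None   # Algorithm 1: walk furthest neighbours
        while True:
            P.remove(p)
            if not P:
                break
            q = max(P, key=lambda x: d2(p, x))
            if d2(p, q) > dn:
                dn, pq, p = d2(p, q), (p, q), q
            else:
                break
        if pq is None or dn <= best:
            break                  # Case 2 (or nothing left to scan)
        best, bestpq = dn, pq
        p, q = bestpq
        Q = outside(P, p, q)
        if not Q:
            return best            # Case 1, Lemma 2: pq is maximal
        mid = tuple((pi + qi) / 2 for pi, qi in zip(p, q))
        m = max(Q, key=lambda x: d2(x, mid))   # furthest from Sigma[pq]
    if bestpq is None:
        return 0.0
    Q = outside(P, *bestpq)        # Algorithm 4 over Q x (original set)
    return max([best] + [d2(a, b) for a in Q for b in pts])
```

By Lemma 1, any segment longer than the best double normal has an endpoint outside its ball B[pq]; points discarded during the scans cannot be such endpoints, so the final exhaustive scan over Q × P suffices.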
5.2 Reduction of Q

As might be expected, and as is confirmed by our experiments, the observed total complexity is dominated by the exhaustive search of the previous section. It is therefore important to reduce the size of Q. For that purpose, we propose to reuse all the computed segments piqi, i = 1, . . . , I − 2, and pIqI.

Principle. Assume that we have at our disposal an approximation ∆ of the diameter of the set P and a subset Q ⊂ P that contains at least one endpoint of each maximal segment longer than ∆ (plus possibly other points). To identify such endpoints in Q (i.e. to find the maximal segments longer than ∆), we may, as in Algorithm 4, exhaustively search for a maximal segment over Q × P. The purpose of this section is to show how this search can be reduced. Under the assumption that the diameter of P is larger than ∆, we know from Lemma 1 that any maximal segment will have at least one endpoint outside any ball of radius ∆/2. Consider such a ball B of radius ∆/2. The exhaustive search over Q × P can then be reduced to two exhaustive searches associated to a partition of Q into Q ∩ B and Q \ B. More precisely, if p ∈ Q, searching for a point q such that δ(p, q) > ∆ reduces to searching for q in P \ Q \ B if p belongs to B, and searching for q in P otherwise. This way, instead of searching over Q × P, we search over (Q ∩ B) × (P \ Q \ B) and (Q \ B) × P, therefore avoiding searching for a maximal segment in (Q ∩ B) × (P ∩ B). B should be chosen so as to maximize the number of points in P ∩ B, which reduces the cost of searching over (Q ∩ B) × (P \ Q \ B). The idea is to reuse the already found segments piqi (which are double normals of P) and to iteratively center the balls of radius ∆/2 at the points (pi + qi)/2.

Algorithm. Assume that Algorithm 2 terminates under Case 2, yielding the segment pmaxqmax (i.e. pI−1qI−1) of length ∆ = δ(pmax, qmax), which is considered as an estimation of the diameter. Moreover, we assume that the set Q computed by Algorithm 3 is not empty.
All the double normals pi qi that have been found by Algorithm 2, except pI−1 qI−1 , are collected into a set S.
Required: ∆² = δ²(pmax, qmax) and S, provided by Algorithm 2
Required: Q(0) = Q, provided by Algorithm 3
1: for all segments piqi ∈ S, i = 1 . . . |S| do
2:   B ← B((pi + qi)/2, ∆/2)
3:   d² ← max { δ²(p, q) | (q, p) ∈ (Q(i−1) ∩ B) × (P \ Q(i−1) \ B) }
4:   if d² > ∆² then // A better diameter estimation was found
5:     ∆² ← d²
6:     Add segment pq to set S
7:   Q(i) ← Q(i−1) \ B // new set Q
8:   if Q(i) = ∅ then
9:     return ∆² // diameter has been found

Algorithm 5: Iterative reduction of Q by successive examination of all segments piqi.
If Algorithm 5 terminates with Q(|S|) ≠ ∅, one still must run Algorithm 4 with Q = Q(|S|), i.e. the exhaustive search over Q(|S|) × P.
6 Diameter Approximation
Our algorithm provides a lower bound ∆ = ∆min on the diameter. It also provides an upper bound ∆max = ∆min √3. Indeed, let pq be the double normal whose length is ∆min: all the points of P belong to the intersection of the two balls of radius ∆min centered at p and q. With only slight modifications, our algorithm can also be used to compute a better approximation of the diameter. More precisely, for any given ε, we provide an interval [∆min, ∆max] of length ≤ ε that contains the true diameter. Since the algorithm provides a lower bound ∆, we simply need to ensure that ∆ + ε is an upper bound of the true diameter. We will just indicate where the necessary modifications must take place. First, during the iterative search of double normals (line 9 in Algorithm 2) the ball centered at (p + q)/2 and passing through the furthest point m contains all the points of P. The diameter ∆max of that ball is given by

∆²max = 4 (→mp · →mq) + ∆²
where ∆ = δ(p, q). Therefore, when ∆²max ≤ (∆ + ε)², we have found an ε-approximation of the diameter and we stop. Second, the intermediate step (Algorithm 3) checks whether P(I) contains the endpoint of some potential maximal segment. Here the set Q has to be replaced by P(I) \ B((pI−1 + qI−1)/2, (∆ + ε)/2). A better estimate than ∆ + ε of the upper bound ∆max is then obviously

2 × max_{q ∈ R} δ((pI−1 + qI−1)/2, q)
with R = P(I) ∩ B((pI−1 + qI−1)/2, (∆ + ε)/2) \ B((pI−1 + qI−1)/2, ∆/2). If Q is empty, we stop. The exhaustive search over Q × P described in Algorithm 4 will possibly update both ∆ and ∆max.

Required: ∆² provided by Algorithm 2
Required: Q and ∆²max provided by the modified Algorithm 3
1: if Q ≠ ∅ then // Exhaustive search with an endpoint in Q
2:   for all points pi ∈ Q do
3:     for all points pj ∈ P do
4:       if δ²(pi, pj) > ∆² then
5:         ∆² ← δ²(pi, pj)
6:       if δ²(pi, pj) > ∆²max then
7:         ∆²max ← δ²(pi, pj)
8: return ∆² and ∆²max

Algorithm 6: Modified exhaustive search over Q × P.
Finally, in Algorithm 5 (line 2), we will use ∆max instead of ∆ and update both ∆² and ∆²max when necessary (lines 4-6).
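For reference, the expression for ∆²max given above can be checked in a few lines (our notation, with c = (p + q)/2, →mp = p − m and →mq = q − m):

```latex
\begin{aligned}
\Delta_{\max}^2 &= 4\,\delta^2(m, c), \qquad c = \tfrac{p+q}{2},\\
4\,\delta^2(m, c) &= \bigl|(m-p)+(m-q)\bigr|^2
  = |\overrightarrow{mp}|^2 + 2\,\overrightarrow{mp}\cdot\overrightarrow{mq}
    + |\overrightarrow{mq}|^2,\\
\Delta^2 = |p-q|^2 &= |\overrightarrow{mp}|^2
    - 2\,\overrightarrow{mp}\cdot\overrightarrow{mq}
    + |\overrightarrow{mq}|^2,\\
\Delta_{\max}^2 - \Delta^2 &= 4\,\overrightarrow{mp}\cdot\overrightarrow{mq}.
\end{aligned}
```

Subtracting the third line from the second gives the formula used in the modified line 9 of Algorithm 2.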
7 Experiments
We conduct experiments with different point distributions in R^d: volume based distributions (in a cube, in a ball, and in sets of constant width, only in 2D); surface based distributions (on a sphere and on ellipsoids); and real inputs³. The interested reader will find detailed results and discussion of our own method in [MB01].
8 Comparison with Har-Peled's Method
The most comparable approach to ours is the one developed very recently by S. Har-Peled [Har01]. Although it is similar in spirit, Har-Peled’s algorithm is quite different from ours. We first summarize his method and then compare experimentally the two methods. Since the two methods have different advantages and drawbacks, it is worth combining them, leading to good hybrid algorithms with more stable performances. 3
³ Large Geometric Models Archive, http://www.cs.gatech.edu/projects/large models/, Georgia Institute of Technology.
Table 1. CPU times for 3D volume based synthetic distributions.
Running time in seconds, 3D volume based distributions

Inputs                    Cube    Cube     Cube       Ball    Ball     Ball
Points                    10,000  100,000  1,000,000  10,000  100,000  200,000
our method                0.01    0.19     0.53       0.04    0.79     1.20
HPM - original            0.01    0.18     1.96       0.31    18.16    53.88
HPM - our implementation  0.02    0.18     1.92       0.20    5.12     20.57
hybrid method #1          0.01    0.18     2.00       0.13    2.25     5.26
hybrid method #2          0.02    0.35     1.50       0.07    1.05     3.29
In his approach, Har-Peled recursively computes pairs of boxes (each enclosing a subset of the points). He throws away pairs that cannot contain a maximal segment. To avoid maintaining too many pairs of boxes, Har-Peled does not decompose a pair of boxes if both contain fewer than nmin points (initially set to 40 in Har-Peled’s implementation). Instead, he computes the diameter between the two corresponding subsets using the brute-force method. Moreover, if the number of pairs of boxes becomes too large during the computation (which may be due to a large number of points or to the high dimension of the embedding space), nmin can be doubled; however, doubling nmin increases the computing time. Unlike our method, Har-Peled’s algorithm depends on the coordinate axes (see Table 2).

Table 2. CPU times for 3D surface based synthetic distributions. The point sets in the second and the third columns are identical up to a 3D rotation.
Running time in seconds, 3D surface-based distributions:

Inputs                     Ellipsoid  Ellipsoid  Ellipsoid  Sphere   Sphere    Sphere
                           (regular)             (rotated)
Points                     1,000,000  1,000,000  1,000,000  10,000   100,000   200,000
our method                 1.34       2.02       1.61       1.08     358.21    not computed
HPM - original             1.78       3.84       37.70      2.13     95.49     328.90
HPM - our implementation   1.81       3.51       23.88      0.63     39.97     166.26
hybrid method #1           1.82       3.38       6.38       0.33     6.99      16.75
hybrid method #2           2.30       3.10       1.79       0.44     8.58      19.75
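In the spirit of Har-Peled's box-pair pruning described above, a pair can be thrown away as soon as no segment joining the two boxes can exceed the current squared diameter estimate. The following sketch is our own illustration, not Har-Peled's code:

```python
def max_dist2(box1, box2):
    """Largest squared distance between a point of box1 and a point of
    box2; boxes are (lo, hi) pairs of coordinate lists."""
    (lo1, hi1), (lo2, hi2) = box1, box2
    return sum(max(abs(h1 - l2), abs(h2 - l1)) ** 2
               for l1, h1, l2, h2 in zip(lo1, hi1, lo2, hi2))

def can_discard(box1, box2, d2):
    # No maximal segment can have both endpoints in this pair of boxes.
    return max_dist2(box1, box2) <= d2
```

Note that `max_dist2` is computed per coordinate axis, which is why the pruning bound depends on the orientation of the boxes relative to the axes.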
We provide an experimental comparison of both approaches, using Har-Peled’s original implementation⁴, which only works for 3D inputs. In order to
⁴ Available at http://www.uiuc.edu/˜sariel/papers/00/diameters/diam prog.html.
be able to deal with inputs in higher dimensions, we have re-implemented his algorithm, following the same choices that were made in the original implementation.

8.1 Hybrid Methods
It should first be noticed that both methods can easily be modified to compute the diameter between two sets, i.e. the segment of maximal length with one endpoint in the first set and the other in the second set. Both methods have quadratic parts: ours in the final computation over Q × P, and Har-Peled’s when computing the diameter for a pair of small boxes. We have implemented two hybrid methods that combine Har-Peled’s method and ours. We first modified Har-Peled’s algorithm by replacing each call to the brute-force algorithm by a call to our algorithm. We also tested another hybrid method where we modified our algorithm by replacing the final call to the brute-force algorithm by a call to the first hybrid method. The two hybrid methods can be tuned by setting several parameters. The experimental results presented here have been obtained with the same values of the parameters. The results show that the hybrid methods are never much worse than the best method. Moreover, their performance is more stable and less sensitive to the point distribution.

Table 3. CPU times on real inputs.

Running time in seconds:

Inputs                                Bunny    Hand      Dragon    Buddha    Blade
Points                                35,947   327,323   437,645   543,652   882,954
our method                            5.73     0.29      8.51      172.91    0.49
Har-Peled’s method (HPM) - original   0.08     0.45      0.90      0.72      1.00
HPM - our implementation              0.07     0.43      0.89      0.69      0.94
hybrid method #1                      0.07     0.41      0.86      0.67      0.90
hybrid method #2                      0.10     0.32      1.37      1.09      0.50
9 Discussion
Our method is based on the computation of double normals. Computing a double normal appears to be extremely fast in all practical circumstances and in any dimension (despite the quadratic lower bound of Lemma 5). Moreover, the reported double normal is very often the true maximal segment. This is not too surprising since, on a generic surface, the number of double normals is finite and small. In any case, having a double normal provides a √3-approximation of the diameter in any dimension.
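A double normal is a segment pq such that p is a farthest point from q and vice versa. One standard way to find one, sketched below with our own naming (a sketch, not the authors' code), is to iterate farthest-point queries until the pair stabilizes:

```python
def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def double_normal(points, p0):
    """Iterate farthest-point queries from a starting point p0 until the
    current pair (p, q) is mutually farthest, i.e. a double normal.
    The length of the pair never decreases, so the loop terminates."""
    p = p0
    q = max(points, key=lambda r: dist2(p, r))
    while True:
        r = max(points, key=lambda s: dist2(q, s))
        if dist2(q, r) <= dist2(p, q):   # no point is farther from q than p
            return p, q
        p, q = q, r
```

Each farthest-point query is linear, which is why a double normal is found quickly in practice even though the worst case is quadratic.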
However, even if the reported double normal is a maximal segment s, it may be costly to verify that this is indeed the case. A favourable situation is when the point set is contained in the ball B of diameter s. The bad situation occurs when there are many points in the set P \ B, since we must verify that none of these points is the endpoint of a maximal segment. This case occurs with sets of constant width but also with some real models, e.g. the bunny, dragon, and buddha (see the tables in [MB01]). For these three cases, the first double normal found by the algorithm was the maximal segment. The second double normal found was shorter. After Algorithm 3, Q contains respectively 1086, 2117, and 2659 points for the bunny, dragon, and buddha models. For both the bunny and the buddha, the second double normal was very close to the first one, so very few points were removed from Q (respectively 7 and 36), and most of the points of Q undergo the final quadratic search. This explains why there is so little difference between our method with and without the reduction of Q for these two models [MB01]. For the dragon model, the second double normal is quite different from the first one, hence the noticeable improvement of our method with the reduction of Q.

Table 4. CPU times for synthetic distributions in higher dimensions.
Running time in seconds; volume distributions (Cube, Ball) and surface distributions (Regular Ellipsoid, Ellipsoid, Sphere), 100,000 points each:

Dimension = 6              Cube     Ball     Reg. Ellipsoid  Ellipsoid  Sphere
our method                 0.31     36.95    0.11            0.33       not computed
HPM - our implementation   0.85     466.44   0.97            0.87       465.08
hybrid method #1           0.67     77.20    0.79            0.73       118.06
hybrid method #2           0.66     63.31    0.19            0.65       142.38

Dimension = 9
our method                 0.89     128.02   0.51            0.52       not computed
HPM - our implementation   139.23   568.99   264.96          590.14     569.08
hybrid method #1           17.42    135.90   44.54           67.27      232.39
hybrid method #2           1.21     121.91   1.25            16.03      302.86

Dimension = 12
our method                 3.87     445.03   1.08            7.88       not computed
HPM - our implementation   629.37   651.56   648.88          650.98     647.74
hybrid method #1           44.45    354.14   58.53           56.11      511.41
hybrid method #2           19.72    380.41   13.00           24.62      745.60

Dimension = 15
our method                 10.99    798.66   7.26            20.31      not computed
HPM - our implementation   734.69   735.26   731.76          733.70     737.51
hybrid method #1           64.49    610.70   69.11           90.35      701.18
hybrid method #2           44.37    782.20   21.30           70.41      1120.57
Har-Peled’s method does not suffer from this drawback. However, it depends on the coordinate axes (since the boxes are aligned with the axes) and on the dimension d of the embedding space. The first hybrid method compensates for the quadratic search between small boxes (boxes containing fewer than nmin points), i.e. one major drawback of the original Har-Peled method.
The second hybrid method compensates for the major drawback of our method, by building pairs of boxes from Q × P.
References

[AGR94] N. M. Amato, M. T. Goodrich, and E. A. Ramos. Parallel algorithms for higher-dimensional convex hulls. In Proc. 35th Annu. IEEE Sympos. Found. Comput. Sci., pages 683–694, 1994.
[Bes98] S. Bespamyatnikh. An efficient algorithm for the three-dimensional diameter problem. In Proc. 9th Annu. ACM-SIAM Symp. Discrete Algorithms, pages 137–146, 1998.
[CS89] K. L. Clarkson and P. W. Shor. Applications of random sampling in computational geometry. Discrete Comput. Geom., 4:387–421, 1989.
[Har01] S. Har-Peled. A practical approach for computing the diameter of a point-set. In Symposium on Computational Geometry (SOCG’2001), pages 177–186, 2001.
[MB01] Grégoire Malandain and Jean-Daniel Boissonnat. Computing the diameter of a point set. Research report RR-4233, INRIA, Sophia-Antipolis, July 2001. http://www.inria.fr/rrrt/rr-4233.html.
[PS90] F. P. Preparata and M. I. Shamos. Computational Geometry: An Introduction. Springer-Verlag, October 1990. 3rd edition.
[Ram97a] E. Ramos. Construction of 1-d lower envelopes and applications. In Proc. 13th Annu. ACM Sympos. Comput. Geom., pages 57–66, 1997.
[Ram97b] E. Ramos. Intersection of unit-balls and diameter of a point set in R³. Comput. Geom. Theory Appl., 8:57–65, 1997.
[Ram00] Edgar A. Ramos. Deterministic algorithms for 3-D diameter and some 2-D lower envelopes. In Proc. 16th Annu. ACM Sympos. Comput. Geom., pages 290–299, 2000.
Shape Representation Using Trihedral Mesh Projections

Lluís Ros¹, Kokichi Sugihara², and Federico Thomas¹

¹ Institut de Robòtica i Informàtica Industrial, CSIC-UPC. Llorens Artigas 4-6, 08028 Barcelona, Spain,
[email protected],
[email protected], 2 Dept. of Mathematical Engineering and Information Physics, University of Tokyo. 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
[email protected].
Abstract. This paper explores the possibility of approximating a surface by a trihedral polygonal mesh plus some triangles at strategic places. The presented approximation has several attractive properties. It turns out that the Z-coordinates of the vertices are completely governed by the Z-coordinates assigned to four selected ones. This allows describing the spatial polygonal mesh with just its 2D projection plus the heights of four vertices. As a consequence, these projections essentially capture the “spatial meaning” of the given surface, in the sense that, whatever spatial interpretations are drawn from them, they all exhibit the same shape, up to some trivial ambiguities.
1 Introduction
A polygonal mesh is a piecewise linear 2-manifold made up of planar polygonal patches, glued along their edges, and possibly containing holes. A polygonization method is an algorithm able to construct a polygonal mesh approximating a given surface. The literature on polygonization methods, mainly on triangulations, is vast (see [3] for a recent survey on triangulations and algorithms to simplify them). In general, the main goal is to obtain meshes that are close to the surface within a known error, as a way to understand and represent the surface shape [7]. Other goals have been to increase the speed of polygonization and the ability of the polygonizer to satisfy some constraints in the solution (e.g., one might request the most accurate approximation using a given number of line segments or triangles). In general, a polygonal mesh cannot be reconstructed from its projection onto a plane because infinitely many meshes generate exactly the same projection. For example, for the triangular mesh projection in figure 1, there are many different reconstructions, as illustrated. The first two seem to have no meaning; but, actually, there is a rather “hidden” meaningful reconstruction: Nefertiti’s face! Can we obtain a spatial mesh approximating Nefertiti’s face in such a way that its projection still keeps its spatial meaning?

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 209–219, 2002.
© Springer-Verlag Berlin Heidelberg 2002
210
L. Ros, K. Sugihara, and F. Thomas
Fig. 1. Arbitrary reconstructions of this triangulated projection have no spatial meaning. But actually, a very specific one of them really does: it shows Nefertiti's face.
There is a class of meshes whose projections fully determine the spatial shape once the heights of four vertices are given. We call these projections unequivocal because their reconstructions represent essentially the same object, up to some trivial ambiguities. For example, the projection in figure 2a unequivocally represents a truncated tetrahedron, as seen in figures 2d, e, and f. Observe that it suffices to set the heights of P, Q, T and R to determine those of S and U, using the fact that all cofacial vertices must be coplanar and, hence, S must lie on the face-plane RPQS, and U on SQTU. One of our goals is then to approximate any given surface with a polygonal mesh yielding unequivocal projections that uniquely identify the spatial shape up to the trivial ambiguities produced by changing the heights of only four vertices. Section 2 presents the trihedral polygonal mesh, the model we use to this end, and shows how its projections are unequivocal in the sense given above. Nevertheless, we need to go beyond this goal if this representation is to be useful. Consider what happens if the (x,y) vertex positions in figure 2a are slightly altered (figure 2b). The new projection no longer represents a correct truncated tetrahedron since, to be so, the edges joining the two triangular faces, when extended, should be concurrent at the apex of the (imaginary) original tetrahedron. Equivalently, note that once P, Q, R and T are given, the height of U is overconstrained, for it can be calculated from both the coplanarity of SQTU and that of RPTU. For generic vertex positions, the two values of this height do not necessarily coincide, and the only spatial reconstruction that keeps cofacial vertices coplanar is a trivial one, with all vertices lying on a single plane [5,6]. This makes the four provided heights mutually inconsistent.
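The height derivation used above reduces to elementary linear algebra: three non-collinear vertices with known heights fix a plane z = ax + by + c, which then yields the height of every other vertex of the face. A sketch with our own helper names:

```python
def plane_through(p1, p2, p3):
    """Coefficients (a, b, c) of the plane z = a*x + b*y + c passing
    through three non-collinear points (x, y, z)."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    d = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)   # 2 * projected area
    a = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / d
    b = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / d
    c = z1 - a * x1 - b * y1
    return a, b, c

def height_on(plane, x, y):
    a, b, c = plane
    return a * x + b * y + c
```

For instance, setting the heights of three vertices of a face-plane determines the height of its remaining vertices, exactly as S is fixed by R, P and Q above.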
In sum, the consistency of the four heights only holds at very specific positions of the vertices, and inevitable discretization errors would make this representation useless. This problem is common in Computer Vision [8] and Computer Graphics [12,10], and mathematical characterizations of generically consistent projections are given in [11,9]. The way we make this representation robust against these
Fig. 2. A truncated tetrahedron (a) and three possible reconstructions (d, e, f). The slightest perturbation destroys the correctness of the projection (b), but this can be avoided by adding new triangular faces (c).
errors follows from this observation: if the height of a vertex in a projection is overconstrained because the vertex lies on several planes that fix it, we just introduce new triangular faces around it to prevent this from occurring (figure 2c). Section 3 gives a fast algorithm derived from this observation, using the so-called T/TT-transformations. Section 4 describes a complementary optimization step that properly places these transformations to minimize the reconstruction errors, by reducing the problem to a cyclic AND/OR graph search. We finally conclude in Section 5.
2 Trihedral Polygonal Meshes
Trihedral meshes, i.e. those where all vertices have exactly three incident faces, produce unequivocal projections. Indeed, figure 3 shows that in them, after fixing the planes of two adjacent faces, we have enough data to derive the heights of the remaining vertices. Clearly, the heights of the bold vertices fix the shadowed face-planes and the heights of the other vertices on them. At this point, any other surrounding face has three vertices whose height is known and, so, its plane can be fixed too. The same argument can be iteratively applied, and the result is a height propagation reaching all vertices in the projection. In the schematic representation of this height propagation (figure 3), every face f receives three incoming arrows from the three vertices that fix it. The derivation of heights for the rest of the vertices on f is indicated with outgoing arrows from f. The result is a tree-shaped structure spanning all vertices and faces. In this tree, a path from any of the initial four vertices to any other vertex will hereafter be referred to as a propagation wave. Note that height propagations
where a face is fixed from three (almost) collinear vertices must be avoided. Section 4 gives a way to compute propagations eluding these collinearities. A trihedral mesh approximating a convex or concave surface can be readily obtained by distributing a set of random points all over the surface and computing its tangent planes at these points. This leads to a plane arrangement whose upper envelope (if the surface is convex) or lower envelope (if it is concave) provides a good mesh approximation of the surface. Since the tangent plane orientations are random, any three such planes meet in a single point, and hence the mesh is trihedral. Alternatively, a trihedral mesh approximation of a piece of concave or convex surface can be obtained by starting with a rough mesh approximation and iteratively applying a bevel-cutting [2] and/or a corner-cutting [1] operation to attain the desired approximation. Obviously, the situation becomes much more complex when concavities and convexities are simultaneously present. The first step in these cases would be to decompose the surface into patches having congruent signs for the maximum and minimum curvatures at all their points. If this is done for a general C∞ surface, we would get patches labeled (+, +), (+, −), (−, +) or (−, −), separated by curves which could be labeled with (−, 0), (+, 0), (0, −), or (0, +), and isolated points (actually, maxima or minima) which would be labeled with (0, 0). Saddle points would also be labeled with (0, 0), but they would appear as intersections of separating curves. If we extend this treatment to C² surfaces, we could get entire patches with one of the above nine possible labels. For example, all plane patches would have the label (0, 0).

Fig. 3. A height propagation starting at four pre-specified (bold) vertices. Several vertices can have an overconstrained height.
Patches labeled with (+, +) or (−, −) represent fully convex or concave patches, and thus they can be polygonized as described above. Patches labeled (∗, 0) or (0, ∗) can be polygonized by locating random points along the direction of maximum curvature. Patches (0, 0) only require a single point on them. Unfortunately, the treatment of (+, −) or (−, +) patches remains an open problem for us. The connection between polygonized patches can be obtained by computing tangent planes on points along their common boundaries. In sum, the polygonization we propose can be done first for each patch, by generating the tangent planes in a sufficiently high density, and then by connecting the patches using the tangent planes generated along their common boundaries.
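The height propagation of Section 2 is easy to sketch: starting from four seed heights that fix two adjacent face-planes, repeatedly pick a face with at least three known vertex heights, fit its plane, and derive the remaining heights. The following sketch uses our own data layout; degeneracy and overconstraint checks are omitted:

```python
def fit_plane(pts):
    # z = a*x + b*y + c through three non-collinear (x, y, z) points
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = pts
    d = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    a = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / d
    b = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / d
    return a, b, z1 - a * x1 - b * y1

def propagate(xy, faces, seeds):
    """xy: vertex -> (x, y) projection; faces: vertex tuples; seeds:
    vertex -> height.  Returns the heights of all reachable vertices."""
    z = dict(seeds)
    changed = True
    while changed:
        changed = False
        for face in faces:
            known = [v for v in face if v in z]
            unknown = [v for v in face if v not in z]
            if len(known) >= 3 and unknown:
                a, b, c = fit_plane([xy[v] + (z[v],) for v in known[:3]])
                for v in unknown:               # cofacial => coplanar
                    z[v] = a * xy[v][0] + b * xy[v][1] + c
                changed = True
    return z
```

For two quadrilaterals sharing an edge, seeding three vertices of the first face and one off-edge vertex of the second determines all remaining heights.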
Fig. 4. (a and b) T- and TT-transformations. (c) Overhanging and self-intersecting reconstructions induced by T-transformations at locally non-convex faces.
3 T- and TT-Transformations
In a trihedral mesh, a projection is overconstrained because each of its vertices lies on three faces and, potentially, up to three propagation waves can determine a height at the same time. However, as done in figure 2c, this can be avoided by adding triangular faces. To this end, we first compute an arbitrary height propagation spanning all vertices, and check which of them receive more than one wave. We then take one overconstrained vertex v at a time and prevent all but one wave from reaching v as follows. To stop the wave reaching v from face f, we apply either of these two transformations (figures 4a and b):
– A T-transformation, which places a new edge joining the two neighboring vertices of v in f, say v_l and v_r.
– A TT-transformation, which places a new vertex v′ on f near v and the three new edges (v′, v), (v′, v_l) and (v′, v_r).
After either transformation, f cannot constrain the height of v anymore. Also, the added triangles are innocuous because all heights can still be determined from the four initial ones. Which transformation is preferred depends on the geometry of face f around vertex v. If all points inside the triangle v_l v_r v belong to f, we say that f is locally convex at v. So, for situations where f is locally convex at v, simplicity prevails and T-transformations are enough (figure 4a). When local non-convexities are present (figure 4b), T-transformations yield occluded or partially occluded crossing edges whose spatial reconstructions have overhanging parts or self-intersecting faces (figure 4c). Here, TT-transformations are preferred, for they can avoid this. An observation complements the strategy. At an overconstrained vertex v, either two or three incoming propagation waves arrive. If no more than one of
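The local convexity test that selects between the two transformations can be approximated with a signed-area check. This is a sketch with our own helpers; for faces with nearby reflex vertices intruding into the triangle, a full triangle-inclusion test would be needed:

```python
def signed_area2(poly):
    # Twice the signed area (positive for counter-clockwise polygons)
    return sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
             - poly[(i + 1) % len(poly)][0] * poly[i][1]
               for i in range(len(poly)))

def locally_convex(poly, i):
    """True when vertex i of the simple polygon poly is convex, so that
    the triangle (v_l, v, v_r) lies on the interior side and a
    T-transformation suffices; otherwise a TT-transformation is used."""
    vl, v, vr = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
    turn = (v[0] - vl[0]) * (vr[1] - v[1]) - (v[1] - vl[1]) * (vr[0] - v[0])
    return turn * signed_area2(poly) > 0
```

Multiplying the local turn by the polygon's global orientation makes the test work for both clockwise and counter-clockwise face traversals.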
Fig. 5. A projected dodecahedron (a) together with a height propagation (b) and the T-transformations it yields (c). A protruded tetrahedron (d) and two possible corrections: (e), involving TT-transformations, and (f), involving only T-transformations.
them comes through a locally non-convex face, then we can always drop the incidence constraint at this vertex with T-transformations alone: we just let the possible “bad” wave determine the height of v and stop the others with T-transformations. This completes the description of a one-sweep algorithm removing overdetermination. As an example, figures 5a–c show a projected dodecahedron before and after applying T-transformations. In general, when the approximated surface is uniformly convex or uniformly concave, all faces of the resulting trihedral polygonal mesh will be locally convex, and hence T-transformations will suffice. However, even when local non-convexities exist at the faces, there still might be some height propagations where T-transformations alone suffice. In figure 5e, for example, an algorithm computing an arbitrary propagation can be forced to use TT-transformations, whereas with a proper search a robust projection is obtained with T-transformations only (figure 5f). But one certainly finds correct projections where no propagation strictly using T-transformations can be found [5, Section 8.4].
4 Optimal Propagations and Cyclic AND/OR Graphs
The algorithm in the preceding section corrects the incidence structure by finding an arbitrary height propagation and inserting a T- or a TT-transformation whenever a vertex height is determined by two or more faces. However, arbitrary propagations might travel along “degenerate paths” where the planes of some of the faces are determined by three aligned (or almost aligned) vertices. Clearly,
these degenerate propagations must be avoided if we want to minimize the errors during the reconstruction of the spatial shape from the initial set of four heights. This section provides an algorithm to find height propagations that avoid these degeneracies, by formulating the problem as that of finding the least-cost solution of a cyclic AND/OR graph [4]. We now recall some preliminary concepts about this kind of graph. An AND/OR directed graph G can be regarded as a hierarchic representation of possible solution strategies for a major problem, represented as a root node r in G. Any other node v represents a subproblem of lower complexity whose solution contributes to solving the problem at hand. There are three types of nodes: AND nodes, OR nodes, and TERMINAL nodes. Every node v has a set S(v) of successor nodes, possibly empty, to which it is connected in either of two ways:
– An AND node v is linked to all nodes s_i ∈ S(v) through directed AND arcs (v, s_i), meaning that the subproblem for v can be trivially solved once all subproblems for the nodes in S(v) have been solved.
– An OR node v is linked to all nodes s_i ∈ S(v) through directed OR arcs (v, s_i), meaning that the subproblem for v can be trivially solved once any one of the subproblems for the nodes in S(v) has been solved.
– A TERMINAL node represents an already-solved or trivial subproblem and has no successors.
With this setting, a feasible solution to the problem becomes represented as a directed subgraph T of G verifying:
– r belongs to T.
– If v is an OR node and belongs to T, then exactly one of its successors in S(v) belongs to T.
– If v is an AND node and belongs to T, then every successor in S(v) belongs to T.
– Every leaf node in T is a TERMINAL node.
– T contains no cycle, i.e., it is a tree.
One can also assign a cost c(u, v) > 0 to every arc (u, v) in G and ask for the solution T with minimum overall cost C(T) = Σ_{(u,v)∈E(T)} c(u, v), where E(T) is the set of arcs of T.
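For intuition, the least-cost solution of an acyclic AND/OR graph can be computed by a simple memoized recursion (a sketch with our own encoding; the cyclic case requires the algorithm of [4]):

```python
def min_cost(graph, node, memo=None):
    """graph maps a node to ('T',), ('OR', arcs) or ('AND', arcs), where
    arcs is a list of (cost, child) pairs.  An OR node pays only its
    cheapest arc; an AND node pays for all of its arcs.  Acyclic only."""
    if memo is None:
        memo = {}
    if node not in memo:
        kind = graph[node][0]
        if kind == 'T':                       # terminal: already solved
            memo[node] = 0.0
        elif kind == 'OR':
            memo[node] = min(c + min_cost(graph, ch, memo)
                             for c, ch in graph[node][1])
        else:                                 # 'AND'
            memo[node] = sum(c + min_cost(graph, ch, memo)
                             for c, ch in graph[node][1])
    return memo[node]
```

The memoization shares subproblem costs across branches, which is what makes the acyclic case easy; cycles break the recursion and are the real difficulty.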
Note that, as defined, G can contain cycles. This turns out to be the main difficulty for this optimization problem, which, in the past, was usually tackled by a rather inefficient trick: “unfolding” the cycles and applying standard AND/OR search methods for acyclic graphs. However, explicit treatment of cycles has recently been considered, and an efficient algorithm is achieved in [4]. The search for an optimal height propagation is next reduced to this model. This amounts to (1) constructing an AND/OR graph Ghp whose feasible solutions define a height propagation, and (2) defining a cost function that promotes non-degenerate propagations over degenerate ones.
Fig. 6. AND/OR subgraphs for the propagation rules. AND nodes are indicated by joining all their emanating arcs. (a) Constructed subgraph translating rule R2 for a quadrilateral face. Dummy-face nodes are shadowed in grey. Note that, actually, there is only one vertex node for each vertex in the trihedral mesh, but for clarity they are here duplicated. (b) Propagation waves reaching a vertex. (c) Subgraph for rule R3, with an arc for each of the possibilities in (b).
4.1 Feasible Height Propagations
A height propagation can be defined by the following rules, with the given straightforward translation into AND/OR subgraphs.
R1: Four selected vertices of the projection trigger the propagation. For this, we put a TERMINAL node for each of the triggering vertices.
R2: Every face in the polygonization can be determined once the heights of any three of its vertices are determined. If deg(f) denotes the number of vertices of face f, then there are c_f = C(deg(f), 3) possible combinations of three vertices determining f. If we put a node in Ghp for every vertex, except for the four triggering ones, then this rule is translated by adding an OR node for every face, linked to c_f new “dummy-face” AND nodes, each representing one of the above combinations. Each dummy-face node is in turn linked with arcs to the three vertices involved in the combination. Figure 6 gives a schematic representation. The newly introduced vertex nodes have not been assigned a type yet. This type is induced by the following rule.
R3: Except for the initial four vertices, the height of every other vertex is determined once one of its incident faces has a determined plane. This implements the fact that the propagation wave fixing the height of a vertex can come from any of its three incident faces (figure 6b). This rule can be represented by setting each vertex node as OR type, and linking it to the face nodes of its incident faces (figure 6c).
R4: The height propagation must reach all vertices. For this, we add a root AND node r to Ghp and link it to all vertex nodes.
Note that a feasible solution tree of Ghp provides instructions to derive a height propagation that reaches all vertices, starting at the four pre-specified heights.
4.2 Cost Function
In order to penalize propagations using sets of almost-aligned vertices, we proceed as follows. Consider a height propagation that fixes a face-plane f from the point coordinates of three previously fixed vertices v_i, v_j and v_k. We can simply penalize the corresponding arcs in Ghp emanating from f by giving them a cost that is inversely proportional to the area of the triangle defined by v_i, v_j and v_k in the projection. The rest of the arc costs are actually irrelevant, but they need to be positive [4]. In sum, for every directed arc (u, v) we define its cost as follows:
1. c(u, v) = 1/det(v_i, v_j, v_k), if u is a dummy-face AND node and v is any one of its descendants. Here, v_i, v_j and v_k are the homogeneous coordinates of the vertices associated with the three descendants of u.
2. c(u, v) = 1, if u is an OR node.
3. c(u, v) = 1, if u is the root AND node.
Once the least-cost solution T is found, the projection can be made robust to slight vertex perturbations as follows. At a vertex v receiving more than one propagation wave, we put a T/TT-transformation on all faces fixing v, except on the one in the propagation wave represented in T.
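The first cost rule can be sketched directly (our own helper name; it returns an infinite cost for exactly collinear vertices, the limit the almost-aligned case approaches):

```python
def dummy_face_arc_cost(vi, vj, vk):
    """Cost 1/|det| for the arcs leaving a dummy-face node.  The
    determinant of the homogeneous coordinates (x, y, 1) of the three
    fixing vertices is twice the signed area of their projected
    triangle, so almost-aligned triples are heavily penalized."""
    (xi, yi), (xj, yj), (xk, yk) = vi, vj, vk
    det = xi * (yj - yk) - yi * (xj - xk) + (xj * yk - xk * yj)
    return float('inf') if det == 0 else 1.0 / abs(det)
```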
4.3 Complexity Analysis
The worst-case complexity of computing the optimal solution of a cyclic AND/OR graph with n nodes is O(n³) [4]. We now prove that the number of nodes in Ghp grows linearly with the number of vertices of the trihedral polygonal mesh. Let e, v and f be the number of edges, vertices and faces of the given mesh. Then 2e = 3v, because the mesh is trihedral. Moreover, if the mesh has h holes, with “the outside” of the mesh counting as a hole too, then Euler’s relation says that v − e + f = 2 − h. From these two equalities the number of faces of the mesh can be written in terms of the number of vertices and holes: f = (v + 4)/2 − h. Let us now count the number of nodes added by each of the rules R1, ..., R4:
– Rule R1 adds four vertex nodes.
– Rule R2 adds one OR node for each face, amounting to f = (v + 4)/2 − h = O(v) total nodes, assuming a constant number of holes. Also, for every face f this rule adds c_f = C(deg(f), 3) dummy-face AND nodes. Although this number is clearly O(deg(f)³) in the worst case, dividing the sum of face degrees by the number of faces shows that the average face degree tends to six as the number of randomly placed vertices in the mesh increases:

Σ_{all faces} deg(f_i) / f = 3v / ((v + 4)/2 − h) = 6v / (v + 4 − 2h),

which keeps the number of dummy-face AND nodes linearly growing:

C(6, 3) · f = 20 · ((v + 4)/2 − h) = O(v).
– Rule R3 adds a linear number of OR vertex nodes.
– Rule R4 adds only one AND node, the root.
Up to now we have assumed that the four vertices triggering the propagation are selected a priori. But height propagations starting at other sets of four vertices could yield better results. To test all possibilities, we do not need to repeat the AND/OR search for every different combination of four vertices. Indeed, note that these vertices just fix the planes of the faces they belong to. So, any other set of four vertices on these faces will yield the same optimal propagations, provided that two of them lie on the common edge. We can equivalently think of pairs of faces triggering the propagation and use their face nodes as TERMINAL nodes in Ghp. The choice of TERMINAL vertices (instead of TERMINAL faces) was made to be coherent with the previous explanations. In sum, if one wants to search over all possible starting places of the propagation, then the AND/OR search needs to be repeated for each pair of adjacent faces. This amounts to solving e = 3v/2 optimization problems in the worst case, meaning that the overall complexity is O(v⁴), under the assumption that the face degree is six.
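The counting identities above are easy to sanity-check numerically; for instance, the cube is a closed trihedral mesh (h = 0) with v = 8:

```python
def trihedral_counts(v, h):
    """Edge and face counts implied by 2e = 3v (trihedral vertices) and
    Euler's relation v - e + f = 2 - h for a mesh with h holes."""
    e = 3 * v / 2
    f = (v + 4) / 2 - h
    assert v - e + f == 2 - h    # Euler's relation holds by construction
    return e, f
```

The cube indeed has 12 edges and 6 faces.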
5 Conclusion
We have shown how trihedral mesh projections can capture the spatial shape of a given object’s surface, up to some trivial ambiguities. We have also presented a local strategy that takes a trihedral projection as input and places triangular faces at strategic places until the projection is made robust to perturbations in its vertex coordinates. Finally, we have shown how to place these triangles so that the spatial reconstruction is performed as accurately as possible, avoiding height propagations along degenerate paths. Although we can deal with an important range of surfaces, no algorithm has been devised yet to obtain trihedral meshes approximating surfaces with saddle-crests or saddle-valleys. This constitutes a main issue for further research.

Acknowledgements. The authors wish to thank Carme Torras and Pablo Jiménez for fruitful discussions around the AND/OR search strategy. This work has been partially supported by the Spanish Ministry of Education under grant FPI94-46634232, by the Spanish CICYT under contracts TIC960721-C02-01 and TIC2000-0696, and by the Grant-in-Aid for Scientific Research of the Japanese Ministry of Education, Science, Sports and Culture.
References
1. C. de Boor. Cutting corners always works. Computer Aided Geometric Design, 4:125–131, 1987.
2. D. Fox and K. I. Joy. On polyhedral approximations to a sphere. In IEEE Int. Conf. on Visualization, pages 426–432, 1998.
Shape Representation Using Trihedral Mesh Projections
219
3. P. Heckbert and M. Garland. Survey of polygonal surface simplification algorithms. In Multiresolution Surface Modeling Course, SIGGRAPH'97, May 1997. Available at http://www.cs.cmu.edu/~ph.
4. P. Jiménez and C. Torras. An efficient algorithm for searching implicit AND/OR graphs with cycles. Artificial Intelligence, 124:1–30, 2000.
5. L. Ros. A Kinematic-Geometric Approach to Spatial Interpretation of Line Drawings. PhD thesis, Polytechnic University of Catalonia, May 2000. Available at http://www-iri.upc.es/people/ros.
6. L. Ros and F. Thomas. Overcoming superstrictness in line drawing interpretation. IEEE Trans. on Pattern Analysis and Machine Intelligence. Accepted for publication, tentatively scheduled for Vol. 24, No. 4, April 2002.
7. W. Seibold and G. Wyvill. Towards an understanding of surfaces through polygonization. In IEEE Int. Conf. on Visualization, pages 416–425, 1998.
8. K. Sugihara. An algebraic approach to shape-from-image problems. Artificial Intelligence, 23:59–95, 1984.
9. K. Sugihara. Machine Interpretation of Line Drawings. The MIT Press, 1986.
10. K. Sugihara. Resolvable representation of polyhedra. Discrete and Computational Geometry, 21(2):243–255, 1999.
11. W. Whiteley. Some matroids on hypergraphs with applications to scene analysis and geometry. Discrete and Computational Geometry, 4:75–95, 1988.
12. W. Whiteley. How to design or describe a polyhedron. Journal of Intelligent and Robotic Systems, 11:135–160, 1994.
Topological Map Based Algorithms for 3D Image Segmentation Guillaume Damiand and Patrick Resch LIRMM, 161 rue Ada, 34392 Montpellier Cedex 5, France {damiand,resch}@lirmm.fr
Abstract. One of the most commonly used approaches to segment a 2D image is the split-and-merge approach. In this paper, we define these two operations in 3D within the topological map framework. This mathematical model for representing segmented images allows us to define these algorithms in a local and generic way. Moreover, we define a new operation, corefining, which makes it possible to process large images: they are cut into small units, treated separately, and the results are then combined to reconstruct the final representation. These three operations open the way to efficient 3D segmentation algorithms, which is a difficult problem due to the amount of data to process.
1
Introduction
Region segmentation is a difficult problem that has been studied in many works in 2 dimensions. It consists in partitioning an image into connected sets of pixels satisfying a homogeneity criterion, which we call regions. The main approaches to region segmentation are the split and merge methods. The top-down approach [16,13] starts from big regions and cuts them into smaller and smaller regions. The bottom-up approach [6,11] is the opposite: it begins with many small regions that are merged into bigger and bigger ones. Finally, the mixed approach [12,17] combines the two previous ones, possibly repeating the process until, for example, the result stabilizes. These approaches require a "good" model of image representation. Many works in 2 dimensions [10,7,8,3] have shown that topological maps are a model that allows these segmentation algorithms to be expressed in an efficient way. Moreover, thanks to this model we can also define many processing algorithms to access or modify the result of the segmentation. Recently, topological maps have been extended to dimension 3 [5,2]. Indeed, more and more domains need to work in dimension 3, such as medical imagery, geology, or industry. In order to carry out segmentation algorithms as in dimension 2, we need to define basic operations on topological maps. This problem turns out to be harder than in dimension 2: first because the additional dimension raises new problems, but also because it is much A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 220–231, 2002. c Springer-Verlag Berlin Heidelberg 2002
more difficult to represent and visualize objects in 3 dimensions. These visualization problems hamper the comprehension and the development of new algorithms. Moreover, there are complexity constraints, in memory space as well as in execution time, that are much more severe than in dimension 2: the quantity of data to treat is much larger in dimension 3. This requires very efficient algorithms in order to use them on big images. In this paper, we present the two algorithms of merge and split on 3D topological maps. These two algorithms are the basic operations for region segmentation. The use of topological maps lets us define these algorithms in a generic way, because they work on any configuration, and in a local way, because they treat the map element by element, looking only at the direct neighborhood of the current element. These two properties make these algorithms simpler to understand and also more efficient in complexity. We also present the corefining algorithm, which allows us to treat big 3D images in parallel: we cut the image into several small units, segment each unit in parallel, then reconstruct the whole image from these units using the corefining operation. For lack of space, and to avoid getting lost in technical details, we only present the principle and the main ideas of these three operations. For more details, the reader can refer to [18,19]. We first present in Section 2 the combinatorial maps, then the topological maps, which are combinatorial maps satisfying specific properties. We then present our three algorithms: merge in Section 3, split in Section 4 and corefining in Section 5, describing their principle and the different cases encountered. Finally, we conclude and present some perspectives in Section 6.
2
Topological Maps Recall
Topological maps allow the representation of nD segmented images. They encode at the same time the topology and the geometry of images. Topological maps are combinatorial maps with particular properties, so we begin by recalling the notion of combinatorial map. This is just a short reminder; a more detailed description can be found in [5,1]. 2.1
Combinatorial Maps
Combinatorial maps are a mathematical model for representing space subdivisions in any dimension. They were introduced in the sixties by [9], at first as a planar graph representation model, and extended by [14] to dimension n to represent orientable or non-orientable quasi-manifolds. Combinatorial maps encode space subdivisions and all the incidence relations. They are made of abstract elements, called darts, on which applications, called βi, are defined. We give here the combinatorial map definition in n dimensions, which can be found for example in [15].
Definition 1 (combinatorial map). Let n ≥ 0. An n-dimensional combinatorial map (or n-map) is an (n + 1)-tuple M = (B, β1 , . . . , βn ) where:
1. B is a finite set of darts;
2. β1 is a permutation on B;
3. ∀i, 2 ≤ i ≤ n, βi is an involution on B;
4. ∀i, 1 ≤ i ≤ n − 2, ∀j, i + 2 ≤ j ≤ n, βi ◦ βj is an involution.
In this definition, there is an application βi for each space dimension, which puts in relation two i-dimensional cells. When two darts are linked by βi, we say that they are βi-sewn. Each space cell is implicitly represented by a set of darts. Figure 1.a shows an example of an image and figure 1.b the corresponding combinatorial map. Each dart is represented by a segment, the β1 relation by light grey arrows and the β2 relation by dark grey arrows. β1 puts in relation a dart and the next dart of the same face. For example, the light grey face of the image is represented by four β1-sewn darts in the map. The adjacency between this face and the dark grey face is represented by two darts β2-sewn together. We use the simplified representation (figure 1.c), which does not represent the applications explicitly, because it is more readable.
a. A 2D object.
b. The corresponding map.
c. Simplified representation.
Fig. 1. An object and the corresponding combinatorial map represented in two different ways.
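As an illustration of Definition 1 (our own sketch, not code from the paper), a 2-map like the one in Fig. 1 can be stored as two dictionaries: β1, a permutation whose orbits are the faces, and β2, an involution encoding adjacency; darts on the image border are left β2-free here for simplicity:

```python
def faces(beta1):
    """Orbits of the permutation beta1, i.e. the faces of the 2-map."""
    seen, result = set(), []
    for start in beta1:
        if start in seen:
            continue
        face, d = [], start
        while d not in seen:
            seen.add(d)
            face.append(d)
            d = beta1[d]
        result.append(face)
    return result

# Two square faces as in Fig. 1: darts 1-4 and 5-8, cycled by beta1.
beta1 = {1: 2, 2: 3, 3: 4, 4: 1, 5: 6, 6: 7, 7: 8, 8: 5}
# The shared edge between the two faces: darts 2 and 5 are beta2-sewn.
beta2 = {2: 5, 5: 2}

assert all(beta2[beta2[d]] == d for d in beta2)  # beta2 is an involution
print([len(f) for f in faces(beta1)])  # -> [4, 4]
```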
2.2
The Topological Maps
In the combinatorial map framework, several representations of a single object exist. We want a unique characterization of objects, for example to make isomorphism tests easier. That is the main goal of topological maps. They are a mathematical model for representing 3D segmented images, which encodes all incidence and adjacency relations. They represent the interpixel elements composing the boundary faces of a 3D image. Moreover, they are minimal, stable under rigid transformations, and they characterize the objects of the image by their topology. We recall that a boundary face is a surface between two neighbouring regions. Figure 2.a shows an example of a 3D image, and figure 2.b its boundary faces. The construction of a topological map is progressive: at the
beginning, all the image's voxels are encoded with a combinatorial map, then this map is simplified by successive mergings. The topological map corresponding to our example is shown in figure 2.c. We can see that each face of the map encodes a boundary face of the image.
a. A 3D image.
b. Its boundary faces.
c. The topological map.
Fig. 2. A 3D image and the corresponding topological map.
Inclusions of volumes are represented by an inclusion tree where each node corresponds to a region and its child nodes correspond to the included regions. The root is R0, the infinite region which surrounds the image. When a face has a hole, the connection between the exterior and interior borders of this face is represented by a virtual edge. This is a degree-one edge (i.e., an edge incident twice to the same face). Such an edge would normally have been removed during the construction of the topological map, but it is retained to keep the map connected and each face homeomorphic to a topological disk. These edges are also useful to represent closed faces (without border), as we can see in the torus example shown in figure 3.
a. A torus.
b. Intermediary step.
c. The topological map.
Fig. 3. The topological map of a torus.
This torus is represented by a single closed boundary face. An intermediate step of the topological map's construction is shown in figure 3.b, where we can see two virtual edges (in grey) which keep the upper and lower faces connected.
The topological map shown in figure 3.c is composed only of virtual edges. We obtain the classic minimal representation of the torus, composed of one face, one vertex and two edges. Combinatorial maps encode only the topological part of our model. To encode the geometry of the corresponding image, we use a geometrical model, which links a geometrical face to each topological face of the map. We call this geometrical model the embedding. This distinction between topology and embedding allows treatments to be differentiated and sometimes hierarchized. Indeed, some operations only work on the topological model, others only on the geometrical model, and some on both. Figure 4.a shows the topological map of two adjacent objects and figure 4.b the embedding of these objects. Each dart of the topological map is linked with the border
Fig. 4. Embedding example of a topological map.
of an embedding surface. For our example, the dart named a of the topological map is linked with the surface named 1, the two darts named b and c are linked with the same surface 2 because they belong to the same topological face, and the dart named d is linked with surface 3.
3
Merge
The merging operation consists, starting from two adjacent regions R1 and R2, in gathering them into a single region R', the union of the two initial regions. Algorithm 1 performs this operation on a topological map. It is local because it treats each dart of R1 independently. There are two different cases, depending on whether the current dart belongs to a virtual edge or not. In the first case, we test whether the deletion of this edge disconnects the map into several connected components; if so, we modify the inclusion tree accordingly. In the second case, we simply sew together the two faces adjacent to the currently treated edge, and delete this edge. This operation is performed for all the darts of the boundary faces, so the two regions are merged and the boundary faces are finally destroyed. At last, the topological map is simplified, because the operations could have made the map incoherent or not minimal. For
Algorithm 1 Merge 3D
Data: Two adjacent regions R1 and R2.
Result: The two regions are merged into R2.
foreach dart b of the region R1 do
    if b belongs to the boundary between R1 and R2 then
        if b belongs to a virtual edge then
            foreach dart t of the orbit < β1, β2 > of β1(b) and β13(b) do
                if all regions of β3(t) are included in R1 ∪ R2 then
                    Daughter(R2) = Daughter(R2) + regions of the connected component of β3(t);
            Destroy the virtual edge incident to b;
        else
            β2-sew(β2(b), β32(b));
            Destroy β3(b) and b;
Simplification of R2;
that, we first remove degree-2 vertices and edges. Then, as virtual edges have only a topological existence, they can be moved to reduce the number of edges. This simplification step guarantees the minimality of the topological map. Moreover, during this step, we use the Euler characteristic to check that all topological characteristics of the map remain invariant. Figure 5 shows an example of the merging operation. Figure 5.a shows a topological map representing three adjacent objects, before the merging of R1 and R2. Figure 5.b represents the map obtained after the merging: we have destroyed the face between R1 and R2 and β2-sewn together the other darts incident to this face. We can see in this figure that this map is not a topological map because
a. A topological map.
b. After the merging.
c. After the simplification.
Fig. 5. An example of Merge 3D.
it is not the minimal representation. Indeed, there are two edges incident to only two faces. These edges can be removed without loss of topological information; this is done by the simplification step of our algorithm. After the removal of these edges, we obtain some vertices incident to only two different edges. They
can also be removed without loss of information. Finally, we obtain the map shown in figure 5.c, which is the topological map representing two adjacent objects. Note that this simplification step could be performed during the merging itself, which would give a more efficient but less understandable algorithm. We chose to present the simpler algorithm here, even though its complexity is a little higher, to ease its comprehension.
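The non-virtual branch of the merge can be illustrated on the simpler 2D analogue: to merge the two faces on either side of an edge, the edge's two darts are removed and their β1 predecessors are reconnected to their β1 successors. This is our own sketch under simplifying assumptions (dict-based darts, a toy closed configuration, no dangling edges), not the authors' implementation:

```python
def remove_edge(beta1, beta2, d):
    """Merge the two faces on either side of the edge {d, beta2(d)}."""
    e = beta2[d]
    prev_d = next(k for k, v in beta1.items() if v == d)
    prev_e = next(k for k, v in beta1.items() if v == e)
    # reconnect the beta1 cycles around the removed edge
    beta1[prev_d] = beta1[e]
    beta1[prev_e] = beta1[d]
    for x in (d, e):
        del beta1[x]
        del beta2[x]

# A toy configuration: two triangular faces glued edge by edge.
beta1 = {'a1': 'a2', 'a2': 'a3', 'a3': 'a1',
         'b1': 'b2', 'b2': 'b3', 'b3': 'b1'}
beta2 = {'a1': 'b1', 'b1': 'a1', 'a2': 'b2', 'b2': 'a2',
         'a3': 'b3', 'b3': 'a3'}
remove_edge(beta1, beta2, 'a1')  # remove the edge {a1, b1}
print(sorted(beta1))  # the four remaining darts now form a single face
```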
4
Split
This operation is the opposite of the merging. It consists in splitting a region along a separation face. This face is given as a list of vertices representing the intersections of a plane with the edges of the map. Constraints have been placed on this separation face to simplify the algorithm and to limit the number of different cases to treat. A simple way to perform complex splits is to combine several simple splits and merge operations to obtain the wanted result. The principle of the algorithm is first to insert a new boundary face into the embedding, then the corresponding boundary face into the topological map representing the region.

Algorithm 2 Split 3D
Data: A list List of vertices representing the separation face; a region R.
Result: R is split into two regions along the separation face.
foreach dart b of R do
    foreach dart t of the embedding of b do
        foreach vertex S of List do
            InsertVertexPlongement(S, t);
            if t is on an edge then InsertVertexMap(S, b);
if the separation face cuts an edge then
    Separate the concerned boundary face;
    Create 2 new boundary faces and β2-sew them to the previously separated boundary face;
else if the separation face cuts a closed face then
    Create 4 faces, each composed of a single dart a, b, c and d;
    Insert a in the closed face;
    β2-sew(a, b); β2-sew(c, d); β3-sew(b, c);
else
    // The separation face cuts a non-closed face F without cutting its edges;
    Create 4 faces Fa, Fb, Fc and Fd, copies of F;
    β3-sew Fc and Fd, and link Fa and Fb with a virtual edge;
    Insert the 4 faces between F and the face which was β2-sewn to F;
Simplify the map;
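The primitive InsertVertexMap, which subdivides an edge at a new vertex, can be sketched in the 2D analogue as follows (our own illustration with dict-based darts; the representation and naming are assumptions, not the authors' code):

```python
import itertools

_new = itertools.count(1000).__next__  # fresh dart ids (our convention)

def insert_vertex_on_edge(beta1, beta2, d):
    """Split the edge {d, beta2(d)} by inserting a vertex in its middle."""
    e = beta2[d]
    n1, n2 = _new(), _new()
    # n1 follows d in d's face; n2 follows e in e's face
    beta1[n1] = beta1[d]; beta1[d] = n1
    beta1[n2] = beta1[e]; beta1[e] = n2
    # re-sew the two resulting half-edges: d with n2, e with n1
    beta2[d] = n2; beta2[n2] = d
    beta2[e] = n1; beta2[n1] = e

# Two triangular faces glued edge by edge (toy configuration).
beta1 = {'a1': 'a2', 'a2': 'a3', 'a3': 'a1',
         'b1': 'b2', 'b2': 'b3', 'b3': 'b1'}
beta2 = {'a1': 'b1', 'b1': 'a1', 'a2': 'b2', 'b2': 'a2',
         'a3': 'b3', 'b3': 'a3'}
insert_vertex_on_edge(beta1, beta2, 'a1')
assert all(beta2[beta2[x]] == x for x in beta2)  # still an involution
print(len(beta1))  # -> 8: both incident faces gained one dart
```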
There are three different cases:
1. If the separation face of R1 cuts the edge of an already existing boundary face, as in the example shown in figure 6.a. New vertices are inserted into this boundary face, then they have to be separated by edges inserted between these vertices. Finally the new boundary face can be inserted in the middle of the region; it is sewn on this new edge. Figure 6.c shows the topological map resulting from the split operation.

a. Two adjacent cubes.
b. The corresponding map.
c. After the split.
Fig. 6. Case 1 of Split 3D.
2. If the separation face does not cut any edge of the already existing boundary faces (figure 7.a), a new face composed of four half-faces is inserted. These half-faces are copies of the cut region's faces. Two of these half-faces are linked with a virtual edge because they represent the same surface, as shown in figure 7.b.
a. Two adjacent cubes.
b. After the split.
Fig. 7. Case 2 of Split 3D.
3. If the separation face cuts a closed face. A closed face is represented only by virtual edges in the topological map; an example is shown in figure 8.b, with its embedding in figure 8.a. To obtain the minimal map, each new inserted boundary face is composed of only one dart, and one of these faces has to be directly
β1-sewn to the closed face. The result of the split for a torus is shown in figure 8.c. We can verify with the Euler formula that the characterization of the torus is preserved: there are now 1 vertex, 3 edges and 2 faces, which gives a genus of 1.
a. A torus and the separation face.
b. The corresponding map.
c. After the split.
Fig. 8. Case 3 of Split 3D.
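The Euler check used above can be made explicit: for a closed orientable surface, V − E + F = 2 − 2g. A quick sketch (our own illustration):

```python
def genus(v, e, f):
    """Genus from the Euler characteristic: V - E + F = 2 - 2g."""
    chi = v - e + f
    assert chi % 2 == 0, "closed orientable surfaces have even characteristic"
    return (2 - chi) // 2

print(genus(1, 3, 2))   # minimal torus map after the split -> 1
print(genus(8, 12, 6))  # cube, a topological sphere -> 0
```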
5
Corefining
One method to segment a big 3D image consists in cutting it into several parts and segmenting each part in parallel. During the reconstruction of the image, we have to glue the different segmented parts back together. The operation which performs this gluing is the corefining. To designate the two faces of each volume which are going to be put in contact, two embedding darts are passed to the algorithm. The faces containing these darts give the coordinates of the corresponding planes (as we work with interpixel elements). First, we build the new boundary faces. For that, we extract the embeddings of the two faces belonging to the two planes, since we consider that the initial image was cut into regular parts. We build the maps of these new border faces A and B. This construction differs depending on whether the new boundary face cuts a closed face or the border of an existing boundary face, similarly to the three cases explained for the split. Figure 9.a shows an example of this first step. In a second step, we insert the intersection vertices of the two faces A and B into the embeddings of the maps of these two faces. These two maps now have exactly the same vertices, topologically as well as geometrically. We β3-unsew and destroy the faces belonging to R0 which were previously β3-sewn to A and B. Then, for each dart of A, we look for a similar dart in B (a dart which connects the same vertices). If such a dart does not exist, we add to B a copy of this dart. And we do exactly the same operation for
a. Step 1.
b. Step 2.
c. Final result.
Fig. 9. Corefining principle.
each dart of B, possibly adding some copies to A. We then obtain two identical faces A and B, as we can see in figure 9.b. We just have to β3-sew these two faces. For that, we traverse the darts of A and B in the same order, and β3-sew each pair of darts. Figure 9.c shows the result of this final operation.

Algorithm 3 Corefining 3D
Data: Two maps M1 and M2 and two darts d1 and d2 of their embeddings.
Result: The maps M1 and M2 are β3-sewn by the faces containing d1 and d2.
Step 1:
foreach region R of M1 (resp. M2) do
    Build the new boundary face A (resp. B) of R;
    Extract its embedding and set it to A (resp. B);
Step 2:
Insert the intersection vertices of A and B into the faces A and B;
foreach dart d of A do
    if there does not exist in B a dart similar to d then
        Add a copy of d in B;
foreach dart d of B do
    if there does not exist in A a dart similar to d then
        Add a copy of d in A;
foreach pair of darts a of A and b of B joining the same vertices do
    β3-sew(a, b);
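The "similar dart" search of Step 2 can be sketched by representing each dart of the two faces by the ordered pair of vertices it joins; two darts match when they join the same vertices in opposite directions. This is a hypothetical representation of ours, not the authors' data structure:

```python
def corefine_faces(A, B):
    """Complete A and B so they carry the same darts, then pair them.
    A, B: dict mapping a dart to its (from_vertex, to_vertex) pair."""
    # add to B a copy of every dart of A with no counterpart, and vice versa
    for src, dst in ((A, B), (B, A)):
        for dart, (u, v) in list(src.items()):
            if (v, u) not in dst.values():
                dst['copy_of_' + str(dart)] = (v, u)
    # pair up the darts to beta3-sew: same vertices, opposite direction
    return [(a, b) for a, (u, v) in A.items()
                   for b, (s, t) in B.items() if (s, t) == (v, u)]

A = {'a1': (1, 2), 'a2': (2, 3)}
B = {'b1': (2, 1)}               # only a1 has a counterpart so far
pairs = corefine_faces(A, B)
print(len(pairs))  # -> 2: every dart of A is now matched
```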
6
Conclusion
In this paper, we have presented the basic operations for 3D segmentation, merge and split, plus an operation interesting for parallelization purposes: corefining. These operations are defined on the 3D topological map, a mathematical model for representing 3D images. The first two operations were already presented
in [4], but our approach differs from this solution by our aim to define local algorithms. The last operation is very interesting because it allows us to segment big images by working in parallel on different small parts of the image; such a segmentation is very difficult to perform directly. We have fully exploited the properties of topological maps to obtain algorithms as generic as possible; the resulting operations work in every possible case. Moreover, these algorithms operate in a local way, which greatly simplifies the different treatments: we treat the elements in no particular order, and we just have to look at the direct neighborhood of the current element. These advantages allow us to obtain algorithms that are relatively simple and understandable, while keeping a good complexity. We now have to implement these three operations in our software, to finally obtain 3D segmentation algorithms. We are currently working on using these algorithms to perform segmentation refinement. Our approach consists in starting from a first segmentation, performed in a classical way on the voxel matrix with algorithms from signal processing research. We then compute the topological map corresponding to this first segmentation. We can thus perform treatments on this map to refine the segmentation: automatic treatments, for example to remove small regions or particular configurations, or interactive treatments made by an expert. These results are under development, with the goal of obtaining a software tool for cerebral tumor diagnosis. Moreover, we also work on other operations, which could be combinations of these three basic operations, but also specific ones such as chamfering or boolean operations.
References
[1] Y. Bertrand, G. Damiand, and C. Fiorio. Topological encoding of 3d segmented images. In Discrete Geometry for Computer Imagery, number 1953 in Lecture Notes in Computer Science, pages 311–324, Uppsala, Sweden, December 2000.
[2] Y. Bertrand, G. Damiand, and C. Fiorio. Topological map: Minimal encoding of 3d segmented images. In Workshop on Graph based representations, pages 64–73, Ischia, Italy, May 2001. IAPR-TC15.
[3] J.-P. Braquelaire and L. Brun. Image segmentation with topological maps and inter-pixel representation. Journal of Visual Communication and Image Representation, 9(1):62–79, March 1998.
[4] J.-P. Braquelaire, P. Desbarats, and J.-P. Domenger. 3d split and merge with 3-maps. In Workshop on Graph based representations, pages 32–43, Ischia, Italy, May 2001. IAPR-TC15.
[5] J.-P. Braquelaire, P. Desbarats, J.-P. Domenger, and C.A. Wüthrich. A topological structuring for aggregates of 3d discrete objects. In Workshop on Graph based representations, pages 193–202, Austria, May 1999. IAPR-TC15.
[6] R. Brice and C.L. Fennema. Scene analysis using regions. Artificial Intelligence, 1:205–226, 1970.
[7] L. Brun. Segmentation d'images couleur à base topologique. Thèse de doctorat, Université Bordeaux I, December 1996.
[8] L. Brun and J.-P. Domenger. A new split and merge algorithm with topological maps and inter-pixel boundaries. In The Fifth International Conference in Central Europe on Computer Graphics and Visualization, February 1997.
[9] R. Cori. Un code pour les graphes planaires et ses applications. In Astérisque, volume 27. Soc. Math. de France, Paris, France, 1975.
[10] J.P. Domenger. Conception et implémentation du noyau graphique d'un environnement 2D1/2 d'édition d'images discrètes. Thèse de doctorat, Université Bordeaux I, April 1992.
[11] C. Fiorio and J. Gustedt. Two linear time union-find strategies for image processing. Theoretical Computer Science, 154:165–181, 1996.
[12] S.L. Horowitz and T. Pavlidis. Picture segmentation by a directed split-and-merge procedure. In Proc. of the Second International Joint Conf. on Pattern Recognition, pages 424–433, 1974.
[13] C.H. Lee. Recursive region splitting at hierarchical scope views. Computer Vision, Graphics, and Image Processing, 33:237–258, 1986.
[14] P. Lienhardt. Subdivision of n-dimensional spaces and n-dimensional generalized maps. In 5th Annual ACM Symposium on Computational Geometry, pages 228–236, Saarbrücken, Germany, 1989.
[15] P. Lienhardt. Topological models for boundary representation: a comparison with n-dimensional generalized maps. Computer Aided Design, 23(1):59–82, 1991.
[16] R. Ohlander, K. Price, and D.R. Reddy. Picture segmentation using a recursive region splitting method. Computer Graphics and Image Processing, 8:313–333, 1978.
[17] M. Pietikainen, A. Rosenfeld, and I. Walter. Split and link algorithms for image segmentation. Pattern Recognition, 15(4):287–298, 1982.
[18] P. Resch. Algorithmes pour la manipulation des cartes topologiques en 2 et 3 dimensions. Mémoire de DEA, Université Montpellier II, June 2001.
[19] P. Resch. Algorithmes pour la manipulation des cartes topologiques en 2 et 3 dimensions. Annexe technique, Université Montpellier II, June 2001.
On Characterization of Discrete Triangles by Discrete Moments
Joviša Žunić
Computer Science, Cardiff University, Queen's Buildings, Newport Road, PO Box 916, Cardiff CF24 3XF, Wales, U.K.
[email protected]
Abstract. For a given real triangle T, its discretization on a discrete point set S consists of the points of S which fall into T. If the number of such points is finite, the obtained discretization of T will be called a discrete triangle. In this paper we show that the discrete moments of order up to 3 uniquely characterize the corresponding discrete triangle if the discretizing set S is fixed. Of particular interest is the case when S is the integer grid, i.e., S = Z2; then the discretization of a triangle T is called a digital triangle. It turns out that the proposed characterization enables a coding of the digital triangles from an integer grid of a given size, say m × m, within an O(log m) amount of memory space per coded digital triangle, which is the theoretical minimum.
Keywords. Digital triangle, digital shape, coding, moments.
1
Introduction
The basic motivation for this paper was to find a simple and efficient characterization of digital triangles. By digital triangles we mean digital (binary) pictures of real triangles; more formally, a digital triangle D(T) is the set consisting of the integer points which fall into a real triangle T:

D(T) = {(i, j) | (i, j) ∈ T, i, j are integers} = {(i, j) | (i, j) ∈ T ∩ Z2}.

This is the most usual digitization scheme for planar regions. But sometimes the digitization (i.e., discretization) is made using a "discretizing" set other than Z2. Some other examples of discrete representations of real objects are: discrete images on the hexagonal grid, radar images, images made on a statistically distributed set of points, etc. Because the method presented here can be applied to discretizations on different sets, we start with
The author is also with the Mathematical Institute of the Serbian Academy of Sciences, Belgrade.
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 232–243, 2002. c Springer-Verlag Berlin Heidelberg 2002
Fig. 1. A discretization of the triangle ABC consisting of 20 points is shown above.
a general definition of a discrete triangle D(T) on a fixed discrete point set S (see Fig. 1). Thus, we define

D(T) = {(x, y) | (x, y) ∈ T ∩ S}.

Throughout the paper it will be assumed, without further mention, that all discrete triangles consist of a finite number of points. For instance, discretizations on the set consisting of all points with rational coordinates (i.e., S = Q2) are not considered. It is not convenient to use a real triangle to represent its discretization: any discrete triangle can be represented by infinitely many real triangles, since there is a continuum of real triangles which have the same discretization on a given set of points. Depending on S, it can be difficult to decide which different real triangles have different discretizations. Moreover, if we have a binary picture of a triangular object, the "original triangle" is usually unknown. Consequently, the characterization of discrete triangles by a real triangle with a given discretization requires a procedure for reconstructing an original triangle from its discretization. In the next section we give a characterization of discrete triangles which is simple and fast for any choice of the discretizing set S. For such a characterization we use the so-called discrete moments. Precisely, the discrete moment µp,q(X) of a finite set X is defined as:

µp,q(X) = Σ_{(x,y)∈X} x^p · y^q.
The moment µp,q (X) has the order p + q.
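The discrete moments of a digital triangle can be computed directly from the definition. A minimal sketch (our own code; the triangle, the sign-test helper and the grid size are illustrative assumptions):

```python
def discrete_moments(points, k=3):
    """All moments mu_{p,q} with p + q <= k of a finite point set."""
    return {(p, q): sum(x**p * y**q for (x, y) in points)
            for p in range(k + 1) for q in range(k + 1) if p + q <= k}

def digital_triangle(A, B, C, m):
    """Integer points of the m x m grid falling into triangle ABC."""
    def cross(P, Q, R):  # orientation of R relative to segment PQ
        return (Q[0]-P[0])*(R[1]-P[1]) - (Q[1]-P[1])*(R[0]-P[0])
    pts = []
    for i in range(m):
        for j in range(m):
            s = [cross(A, B, (i, j)), cross(B, C, (i, j)), cross(C, A, (i, j))]
            if all(x >= 0 for x in s) or all(x <= 0 for x in s):
                pts.append((i, j))
    return pts

D = digital_triangle((0, 0), (4, 0), (0, 4), 5)
mu = discrete_moments(D)
print(len(mu))     # -> 10 moments of order up to 3
print(mu[(0, 0)])  # -> 15 points in this digital triangle
```

Note that there are exactly ten pairs (p, q) with p + q ≤ 3, matching the ten moments used in the characterization below.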
We prove that the ten discrete moments of order up to 3 are enough for a unique characterization of discrete triangles discretized on a fixed discrete set. In Section 3 we give some performance analysis of the proposed characterization when the discretization is made on a squared integer grid of a given size, i.e., on Z2 ∩ [0, m − 1]2 for some integer m; Z2 ∩ [0, m − 1]2 will be called the m × m integer grid. It turns out once again that the use of moments is a powerful tool in image analysis ([3], [8]). Section 4 contains concluding remarks. Throughout the paper, a finite set means a set consisting of a finite number of points. Also, unique characterization and coding will have the same meaning. We shall say that a function f(x, y) separates sets S1 and S2 if f(x, y) has different signs at the points of S1 and S2; for example, (x, y) ∈ S1 implies f(x, y) > 0, while (x, y) ∈ S2 implies f(x, y) < 0.
2 Characterization of Discrete Triangles
In this section it will be shown that the discrete moments of order up to 3 uniquely match the discrete triangles represented on a fixed set S. We start with the following theorem.

Theorem 1. Let S1 and S2 be two finite planar sets. If there exists a function of the form

f(x, y) = Σ_{p+q≤k} α_{p,q} · x^p · y^q,    (1)

where p, q ∈ {0, 1, …, k} and the α_{p,q} are arbitrary real numbers, such that f(x, y) separates S1 \ S2 and S2 \ S1, then

µ_{p,q}(S1) = µ_{p,q}(S2) for all non-negative integers p, q with p + q ≤ k

is equivalent to S1 = S2.
Proof. If S1 = S2 then the corresponding discrete moments are obviously equal. What we have to prove is that the equalities of the corresponding moments of order up to k imply S1 = S2. We prove that by contradiction. Suppose that

µ_{p,q}(S1) = Σ_{(x,y)∈S1} x^p · y^q = Σ_{(x,y)∈S2} x^p · y^q = µ_{p,q}(S2)
holds for all non-negative integers p and q satisfying p + q ≤ k, and for some different finite sets S1 and S2. Since S1 ≠ S2 we can assume that S1 \ S2 is non-empty; otherwise we can start with the non-empty S2 \ S1. Further, by assumption there exists a function f(x, y) of the form (1),

f(x, y) = Σ_{p+q≤k} α_{p,q} · x^p · y^q,
On Characterization of Discrete Triangles by Discrete Moments
235
which separates S1 \ S2 and S2 \ S1. Let f(x, y) > 0 for (x, y) ∈ S1 \ S2, while (x, y) ∈ S2 \ S1 implies f(x, y) < 0. Then

0 < Σ_{(x,y)∈S1\S2} f(x, y) − Σ_{(x,y)∈S2\S1} f(x, y)
  = Σ_{(x,y)∈S1} f(x, y) − Σ_{(x,y)∈S2} f(x, y)
  = Σ_{p+q≤k} α_{p,q} · (µ_{p,q}(S1) − µ_{p,q}(S2)) = 0.

The contradiction 0 < 0 completes the proof.

Λ > 1. In this case the solution a(Λ), b(Λ), c(Λ), d(Λ), and e(Λ) should be used.
250
I.-M. Sintorn, G. Borgefors
In practice, for each integer approximation of a, an integer neighbourhood of b is checked; for each integer approximation of b, an integer neighbourhood of c is searched, and so on. For each set of integer local distances the maximal difference is computed from the error equations (5)–(9). Each solution is, of course, also examined to make sure that it fulfils the regularity criteria (1) and (2) of Case 1.
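Once a set of local distances (a, b, c, d, e) is fixed, the transform itself is the standard two-pass chamfer algorithm [2,3]. A minimal sketch for the 3×3×3 mask on an elongated grid, assuming the weight of a mask step depends only on its in-plane and z components in the notation above (function names and the seed-set interface are ours):

```python
INF = float("inf")

def step_weight(dx, dy, dz, a, b, c, d, e):
    # a: in-plane axial step, b: in-plane diagonal, c: z step,
    # d: z plus one in-plane step, e: z plus in-plane diagonal
    planar = abs(dx) + abs(dy)
    if dz == 0:
        return a if planar == 1 else b
    return (c, d, e)[planar]

def chamfer_wdt(seeds, shape, weights):
    """Two-pass chamfer WDT: weighted distance from every voxel to the
    nearest seed voxel, over the 26-neighbourhood. seeds: set of (z, y, x)."""
    a, b, c, d, e = weights
    Z, Y, X = shape
    dist = [[[0 if (z, y, x) in seeds else INF for x in range(X)]
             for y in range(Y)] for z in range(Z)]
    offs = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
            for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    passes = (([o for o in offs if o < (0, 0, 0)],           # forward half
               range(Z), range(Y), range(X)),
              ([o for o in offs if o > (0, 0, 0)],           # backward half
               range(Z - 1, -1, -1), range(Y - 1, -1, -1), range(X - 1, -1, -1)))
    for half, zs, ys, xs in passes:
        for z in zs:
            for y in ys:
                for x in xs:
                    for dz, dy, dx in half:
                        nz, ny, nx = z + dz, y + dy, x + dx
                        if 0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X:
                            cand = dist[nz][ny][nx] + step_weight(dx, dy, dz,
                                                                  a, b, c, d, e)
                            if cand < dist[z][y][x]:
                                dist[z][y][x] = cand
    return dist
```

With the cubic weights 3-4-3-4-5 mentioned in Section 3 this reduces to the usual 3-4-5 chamfer transform.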
3 Results
In this section, tables of solutions and an illustrative example are presented. In Tables 1 and 2 the optimal, real, and integer local distances for Λ = 1.5 and Λ = 2.58 are shown, respectively. These values of Λ were chosen to represent a small and a fairly large Λ; Λ = 2.58 is also the Λ occurring in the image used in the application example. In the tables, the solutions for aopt and a ≡ 1 are presented, as well as the best integer solutions for integer and real scale factors up to 20. Only solutions better than those for lower scale factors are listed. As can be seen by comparing the tables, the error (maxdiff) grows with increasing Λ, as is to be expected.

It is interesting to compare our optimal values to those in [5], which optimizes the maximal error over circular trajectories. Their optimal local distances for Λ = 1.5 become a = 0.9237, b = 1.3063, c = 1.3855, d = 1.6652, e = 1.9042, and for Λ = 2.58: a = 0.8953, b = 1.2662, c = 2.3099, d = 2.4774, e = 2.6342. As can be seen in the tables, the two methods generate very similar local distances. A comparison of the errors for the two methods is irrelevant, since they use different optimization criteria.

As mentioned above, the error grows rapidly with increasing Λ. This means that the calculated weighted distances differ quite a lot from the Euclidean distances. Hence, this WDT is not a good choice when true distances are desired; the WDT also becomes more rotation dependent than desired. In many applications, however, only relative distances are needed and rotation is not an issue. The use of parallelepipedic WDTs can then save much time and memory. As an example, the medial axis (MA), also known as the set of centres of maximal balls, of an object is calculated on an image with different resolution along the z-axis as well as on the same image interpolated to cubic voxels. The MA can in turn be used, e.g., for image description, analysis, and compression.
The binary test image comes from a magnetic resonance angiography image, where the arteries have been segmented. The size of the original image is 280 × 430 × 96 voxels (11.6 Mbyte) and each voxel is of size 1 × 1 × 2.58 mm³. Interpolated to equal resolution along all three axes, the image size becomes 280 × 430 × 250 voxels (30 Mbyte). The number of voxels in the original image is 416,423, compared to 1,084,929 for the interpolated image. Fig. 3 shows projections, middle slices, and middle slices of the MA of the interpolated and original images. All images are from the right (z–y) side of the volume to show the difference in resolution in the z-direction. Hence, the big bend in the middle of the images is the aorta bending towards the person's back. Fig. 3a) and 3d) show simple 2D-projections of the volume, where the grey levels represent the depths of the object voxels.
Weighted Distance Transforms for Images Using Elongated Voxel Grids
251
Table 1. Integer approximations for Λ = 1.5.

Best integer solutions:
scale factor    a    b    c    d    e    maxdiff
1.999           2    3    3    3    4    0.302
2.160           2    3    3    4    4    0.210
3.103           3    4    5    5    6    0.191
4.507           4    6    7    8    9    0.169
5.266           5    7    8    9   10    0.163
6.666           6    9    9   11   13    0.153
8.847           8   12   12   15   17    0.144
16.558         15   22   23   28   32    0.141
18.727         17   25   26   32   36    0.139
…
∞ (opt)    0.9085  1.3227  1.3628  1.6656  1.9243    0.1372

Solutions with a ≡ 1 (a equal to the scale factor):
scale factor    a    b    c    d    e    maxdiff
1               1    1    2    2    2    0.621
2               2    3    3    3    4    0.303
3               3    4    4    5    6    0.253
6               6    8    8   10   11    0.228
7               7    9   10   12   13    0.204
8               8   11   11   13   15    0.202
9               9   12   12   15   17    0.200
15             15   20   20   25   28    0.195
16             16   21   21   26   30    0.190
…
∞ (opt*)        1  1.2887  1.3118  1.6145  1.8733    0.1882
For the interpolated image, with equal resolution along all sides, the common WDT, in our notation 3-4-3-4-5 (maxdiff = 10.01%), was used, and for the original image with Λ = 2.58 the 6-9-16-17-18 WDT was chosen from Table 2. It was chosen because its error is rather close to the optimal error; to get a smaller error, the local distances have to be increased rather significantly, making the voxel values uncomfortably large. The MA is computed from the WDT by local tests using a look-up table, which is specific for each set of weights used. The look-up tables store, for each voxel value of the WDT, the smallest values of the neighbours that prevent the voxel from being part of the MA. The look-up tables were calculated by the method presented by Remy and Thiel in [6]. To produce the look-up tables, they first calculate the WDT from each point in the first 1/48th of the volume to the origin; the rest of the volume can be omitted from the calculations due to symmetry. Then they scan through this WDT and, for each voxel and all directions of local steps, they search for the smallest value that prevents the voxel in focus from being part of the MA. This is done by comparing the value in the WDT at a position one mask step away from the voxel with the value of the voxel plus the weight of the mask step. Their method simultaneously tests whether the directions in
Table 2. Integer approximations for Λ = 2.58.

Best integer solutions:
scale factor    a    b    c    d    e    maxdiff
1.999           2    3    4    5    5    0.584
2.301           2    3    6    6    6    0.338
4.601           4    6   11   12   12    0.337
6.882           6    9   16   17   18    0.331
17.105         15   22   39   42   45    0.330
…
∞ (opt)    0.8750  1.2892  2.2575  2.4445  2.6197    0.3225

Solutions with a ≡ 1 (a equal to the scale factor):
scale factor    a    b    c    d    e    maxdiff
1               1    1    2    2    2    1.069
2               2    3    4    5    5    0.582
3               3    4    7    7    8    0.546
4               4    5    9   10   11    0.481
6               6    8   13   14   15    0.446
8               8   10   18   19   20    0.442
15             15   19   33   35   38    0.440
19             19   24   41   45   48    0.435
…
∞ (opt*)        1  1.2490  2.1537  2.3408  2.5159    0.4263
the WDT mask are enough to produce the correct look-up table, or whether extra directions must be added to compute the table. The only changes needed to make the method applicable to a parallelepipedic grid were to modify the mask and to search 1/16th of the volume instead of 1/48th. For the original image the MA consists of 110,067 voxels, while the MA for the interpolated image consists of 207,013 voxels. The calculation of the MA of the original image required 1.12 CPU seconds, compared to 2.83 CPU seconds for the interpolated image. Hence, it is almost exactly a factor Λ faster to calculate the MA in the original image instead of in the interpolated one. A further bonus is of course that no interpolation is needed.
4 Conclusions
Optimal weighted distance transforms for a 3×3×3 neighbourhood in 3D images with elongated voxel grids have been investigated. The results presented are valid for all elongated voxel ratios as long as two sides are of equal length, which is often the case for tomographic and microscopic images. Numerical results for grids with voxels of size 1×1×Λ, for Λ equal to 1.5 and 2.58, have been presented. The new WDTs are useful in applications where relative distances are needed. As an example, the MA was computed on a medical image with elongated voxels and compared to the MA computed on the same image interpolated to cubic voxels. To get a better approximation to the Euclidean distance transform, a larger
Fig. 3. 2D-projection, middle slice, and middle slice of the computed medial axis of segmented arteries from a magnetic resonance angiography image. All images are produced from the right (z–y) side of the volume to show the difference in resolution in the z-direction. The lower row shows images from the original volume with voxel dimension 1 × 1 × 2.58. The upper row shows images from the same volume but interpolated to voxel size 1 × 1 × 1. The grey levels in c) and f) represent the voxel values in the MA. The resulting MAs are quite similar in both cases (when interpolation is not considered).
neighbourhood could of course be considered. The asymmetrical neighbourhood 5 × 5 × 3, used in [5], should then be a good way to compensate for the largest errors, without increasing the computation times as much as with a complete
5 × 5 × 5 neighbourhood. However, to calculate such local distances, many more constraints would have to be considered to ensure that the WDTs are well-behaved (semi-regular) and that the equations used for optimization really are the ones valid in the case investigated. In summary, the errors become quite large when distance transforms are computed in images with elongated voxels, unless the elongation factor is small. In fact, a Λ of 1.57 is enough to double the optimal maxdiff compared to a cubic grid. The WDTs presented here are good choices for small Λ in all cases, and for larger Λ in cases where rotation dependence and true distances are unimportant. In other cases, interpolation is the only feasible option.
References
1. P. Bolon, J. L. Vila, and T. Auzepy. Opérateur local de distance en maillage rectangulaire. In Proc. 2ème Colloque de Géométrie Discrète en Imagerie: Fondements et Applications, Grenoble, France, pages 45–56, Sept. 1992.
2. G. Borgefors. Distance transformations in digital images. Computer Vision, Graphics, and Image Processing, 34:344–371, 1986.
3. G. Borgefors. On digital distance transforms in three dimensions. Computer Vision and Image Understanding, 64(3):368–376, Nov. 1996.
4. D. Coquin and P. Bolon. Discrete distance operator on rectangular grids. Pattern Recognition Letters, 16:911–923, 1995.
5. D. Coquin, Y. Chehadeh, and P. Bolon. 3D local distance operator on parallelepipedic grids. In Proc. 4th Discrete Geometry for Computer Imagery, Grenoble, France, pages 147–156, Sept. 1994.
6. E. Remy and E. Thiel. Computing 3D medial axis for chamfer distances. In Borgefors, Nyström, and Sanniti di Baja, editors, Discrete Geometry for Computer Imagery, volume 1953 of Lecture Notes in Computer Science, pages 418–430. Springer-Verlag, Dec. 2000.
7. A. Rosenfeld and J. L. Pfaltz. Sequential operations in digital picture processing. Journal of the Association for Computing Machinery, 13(4):471–494, Oct. 1966.
8. I.-M. Sintorn and G. Borgefors. Weighted distance transforms in rectangular grids. In Ardizzone and Gesù, editors, Proc. 11th International Conference on Image Analysis and Processing (ICIAP 2001), Palermo, Italy, pages 322–326. IEEE Computer Society, Sept. 2001.
Robust Normalization of Shapes

Javier Cortadellas¹, Josep Amat², and Manel Frigola³

¹ Departament d'Electrònica, Enginyeria La Salle, Universitat Ramon Llull,
Pso. Bonanova 8, 08022 Barcelona, Spain
[email protected]
² IRI - Institut de Robòtica e Informàtica, Universitat Politècnica de Catalunya,
Llorens Artigas 4-6, 08028 Barcelona, Spain
[email protected]
³ Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial,
Universitat Politècnica de Catalunya, C. Pau Gargallo 5, 08028 Barcelona, Spain
[email protected]

Abstract. The normalization of a binary shape is a necessary step in many image processing tasks based on image domain operations. When one must deal with deformable shapes (due to the projection of non-rigid objects onto the image plane, or to small changes in the position of the viewpoint), the traditional approaches do not perform well. This paper presents a new method for shape normalization based on robust statistics techniques, which keeps the location and orientation of shapes constant independently of the possible deformations they can suffer. A numerical comparison of the sensitivity of both methods is used to validate the proposed technique, together with a ratio of areas between the non-overlapping and the overlapping regions of the normalized shapes. The results presented, involving synthetic and real shapes, show that the new normalization approach is much more reliable and robust than the traditional one.
1 Introduction

The normalization of a binary shape is a necessary step in many image processing tasks based on image domain operations. It is even a crucial step when the main goal of a computer vision system is the discrimination of objects depending on their visual appearance. In such systems, the way an object is represented arises as an important question to be solved. Although other appearance properties such as color or texture can be exploited, shape can be used as a powerful tool for describing and recognizing objects, provided a reliable image segmentation method is available. The shape of an object refers to its profile and physical structure. In this paper, shapes are considered binary images that come from the projection of 2D or 3D objects onto a 2D image plane (they are also usually called silhouettes [14]). There are basically four ways of representing this important characteristic: by means of boundaries, regions, moments, or structural representations. Among them, the description based on the region that an arbitrary shape occupies is the simplest one. The benefits of this simple description are that it has a straightforward meaning for human visual perception, it avoids the computation of shape descriptors [3,4,5], which can be ambiguous, and finally, as the description lies in the image domain, it is

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 255–266, 2002. © Springer-Verlag Berlin Heidelberg 2002
256
J. Cortadellas, J. Amat, and M. Frigola
possible to use specialized hardware architectures like [13] that operate over the image domain at very high speed. However, the main drawback of this kind of representation is that, when used in shape recognition tasks, it needs to be normalized with respect to similarity transformations (translation, rotation, and scaling) in order to guarantee a unique representation of the projection of an object onto an image plane. Traditionally, the centroid (also called center of gravity) and the angle of the axis of the least moment of inertia have been used to normalize the location and orientation of shapes [2,3,4,5]. A standard method consists of rotating the shape around the centroid so that the axis previously mentioned, called the principal axis, has some predefined orientation. This normalization does not present difficulties when dealing with shapes of rigid objects that have always been projected onto the image plane from the same point of view. However, problems arise when one must deal with deformable shapes, produced by small changes of perspective or alterations of the physical structure of non-rigid objects, which produce significant changes on the periphery of the shape. In many applications, such as shape recognition or registration, it would be desirable that two different shapes of a same object class share, to a certain extent (depending on the degree of deformation), the same location and orientation, because this would ease further processing tasks; but the classical normalization approaches do not perform well when dealing with such deformed shapes. The main contribution of this paper is the development of a reliable technique for accurately estimating the robust centroid and the robust principal axis of deformable shapes, making use of robust statistics [1,7,8]. Size normalization is not treated in this paper because it is assumed that objects with different scale must be discriminated.
The structure of the paper is as follows: after this introduction, the traditional approach for normalizing shapes, based on the centroid and the principal axis, is briefly described. After pointing out some of the existing problems of this approach, a new robust normalization method is developed. Then, by means of a similarity measure based on the discrepancy of overlapping areas, the proposed method is compared with the traditional normalization technique. Some results involving synthetic and real shapes are also presented and finally, the conclusions are summarized.
2 Normalization of Shapes. Classical Method

There are a number of different methods that have been used to determine the orientation of a shape [3], like the maximum Feret's diameter (the line between the two points on the periphery that are furthest apart) or the major axis of an ellipse fitted to the contour of the shape silhouette. Among these methods, those that take into account all the pixels of the shape have often worked better than those based on contour or boundary representations, because they are less influenced by the presence or absence of a single pixel around the periphery. One of the most widely used methods among those that consider the whole shape is related to the computation of the principal axis. It is important to mention that the main goal of this work is not to find a meaningful axis in terms of visual perception, but to find an axis insensitive to alterations of the shape due to admissible deformations. This axis would allow different deformed shapes of a same object to all be aligned in a fixed direction.
2.1 Principal Axis Based Orientation Method

In this method, the centroid is used to normalize the location, and the orientation is defined as the angle of the least moment of inertia axis [5]. This axis corresponds to the line about which it takes the least amount of energy to spin an object of like shape. It is called the principal axis, and can be regarded as the line that "best fits" the shape. It is obtained by minimizing, with respect to the angle θ, the sum of the squared distances of each point of the shape to an axis that passes through the centroid of the shape and has slope tan θ.
Fig. 1. Arbitrary shape with its principal axis, which is the line that minimizes the sum of the distances (di) of all the points of the shape (Pi). Assuming discrete images, these points are the coordinates of the pixels of the shape.
The result of this minimization is [3,5]:

θ = (1/2) · tan⁻¹( 2µ11 / (µ20 − µ02) )    (1)

where the µ_jk are the central moments of the shape. Let I be a binary-valued picture and S = {(x, y) | I(x, y) = 1} the set of pixels representing a two-dimensional shape. For each pair of non-negative integers (j, k), the central moments of S are given by [5]

µ_jk = Σ_{(x,y)∈S} (x − x̄)^j · (y − ȳ)^k    (2)
where (x̄, ȳ) are the coordinates of the centroid of the shape, also used for location normalization. Moments can be given a physical interpretation by regarding S as an area composed of a set of point masses located at the points (x, y), thus providing useful information about the spatial arrangement of the points of S. The standard method for normalizing S with respect to rotation is to rotate it in such a way that its principal axis has some standard orientation, say vertical. The main problem of this principal axis approach is that, although it is quite insensitive to small variations on the boundary of the shape (e.g. due to discretization noise), it becomes very sensitive when the shape varies significantly (e.g. due to structural changes of the object it represents). This fact can be seen in Fig. 2.
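The classical estimate of Eqs. (1)-(2) can be sketched directly (a minimal sketch; names are ours, and atan2 is used so the µ20 = µ02 case is handled):

```python
import math

def classical_normalization(pixels):
    """Centroid and principal-axis angle of a binary shape, Eqs. (1)-(2).
    pixels: list of (x, y) coordinates of the shape's pixels."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

For pixels lying on a 45° line the returned angle is π/4, as expected.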
Fig. 2. Principal axis estimation of a deformable object: (a) θ = 30.47°, centroid = (64,61); (b) θ = 13.92°, centroid = (61,65); (c) overlapping of the two previous images. Notice the difference between the two shapes and their centroids.
This example shows two instances (a and b) of the same object class "elephant". Despite the fact that both shapes have a very similar structure - and thus it would be desirable that they share, to a certain extent, the same orientation and centroid - there is a difference of more than 16 degrees between the two principal axes and of 7 pixels between the centroids (using the chessboard metric). These normalization results would be unacceptable in a template matching framework, for instance, because the misalignment would cause an important decrease of the similarity measure. The reason for these differences is that a region on the periphery of the shape has changed (in this case, the position of the elephant's trunk). This change has biased the centroid and orientation estimation.
3 Robust Normalization of Shapes

Considering the problem stated above, the objective now is to reduce the sensitivity of the centroid and principal axis estimation to shape deformations. To achieve this goal, it is assumed that not all the regions of a shape are equally important when computing the principal axis. This means that pixels that are far away from the tendency of the bulk of the pixels, or that are in regions easily subject to changes, should be down-weighted, instead of letting each one vote equally in the estimation. This weighting can be handled by means of robust statistics. Robust estimation has been successfully used to solve many computer vision, signal processing, and statistical problems [1,6,9,10]. There are many robust procedures to tackle these problems, such as Least Median of Squares, RANSAC, M-estimators, etc. (see [7,10] for a review). In this paper, an M-estimator is chosen, since the solution can be found with a continuous derivative process, which is very appealing. In order to apply robust statistics to the principal axis estimation, the problem has been reformulated. It can be shown [3] that the orientation computed in (1) coincides with the direction of the eigenvector corresponding to the larger eigenvalue of the matrix:
( µ20  µ11 )
( µ11  µ02 )    (3)
Following a similar formulation as in [15], this eigenvector estimation problem can be related to the minimization of the following energy function, based on a quadratic norm:

E{v, c_i} = Σ_{i=1}^{N} ‖d_i − m − v·c_i‖²    (4)

where d_i = (x_i, y_i)ᵀ are the coordinates of the N pixels of the shape, m = (x̄, ȳ)ᵀ is the centroid, the c_i are the linear coefficients that minimize the projection error, and v is the eigenvector associated with the largest eigenvalue of Eq. (3). It is well known that in a least-squares framework this energy function minimization would not be robust to outliers, which can be interpreted as the pixels that are not close to the principal axis. Changes in the position of these pixels would cause significant variations in the orientation estimation. For this reason, the quadratic function in the energy function for the least-squares estimation is replaced by a ρ-function, which assigns smaller weights to constraints with larger residues. Two ρ-functions commonly used in computer vision are the Lorentzian function and the Geman-McClure function, given as follows [6,7,9]:

ρ_LO(x, σ) = log(1 + x²/(2σ²)),    ρ_GM(x, σ) = x²/(σ² + x²)    (5)
where x is the residue of the data constraint and σ is the scale parameter. When using the ρ-function for robust estimation in an error minimization framework (for example, line fitting), the influence of each data constraint on the solution is characterized by an influence function, which is the derivative of the ρ-function. If the derivatives of the above two ρ-functions are taken, it is straightforward to see that the influence functions decrease as the magnitude of the residue increases [1]. For the least-squares estimation, the influence function increases linearly with the magnitude of the residue; therefore the least-squares estimation is more sensitive to outliers than the robust estimation. Incorporating the ρ-function into Eq. (4) yields the following new energy function

E{v, c_i} = Σ_{i=1}^{N} ρ(e_iᵀ·e_i, σ)    (6)

where e_i = d_i − m − v·c_i is the error vector and σ is the scale parameter. In this paper, the Geman-McClure ρ-function has been used. It must also be remarked that, unlike [6], intra-sample outliers have not been taken into account, because it is meaningless for only the x or the y coordinate of a distant pixel to be an outlier.
In order to minimize (6), the relation between robust statistics and IRLS (Iteratively Reweighted Least Squares) has been exploited [6], reformulating the previous robust estimation problem in terms of weighted least squares:

E{v, c_i} = Σ_{i=1}^{N} w_i · (d_i − m − v·c_i)ᵀ (d_i − m − v·c_i)    (7)

with weights w_i related to the derivative of the ρ-function by

w_i = dρ(e_iᵀ·e_i, σ) / d(e_iᵀ·e_i)    (8)
An efficient way of computing the minimization of Eq. (7) consists of applying alternated least squares. The iterative scheme, which can be verified by taking the partial derivatives with respect to v and c_i, is:

m(k+1) = Σ_{i=1}^{N} w_i(k)·d_i / Σ_{i=1}^{N} w_i(k)    (9)

v(k+1) = Σ_{i=1}^{N} w_i(k)·c_i(k)·(d_i − m(k)) / Σ_{i=1}^{N} w_i(k)·c_i(k)²    (10)

c_i(k+1) = v(k)ᵀ·(d_i − m(k)) / (v(k)ᵀ·v(k))    (11)
It is important to notice that the robust centroid of the shape (9) is also iteratively computed, taking into account the weights w_i associated with each pixel location of the shape. It is also worth mentioning that at every iteration k the weights are updated following Eq. (8). A solution can usually be reached in fewer than 10 iterations, and the computational cost of each iteration is just O(n). The scale parameter σ can be automatically estimated using the Median Absolute Deviation (MAD) [7], which can be viewed as a robust statistical estimate of the standard deviation.

3.1 Shape Dependent Weighting Function

As has been shown in the previous section, it is necessary to weight the contribution of the pixels to the principal axis computation. These weights should penalize those pixels that are far away from the tendency of the bulk of the pixels or that are more likely to vary due to shape deformations. However, Equation (8) does not consider the latter dependence on the shape, as can be seen in Fig. 3.
Fig. 3. Two instances of a deformable shape, with the desired principal axis and the distances of two pixels to this axis.
This example shows two instances of a deformable shape. It would be desirable that both have the same principal axis, regardless of small changes in the position of the tail. In Fig. 3(a) the pixel P2 is closer to the axis than P1, so according to Eq. (8) its influence on the final solution is bigger. However, in Fig. 3(b) pixels P2 and P1 are at the same distance, so both now have the same influence on the final solution. A small change in the position of the tail can cause significant differences in the associated weights, affecting the centroid and orientation estimation. It is clear that the tail of this shape is more likely to vary, and that the influence of P2 in comparison with the influence of P1 should be down-weighted in both figures. In order to cope with these situations and to add more meaning to the weights, the original weights w_i are modified by multiplying them by a shape dependent weighting function M(x_i, y_i)^β, whose value depends on the location of each pixel i of the shape. The new weights Ω_i are:
Ω_i = w_i · (M(x_i, y_i))^β    (11)
where β is a constant factor that controls the influence of the periphery on the robust principal axis estimation. The shape dependent weighting function M(x_i, y_i)^β must penalize those regions of the shape which are more likely to vary due to deformations or changes of the shape. As it is assumed that the major alterations occur on the periphery of the shapes, a measure of the distance between the pixels of the shape and its boundary is a good candidate for M(x_i, y_i). In this paper, the chessboard distance metric [3,11] has been used to compute M(x_i, y_i). It is defined as follows:
d_ch(p_i, p_j) = max{ |x_i − x_j|, |y_i − y_j| }    (12)

for pixels p_i = (x_i, y_i) ∈ S and p_j = (x_j, y_j) ∈ S_B, where S_B denotes the set of shape pixels that are on the boundary of the shape. According to this, M(x_i, y_i) is defined, for each pixel of the shape, as the distance d_ch from that pixel to the nearest pixel on the boundary. There are efficient algorithms for computing distance maps based on the
chessboard distance metric that can be used to generate M(x_i, y_i) [2,3]. Figure 4 shows two examples of shape dependent weighting functions M(x_i, y_i) for a deformable shape. The β factor depends on the dimension of the image, i.e. the size of the shape. The bigger the β factor is, the more the outer pixels of the shape are down-weighted in relation to the inner pixels. Consequently, this factor also downplays the effect of spurious edges and noise due to binarization on the boundary of the shape. In the experiments, a value between 0.5 and 2 worked well, with 1.4 being the standard value used for images of 128×128 pixels.
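A sketch of one such computation of M(x, y), using the classical two-pass chamfer sweep under the chessboard metric (our own implementation, not the paper's; boundary pixels get distance 0):

```python
def chessboard_distance_map(mask):
    """M(x, y) of Eq. (12): chessboard distance from each shape pixel to the
    nearest boundary pixel, via a two-pass chamfer sweep (every 8-neighbour
    step costs 1 under the chessboard metric).
    mask: 2D nested list of 0/1. Only the values at shape pixels are used."""
    H, W = len(mask), len(mask[0])
    INF = float("inf")

    def on_boundary(y, x):
        # shape pixel touching the image border or a background 4-neighbour
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < H and 0 <= nx < W) or not mask[ny][nx]:
                return True
        return False

    # boundary shape pixels are the seeds (distance 0)
    M = [[0 if (mask[y][x] and on_boundary(y, x)) else INF
          for x in range(W)] for y in range(H)]

    fwd = ((-1, -1), (-1, 0), (-1, 1), (0, -1))   # neighbours already visited
    bwd = ((1, 1), (1, 0), (1, -1), (0, 1))
    for offs, ys, xs in ((fwd, range(H), range(W)),
                         (bwd, range(H - 1, -1, -1), range(W - 1, -1, -1))):
        for y in ys:
            for x in xs:
                for dy, dx in offs:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and M[ny][nx] + 1 < M[y][x]:
                        M[y][x] = M[ny][nx] + 1
    return M
```

The resulting map, raised to the power β, multiplies the robust weights as in Eq. (11): Ω_i = w_i · M(x_i, y_i)^β.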
Fig. 4. Above, two shapes of the same deformable object class “teddy bear”. Below, their corresponding shape dependent weighting functions M(xi,yi) showing that the parts more likely to vary (arms and legs) have smaller associated weights.
4 Evaluation Method

In order to measure the effectiveness of the proposed robust orientation and localization method in comparison with the traditional principal axis estimation, three parameters are computed. The first is the difference between the centroid estimates of a shape (COGDIFF) that is subjected to a deformation, computed using the chessboard metric as COGDIFF = |x1 − x2| + |y1 − y2|. The second is the change in the orientation estimation (θDIFF) when a shape has been deformed, computed for both the standard method (T) and the robust method (R) as θDIFF = θ1 − θ2. Finally, a similarity measure (δ) is computed for pairs of shapes once they have been aligned using the previous estimates. This third parameter has been introduced to validate the use of this normalization procedure in a deformable object recognition framework based on shapes, and it is closely related to the work of P. Rosin [12]. Given the areas shown in Figure 5 - R, the difference between the original shape and the modified shape; D, the difference between the modified and the original shape; and T, the overlapping region common to both shapes - the similarity measure δ is
δ = 1 − (R + D)/T    (13)
This parameter measures the normalized discrepancies between the areas of the aligned shapes and reaches its maximum of 1 when the original and the deformed shape are identical. In contrast to a simple sum of the overlapping area, which would be equivalent to a template matching approach, this measure remains valid even when, due to the deformation, one shape encloses the other. The main difference between this measure and those of [12] is that here the objective is not to measure shape properties but to recognize shapes. Hence, the errors have been normalized by the intersection of the areas instead of the whole area of the reference shape, to provide better selectivity when it comes to recognizing shapes.
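For binary masks, Eq. (13) reduces to a few set operations. This sketch assumes the two shapes have already been aligned with the estimated centroids and orientations and that the aligned shapes overlap (T > 0); areas are measured as pixel counts:

```python
import numpy as np

def similarity(original, deformed):
    """Discrepancy-based similarity of Eq. (13): delta = 1 - (R + D) / T,
    with R the pixels only in the original, D the pixels only in the
    deformed shape, and T the overlap common to both."""
    a = original.astype(bool)
    b = deformed.astype(bool)
    R = np.count_nonzero(a & ~b)   # in the original shape only
    D = np.count_nonzero(b & ~a)   # in the deformed shape only
    T = np.count_nonzero(a & b)    # overlapped region
    return 1.0 - (R + D) / T
```

Identical shapes give exactly 1; badly mismatched shapes, where the discrepancies exceed the overlap, can push δ below zero.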
(a) R
(b) D
(c) T
Fig. 5. Areas (in black) used in the similarity measure based on the discrepancy method, applied to the deformable shape shown in Fig. 4.
5 Results

The robust and the traditional normalization methods have been applied to a set of deformable shapes corresponding to recognizable objects. Some of these shapes are shown in Fig. 6. The procedure has been as follows: for each shape, both the standard method based on moments and the robust normalization method have been carried out. This results in two estimates, the centroid and the orientation, for each method. Each shape has then been rotated around its centroid in order to align the principal axis vertically, and shifted to place the centroid of the shape at the centre of the image. Table 1 condenses some results about the reliability of the robust method. The left column indicates the pair of shapes whose parameters have been compared. This table clearly suggests that robust normalization (R) is much less sensitive to changes of the shape than the traditional approach (T). The maximum difference between the orientation estimates of a shape that has been deformed is less than 3°, compared with the 29° of the classical method. There is also a significant improvement in the stability of the centroid estimation, as can be seen in the comparison between shapes h1 and h2. The similarity measure has also verified that normalizing shapes with respect to robust estimates results in a better resemblance measurement between shapes, although the selectivity was not as high as expected. This is due to the fact that a large part of the pixels of two silhouettes
J. Cortadellas, J. Amat, and M. Frigola
of the same class of object (those surrounding the centroid) overlap even when the difference in orientation is significant. These results also show that when two shapes belonging to different object classes are compared, the similarity measure is quite low, so it is meaningless in these cases to compare the robust and traditional estimation methods.

Table 1. Comparison between the standard (T) and robust (R) normalization of shapes of deformable objects.
Pair    θDIFF T   θDIFF R   COGDIFF T   COGDIFF R   δ T    δ R
a1/a2   16.64°    2.61°     7           1           0.61   0.88
b1/b2   12.54°    1.26°     2           1           0.58   0.84
c1/c2   14.1°     2.97°     6           1           0.59   0.77
d1/d2   6.25°     0.42°     7           2           0.72   0.90
e1/e2   29.05°    0.01°     4           0           0.50   0.91
f1/f2   2.21°     0.19°     2           1           0.76   0.80
g1/g2   7.56°     2.88°     9           3           0.53   0.63
h1/h2   9.98°     0.27°     10          1           0.61   0.72
i1/i2   6.59°     1.47°     6           1           0.42   0.74
a1/f2   -         -         -           -           0.47   0.42
e1/b2   -         -         -           -           0.5    0.52
f1/i2   -         -         -           -           0.39   0.48
i1/d2   -         -         -           -           0.33   0.28
6 Conclusions and Further Work

The classical location and orientation normalization method, based on the centroid and the least moment of inertia axis, is not appropriate when dealing with deformable shapes, because it is based on a least squares minimization framework that is not robust to outliers. Outliers can be interpreted as pixels that lie far from the bulk of the data and have a strong influence on the estimation, so minor changes of their position have a great impact on the final solution. By means of a reformulation in terms of an energy function, it has been possible to incorporate robust statistics and make the estimations much less sensitive to variations of the shape. This goal has been attained by using a shape dependent weighting function, which penalizes those pixels of the shape that are more likely to be subjected to changes due to deformations, i.e. the periphery of the shape. Finally, to prove the validity of the proposed method, a similarity measure has been introduced. It is based on the discrepancy between the areas of two shapes, once they have been aligned with both the traditional and the robust normalization methods.
Fig. 6. The set of shapes used for evaluation purposes
The experimental results have demonstrated the superior performance of the robust approach with respect to the traditional one, even when shapes suffer important structural changes. Although the similarity measure is able to differentiate among the deformable shapes and validates the proposed normalization method, it turned out to be less selective than would be desirable. It is important to remark that the method tolerates deformations of the shapes only up to a certain extent, beyond which the shape scarcely resembles the original one and the normalization procedure yields erroneous estimates. This paper does not deal with size normalization. It has been assumed that different instances of the same object are available, so changes of size are only due to deformations, which have been treated. Future research will focus on scale normalization.
References

[1] M. J. Black and P. Anandan. "The robust estimation of multiple motions: parametric and piecewise-smooth flow-fields". Computer Vision and Image Understanding, Vol. 63, No. 1, pp. 75-104, 1996.
[2] J. C. Russ. The Image Processing Handbook, 2nd edition. CRC Press, 1995.
[3] A. Rosenfeld, A. C. Kak. Digital Picture Processing, Vols. 1 & 2. Academic Press, 1982.
[4] A. K. Jain. Fundamentals of Digital Image Processing. Prentice Hall, 1989.
[5] R. M. Haralick, L. G. Shapiro. Computer and Robot Vision, Vols. I & II. Addison-Wesley, 1993.
[6] F. de la Torre, M. Black. "A Framework for Robust Subspace Learning". Accepted for Int. Journal of Computer Vision, 2002.
[7] F. Hampel, E. Ronchetti, P. Rousseeuw, W. Stahel. Robust Statistics: The Approach Based on Influence Functions. Wiley, 1986.
[8] P. J. Huber. Robust Statistics. Wiley, 1981.
[9] S. Geman, D. McClure. "Statistical methods for tomographic image reconstruction". Bulletin of the International Statistical Institute, Vol. LII, pp. 4-5, 1987.
[10] D. Mintz, P. Meer. "Robust Estimators in Computer Vision: an Introduction to Least Median of Squares Regression". Artificial Intelligence and Computer Vision, Y. A. Feldman, A. Bruckstein (Eds.). Elsevier, 1991.
[11] G. Borgefors. "Distance transformations in arbitrary dimensions". CVGIP, Vol. 27, pp. 321-345, 1984.
[12] P. L. Rosin. "Measuring Shape: Ellipticity, Rectangularity, and Triangularity". 15th Int. Conf. Pattern Recognition, Barcelona, Spain, Vol. 1, pp. 952-955, 2000.
[13] J. Cortadellas, J. Amat. "Image Associative Memory". 15th Int. Conf. Pattern Recognition, Barcelona, Spain, Vol. 3, pp. 638-641, 2000.
[14] V. Bruce, P. R. Green, M. A. Georgeson. Visual Perception: Physiology, Psychology and Ecology. 3rd Ed., Psychology Press, 1996.
[15] L. Xu, A. Yuille. "Robust Principal Component Analysis by Self-Organizing Rules Based on Statistical Physics Approach". IEEE Trans. on Neural Networks, Vol. 6, No. 1, January 1995.
Surface Area Estimation of Digitized 3D Objects Using Local Computations

Joakim Lindblad and Ingela Nyström

Centre for Image Analysis, Uppsala University
Lägerhyddvägen 17, SE-75237 Uppsala, SWEDEN
joakim, [email protected]

Abstract. We describe surface area measurements based on local estimates of isosurfaces originating from a marching cubes representation. We show how improved precision and accuracy are obtained by optimizing the area contribution for one of the cases in this representation. The computations are performed on large sets (approximately 200,000 3D objects) of computer-generated spheres, cubes, and cylinders. The synthetic objects are generated over a continuous range of sizes with randomized alignment in the digitization grid. Sphericity, a scale invariant measure of compactness, allows us, in combination with the improved surface area estimate, to distinguish among the test sets.

Keywords: shape analysis, marching cubes, isoperimetric inequality, accuracy, rotation invariance
1 Introduction
In quantitative shape analysis, it is important to know the precision and accuracy of the measurement, and whether there are any restrictions on the input domain outside which the reliability of the measure does not hold. One aim of this paper was to study how 3D shape can be measured in terms of compactness. In 2D image analysis, P²/A is a commonly used measure of compactness which may be derived from the classical isoperimetric inequality [2]. An "inverse" to compactness is the circularity of objects. This concept is extendable to higher dimensions. Hence, in 3D image analysis, one feature of interest when distinguishing among different classes of objects is sphericity. We define this as a dimensionless ratio between enclosed volume and surface area. The problem that arises is that, in image analysis, we are given only a digitized version of the original continuous object. Under these conditions, how reliable is our measure? Note that there is a difference between measuring digital objects and measuring digitized objects. The former class of objects exists solely in the digital world, and exact measures can be calculated. The latter represents the digitization of continuous original objects. The aim of the measure is not to find properties of the digital version, but rather to estimate properties of the continuous original. Also, the measurement on the digitized

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 267–278, 2002. © Springer-Verlag Berlin Heidelberg 2002
object can never become more than an estimate, as information is irreversibly lost in the digitization process. This paper focuses on how the surface area can be measured by an approximating isosurface generated by the Marching Cubes approach [13]. Our aim is to find a way to obtain accurate estimates with high precision based on local computations, avoiding strong assumptions about the object of study. We also verify the robustness of the sphericity measurement for small to large convex objects with randomized alignment in the digitization process.
2 Surface Area of 3D Objects
In 2D image analysis, perimeter has been, and sometimes still is, measured in terms of cumulative distance from pixel centre to pixel centre. This is straightforward to accomplish using the Freeman chain code [5], but results in incorrect measurements. Note that the weights 1 for horizontal and vertical steps and √2 for diagonal steps are not optimal when measuring digitized line segments [6]. By assigning optimized weights [17,1,3] to the steps, more accurate perimeter measurements are obtained. If the boundary is represented by the pixel edges between object and background, steps can be taken from pixel edge to pixel edge, taking advantage of optimized weights, with promising results [12]. This corresponds to Marching Squares, a 2D equivalent of Marching Cubes. In 3D image analysis, when measuring surface area, an analogous first approach would be to connect voxel centre to voxel centre and add the resulting areas. This is not simple and would, as in the 2D case, produce incorrect measurements. In a second approach, where the surface is represented by the faces of the voxels at the boundary between object and background [15], the number of faces gives an efficient and simple estimation of the surface area, but this is an overestimate [14]. By approximating the boundary between object and background with a triangular representation, e.g., the one obtained from the marching cubes algorithm, more correct surface area estimates are obtained. Recent publications [7,9] have studied the problem of surface area estimation. Our approach is different in that we exploit the simplicity of working in small neighbourhoods and base our estimates on local computations, while sacrificing neither precision nor accuracy.
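The weighted-step perimeter estimators discussed above reduce, for an 8-connected Freeman chain code, to counting even (axial) and odd (diagonal) steps and summing their weights. The sketch below uses the naive Freeman weights (1, √2) as defaults; the optimized weights of [17,1,3] (roughly 0.95 for axial and 1.34 for diagonal steps, with the exact values to be taken from those references) would be passed in instead:

```python
import math

def chain_code_perimeter(code, w_axial=1.0, w_diag=math.sqrt(2)):
    """Perimeter estimate from a Freeman 8-chain code: even codes are
    axial (horizontal/vertical) steps, odd codes are diagonal steps.
    The default Freeman weights overestimate digitized straight lines;
    pass optimized weights to reduce the bias."""
    n_axial = sum(1 for c in code if c % 2 == 0)
    n_diag = len(code) - n_axial
    return w_axial * n_axial + w_diag * n_diag
```

For an axis-aligned square boundary the naive weights are exact, while for digitized oblique lines the optimized weights give a lower, less biased estimate.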
3 Triangular Isosurface Representation
The voxel representation of object and background has a close relation to polygonal representations; a digital surface can be transformed directly into a triangulated isosurface [10,11]. An m-cube (short for marching cube) is the cube bounded by the eight voxels in a 2 × 2 × 2 neighbourhood. Hence, each corner of the m-cube corresponds to a voxel. If object and background are considered, the possible number of configurations of the eight voxels is 256. Each configuration consists of zero to four
(a) Case 0
(b) Case 0
(c) Case 1
(d) Case 1
(e) Case 2
(f) Case 2
(g) Case 8
(h) Case 9
Fig. 1. m-cubes of 2 × 2 × 2 voxels, where voxels denoted by a • are inside the object and the other voxels are outside. The isosurfaces for the configurations consist of zero to four triangles.
triangles constituting the isosurface through the specific m-cube. The configurations are often grouped into symmetry and complementary cases, resulting in 14 (or 15) cases [13,16]. See Figures 1 and 2 for examples. Two of these configurations do not represent a boundary situation and have no triangles. These correspond to the case when the m-cube is placed completely outside or completely inside the object (Figures 1(a) and 1(b), respectively). We number the cases according to previous literature [13], except for cases 11 and 14, which we group into one case 11. Some of the cases can be triangulated in different ways, e.g., case 11, which is illustrated in Figures 2(a) and 2(b). We use the more symmetric triangulation in Figure 2(b). The approximated isosurface in the m-cube is computed from some interpolation of the voxel values. The interpolation results in intersection points on the edges of the m-cube for the triangle vertices. In the following, we will use the simplest case, where the intersection points are positioned midway along the m-cube edges. The correct connection among intersection points is ambiguous for some configurations of voxels. See Figures 2(c) and 2(d) for an example. This classical problem in the original marching cubes algorithm was pointed out by Dürst in 1988 [4]. The problem requires careful consideration when a closed surface is desired. It has been examined and solved (e.g., [16,8]). For this study it should be sufficient to assign a contributing area for each configuration.
(a) Case 11
(b) Case 11
(c) Case 3
(d) Case 3
Fig. 2. (a), (b): Some m-cubes can be triangulated in different ways (vertices moved from the midpoint for visibility reasons). We use the more symmetric version (b). (c), (d): Shown are the triangles we use for these ambiguous m-cube configurations.
4 Sphericity
In 2D image analysis, P²/A is a commonly used measure for compactness, or circularity, of objects. The concept behind this dimensionless ratio is extendable to higher dimensions. In 3D image analysis, a feature of interest when distinguishing among different classes of objects would, hence, be the dimensionless ratio of surface area to volume of an object. For a consistent and natural definition of this class of ratios, we choose the following for the 2D case and correspondingly for the 3D case:

Circularity, C: perimeter of a circle enclosing the same area A as the object, divided by the perimeter P of the object.

C = √(4πA) / P    (1)

Sphericity, S: surface area of a sphere enclosing the same volume V as the object, divided by the surface area A of the object.

S = (36πV²)^(1/3) / A    (2)

In the continuous case, these measures are in the range [0, 1] given by the isoperimetric inequality. The upper bound is reached only by the ball of the corresponding dimension. For the nD case, the dimension of the ratio is length^(n−1)/length^(n−1). An alternative would be to define the ratio as length^n/length^n, but we choose the one formulated above as it more closely reflects the dimension of the predominant part of the measurement.¹ The reason for inverting the P²/A expression is that circularity should reach its maximum for a true circle.

¹ The measure P²/A suffers from the fact that the perimeter has a more predominant effect on the measure than the area.
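Equations (1) and (2) translate directly into code; a minimal sketch:

```python
import math

def circularity(area, perimeter):
    """Eq. (1): perimeter of the equal-area circle divided by the
    object's perimeter; 1 for a true circle, below 1 otherwise."""
    return math.sqrt(4 * math.pi * area) / perimeter

def sphericity(volume, surface_area):
    """Eq. (2): surface area of the equal-volume sphere divided by the
    object's surface area; 1 for a true ball, below 1 otherwise."""
    return (36 * math.pi * volume ** 2) ** (1 / 3) / surface_area
```

Feeding in the exact continuous values for a circle or a ball returns 1, the upper bound given by the isoperimetric inequality.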
5 Experiments
We have generated sets of synthetic convex objects. Each set consists of a certain shape digitized in different sizes, rotations, and positions with respect to the digitization grid. The objects are generated in the continuous space and then discretized using Gauss centre point digitization, where the digitized object is defined to be the set of all grid points (voxel centres) contained in the continuous set. All units are given with respect to the sampling grid, i.e., edge length, face area, and volume of voxels are all equal to 1. The surface area contribution for each of the 256 marching cube configurations, A_MC(i), i = 0..255, is precomputed by summing the areas of the included triangles and stored in a lookup table [14]. The histogram describing the number of each of the 256 configurations, n_i, is computed for each digitized object. The surface area of the specific object is then calculated as

A = Σ_{i=0}^{255} n_i A_MC(i)    (3)
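Given the precomputed 256-entry lookup table of per-configuration triangle areas, Eq. (3) is a histogram followed by a dot product. In this sketch the bit ordering used to index the configurations is an arbitrary convention of the illustration; it only has to match the ordering used when the `area_lut` table (assumed precomputed) was built:

```python
import numpy as np

def mcube_histogram(vol):
    """Histogram n_i of the 256 m-cube corner configurations of a binary
    volume.  Corner (dz, dy, dx) of each 2x2x2 cube contributes one bit
    of the configuration index."""
    v = np.asarray(vol, dtype=np.uint16)
    sz = (v.shape[0] - 1, v.shape[1] - 1, v.shape[2] - 1)
    idx = np.zeros(sz, dtype=np.uint16)
    bit = 0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                idx |= v[dz:dz + sz[0], dy:dy + sz[1], dx:dx + sz[2]] << bit
                bit += 1
    return np.bincount(idx.ravel(), minlength=256)

def surface_area(vol, area_lut):
    """Eq. (3): A = sum over i of n_i * A_MC(i)."""
    return float(mcube_histogram(vol) @ np.asarray(area_lut, dtype=float))
```

Because the histogram is computed once per object, changing a case's area contribution (as done for case 5 below) only means editing one set of entries in `area_lut`, not re-scanning the volume.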
Of interest is to plot the ratio of estimated area to true area versus the size of the object. The variance within a given size, for different rotations and placement of that object, should capture digitization effects. We are also interested in the sphericity, computed as described in Section 4. In a recent publication, the enclosed volume is computed for the marching cubes representation in an incremental fashion from the surface [14]. How this would affect the sphericity estimate needs further studies. Here, the volume V is computed as a simple voxel count of the object, which is a good estimate for large objects.

5.1 Bias
Our main assumption is that, on a local scale, objects are fairly flat. The surface of a large sphere is a good sampling of planes in all directions. Therefore, our first test object is a Euclidean sphere. This also happens to be the object with maximal sphericity. The digitization of a sphere centered at (x0, y0, z0) ∈ R³ of radius r ∈ R is generated by the following equation:

f(x, y, z) = 1 if (x − x0)² + (y − y0)² + (z − z0)² ≤ r², 0 otherwise, for (x, y, z) ∈ Z³    (4)

See Figure 3 for sphere examples, where r is 4, 10, and 25, respectively, presented as renditions of their marching cubes triangulations. Figure 4 shows three surface area estimates for digitized spheres of increasing radius. The function describes the mean value for a given radius and the error bars indicate the corresponding smallest and largest estimates, where the surface area is computed according to Eq. (3). If the surface area is computed with the triangle configuration of Figures 5(a) and 5(b) for case 5 (area contribution A_5,flat = 1.150), this gives the largest overestimate (8.8%). From now on, we instead use the triangle
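Eq. (4) digitizes a continuous sphere by testing every grid point; a direct sketch:

```python
import numpy as np

def digitize_sphere(center, r, shape):
    """Gauss centre-point digitization of a Euclidean sphere (Eq. 4):
    a grid point is set iff it lies inside or on the continuous sphere."""
    z, y, x = np.indices(shape)
    z0, y0, x0 = center
    return (x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2 <= r * r
```

Non-integer centres give the randomized placements with respect to the grid used throughout the experiments.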
(a) radius 4
(b) radius 10
(c) radius 25
Fig. 3. Three spheres presented as renditions of their marching cubes triangulations.

[Plot: estimated value / true value versus radius (0–70), with curves "Case 5 flat", "Case 5 slanted", and "Unbiased".]
Fig. 4. Three surface area estimates divided by the true surface area, for 27,000 digitized spheres of increasing radius.
(a) Case 5
(b) Case 5
(c) Case 5
(d) Case 5
Fig. 5. The triangular representations in (a) and (b) may seem the simplest for this configuration and are possibly the most commonly used. The representations in (c) and (d) approximate a smoother surface. (Third • not visible in (a) and (c).)
(a) side 4
(b) side 10
(c) side 40
Fig. 6. Three cubes of different sizes presented as renditions of their marching cubes triangulations. (a) is aligned with the digitization grid. (b) and (c) have the same orientation, rotated (15◦ , −20◦ , 25◦ ), but represent different sizes.
configuration of Figures 5(c) and 5(d) for case 5 (area contribution A_5,slanted = 1.122), for which the overestimate is slightly reduced (8.0%). To obtain an unbiased estimate for randomly aligned planar surfaces, we choose to divide the area by the average overestimate for large digitized spheres. Using A_5,slanted, the bias term ξ becomes 1.080 (supposed convergence value from Figure 4). This also gives good convergence to the true surface area.

5.2 Alignment Invariance
If we wish to study the behaviour of specifically aligned planar surfaces, the sphere is not useful, as it represents every alignment. We seek alignment invariance for large planar surfaces, which here are represented by cubes, rotated and positioned in different ways. When applying the same surface area estimator to cubes, the results divided by the bias term ξ = 1.080 are (on average) an underestimate of the true surface area. This is due to the cutting of corners and edges. The cube in Figure 6(a), aligned with the axes of the digitization grid, is small enough to illustrate this effect. For large objects the effect can be neglected, though. The area measurements for the cubes also contain a large variance. This is due to the different alignments of the faces of the cube. The surface area becomes too large when the cube is rotated in some direction that is not well captured by the triangles of the marching cubes triangulation, e.g., as in Figures 6(b) and 6(c). The increased variance can be seen in Figure 7. The approach of a simple scaling by the bias term is not sufficient for good convergence. We wish to reduce the variance of the estimate. Our approach is to identify the cases for which the area estimate is extreme and to correct for these errors. Therefore, we study the relative frequency of the m-cube configurations
[Plot: estimated value / true value versus side length (0–140), with curves "Case 5 slanted" and "Unbiased".]
Fig. 7. Two surface area estimates divided by the true surface area, for 80,000 digitized cubes of increasing size.
for cubes of side 100 with different alignments, and examine the situations giving rise to the largest under- and overestimates, respectively. From Figure 8(a), we interpret that some of the m-cube configurations, e.g., case 5, have not been assigned a value agreeing well with the truth. We have already noted two possible triangulations for case 5 (Figure 5), neither of which approximates a flat isosurface through the m-cube. By plotting the relative maximum error against a scale factor applied to m-cube case 5 (Figure 8(b)), we see that there is an optimum at the scale factor 0.892. The sum of the three triangle areas of m-cube configuration case 5 equals 1.122. If we, instead of assigning this triangle area contribution, assign the value 1.122 · 0.892 = 1.001 to case 5, we reduce the relative maximum error from 7.1% to 4.3% and the coefficient of variation (CV, standard deviation divided by mean) from 1.6% to 1.2%. Note, however, that the scale factor 0.892 is optimal only if case 5 alone is allowed to vary; the scale factor will change if other cases are also scaled to further improve the surface area estimate.

5.3 Results
Our surface area estimator can be summarized as follows: starting with the surface area from the marching cubes triangulation, we assign to case 5 the area contribution 1.001 and divide by the overall bias term ξ = 1.046. The change of the area contribution assigned to case 5 requires a change of the bias term, in order to acquire an unbiased estimate for randomly aligned planar surfaces as described in Section 5.1. To verify our surface area estimates, we study objects of shapes other than spheres and cubes. Here, we choose cylinders with height equal to diameter, a shape resembling the other two, but which should still be possible to distinguish
[Figure 8 plots: (a) relative frequency versus m-cube case (0–13), for the minimum-area, all, and maximum-area cases; (b) error in area estimate versus scale factor for case 5 (0.8–1.0), showing relative max error and CV for cubes and cylinders.]
Fig. 8. (a) Relative frequency of the different m-cube cases for 30,000 digitized cubes, of size 100 × 100 × 100, of random position and orientation. (b) Maximum error and coefficient of variation for the estimated area of digitized cubes and cylinders, for different scale factors for the area contribution of m-cube case 5.
as different. As done previously, the cylinders are generated in a continuous range of sizes with randomized alignment in the digitization grid. For the surface area estimate of cylinders (radius 50), the relative maximum error is reduced from 3.8% to 2.7%, and the CV from 1.1% to 0.8% (Figure 8(b)), using the described method. Surface area, volume, and sphericity for the three test sets are presented in Figure 9. The plots compare well with expectations. There is still a fair amount of variance in the surface area measurements, but the estimation performance has significantly improved compared to the uncorrected marching cubes area estimate. The volume estimate is robust and shows no surprises. The sphericity agrees well, at least for objects down to radius 5, with the sphericity in the continuous case, where a sphere, a cube, and the described cylinder have values 1, 0.806, and 0.874, respectively. The sphericity measure manages to completely separate the three test sets.
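The continuous-case reference values quoted above (1 for the sphere, 0.806 for the cube, 0.874 for the cylinder with height equal to diameter) can be checked directly from Eq. (2); the formula is repeated here so the snippet is self-contained:

```python
import math

def S(V, A):  # sphericity, Eq. (2)
    return (36 * math.pi * V ** 2) ** (1 / 3) / A

r = 1.0
sphere = S(4 / 3 * math.pi * r ** 3, 4 * math.pi * r ** 2)  # unit ball
cube = S(1.0, 6.0)                                          # unit cube
cyl = S(math.pi * r ** 2 * 2 * r,                 # cylinder: V = pi r^2 h
        2 * math.pi * r ** 2 + 2 * math.pi * r * 2 * r)     # h = 2r
```

Because S is scale invariant, the choice of r (or cube side) does not affect these values.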
6 Discussion and Conclusions
This paper describes surface area measurements based on local estimates of isosurfaces originating from a marching cubes representation. We show how improved precision and accuracy can be obtained by optimizing the area contribution for one of the cases in this representation. Our results indicate improved
[Three plots versus radius (0–70): surface area estimates divided by true area, volume estimates divided by true volume, and sphericity estimates, each with curves for sphere, cube, and cylinder.]
Fig. 9. Measurements for digitized spheres, cubes, and cylinders of increasing size. Top: Surface area estimates divided by the true surface area. Middle: Volume estimates divided by the true volume. Bottom: Sphericity estimates.
robustness when measuring compactness, or sphericity, to distinguish classes of objects. Non-convex and more natural shapes should be studied in further work. Another future challenge is to analyze all of the m-cube configurations and assign proper weights to them in order to obtain accurate measurements with high precision. This would be an optimization process in 13 dimensions, as there are 13 cases with area contributions. Our surface area computations are performed for a binary marching cubes representation, where the isosurface intersects the marching cube midway between the corners. Further studies on a grey-level marching cubes representation, where the isosurface is approximated according to some interpolation of the grey-level values of the boundary voxels, seem worth pursuing after the promising results for the binary case.

Acknowledgements. We thank Prof. Gunilla Borgefors, Centre for Image Analysis, Uppsala, Sweden, and Prof. Jayaram K. Udupa, Medical Image Processing Group, University of Pennsylvania, Philadelphia, for helpful comments on our work.
References

1. G. Borgefors. Distance transformations in digital images. Computer Vision, Graphics, and Image Processing, 34:344–371, 1986.
2. S.-s. Chern. Studies in Global Geometry and Analysis, volume 4 of Studies in Mathematics, pages 25–29. The Mathematical Association of America, Washington, DC, 1967.
3. L. Dorst and A. W. M. Smeulders. Length estimators for digitized contours. Computer Vision, Graphics and Image Processing, 40:311–333, 1987.
4. M. J. Dürst. Letters: Additional reference to "Marching Cubes". In Proceedings of ACM SIGGRAPH on Computer Graphics, volume 22(2), pages 72–73, Apr. 1988.
5. H. Freeman. Boundary encoding and processing. In B. S. Lipkin and A. Rosenfeld, editors, Picture Processing and Psychopictorics, pages 241–266. Academic Press, 1970.
6. F. C. A. Groen and P. W. Verbeek. Freeman code probabilities of object boundary quantized contours. Computer Graphics and Image Processing, 7:391–402, 1978.
7. Y. Kenmochi and R. Klette. Surface area estimation for digitized regular solids. In L. J. Latecki, R. A. Melter, D. M. Mount, and A. Y. Wu, editors, Vision Geometry IX, pages 100–111. Proc. SPIE 4117, 2000.
8. Y. Kenmochi, K. Kotani, and A. Imiya. Marching cubes method with connectivity. In Proc. of IEEE Int. Conference on Image Processing (ICIP'99), pages 361–365, 1999.
9. R. Klette and H. J. Sun. Digital planar segment based polyhedrization for surface area estimation. In C. Arcelli, L. P. Cordella, and G. Sanniti di Baja, editors, Visual Form 2001, volume 2059 of Lecture Notes in Computer Science, pages 356–366. Springer-Verlag, 2001.
10. J.-O. Lachaud and A. Montanvert. Digital surfaces as a basis for building isosurfaces. In Proc. of 5th IEEE Int. Conference on Image Processing (ICIP'98), volume 2, pages 977–981, Chicago, IL, 1998.
11. J.-O. Lachaud and A. Montanvert. Continuous analogs of digital boundaries: A topological approach to iso-surfaces. Graphical Models, 62:129–164, 2000.
12. J. Lindblad. Perimeter and area estimates for digitized objects. In Proceedings of SSAB (Swedish Society for Automated Image Analysis) Symposium on Image Analysis, pages 113–117, Norrköping, Sweden, Mar. 2001. Available from the author.
13. W. E. Lorensen and H. E. Cline. Marching Cubes: A high resolution 3D surface construction algorithm. In Proceedings of the 14th ACM SIGGRAPH on Computer Graphics, volume 21(4), pages 163–169, July 1987.
14. I. Nyström, J. K. Udupa, G. J. Grevera, and B. E. Hirsch. Area of and volume enclosed by digital and triangulated surfaces. In S. K. Mun, editor, Medical Imaging 2002: Visualization, Image-Guided Procedures, and Display. Proc. SPIE 4681. Accepted for publication.
15. J. K. Udupa. Multidimensional digital boundaries. Graphical Models and Image Processing, 56(4):311–323, July 1994.
16. A. Van Gelder and J. Wilhelms. Topological considerations in isosurface generation. ACM Transactions on Graphics, 13(4):337–375, 1994.
17. A. M. Vossepoel and A. W. M. Smeulders. Vector code probability and metrication error in the representation of straight lines of finite length. Computer Graphics and Image Processing, 20:347–364, 1982.
An Abstract Theoretical Foundation of the Geometry of Digital Spaces Gabor T. Herman Department of Computer Science, The Graduate Center, City University of New York 365 Fifth Avenue, New York, NY 10016, USA
[email protected] Abstract. Two approaches to providing an abstract theoretical foundation to the geometry of digital spaces are presented and illustrated. They are critically compared and the possibility of combining them into a single theory is discussed. Such a theory allows us to state and prove results regarding geometrical concepts as they occur in a digital environment independently of the specifics of that environment. In particular, versions of the Jordan Curve Theorem are discussed in this general digital setting.
1 Introduction
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 279–288, 2002. © Springer-Verlag Berlin Heidelberg 2002
280    G.T. Herman

One kind of digitization of (three-dimensional) space is obtained by tessellating it into cubes. Then there are natural notions of adjacencies between cubes determined, for example, by the sharing of a single face. However, the adjacencies may also be defined using edges and/or vertices. Tessellations of arbitrary-dimensional spaces into arbitrary polyhedra similarly give rise to various notions of adjacencies of these polyhedra. Alternatively, we may study a grid of points in an N-dimensional space and consider certain points adjacent by whatever criterion seems desirable to us. If we view such models as appropriate for capturing the notion of a digital (as opposed to continuous) space, then we see that the suitable underlying mathematical concept is a (possibly infinite) graph, which is a collection of vertices - corresponding to the above-mentioned spatial elements (spels, for short) - some pairs of which are considered adjacent. Our aim is to introduce geometrical concepts at this very general level, so that we get away from the specific consequences of the choices of tessellations and adjacencies. If we can prove nontrivial theorems in such a general setting, then these theorems will have nontrivial corollaries in all the specific manifestations of the general theory. Certain continuous geometrical concepts have an immediate natural equivalent in a digital space. If ρ denotes the adjacency, then a ρ-path is a finite sequence of spels each but the last of which is ρ-adjacent to the one following it. A ρ-connected set S can then be defined as one in which for any pair of spels in S there is a ρ-path entirely in S from one to the other. A simple closed ρ-curve C is a nonempty finite ρ-connected set such that for each element in C there are exactly two other elements in C ρ-adjacent to it. Other notions are more difficult to capture in a natural way. For example, the digital equivalent of the continuous notion of a simply connected set (one in which every simple closed curve can be continuously deformed into a point) is not so obviously definable: we need somehow to capture the digital correspondent of "continuously deformed." In this article we discuss two (somewhat related) attempts at providing such an abstract theoretical foundation to the geometry of digital spaces. We give a concise, but rigorous, description of both, including samples of theorems that have been proved. We follow this up with a discussion of the differences between the two approaches and speculate on a possible synthesis into a single theory. Finally, we mention where we see the interesting open problems.
2 Digital Spaces
The material in this section is based on the approach presented in [1]. There a digital space is defined as a pair (V, π), with V an arbitrary nonempty set (of spels) and π a symmetric binary relation (called the proto-adjacency) on V such that V is π-connected. A trivial example is when V consists of all the cubes into which space is tessellated and two cubes are in the relation π if, and only if, they share a single face. In mathematical notation, this is the digital space (Z³, ω3) defined, more generally for any positive integer N (the number of dimensions), by

Z^N = {(c1, · · · , cN) | cn ∈ Z, for 1 ≤ n ≤ N},    (1)

with Z being the set of integers, and ωN the binary relation on Z^N satisfying

(c, d) ∈ ωN ⇔ ∑_{n=1}^{N} |cn − dn| = 1.    (2)
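As a concrete reading of (2), face adjacency is just unit ℓ1-distance between lattice points; a minimal Python sketch (the helper name is ours, not from [1]):

```python
def omega_adjacent(c, d):
    """omega_N of equation (2): c and d differ by exactly one unit
    in exactly one coordinate (L1 distance 1)."""
    return sum(abs(ci - di) for ci, di in zip(c, d)) == 1

# Two cubes of Z^3 sharing a face are omega_3-adjacent:
assert omega_adjacent((0, 0, 0), (0, 0, 1))
# Edge- or vertex-sharing cubes are not:
assert not omega_adjacent((0, 0, 0), (0, 1, 1))
assert not omega_adjacent((0, 0, 0), (1, 1, 1))
```

The same function serves for any N, since it never fixes the dimension.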
This trivial example illuminates the thinking underlying [1]: π has an alternative interpretation beyond being just an adjacency; a (c, d) in π can also be thought of as representing the surface element (surfel, for short) facing from the spel c to the spel d. Thus any nonempty subset of π is referred to as a surface in (V, π). The boundary between subsets O and Q of V, defined as

∂(O, Q) = {(c, d) | (c, d) ∈ π, c ∈ O and d ∈ Q},    (3)
is a surface provided that it is not empty. The fact that π is a set of ordered pairs allows us to define, for any surface S, its immediate interior II(S) and its immediate exterior IE(S):

II(S) = {c | (c, d) ∈ S for some d in V},    (4)

IE(S) = {d | (c, d) ∈ S for some c in V}.    (5)

We say that a π-path c(0), · · · , c(K) crosses S if there is a k, 1 ≤ k ≤ K, such that either (c(k−1), c(k)) ∈ S or (c(k), c(k−1)) ∈ S. The surface S is said
to be near-Jordan if every π-path from an element of II(S) to an element of IE(S) crosses S, and it is said to be Jordan if it is near-Jordan and no nonempty proper subset of it is near-Jordan. This terminology is justified by the following theorem, which makes use of the notions of interior I(S) and exterior E(S) of S defined by

I(S) = {c ∈ V | there is a π-path connecting c to an element of II(S) which does not cross S},    (6)

E(S) = {c ∈ V | there is a π-path connecting c to an element of IE(S) which does not cross S}.    (7)
Theorem 1. A Jordan surface S in a digital space (V, π) has the following properties.
1. S = ∂(I(S), E(S)).
2. I(S) ∪ E(S) = V and I(S) ∩ E(S) = ∅.
3. Both I(S) and E(S) are π-connected.
4. Every π-path from an element of I(S) to an element of E(S) crosses S.
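The surfel-level definitions (3)-(5) and the crossing predicate are directly executable. The following Python sketch (all helper names are ours, not from [1]) exercises them on a toy one-dimensional digital space, where the boundary of the set {0} consists of the two surfels facing out of 0:

```python
# Toy digital space: V = {-2, ..., 2}, pi = face adjacency restricted to V.
V = set(range(-2, 3))
pi = {(c, d) for c in V for d in V if abs(c - d) == 1}

def boundary(O, Q):
    """Equation (3): surfels (c, d) of pi with c in O and d in Q."""
    return {(c, d) for (c, d) in pi if c in O and d in Q}

def immediate_interior(S):
    return {c for (c, d) in S}   # equation (4)

def immediate_exterior(S):
    return {d for (c, d) in S}   # equation (5)

def crosses(path, S):
    """Does the pi-path cross S, i.e. does some step belong to S
    in either orientation?"""
    return any((path[k - 1], path[k]) in S or (path[k], path[k - 1]) in S
               for k in range(1, len(path)))

S = boundary({0}, V - {0})
assert S == {(0, -1), (0, 1)}
assert immediate_interior(S) == {0} and immediate_exterior(S) == {-1, 1}
assert crosses([2, 1, 0], S) and not crosses([2, 1], S)
```

In this tiny example S is near-Jordan: every path from 0 to the rest of V must step through one of its two surfels.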
This theorem, which is Corollary 3.3.6 of [1], says that a Jordan surface S has properties reminiscent of those indicated by the Jordan Curve Theorem for simple closed curves in the plane: S is the boundary between its interior and its exterior, which do not intersect but contain all the spels between them; they are both π-connected, but one cannot get from the interior to the exterior by a π-path without crossing S. It is worthy of note that near-Jordanness is quite powerful by itself: with the possible exception of the third one, a near-Jordan surface has all the properties listed in Theorem 1 (as can be seen from Lemmas 3.2.1 and 3.2.2 of [1]). While it is quite impressive that such powerful-looking results can be stated (and proved) after only a very few definitions, their practical usefulness is limited by the fact that it may be very difficult (if not impossible) to check for an arbitrary surface whether or not it is near-Jordan. For this reason, [1] introduces a more desirable "local" property which under some circumstances implies near-Jordanness. A surfel (c, d) is said to cross S if exactly one of (c, d) ∈ S or (d, c) ∈ S. The surface S is said to be N-locally-Jordan (where N is a positive integer) if, for any π-path P = c(0), · · · , c(K) such that (c(0), c(K)) ∈ S and 2 ≤ K ≤ N + 1, the number of surfels among (c(0), c(1)), · · · , (c(K−1), c(K)) that cross S is odd. N-locally-Jordanness does not by itself imply near-Jordanness; we need to introduce two further conditions: one on the digital spaces (they have to be in some sense simply connected) and one on the surfaces (they have to be certain kinds of boundaries). We now discuss both of these conditions. If

P = c(1), · · · , c(m), d(0), · · · , d(n), e(1), · · · , e(l)    (8)

and
P' = c(1), · · · , c(m), f(0), · · · , f(k), e(1), · · · , e(l)    (9)

are π-paths such that f(0) = d(0), f(k) = d(n), and

1 ≤ k + n ≤ N + 2,    (10)
then P and P' are said to be elementarily N-equivalent. The digital space is said to be N-simply connected if, for any π-path c(0), · · · , c(K) such that c(K) = c(0), there is a sequence of π-paths P0, · · · , PL (L ≥ 0) such that P0 = c(0), · · · , c(K), PL = c(0) and, for 1 ≤ l ≤ L, Pl−1 and Pl are elementarily N-equivalent. This is a useful concept, as indicated by Theorem 6.3.5 of [1]:

Theorem 2. For any positive integer N, (Z^N, ωN) is 2-simply connected.

The kind of boundaries in which we are particularly interested appear in binary pictures over the digital space (V, π). These are defined as triples (V, π, f), with f a function mapping V into {0, 1}. Those spels which map into 0 are called 0-spels and those which map into 1 are called 1-spels. In order to give the intuitively desired interpretation to objects in binary pictures, we are forced to consider simultaneously more than one adjacency. For example, in the two binary pictures shown below (in which the spels are from Z²), the 1-spels form a letter O and a letter C, respectively.

0 0 0 0 0 0 0 0        0 0 0 0 0 0 0 0
0 0 0 1 1 0 0 0        0 0 0 1 1 0 0 0
0 0 1 0 0 1 0 0        0 0 1 0 0 1 0 0
0 1 0 0 0 0 1 0        0 1 0 0 0 0 0 0
0 1 0 0 0 0 1 0        0 1 0 0 0 0 0 0
0 1 0 0 0 0 1 0        0 1 0 0 0 0 0 0
0 0 1 0 0 1 0 0        0 0 1 0 0 1 0 0
0 0 0 1 1 0 0 0        0 0 0 1 1 0 0 0
0 0 0 0 0 0 0 0        0 0 0 0 0 0 0 0
Note that neither the O nor the C forms an ω2-connected set. On the other hand, if we define the binary relation δN on Z^N by

(c, d) ∈ δN ⇔ ( 0 < ∑_{n=1}^{N} |cn − dn| ≤ 2 and, for 1 ≤ n ≤ N, |cn − dn| ≤ 1 ),    (11)

then both the O and the C are δ2-connected, but the inside of the O is also δ2-connected to its outside. For such reasons, it is customary and useful to consider different adjacencies for the 0-spels and for the 1-spels. In [1] a symmetric binary relation ρ on V is called a spel-adjacency if π ⊆ ρ. If κ and λ are spel-adjacencies, then a surface S is called a κλ-boundary in the binary picture (V, π, f) if there is a κ-component O of 1-spels and a λ-component Q of 0-spels such that S = ∂(O, Q). In the previously presented binary pictures over the digital space (Z², ω2), the letter O forms a single δ2-component and its inside and outside each forms a single ω2-component. Hence there are two δ2ω2-boundaries in the binary picture on the left. On the other hand, there is only one δ2ω2-boundary in the binary picture on the right. For reasons explained in some detail in the early chapters of [1] (they have to do with being able to prove that certain computer procedures for multidimensional image processing perform as desired), it is useful to choose spel-adjacencies so that every nonempty κλ-boundary is κλ-Jordan, in the sense that it is near-Jordan, its interior is κ-connected and its exterior is λ-connected. (Note that, in view of Theorem 1, a Jordan surface is always ππ-Jordan.) For this we need the concept of a tight spel-adjacency: a spel-adjacency ρ is tight if, for all (c, d) in ρ, there exists a π-path c(0), · · · , c(K) from c to d such that, for 0 ≤ k ≤ K, either (c(0), c(k)) ∈ ρ or (c(k), c(K)) ∈ ρ (or possibly both). Clearly, both ωN and δN are tight spel-adjacencies in (Z^N, ωN). The following is Theorem 6.2.7 of [1]; it holds for all positive integers N.

Theorem 3. Let κ and λ be tight spel-adjacencies in an N-simply connected digital space (V, π). A κλ-boundary in a binary picture over (V, π) is κλ-Jordan if, and only if, it is N-locally Jordan.

The advantage of this theorem as compared to Theorem 1 is that the desirable property of being κλ-Jordan follows from the property of being N-locally Jordan, which appears to be a condition that is easier to check than the condition of being near-Jordan. We now show that under some circumstances this appearance very much corresponds to reality. We call a π-path c(0), c(1), c(2), c(3), c(0) a unit square if both c(0) ≠ c(2) and c(1) ≠ c(3).
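The boundary counts just stated for the two pictures can be verified mechanically. The Python sketch below (helper names are ours) rebuilds the two binary pictures, computes δ2-components of 1-spels and ω2-components of 0-spels, and counts the nonempty δ2ω2-boundaries. A finite window stands in for Z²; this does not affect the counts, since all 0-spels on the window border belong to the single "outside" component.

```python
from collections import deque

O_PIC = ["00000000", "00011000", "00100100", "01000010", "01000010",
         "01000010", "00100100", "00011000", "00000000"]
C_PIC = ["00000000", "00011000", "00100100", "01000000", "01000000",
         "01000000", "00100100", "00011000", "00000000"]

def spels(pic, value):
    return {(r, c) for r, row in enumerate(pic)
            for c, ch in enumerate(row) if ch == value}

def components(cells, adjacent):
    """Partition `cells` into connected components under `adjacent`."""
    remaining, comps = set(cells), []
    while remaining:
        seed = remaining.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            p = queue.popleft()
            for q in list(remaining):
                if adjacent(p, q):
                    remaining.discard(q); comp.add(q); queue.append(q)
        comps.append(comp)
    return comps

def omega2(p, q):  # equation (2) with N = 2
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1

def delta2(p, q):  # equation (11) with N = 2
    dr, dc = abs(p[0] - q[0]), abs(p[1] - q[1])
    return 0 < dr + dc <= 2 and dr <= 1 and dc <= 1

def count_boundaries(pic):
    """Number of nonempty delta2-omega2 boundaries in the picture."""
    n = 0
    for K in components(spels(pic, "1"), delta2):   # kappa-components of 1-spels
        for L in components(spels(pic, "0"), omega2):  # lambda-components of 0-spels
            if any(omega2(p, q) for p in K for q in L):  # boundary nonempty
                n += 1
    return n

assert count_boundaries(O_PIC) == 2  # letter O: inside and outside
assert count_boundaries(C_PIC) == 1  # letter C: a single 0-component
```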
An unordered pair {κ, λ} of spel-adjacencies in a digital space is said to be a normal pair if, for any unit square c(0), c(1), c(2), c(3), c(0), we have (c(0), c(2)) ∈ κ or (c(1), c(3)) ∈ λ or both. It is easy to prove that, for any positive integer N, {δN, ωN} is a normal pair in (Z^N, ωN) (Theorem 6.3.8 of [1]).

Theorem 4. If {κ, λ} is a normal pair of spel-adjacencies in a digital space and S is a κλ-boundary in a binary picture over the digital space, then S is 2-locally Jordan.

This result (which is Lemma 6.3.3 of [1]) together with Theorems 2 and 3 implies that, for any positive integer N, every δNωN-boundary in any binary picture over (Z^N, ωN) is δNωN-Jordan. To emphasize what this means in the special case of tessellating space into cubes, we spell out in full its consequences in that space (Corollary 6.3.9 of [1]).

Theorem 5. Let A be a nonempty proper subset of Z³. Let O be a δ3-component of A and Q be an ω3-component of Z³ \ A, such that ∂(O, Q) is not empty. Then there exist two uniquely defined subsets I and E of Z³ with the following properties.
1. O ⊆ I and Q ⊆ E.
2. ∂(O, Q) = ∂(I, E).
3. I ∪ E = Z³ and I ∩ E = ∅.
4. I is a δ3-connected subset of Z³ and E is an ω3-connected subset of Z³.
5. Every ω3-path connecting an element of I to an element of E crosses ∂(O, Q).
From the point of view of our discussion here, the important aspect of this theorem is not what it says but rather the fact that the general results discussed earlier immediately yield similar theorems for other tessellations of three- and other-dimensional spaces, and even for digital spaces which are obtained by means other than tessellating a Euclidean space. Many specific examples of this are presented in [1].
3 Generic Axiomatized Digital Surface-Structures
The material in this section is based on the approach presented in [2]. That paper aims at providing an axiomatic foundation of digital topology; it succeeds in this only partially, inasmuch as it deals only with discrete structures which model subsets of the Euclidean plane and of other surfaces. Its main advantage over the approach of the previous section is its treatment of the digital version of "continuously deformed": instead of the awkward hierarchy of elementary N-equivalences, the allowable deformations are embedded into the very definitions of the structures, in the form of some "loops." A basic difference between the conventions of [1] and [2] is that in the latter adjacencies are represented by unordered pairs. In order to accommodate the terminology of [2], we define a proto-edge in a digital space to be any two-element set {c, d} such that (c, d) is a surfel. A 2D digital complex is defined as a triple (V, π, L), where (V, π) is a digital space and L is a set of simple closed π-curves (its elements are called loops) such that the following conditions hold:
1. V contains more than one spel and if (c, d) ∈ π, then c ≠ d.
2. For any two distinct loops L1 and L2, L1 ∩ L2 is either empty, or consists of a single spel, or is a proto-edge.
3. No proto-edge is included in more than two loops.
4. Each spel belongs to only a finite number of proto-edges.

If L2×2 is the set of all {c(0), c(1), c(2), c(3)} such that c(0), c(1), c(2), c(3), c(0) is a unit square in (Z², ω2), then (Z², ω2, L2×2) is a 2D digital complex. Now we see why we use the term 2D digital complex: if we tried to do the same for the space (Z³, ω3) we would violate Condition 3 of the definition, since each proto-edge would be included in four loops. For an arbitrary spel-adjacency ρ in (V, π), let the P in (8) and the P' in (9) be two ρ-paths. They are said to be elementarily loop-equivalent in (V, π, L) if
1. either there is a proto-edge {c, d} such that one of d(0), · · · , d(n) and f(0), · · · , f(k) is c and the other is c, d, c,
2. or f(0) = d(0), f(k) = d(n), and there is a loop which contains d(0), · · · , d(n) and f(0), · · · , f(k).

A ρ-path c(0), · · · , c(K) such that c(K) = c(0) is said to be ρ-reducible in (V, π, L) if there is a sequence of ρ-paths P0, · · · , PL such that P0 = c(0), · · · , c(K), PL = c(0) and, for 1 ≤ l ≤ L, Pl−1 and Pl are elementarily loop-equivalent in (V, π, L). The 2D digital complex (V, π, L) is said to be simply connected if every π-path c(0), · · · , c(K) such that c(K) = c(0) is π-reducible in it. A spel c of (V, π, L) is called interior if every proto-edge that contains c is included in two loops. A 2D digital complex is called a pseudomanifold if all its spels are interior. (Trivially, every proto-edge of a pseudomanifold is included in exactly two loops.) It is easy to see that (Z², ω2, L2×2) is a pseudomanifold. Two loops L and L' of a 2D digital complex are said to be adjacent if L ∩ L' is a proto-edge. A subset M of L is said to be strongly connected if, for any two loops L and L' of M, there exists a sequence L0, · · · , LK of loops in M such that L0 = L, LK = L' and, for 1 ≤ k ≤ K, Lk−1 and Lk are adjacent. The 2D digital complex (V, π, L) is said to be strongly connected if L is strongly connected. A spel c is said to be a singularity of a 2D digital complex if the set of all loops that contain c is not strongly connected. The following is Proposition 3.7 of [2].

Theorem 6. A 2D digital complex that is both simply connected and strongly connected has no singularities.

For an arbitrary spel-adjacency ρ in (V, π), let C be a simple closed ρ-curve. A ρ-path c(0), · · · , c(|C|), where |C| denotes the number of elements in C, which is such that c(|C|) = c(0) and C = {c(1), · · · , c(|C|)}, is called a ρ-parameterization of C. (Note that such a parameterization always exists.) We say that in this parameterization c(k−1) precedes c(k) and c(k) follows c(k−1), for 1 ≤ k ≤ |C|. Let L and L' be two adjacent loops of a 2D digital complex and let {c, d} = L ∩ L'.
We say that a π-parameterization of L is coherent with a π-parameterization of L' if c precedes d in one of the π-parameterizations and c follows d in the other. A 2D digital complex (V, π, L) is said to be orientable if there is a function Ω with domain L such that:
1. For each loop L in L, Ω(L) is a π-parameterization of L.
2. For all pairs of adjacent loops L and L', Ω(L) is coherent with Ω(L').

It is easily seen that (Z², ω2, L2×2) is orientable. A generic axiomatized digital surface-structure (or GADS, for short) is a pair G = ((V, π, L), (κ, λ)), where (V, π, L) is a 2D digital complex (called the complex of G, whose spels, proto-edges and loops are also referred to as the spels, proto-edges and loops of G) and κ and λ are spel-adjacencies in (V, π) that satisfy:

Axiom 1. If (c, d) ∈ κ ∪ λ, then c ≠ d.
Axiom 2. If (c, d) ∈ (κ ∪ λ) \ π, then some loop contains both c and d.
Axiom 3. If {c, d} is a subset of a loop L, but it is not a proto-edge, then (a) (c, d) ∈ λ if, and only if, L \ {c, d} is not κ-connected, and (b) (c, d) ∈ κ if, and only if, L \ {c, d} is not λ-connected.
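Axiom 3 can be checked concretely for the pair (κ, λ) = (δ2, ω2) on a loop of L2×2, where the only subsets {c, d} of a loop that are not proto-edges are its two diagonals. A Python sketch (helper names are ours, not from [2]):

```python
from itertools import combinations

def omega2(p, q):   # lambda in this instance
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1

def delta2(p, q):   # kappa in this instance
    dr, dc = abs(p[0] - q[0]), abs(p[1] - q[1])
    return 0 < dr + dc <= 2 and dr <= 1 and dc <= 1

def connected(cells, adj):
    """Is the (small) set `cells` connected under the adjacency `adj`?"""
    cells = set(cells)
    seed = next(iter(cells))
    seen, stack = {seed}, [seed]
    while stack:
        p = stack.pop()
        for q in cells - seen:
            if adj(p, q):
                seen.add(q); stack.append(q)
    return seen == cells

# One loop of L_{2x2}: the unit square with lower-left corner (0, 0).
loop = [(0, 0), (0, 1), (1, 1), (1, 0)]
for c, d in combinations(loop, 2):
    if omega2(c, d):
        continue  # {c, d} is a proto-edge; Axiom 3 does not apply
    rest = [p for p in loop if p not in (c, d)]
    # (a): (c, d) in lambda  iff  loop \ {c, d} is not kappa-connected
    assert omega2(c, d) == (not connected(rest, delta2))
    # (b): (c, d) in kappa   iff  loop \ {c, d} is not lambda-connected
    assert delta2(c, d) == (not connected(rest, omega2))
```

Both diagonal pairs pass: a diagonal is δ2-adjacent (hence in κ), and removing it leaves the other diagonal, which is δ2-connected but not ω2-connected.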
(Note that if in the underlying complex it is the case that, for every unit square c(0), c(1), c(2), c(3), c(0), the set {c(0), c(1), c(2), c(3)} is a loop, then Axiom 3 implies that {κ, λ} is a normal pair of spel-adjacencies.) The following is Theorem 6.1 of [2].

Theorem 7. Let C be a simple closed (κ ∩ λ)-curve contained in a loop L of a GADS ((V, π, L), (κ, λ)). Then C has one of the following properties:
1. For all distinct c and d in C, (c, d) ∈ κ.
2. For all distinct c and d in C, (c, d) ∈ λ.

A GADS ((V, π, L), (κ, λ)) is a subGADS of a GADS ((V', π', L'), (κ', λ')) if:
1. V ⊆ V', π ⊆ π' and L ⊆ L'.
2. For all L ∈ L, κ ∩ L² = κ' ∩ L² and λ ∩ L² = λ' ∩ L², where L² denotes the set of ordered pairs of elements of L.

(It is a consequence of this definition and Axiom 2 that if ((V, π, L), (κ, λ)) is a subGADS of ((V', π', L'), (κ', λ')), then κ ⊆ κ' and λ ⊆ λ'.) A GADS assumes the properties of its complex; thus a GADS is said to be simply connected, strongly connected or orientable if its complex is simply connected, strongly connected or orientable, respectively. Thus we can state the following, which is Proposition 5.1 of [2].

Theorem 8. A subGADS of a simply connected GADS is orientable.

We are now ready to state Theorem 8.1 of [2], which is a very general version of the Jordan Curve Theorem for GADS. In it we make use of the concept of a pGADS, which is a GADS whose complex is a pseudomanifold.

Theorem 9. Let ((V, π, L), (κ, λ)) be a GADS that is a subGADS of an orientable pGADS ((V', π', L'), (κ', λ')) whose complex has no singularities. Let P be a κ-parameterization of a simple closed κ-curve C in V such that:
1. C is not included in any loop in L.
2. Every spel in C is an interior spel of (V, π, L).
3. P is κ'-reducible in (V', π', L').

Then V \ C has exactly two λ-components and, for each spel c in C, Nλ(c) (the set of all spels λ-adjacent to c) intersects both of these λ-components.
To illustrate the applicability of this theorem, we discuss its implication for ((Z², ω2, L2×2), (δ2, ω2)), which is easily checked to be a GADS. Based on previously made remarks, we see that it is in fact an orientable pGADS, and it is easy to see that its complex has no singularities. In this application of Theorem 9 we can use this same GADS for the two GADS mentioned in that theorem. Furthermore, it is not difficult to prove (similarly to how our Theorem 2 is proved in [1]) that every δ2-path is δ2-reducible in (Z², ω2, L2×2). Recognizing that a simple closed δ2-curve in Z² which contains at least four spels cannot be contained in an element of L2×2 and that every spel of a pGADS is interior, all this proves the following (which is purposely stated to resemble Theorem 5):
Theorem 10. Let C be a simple closed δ2-curve in Z² which contains at least four spels. Then there exist two uniquely defined nonempty subsets I and E of Z² with the following properties.
1. I ⊆ Z² \ C and E ⊆ Z² \ C.
2. For every c in C, both I ∩ Nω2(c) and E ∩ Nω2(c) are nonempty.
3. I ∪ E ∪ C = Z² and I ∩ E = ∅.
4. Both I and E are ω2-connected subsets of Z².
5. Every ω2-path connecting an element of I to an element of E contains an element of C.
Again, the important aspect of such a theorem is that it is just one example of many similar theorems that can be derived from Theorem 9 for a variety of GADS. Some examples of interesting GADS are given in [2].
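Theorem 10 lends itself to experimental verification on small curves. The Python sketch below (helper names are ours) takes the letter O of the earlier figure as the simple closed δ2-curve C and checks properties 2-4 inside a finite window; truncating Z² is harmless here because the truncated exterior remains a single ω2-component.

```python
from collections import deque

# The letter O of the earlier figure as a simple closed delta_2-curve C.
GRID = ["00000000", "00011000", "00100100", "01000010", "01000010",
        "01000010", "00100100", "00011000", "00000000"]
C = {(r, c) for r, row in enumerate(GRID)
     for c, ch in enumerate(row) if ch == "1"}
WINDOW = {(r, c) for r in range(9) for c in range(8)}

def n_omega2(p):
    """N_omega2(p): the four spels omega2-adjacent to p."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def omega2_components(cells):
    remaining, comps = set(cells), []
    while remaining:
        seed = remaining.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            p = queue.popleft()
            for q in n_omega2(p) & remaining:
                remaining.discard(q); comp.add(q); queue.append(q)
        comps.append(comp)
    return comps

comps = omega2_components(WINDOW - C)
assert len(comps) == 2                       # exactly two omega2-components
I = min(comps, key=len)                      # the bounded interior
E = max(comps, key=len)                      # the (truncated) exterior
assert all(n_omega2(c) & I and n_omega2(c) & E for c in C)  # property 2
assert I | E | C == WINDOW and not (I & E)   # property 3, within the window
```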
4 Discussion
Although the approaches of the two previous sections are clearly related, there are many differences between them. An inessential one is expressed by Condition 1 of the definition of a 2D digital complex; such restrictions are not made in the definition of a digital space. However, nothing interesting can be said without this restriction which cannot be said in its presence, and so there is no harm in restricting our study of digital spaces to those which satisfy Condition 1. A more interesting difference is due to the use of proto-edges rather than surfels. The concept of surfel is not even mentioned in [2]; everything there is developed in terms of proto-edges. That material has been rewritten for Section 3 so as to make it notationally consistent with the previous section. However, there is more than notation at stake here. The use of surfels allowed us to define in a natural way the concept of a surface. The corresponding concept in GADS is a simple closed curve, which is a very different sort of animal: surfaces in digital spaces are sets of surfels, while curves in GADS are sets of spels. The consequences of this difference in approach become evident when comparing Theorems 5 and 10. Since both approaches have been utilized in the literature, it seems desirable to develop a theory capable of dealing with them simultaneously. To investigate this, let us start with a situation in which all the conditions of Theorem 9 are satisfied (as a specific illustration, consider Theorem 10 and the letter O on the left of the previously shown figure as the simple closed δ2-curve C). Creating a binary picture in which the 1-spels are the elements of C (which is κ-connected), we see that there are exactly two κλ-boundaries, namely ∂(C, Q1) and ∂(C, Q2), with Q1 and Q2 being the two λ-components of V \ C. Under what additional conditions on the GADS in Theorem 9 are these two boundaries guaranteed to be κλ-Jordan?
(That they can indeed be such is illustrated by the corresponding boundaries of the O in the figure; see also the more general statement after Theorem 4.) If they were, then we would have a way of going from C to surfaces which approximate it and have desirable properties.
288
G.T. Herman
On the other hand, suppose that we have a κλ-Jordan surface S in the digital space of the underlying complex of a GADS. Under what conditions is II(S) a simple closed κ-curve? If it is, and if the other conditions of Theorem 9 are also satisfied, then one of the λ-components of V \ II(S) implied by Theorem 9 has to be E(S) and the other has to be I(S) \ II(S). (This implies that I(S) itself is λ-connected, in addition to being κ-connected, giving us one necessary condition.) Symmetrical conditions would be obtained for ensuring that IE(S) is a simple closed λ-curve. One might even investigate the circumstances under which IE(S) is a simple closed κ-curve, but this seems less likely to lead to useful results for a κλ-Jordan surface S. A major disadvantage of the GADS-based approach is its restriction to 2D digital complexes and consequently (in that approach) to curves to play the role of "surfaces which separate space into two components." An attempt has been made in [1] to introduce more general structures to fulfill such a role. In any digital space (V, π), a nonempty subset P of V is called a spel-manifold if it satisfies the following three conditions:
1. P is π-connected.
2. For each c ∈ P, Nπ(c) \ P has two π-components.
3. For each c ∈ P and for each d ∈ Nπ(c) ∩ P, Nπ(d) has a nonempty intersection with both π-components of Nπ(c) \ P.

This definition is half satisfactory in the sense that it is the case [1, Theorem 7.3.1] that if P is a spel-manifold in a digital space (V, π), then V \ P has at most two π-components, but it is not guaranteed to have more than one π-component, even if we restrict ourselves to rather special digital spaces [1, Theorem 7.3.2]. It appears desirable to find some nice conditions on a set of spels which would guarantee that it is a κλ-manifold in the sense that it is κ-connected and its complement has exactly two λ-components. Another intriguing approach is to generalize the notion of a 2D digital complex to N dimensions.
Even though it is not too difficult to envision how to do this satisfactorily (an inductive definition is a possibility), it is less clear how such a generalization can be combined with spel-adjacencies; in particular, Axiom 3 seems to be very much anchored to the two-dimensional environment. In summary, the previous two sections have illustrated that powerful results can be proven in an abstract framework; these results have immediate consequences in the many specific theories that have been put forward to study geometry in a digital framework. However, as discussed in this final section, not all aspects of such a theoretical foundation have yet been satisfactorily resolved.
References
1. Herman, G.T.: Geometry of Digital Spaces. Birkhäuser, Boston Basel Berlin (1998)
2. Fourey, S., Kong, T.Y., Herman, G.T.: Generic Axiomatized Digital Surface-Structures. Submitted for publication in Discrete Applied Mathematics. (A preliminary version appeared in Electronic Notes in Theoretical Computer Science 46 (2001); see http://www.elsevier.nl/gej-ng/31/29/23/86/27/show/Products/notes/index.htt)
Concurrency of Line Segments in Uncertain Geometry Peter Veelaert Hogent, Schoonmeersstraat 52, 9000 Ghent, Belgium,
[email protected] Abstract. We examine the derivation of consistent concurrency relations in uncertain geometry. This work extends previous work on parallelism and collinearity. We introduce the concept of a metadomain, which is defined as the set of parameter vectors of lines passing through two domains, where a domain is defined as the uncertainty region of the parameter vector of a line segment. The intersection graph of the metadomains is introduced as the primary tool to derive concurrency relations.
1 Introduction
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 289–300, 2002. © Springer-Verlag Berlin Heidelberg 2002
290    P. Veelaert

We consider the concurrency of digital line segments within the framework of uncertain geometry, a geometric theory introduced to model the uncertainty of the positions as well as the geometric properties of objects in a digital image. In uncertain geometry points are equipped with uncertainty regions. The notion of an uncertainty region coincides with the use of a structuring element in the discretization by dilation scheme developed by Heijmans and Toet [1]. The definition of concurrency was introduced previously together with definitions for digital collinearity and parallelism [2]. One of the main characteristics of uncertain geometry is that properties such as parallelism, collinearity and concurrency are not necessarily consistent in the same way they are consistent in Euclidean geometry, and that somehow we must restore this consistency wherever it is needed in an application. In uncertain geometry, when the line A is parallel with B, and B with C, this does not necessarily imply that A is also parallel with C [2,3]. Thus, for the sake of consistency, after establishing a set of geometric relations between objects in uncertain geometry, we must repartition the objects, discarding some relations as well as adding new ones to obtain a consistent set of relations. This repartitioning can be accomplished by an optimal grouping process, as was done for collinearity and parallelism in previous work [3,4,5]. For parallelism, for example, the grouping process led to consistency by extracting cliques from an interval graph, in which each interval represented the uncertainty about the slope of a line segment. Since concurrency concerns triples of lines, a direct approach to optimal grouping would involve the extraction of cliques from a hypergraph, where the hyperedges represent concurrent triples [6]. Here, to simplify the grouping process, we relax the conditions imposed on concurrency, similar to the way collinearity was replaced by the concept of weak collinearity to simplify the extraction of collinear groups of line segments [5]. What we propose in this paper is one approach to geometric reasoning when positions and parameter vectors are imprecise or uncertain. In robotics, mechanical design and computer vision there is also a need to deal with uncertainty. The models proposed there include the use of finite precision arithmetic [7], the use of probability density functions [8,9], the use of tolerance zones for mechanical parts [10], and significance measures for geometric relations [11]. However, these methods have not yet been integrated into a larger mathematical framework, which is what this work aims at. In Section 2 we briefly sketch how we derive concurrency relations. In Section 3 we give more details on the computation of a metadomain. Section 4 examines the extraction of consistent relations. We conclude the paper in the final section.
2 Concurrency and Line Transversals
During discretization the precise knowledge about the position of geometric objects is lost. We model this uncertainty by an uncertainty region that we associate with each point. The discretization process that coincides naturally with this notion of uncertainty is the discretization by dilation scheme developed by Heijmans and Toet [1]. The structuring element Ap used in this scheme coincides with our notion of an uncertainty region associated with a point p. Furthermore, to keep the complexity of the computations acceptable, as a model for the uncertainty of the position of a digital point p = (x, y) we often use a very simple uncertainty region, i.e., the vertical line segment Cp(τ), which comprises all points (x, b) ∈ IR² that satisfy y − τ/2 ≤ b < y + τ/2. Here τ is a positive real number, which controls the uncertainty. Also to simplify the exposition, we restrict ourselves to the concurrency of straight lines of the form y = αx + β, where the slope α satisfies −1 < α < 1. We assume that each set S contains at least two points with distinct x-coordinates. We shall also not discuss how to deal with lines of slope |α| ≥ 1, which can be done along the same lines as discussed in [2,4] for parallelism.

Definition 1. A finite digital set S ⊂ Z² is digitally straight if there is a Euclidean straight line that cuts all uncertainty segments Cp(τ), p ∈ S. We also call such a set a digital line segment.

Definition 2. Let S1, . . . , Sn be a finite collection of digital line segments. We define the following properties:
– The sets {Si : i = 1, . . . , n} are called digitally collinear if there exists a common Euclidean straight line A that cuts the uncertainty segments of all the sets Si;
Concurrency of Line Segments in Uncertain Geometry
291
– The sets in {Si : i = 1, . . . , n} are called digitally parallel if there exist n Euclidean straight lines A1, . . . , An that are parallel and such that for i = 1, . . . , n the line Ai cuts the uncertainty segments of the set Si;
– The sets in {Si : i = 1, . . . , n} are called digitally concurrent if there exist n Euclidean straight lines A1, . . . , An that meet in a common point and such that for i = 1, . . . , n the line Ai cuts the uncertainty segments of the set Si.

In the remainder the notions of collinearity, parallelism and concurrency are used in the sense of Definition 2, unless we specify explicitly that we are using the Euclidean definition. When the sets S1, . . . , Sn are digitally concurrent we denote this as conc(S1, . . . , Sn). The parameter vectors of the Euclidean lines passing through the uncertainty regions of a digital line segment define a new kind of uncertainty region, also called the domain of the line segment.

Definition 3. Let S be a finite digital set that contains at least two points with distinct x-coordinates, and let τ be the uncertainty parameter. Then the domain of S, denoted dom_x S, is the set of all parameter vectors (α, β) ∈ IR² that satisfy the following system of inequalities:

−τ/2 ≤ αx_i + β − y_i ≤ τ/2,   (x_i, y_i) ∈ S.   (1)
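System (1) lends itself to a direct computation. The domain is a bounded convex polygon, so it is non-empty exactly when some intersection of two of its boundary lines satisfies every inequality. A minimal sketch of this test (pure Python; the function names are ours, and the half-open bound of Cp(τ) is relaxed to a closed one):

```python
from itertools import combinations

def domain_vertices(points, tau, eps=1e-9):
    """Candidate vertices of dom_x S, the set of (alpha, beta) with
    -tau/2 <= alpha*x + beta - y <= tau/2 for every (x, y) in S.
    Each boundary line is beta = c - alpha*x with c = y +- tau/2; a
    bounded convex polygon is non-empty iff one of the pairwise
    intersections of boundary lines satisfies all inequalities."""
    lines = [(x, y + s * tau / 2) for (x, y) in points for s in (-1, 1)]
    verts = []
    for (x1, c1), (x2, c2) in combinations(lines, 2):
        if abs(x1 - x2) < eps:                      # parallel boundary lines
            continue
        alpha = (c1 - c2) / (x1 - x2)
        beta = c1 - alpha * x1
        if all(abs(alpha * x + beta - y) <= tau / 2 + eps for (x, y) in points):
            verts.append((alpha, beta))
    return verts

def digitally_straight(points, tau):
    """Definition 1: S is digitally straight iff dom_x S is non-empty."""
    return bool(domain_vertices(points, tau))
```

For instance, {(0, 0), (1, 1), (2, 2)} is digitally straight for τ = 1, whereas {(0, 0), (1, 0), (2, 5)} is not.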
Note that a domain is defined here as a convex, closed polygon. The properties of Definition 2 can be reformulated in terms of domains [2]. Two digital line segments are collinear when their domains intersect. Two line segments are parallel when the intervals that result from projecting the domains onto the α-axis have a non-empty intersection. Finally, three line segments are concurrent if there is a straight Euclidean line passing through their domains. Attaching uncertainty regions to points and parameter vectors leads to additional levels of uncertainty. In fact, the introduction of domains can proceed indefinitely. Thus, we may consider the uncertainty of the parameter vector of a Euclidean line passing through two or more domains, and in this way construct a metadomain. One of the strong points of the use of uncertainty regions is that on each level the uncertainty depends in a direct way on the uncertainties introduced at the first level, that is, on the positions of the image points. Given a collection of digital line segments and their domains, we perform the following steps to find out whether the line segments are concurrent or not:
1. compute the metadomains for pairs of domains;
2. construct an intersection graph of the metadomains;
3. find a consistent grouping on the vertices of the intersection graph;
4. compute the metadomains for the groups.
We illustrate these steps further in Fig. 1. In Fig. 1(a) we are given a collection of digital line segments which have been extracted from a digital image with the RANSAC method [12]. The domains of these segments are shown in Fig. 1(b), where each domain corresponds to one line segment in Fig. 1(a). Large line segments correspond to small domains, because there is less uncertainty regarding their slope and height. Note, for example, that the domain of line segment G is much smaller than the domain of F, because G actually consists of two parts, as shown in Fig. 1(a). Because the line segments were drawn by hand, we have chosen a relatively large value for the uncertainty parameter used to compute the domains, i.e., τ = 5, to be able to extract at least some geometric relations.

P. Veelaert

Fig. 1. (a) Scanned image of handmade line drawing, (b) domains, and (c) intersection graph.

Once they have been computed, the domains can serve as new uncertainty regions for which we want to find line transversals, as line transversals determine concurrency relations. Therefore, for each pair of non-parallel digital line segments we derive a metadomain formed by the parameter vectors of those Euclidean lines that cut their domains.

Definition 4. Let A and B be two digital line segments with domains dom_x A and dom_x B. In addition assume that A and B are not parallel, that is, dom_x A and dom_x B can be separated by a Euclidean line parallel to the β-axis. The metadomain consists of all parameter vectors (p, q) such that the Euclidean line

β − αp − q = 0
(2)
cuts dom_x A as well as dom_x B in the αβ-plane.

Fig. 2. Metadomains for pairs of lines.

The points of a metadomain can be identified with points in the original image space. In fact, since the Euclidean line (2) cuts both domains, it contains two points (α1, β1) and (α2, β2) such that (x, y) = (p, q) lies on the line β1 − α1x − y = 0, which cuts all uncertainty regions of points in A, as well as on the line β2 − α2x − y = 0, which cuts all uncertainty regions of B. In this sense the metadomain is the uncertainty region of the intersection point (p, q) of a Euclidean line passing through the uncertainty regions of A and a Euclidean line passing through the uncertainty regions of B. Therefore, the metadomains of the domains in Fig. 1(b) can be superimposed on the original image, as is done in Fig. 2. Next, we construct the intersection graph of the metadomains, as shown in Fig. 1(c). Each metadomain is represented by a vertex. When two metadomains intersect, they are joined by an edge. Thus the existence of an edge means that there is a Euclidean line cutting either three or four domains; the precise number of domains cut depends on whether the two domain pairs have one domain in common or not. For example, since the vertices AG and BG are adjacent, there must be a straight line cutting the domains A, B and G. Deriving concurrency relations directly from the intersection relations of the metadomains would almost always lead to inconsistencies. Instead, as will be explained later, we perform a consistent grouping of the vertices of the intersection graph of the metadomains. We shall prove that we can obtain consistency when we extract cliques from the graph such that no two cliques have a common neighbor. Next, for each clique we compute the common intersection of the metadomains that correspond to the vertices of the clique. These intersections are shown in Fig. 3. They form new uncertainty regions, representing the uncertainty regarding the position of the intersection point of three or more concurrent lines. For example, the region ABF corresponds to the uncertainty of the intersection point of the three Euclidean lines passing through the uncertainty regions of A, B and F.
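Definition 4 and the identification above can be checked mechanically: in the (α, β)-plane the line β − αp − q = 0 meets a closed convex domain exactly when the domain's vertices do not all lie strictly on one side of it. A sketch of this test (domains given as vertex lists; all names are ours):

```python
def line_cuts_polygon(p, q, vertices, eps=1e-9):
    """Does the line beta - alpha*p - q = 0 in the (alpha, beta)-plane meet
    the closed convex polygon with the given (alpha, beta) vertices?
    True iff the signed values beta - alpha*p - q change sign (or vanish)."""
    signs = [b - a * p - q for (a, b) in vertices]
    return min(signs) <= eps and max(signs) >= -eps

def in_metadomain(p, q, dom_a, dom_b):
    """(p, q) lies in the metadomain of A and B (Definition 4) iff the
    corresponding line cuts both domains."""
    return line_cuts_polygon(p, q, dom_a) and line_cuts_polygon(p, q, dom_b)
```

In the image plane, such a (p, q) is then a candidate intersection point of Euclidean lines passing through A and through B.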
Fig. 3. Uncertainty regions for intersection points.

Each clique gives rise to a concurrency relation, provided the intersection of the metadomains that are involved is non-empty. The cliques of the graph in Fig. 1(c) yield the following candidates for concurrency relations: conc(A, B, F), conc(A, B, G), conc(A, B, H), conc(A, B, D, E), conc(C, E, H) and conc(C, E, G). Finally, to improve consistency we may discard illegal cliques that contain two or more parallel lines. Since C and E are parallel, but not collinear, the only relations that remain are conc(A, B, F), conc(A, B, G) and conc(A, B, H). They can coexist since A and B are in fact collinear.
3
Derivation of Convex Metadomains from Domain Pairs
Since the construction of the intersection graph of the metadomains is greatly simplified when the metadomains are convex, an important question is the following: given two disjoint convex domains, when is their metadomain also convex? To examine this, given two domains A and B we introduce the lines and halflines shown in Fig. 4(a). Here V denotes a Euclidean line of minimal slope cutting A as well as B. Similarly, we let W denote a transversal of maximal slope. In addition, we choose a point a_u ∈ (V ∩ A) common to the line V and the set A, as well as a point b_l ∈ (V ∩ B). Let A_u denote the halfline starting at a_u parallel to the y-axis and extending in the direction of increasing y-values. Similarly we introduce the points b_u, a_l for the transversal of maximum slope W, and the halflines A_l, B_u and B_l, as shown in Fig. 4. We then have the following result.
Fig. 4. A non-vertical line that does not cut A_u, A_l, B_u, B_l must cut A and B.
Lemma 1. Let A and B be two domains that are convex polygons, and let A_l, A_u, B_l and B_u be defined as in Fig. 4. If there is a transversal cutting either B_l or B_u, then one of the supporting lines of B cuts A.

Proof. Without loss of generality we suppose there is a transversal L of A and B that cuts the halfline B_u and that does not pass through any of the vertices of B, as shown in Fig. 4. We choose an arbitrary point p ∈ A ∩ L, and we define c as the point c ∈ L ∩ B such that the Euclidean line segment cp has only c in common with B. Let M denote the supporting line of B that passes through c. Since B is convex, the slope of M must be smaller than the slope of W, but larger than the slope of L. Hence M cuts B_u as well as the domain A.

Proposition 1. Let A and B be two domains that are convex polygons. If the supporting lines of one domain do not cut the other domain, then their metadomain is a convex polygon.

Proof. Let a_l = (p_al, q_al), . . . , b_u = (p_bu, q_bu) be the coordinates of the endpoints. By Lemma 1, if no supporting line of either A or B cuts the other set, then there is no transversal line cutting one or more of the halflines A_l, A_u, B_l or B_u. On the other hand, any line that cuts none of these four halflines is a transversal of A and B. Therefore the metadomain in the uv parameter space is determined by the four inequalities

v − q_al − u p_al > 0
v − q_bl − u p_bl > 0
v − q_au − u p_au < 0
v − q_bu − u p_bu < 0.   (3)

Hence the metadomain is convex.

Metadomains are not always convex. It is easy to construct a counterexample as follows. By the identification of the metadomain with the uncertainty region of an intersection point, it follows that the metadomain can also be considered as the intersection of two collinearity regions as defined in [2]. Furthermore,
Fig. 5. Non-convex uncertainty region for the intersection point of two lines.
since collinearity regions are not necessarily convex, it is not difficult to find two collinearity regions whose common intersection is not convex, as shown in Fig. 5. In [2] a collinearity region of a digital set S is defined as the set consisting of those points p in IR² for which there is at least one Euclidean line passing through p as well as through the uncertainty regions associated with the points of S. In Fig. 5 the intersection of the two collinearity regions is shown as a gray, non-convex region. Since metadomains are not necessarily convex, to construct the intersection graph of Fig. 1(c) we replace each metadomain by its convex hull, which is computed as follows. First, we compute the parameter vector of each line that passes through any vertex of the first domain and any vertex of the second domain; next we take the convex hull of these parameter vectors. This approach overestimates the uncertainty. When the conditions of Proposition 1 are met, however, which is often the case for real lines, the metadomain coincides with its convex hull without any overestimation of uncertainty.
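The convex-hull approximation just described is straightforward to implement: collect the parameter vector (p, q) of the line β = pα + q through every pair of one vertex of each domain, then take the hull, e.g. with Andrew's monotone chain. A sketch under our own naming:

```python
def convex_hull(pts):
    """Andrew's monotone chain; returns the hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def chain(points):
        h = []
        for pt in points:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (pt[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (pt[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(pt)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def metadomain_hull(dom_a, dom_b, eps=1e-9):
    """Convex-hull overestimate of the metadomain of two domains, given as
    vertex lists in the (alpha, beta)-plane: collect the (p, q) of each line
    beta = p*alpha + q through one vertex of dom_a and one of dom_b."""
    params = []
    for a1, b1 in dom_a:
        for a2, b2 in dom_b:
            if abs(a1 - a2) < eps:   # vertical in the (alpha, beta)-plane: no (p, q)
                continue
            p = (b2 - b1) / (a2 - a1)
            params.append((p, b1 - p * a1))
    return convex_hull(params)
```

When Proposition 1 applies, this hull is the metadomain itself; otherwise it is only an overestimate, as the text notes.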
4
Consistency and Grouping
From the metadomains we construct an intersection graph, in which two vertices are joined by an edge when the two metadomains have a non-empty intersection. Next, the connectivity of the intersection graph is used to find larger groups of metadomains that have a common non-empty intersection. The basic idea is that when two vertices are joined by an edge, this points to the existence of a line crossing three or four domains, and therefore also to a concurrency relation for digital line segments, i.e., for the line segments whose domains have given rise to the metadomains that correspond with the two vertices. The extension of this idea is to extract cliques from the intersection graph, since the vertices of a large clique will often correspond to a large collection of metadomains that have a common non-empty intersection, and thus indirectly to a concurrency relation that involves a large number of line segments. There is no guarantee, however, that the metadomains of a clique always have a non-empty intersection. Fig. 6(a) shows an exception, where we have six line segments, and the metadomains AB, CD and EF, which are shown as the uncertainty regions of the six line segments. Fig. 6(b) shows a subgraph of the intersection graph induced by the vertices AB, CD, EF, which form a clique. Nonetheless, the intersection of the metadomains is empty. Such cases are rare, however. In fact, they can only occur when the union of the three metadomains forms a topological torus. In this work, whenever such a situation occurred, we solved the problem by taking the convex hull of the hole of the torus as the uncertainty region of the intersection point.

Fig. 6. Clique whose metadomains have no common intersection.

Fig. 7. Minimum clique covering leads to inconsistencies.

A second problem is that most groupings, and therefore cliques also, lead to inconsistent geometric relations. This is illustrated in Fig. 7. A minimum clique covering algorithm may propose, for example, the cliques AC − AB − BC and AD − BD, and thus the concurrency relations conc(A, B, C) and conc(A, B, D). Since these two triples share two line segments (A and B), we should also have conc(A, B, C, D), which is not the case here, because some edges are missing, and therefore some of the metadomains that are involved do not have a common non-empty intersection. Informally, we can define a set of relations between digital line segments as consistent if the set contains no relations that cannot occur in Euclidean geometry. A further requirement would be that for sufficiently small values of the uncertainty parameter τ the relations found in uncertain geometry should coincide with those of Euclidean geometry. Note, however, that consistency is violated already at the lowest level, since parallelism and collinearity are not transitive in
uncertain geometry. It follows that we can only restore consistency up to a certain degree. For example, even when we restore the usual properties of parallelism, collinearity and concurrency, more involved properties such as Pappus' Theorem may still not hold. As for concurrency, in this work we restrict ourselves to the removal of the most obvious inconsistencies:

(A ∥ B) ∧ (A ≠ B) ∧ conc(A, B, C),   (4)

(A ∦ B) ∧ (A ∦ C) ∧ (B ∦ C) ∧ conc(A, B, C) ∧ conc(B, C, D) ∧ ¬conc(A, B, D).   (5)

To remove inconsistencies of the form (4) or (5) we shall use the following result, which gives a sufficient, but not necessary, condition.

Lemma 2. Let G be the intersection graph of the metadomains of a collection of digital line segments A, B, . . .. Let S be a set of disjoint cliques of G such that the pathlength between two vertices belonging to distinct cliques in S is always larger than two. If in the set of concurrency relations implied by the cliques in S we have two relations of the form conc(A, B, C, . . .) and conc(A, B, P, . . .), then we have A ∥ B.

Proof. Suppose we have conc(A, B, C, . . .) and conc(A, B, P, . . .). Assume that A ∦ B. Then AB appears as a vertex in the intersection graph. Since there is a clique whose vertices imply the relation conc(A, B, C, . . .), either AB must belong to this clique, or there must be vertices AX and BY in the clique, where X and Y can even be identical. Since AX and BY are adjacent, there is a common transversal cutting the four domains A, X, B and Y. Hence, both AX and BY must also be adjacent to AB. Similarly, in the clique generating the relation conc(A, B, P, . . .), there must be vertices AU and BV that are also adjacent to AB. In this case both cliques would have a common neighbor. Since we exclude such cliques, it follows that the assumption A ∦ B cannot be true.

In fact, two relations of the form conc(A, B, C, . . .) and conc(A, B, P, . . .) can only arise when AB is not a vertex of the graph, or in other words when A ∥ B. In particular, A may even be collinear with B. To find consistent concurrency relations, we use a greedy algorithm which is based on Lemma 2. First, we extract a maximum clique and remove all its neighbors, as well as the neighbors of these neighbors, from the intersection graph of the metadomains. By proceeding with the next maximum clique in the graph, we thus construct a list of cliques such that no two cliques share a common neighbor. Next, from each clique we derive the implied concurrency relation. For example, a clique with vertices AB, CD, AE implies conc(A, B, C, D, E). The result is shown in Fig. 3, where we draw the uncertainty regions for the intersection points of a number of concurrency relations, e.g., the region ABH for the relation conc(A, B, H). Finally, we remove those concurrency relations which involve one or more pairs of digital line segments that are parallel but not collinear. Since C and E are parallel, only conc(A, B, F), conc(A, B, G) and conc(A, B, H) are left as valid concurrency relations.
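The greedy grouping can be sketched as follows (brute-force maximum clique, which is fine for the small intersection graphs that occur here; all names are ours):

```python
from itertools import combinations

def greedy_grouping(vertices, edges):
    """Repeatedly extract a maximum clique, then discard the clique together
    with its neighbors and the neighbors of those neighbors, so that no two
    extracted cliques share a common neighbor (the condition of Lemma 2)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def max_clique(verts):
        # Exact maximum clique by decreasing size; exponential, but the
        # intersection graphs of interest are tiny.
        for k in range(len(verts), 0, -1):
            for cand in combinations(sorted(verts), k):
                if all(b in adj[a] for a, b in combinations(cand, 2)):
                    return set(cand)
        return set()

    remaining, cliques = set(vertices), []
    while remaining:
        clique = max_clique(remaining)
        cliques.append(clique)
        ball = set(clique)
        for _ in range(2):               # neighbors, then neighbors of neighbors
            ball |= {w for v in ball for w in adj[v]}
        remaining -= ball
    return cliques
```

Each returned clique then implies one concurrency relation; e.g., a clique {AB, CD, AE} implies conc(A, B, C, D, E).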
With the above algorithm we can avoid inconsistencies of the form (4) as well as (5). Suppose, for example, that we have found the concurrency relations conc(A, B, C) and conc(A, B, D). Because the extracted cliques satisfy the conditions of Lemma 2, it follows that A ∥ B, which excludes an inconsistency of the form (5). In addition, if A is not collinear with B, then both concurrency relations will be discarded in the final step of the algorithm, where we examine parallel pairs. This excludes an inconsistency of the form (4). If A and B are collinear, however, then the concurrency relations will not be discarded, because they can coexist without any contradiction.
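Building the intersection graph underlying this algorithm only requires a convex-polygon overlap test; for convex metadomains (or their convex hulls) a separating-axis test suffices. A sketch with our own names:

```python
def convex_overlap(P, Q, eps=1e-9):
    """Separating-axis test: two convex polygons (vertex lists) are disjoint
    iff the projections on some edge normal of P or Q do not overlap."""
    for poly in (P, Q):
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            ax, ay = y1 - y2, x2 - x1               # normal of the edge
            proj_p = [ax * x + ay * y for (x, y) in P]
            proj_q = [ax * x + ay * y for (x, y) in Q]
            if max(proj_p) < min(proj_q) - eps or max(proj_q) < min(proj_p) - eps:
                return False                        # separating axis found
    return True

def intersection_graph(metadomains):
    """One vertex per metadomain; an edge when two metadomains overlap."""
    names = sorted(metadomains)
    return {(u, v) for i, u in enumerate(names) for v in names[i + 1:]
            if convex_overlap(metadomains[u], metadomains[v])}
```

Its edge set can be fed directly to a grouping procedure such as the greedy one described in this section.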
5
Concluding Remarks
We presented a method for the derivation of concurrency relations in uncertain geometry. The removal of inconsistencies is a major part of this approach. Admittedly, there is still no unique, decisive procedure that guarantees the removal of all possible geometric inconsistencies. In this work we have proposed one approach, which gives satisfactory results on real images but still has some shortcomings. In particular, the following important questions have barely been examined. How can we obtain consistency when we consider distinct types of relations? For example, if A is collinear with D while A, B and C are concurrent, then consistency requires that B, C and D also be concurrent. Up to now, the consistency of properties such as parallelism, collinearity and concurrency has only been examined for each property separately. Furthermore, in real applications we must combine the extraction of linear structures with other properties such as proximity and symmetry. This question has not been sufficiently examined yet either. Finally, the uncertainty is controlled by the uncertainty parameter τ. When analyzing relations in a digital image, we often derive geometric relations for different values of τ. It is a natural requirement that these relations behave in a consistent way when we increase or decrease τ.
References
1. H. Heijmans and A. Toet, "Morphological sampling," CVGIP: Image Understanding, vol. 54, pp. 384–400, 1991.
2. P. Veelaert, "Geometric constructions in the digital plane," J. Math. Imaging and Vision, vol. 11, pp. 99–118, 1999.
3. P. Veelaert, "Parallel line grouping based on interval graphs," in Proc. of DGCI 2000, vol. 1953 of Lecture Notes in Computer Science, pp. 530–541, Uppsala, Sweden: Springer, 2000.
4. P. Veelaert, "Graph-theoretical properties of parallelism in the digital plane," submitted.
5. P. Veelaert, "Collinearity and weak collinearity in the digital plane," in Digital and Image Geometry, vol. 2243 of Lecture Notes in Computer Science, pp. 434–447, Springer, 2001.
6. P. Veelaert, "Line grouping based on uncertainty modeling of parallelism and collinearity," in Proceedings of SPIE's Conference on Vision Geometry IX, (San Diego), pp. 36–45, SPIE, 2000.
7. V. J. Milenkovic, "Verifiable implementations of geometric algorithms using finite precision arithmetic," in Geometric Reasoning (Kapur and Mundy, eds.), pp. 377–401, Cambridge: MIT Press, 1989.
8. H. F. Durrant-Whyte, "Uncertain geometry in robotics," IEEE Trans. Robotics Automat., pp. 23–31, 1988.
9. H. F. Durrant-Whyte, "Uncertain geometry," in Geometric Reasoning (Kapur and Mundy, eds.), pp. 447–481, Cambridge: MIT Press, 1989.
10. A. Fleming, "Geometric relationships between toleranced features," in Geometric Reasoning (Kapur and Mundy, eds.), pp. 403–412, Cambridge: MIT Press, 1989.
11. D. Lowe, "3-D object recognition from single 2-D images," Artificial Intelligence, vol. 31, pp. 355–395, 1987.
12. M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," CACM, vol. 24, pp. 381–395, 1981.
Discretization in 2D and 3D Orders

Michel Couprie, Gilles Bertrand, and Yukiko Kenmochi

Laboratoire A2SI, ESIEE, Cité Descartes, B.P. 99, 93162 Noisy-le-Grand Cedex, France
{coupriem,bertrand,kenmochy}@esiee.fr
Abstract. Among the different discretization schemes that have been proposed and studied in the literature, the supercover is a very natural one, and furthermore presents some interesting properties. On the other hand, an important structural property does not hold for the supercover in the classical framework: the supercover of a straight line (resp. a plane) is not a discrete curve (resp. surface) in general. We follow another approach, based on a different, heterogeneous discrete space which is an order, or a discrete topological space in the sense of Paul S. Alexandroff. Generalizing the supercover discretization scheme to such a space, we prove that the discretization of a plane in R3 is a discrete surface, and we prove that the discretization of the boundary of a "regular" set X (in a sense that will be precisely defined) is equal to the boundary of the discretization of X. This property has an immediate corollary for half-spaces and planes, and for convex sets.
Keywords: discretization, topology, orders, supercover, discrete surfaces
1
Introduction
An abundant literature is devoted to the study of discretization schemes. Let E be a "Euclidean" space, and let D be a "discrete" space related to E. Typically, one can take E = Rn and D = Zn (n = 2, 3), but we do not limit ourselves to this case. A discretization scheme associates, to each subset X of E, a subset D(X) of D which is called the discretization of X. Different discretization schemes have been proposed and compared with respect to some fundamental geometrical, topological and structural properties. We may, for example, ask the following questions: if X′ ⊆ E is the image of X by a symmetry, is D(X′) the image of D(X) by the same symmetry? If X is connected, is D(X) connected (in some sense)? And if X is a curve, is D(X) a curve (in some sense)? In this paper, we consider the discretization scheme called supercover, and we focus on some structural properties. Consider E = R2, for simplicity, and let D be the set of all closed squares in E with side 1 and whose vertices have integer coordinates (the elements of D are often called pixels). Let X be a subset of E; the supercover of X is the set of all the pixels that have a non-empty intersection with X. The supercover has many interesting properties, which have been studied by several authors [9,10,2,1,8,21,22].

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 301–312, 2002. © Springer-Verlag Berlin Heidelberg 2002

In particular, Andrès [1] proposed an analytical characterization of the supercover of straight lines, and more generally
for hyperplanes and for simplices in higher dimensions. Also, Ronse and Tajine showed that the supercover is a particular case of Hausdorff discretization [21,22]. But the supercover also has a drawback for thin objects such as straight lines. If a straight line δ in R2 goes through a point with integer coordinates, then the supercover of δ contains the four pixels that cover this point; this configuration is called a "bubble" (Fig. 1(a)). An extreme case is when δ is horizontal or vertical and hits elements of Z2 (Fig. 1(b)): the supercover of such a line is 2 pixels thick. Thus, the supercover of a straight line cannot be seen as a discrete curve.
Fig. 1. (a): A straight line segment and its supercover (shaded), which contains a "bubble" (a set of four pixels sharing a common vertex). (b): A horizontal line segment that has a 2-pixel thick supercover.
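The supercover and its "bubbles" are easy to reproduce: a closed unit pixel belongs to the supercover of a segment exactly when the segment meets the pixel, which a standard slab (Liang–Barsky style) clipping test decides. A small sketch (our own code, not from the paper):

```python
import math

def segment_meets_box(p0, p1, lo, hi, eps=1e-12):
    """Does the closed segment p0-p1 intersect the closed box [lo, hi]?
    Clip the segment parameter t in [0, 1] against both slabs."""
    t0, t1 = 0.0, 1.0
    for k in range(2):
        d = p1[k] - p0[k]
        if abs(d) < eps:                       # segment parallel to this slab
            if p0[k] < lo[k] - eps or p0[k] > hi[k] + eps:
                return False
        else:
            ta, tb = (lo[k] - p0[k]) / d, (hi[k] - p0[k]) / d
            if ta > tb:
                ta, tb = tb, ta
            t0, t1 = max(t0, ta), min(t1, tb)
    return t0 <= t1 + eps

def supercover(p0, p1):
    """All closed unit pixels [i, i+1] x [j, j+1] meeting the segment."""
    xs = range(math.floor(min(p0[0], p1[0])) - 1, math.ceil(max(p0[0], p1[0])) + 1)
    ys = range(math.floor(min(p0[1], p1[1])) - 1, math.ceil(max(p0[1], p1[1])) + 1)
    return {(i, j) for i in xs for j in ys
            if segment_meets_box(p0, p1, (i, j), (i + 1, j + 1))}
```

A horizontal segment on the grid line y = 0 yields two rows of pixels (as in Fig. 1(b)), and a segment through the integer point (1, 1) picks up all four pixels around that point (the bubble of Fig. 1(a)).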
Another popular discretization scheme for lines, called grid intersection digitization [17,23], does guarantee that the discretization of a straight line δ is a digital curve, in the sense of digital topology [16]. A proof of this property can be found in [15]. The drawback of this discretization scheme is its lack of symmetry: for any intersection of δ with a pixel boundary, the pixel vertex which is closest to this intersection is chosen as an element of the discretization of δ, and if the intersection is at equal distance from two vertices, then an arbitrary choice is made (for example, the rightmost or uppermost vertex). This drawback is shared by other discrete models for straight lines and planes: Bresenham's model [7], the naive model [20] and the standard model [1]. On the other hand, the supercover does not suffer from this lack of symmetry. An attempt to solve the problem of "bubbles", which seems to be the price paid for symmetry, has been made in [8] with the notion of minimal cover. Let X be a subset of R2. Any set S of pixels such that X is included in the union of the elements of S is called a cover of X. Let S be a cover of X; we say that S is a minimal cover of X if there is no other cover of X which is a proper subset of S. We see in Fig. 2 that the minimal cover of certain straight lines is "thinner" than the supercover, but we see also that the minimal cover is not unique in general.

Fig. 2. (a): A straight line segment and its minimal cover (shaded). (b): A horizontal line segment and one of its possible minimal covers.

We follow another approach, based on a different, heterogeneous discrete space which is an order, or a discrete topological space in the sense of Paul S. Alexandroff [3]. Such spaces have been the subject of intensive research in the recent past, not only from the topological point of view [13,18,11,5], but also in relation with discretization and geometrical models [14,25]. The discrete space D that we will consider is a partition of the Euclidean space E, composed (in the case of E = R2) of open unit squares, unit line segments and singletons. The fact that D is a partition of E leads to a fundamental property: for any subset X of E, the supercover of X (relative to D) is the unique minimal cover of X. We will focus on this discretization scheme, and discuss only the 3D case in the sequel (the corresponding results in 2D are particular cases). The two main contributions of this paper are the following results. (i) We prove that the discretization of a plane in R3 is a discrete surface. (ii) We prove that the discretization of the boundary of a "regular" set X (in a sense that will be precisely defined) is equal to the boundary of the discretization of X. This property has an immediate corollary for half-spaces and planes, and for convex sets.
2
Basic Notions on Orders
In this section, we recall some basic notions relative to orders (see also [13,5,6]). If X is a set, P(X) denotes the set composed of all subsets of X; if S is a subset of X, S̄ denotes the complement of S in X. If S is a subset of T, we write S ⊆ T; the notation S ⊂ T means that S is a proper subset of T, i.e., S ⊆ T and S ≠ T. If γ is a map from P(X) to P(X), the dual of γ is the map ∗γ from P(X) to P(X) such that, for each S ⊆ X, ∗γ(S) is the complement of γ(S̄). Let δ be a binary relation on X, i.e., a subset of X × X. We also denote by δ the map from X to P(X) such that, for each x of X, δ(x) = {y ∈ X, (x, y) ∈ δ}. We define δ□ as the binary relation δ□ = δ \ {(x, x); x ∈ X}.

An order is a pair (X, α) where X is a set and α is a reflexive, antisymmetric, and transitive binary relation on X. An element of X is also called a point. The set α(x) is called the α-adherence of x; if y ∈ α(x) we say that y is α-adherent to x. We illustrate these general notions on orders with the example of Fig. 3, which is composed of the following elements: two triangles t1, t2; five edges e1, e2, e3, e4, e5; and four vertices v1, v2, v3, v4. Here, we define the order relation α by: α(t1) = {t1, e1, e2, e3, v1, v2, v3}; α(t2) = {t2, e2, e4, e5, v2, v3, v4}; α(e1) = {e1, v1, v2}; α(e2) = {e2, v2, v3}; α(e3) = {e3, v1, v3}; α(e4) = {e4, v3, v4}; α(e5) = {e5, v2, v4}; and for i = 1 . . . 4, α(vi) = {vi}.

Fig. 3. An example for the basic notions on orders.

Let (X, α) be an order. We denote by α the map from P(X) to P(X) such that, for each subset S of X, α(S) = ∪{α(x); x ∈ S}; α(S) is called the α-closure of S, and ∗α(S) is called the α-interior of S. A subset S of X is α-closed if S = α(S); S is α-open if S = ∗α(S). In our example of Fig. 3, let S be the set {t1, e1, e5, v2}. We see that α(S) = {t1, e1, e2, e3, e5, v1, v2, v3, v4} ≠ S, thus S is not α-closed. We can also see that ∗α(S) = {t1, e1} ≠ S, thus S is not α-open. On the opposite, α(t1), {e2, e5, v2, v3, v4} and {v1}, for example, are α-closed, while {t1} and {t1, t2, e2}, for example, are α-open.

Let (X, α) be an order. We denote by β the relation β = {(x, y); (y, x) ∈ α}; β is the inverse of the relation α. We denote by θ the relation θ = α ∪ β. The dual of the order (X, α) is the order (X, β). Notice that ∗α(S) = {x ∈ S; β(x) ⊆ S}, and ∗β(S) = {x ∈ S; α(x) ⊆ S}. In our example of Fig. 3, β(v2) = θ(v2) = {v2, e1, e2, e5, t1, t2}; β(e2) = {e2, t1, t2}; θ(e2) = {v2, v3, e2, t1, t2}; β(t1) = θ(t1) = {t1}.

The set composed of all α-open subsets of X satisfies the conditions for the family of open subsets of a topology; the same result holds for the set composed of all β-open subsets of X. These topologies are P.S. Alexandroff topologies, i.e., topologies such that every intersection of open sets is open [3]. An order (X, α) is countable if X is countable; it is locally finite if, for each x ∈ X, θ(x) is a finite set. A CF-order is a countable locally finite order. Let (X, α) be a CF-order. Let x0 and xk be two points of X. A path from x0 to xk is a sequence x0, x1, . . . , xk of elements of X such that xi ∈ θ(xi−1), for i = 1, . . . , k. A CF-order (X, α) is connected if for all x, y in X, there is a path from x to y. If (X, α) is an order and S is a subset of X, the order relative to S is the order |S| = (S, α ∩ (S × S)). We will use a general definition for curves and surfaces which has been used in several works (see e.g. [11,6]).
This notion is close to the notion of manifold used by Kovalevsky [18]; nevertheless, it does not require attaching a notion of dimension to each element of X, which allows for a simpler definition (in particular, no use of isomorphism is made). Let |X| = (X, α) be a non-empty CF-order.
- The order |X| is a 0-surface if X is composed of exactly two points x and y such that y ∉ α(x) and x ∉ α(y).
- The order |X| is an n-surface, n > 0, if |X| is connected and if, for each x in X, the order |θ□(x)| is an (n − 1)-surface.
- A curve is a 1-surface; a surface is a 2-surface.
Discretization in 2D and 3D Orders
In our example of Fig. 3, the orders relative to the sets {v1, e1, v2, e5, v4, e4, v3, e3} and {v1, e1, v2, t2, v3, e3} are both curves. On the other hand, the order depicted in Fig. 3 is not a surface since, for example, the order relative to θ(v1) \ {v1} = {e1, t1, e3} is not a curve.
3 An Order Associated to Rⁿ
Let R be the set of real numbers. We consider the families of subsets of R denoted G¹₀, G¹₁ and G¹, such that: G¹₀ = {{p + ½}, p ∈ Z}, G¹₁ = {]p − ½, p + ½[, p ∈ Z}, and G¹ = G¹₀ ∪ G¹₁. A subset R of Rⁿ which is the cartesian product of exactly m elements of G¹₁ and n − m elements of G¹₀ is called an m-gel of Rⁿ. For a given integer m, we denote by Gⁿₘ the set of all m-gels of Rⁿ, and we denote by Gⁿ the union of all the sets Gⁿₘ, for m = 0 . . . n. An element of Gⁿ is called a gel. For example, with n = 2, a 0-gel is a singleton (a set containing a single point), a 1-gel is a line segment which does not contain its extremities (either of the form {p + ½} × ]q − ½, q + ½[ or ]p − ½, p + ½[ × {q + ½}), and a 2-gel is an open square. We remark that, according to the “standard” topology of Rⁿ, only the n-gels are open subsets of Rⁿ (they are open hypercubes), and only the 0-gels are closed subsets of Rⁿ (they are singletons). For 0 < m < n, an m-gel is neither open nor closed. By contrast, all pixels (see section 1) are closed subsets of R². Notice also that Gⁿ is a partition of Rⁿ, which is not the case for the covering of R² by pixels.
Let x be a gel; we denote by cl(x) the closure of x (according to the “standard” topology of Rⁿ). We consider the order (Gⁿ, α) defined by: ∀x, y ∈ Gⁿ, y ∈ α(x) if y ⊆ cl(x). For example, with n = 2, let x be an open square (a 2-gel). Then α(x) is composed of x itself, of the four line segments that border x (without the vertices), and of the four singletons each containing a vertex of cl(x). Notice that these orders are equivalent to those obtained in the framework of connected ordered topological spaces introduced by E.D. Khalimsky [12]. As far as we know, the first mention of (Gⁿ, α) as a discrete topological space can be found in the classical topology textbook by P.S. Alexandroff and H. Hopf [4], as one of the first examples used to illustrate the notion of a topological space.
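For computations, a gel can be encoded axis by axis. The sketch below (our encoding, not the paper's) writes ('pt', p) for the 0-dimensional factor {p + ½} and ('iv', p) for the open interval ]p − ½, p + ½[, and tests y ∈ α(x) by per-axis inclusion in the closure. It recovers the example above: the α-closure of a 2-gel has 9 elements (the square itself, four edges, four vertices).

```python
# Gels of R^n as tuples of per-axis elements; alpha tested axis by axis.

def axis_in_closure(y, x):
    """Is the axis element y contained in the closure of the axis element x?"""
    if x[0] == 'iv':
        # cl(]p-1/2, p+1/2[) = [p-1/2, p+1/2]: it contains the interval itself
        # and the endpoint singletons {p-1/2} = ('pt', p-1) and {p+1/2} = ('pt', p)
        return y == x or (y[0] == 'pt' and y[1] in (x[1] - 1, x[1]))
    return y == x            # a singleton is closed: cl({h}) = {h}

def in_alpha(y, x):
    """y in alpha(x), i.e. y is a subset of cl(x), axis by axis."""
    return all(axis_in_closure(ya, xa) for ya, xa in zip(y, x))

def dim(x):
    return sum(1 for a in x if a[0] == 'iv')

# alpha of the 2-gel ]-1/2,1/2[ x ]-1/2,1/2[: itself, 4 edges, 4 vertices
g = (('iv', 0), ('iv', 0))
cells = [((t1, p), (t2, q)) for t1 in ('pt', 'iv') for t2 in ('pt', 'iv')
         for p in (-1, 0) for q in (-1, 0)]
closure = [c for c in cells if in_alpha(c, g)]
print(len(closure))                      # 9
print(sorted(dim(c) for c in closure))   # [0, 0, 0, 0, 1, 1, 1, 1, 2]
```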
M. Couprie, G. Bertrand, and Y. Kenmochi

4 Generalized Covers and Supercovers
Let F be a family of subsets of Rⁿ (n ≥ 1). We say that the family F covers Rⁿ if Rⁿ is equal to the union of all the elements of F. In the following, we will consider the families Gⁿ and Gⁿₙ. Notice that Gⁿ does cover Rⁿ, but Gⁿₙ does not. Let R be any subset of Rⁿ; we say that a subset S of F is an F-cover of R if R is included in the union of all the elements of S (this definition generalizes the notion of cover in [8], but is different from the notion of cover discretization in [22]).
Let F be a family of subsets of Rⁿ, and let R be any subset of Rⁿ. We consider the hit and miss transforms as defined in [24]. The hit of R in F, denoted by F(R), is the set of all the elements of F which intersect R: F(R) = {x ∈ F, x ∩ R ≠ ∅}. In a dual way, we may consider the set ∗F(R) composed of all the elements of F which are included in R. If F is a family that covers Rⁿ, then F(R) is called the F-supercover of R. The F-supercover is obviously a particular case of F-cover, and is uniquely defined for any given R. If we choose n = 2 and F equal to the set of all pixels, we retrieve the notion of supercover presented in the introduction. In this paper, we focus on supercovers based on the family Gⁿ. The fact that Gⁿ is a partition of Rⁿ leads to several interesting properties. Furthermore:
Property 1 Let F be a family of sets covering Rⁿ. Then, the two following propositions are equivalent:
(i) for any subset R of Rⁿ, the F-supercover of R is the unique minimal F-cover of R;
(ii) F is a partition of Rⁿ.
This property is a direct consequence of Prop. 2, which is stated in the more general framework of binary relations. Let A, B be two sets. A relation Γ from A to B is a subset of the cartesian product A × B. If (a, b) ∈ Γ, we also write (b, a) ∈ Γ⁻¹, b ∈ Γ(a) and a ∈ Γ⁻¹(b), and we say that b is a successor of a and that a is a predecessor of b.
We say that the relation Γ is surjective if each b in B has at least one predecessor, and that Γ is a map from A to B if each a in A has a unique successor. Let R be a subset of A; we write Γ(R) = ∪_{a∈R} Γ(a). Let A, B be two sets, and let Γ be a relation from A to B. We say that Γ defines a covering of A by B if both Γ and Γ⁻¹ are surjective, i.e. if each element of A has at least one successor and each element of B has at least one predecessor. Let A, B be two sets, and let Γ be a relation defining a covering of A by B. Let R be a subset of A. We say that S ⊆ B is a Γ-cover of R if R ⊆ Γ⁻¹(S). Furthermore, we say that the Γ-cover S is minimal if there is no other Γ-cover of R strictly included in S. The set Γ(R) is called the Γ-supercover of R. It is obviously a Γ-cover of R, which is uniquely defined for any given R, but in general it is not a minimal Γ-cover of R. For example, if we take A = Rⁿ, choose for B a family of sets covering Rⁿ, and define Γ(x) as the set of elements of B which hit x, then we retrieve
the notions and the results of the beginning of this section. In particular, the following property generalizes Prop. 1.
Property 2 Let A, B be two sets, and let Γ be a relation defining a covering of A by B. Then, the two following propositions are equivalent:
(i) for any subset R of A, the Γ-supercover of R is the unique minimal Γ-cover of R;
(ii) Γ is a map from A to B.
Proof: (ii) ⇒ (i). Let S = Γ(R). Is S minimal? Suppose that there exists another Γ-cover S′ strictly included in S, and let s be an element of S \ S′. Since S = Γ(R), there exists an x in R such that s ∈ Γ(x), and since Γ is a map, we have Γ(x) = {s}. Thus there is an element x of R which has no successor in S′, a contradiction. Is S the unique minimal Γ-cover? Suppose that there exists another minimal Γ-cover S′ ≠ S. Since both S and S′ are minimal, there must exist at least one element s in S \ S′ and one element s′ in S′ \ S. This leads to the same contradiction.
(i) ⇒ (ii). Suppose that (i) holds and that Γ is not a map. Then, there exists an element x of A that has either zero or at least two successors. As Γ⁻¹ is a surjection, x has at least one successor. Let y, z be two distinct successors of x, and let us consider the set R = {x}. The set {y} is a strict subset of Γ(R) which is also a Γ-cover of R, a contradiction.
The following properties can also be easily proved. They are mentioned in [9] for the particular case of R² and a covering with pixels. Notice that the condition that the relation Γ is a map is not required.
Property 3 Let A, B be two sets, and let Γ be a relation defining a covering of A by B. Then, ∀R, S ⊆ A we have:
(i) Γ(R ∪ S) = Γ(R) ∪ Γ(S);
(ii) Γ(R ∩ S) ⊆ Γ(R) ∩ Γ(S);
(iii) R ⊆ S ⇒ Γ(R) ⊆ Γ(S).
Furthermore, if Γ is a map defining a covering of A by B, then the cardinality of Γ(R) is less than or equal to the cardinality of R, for any subset R of A.
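A small 1D numeric illustration of Property 1 (our own construction, not the paper's): with the overlapping covering of R by closed pixels [p − ½, p + ½], the supercover of a half-integer point is not a minimal cover, while with the partition G¹ it is minimal and unique.

```python
# Supercover of the point x = 1/2 under two coverings of R, using exact
# rationals to keep the boundary comparisons honest.
from fractions import Fraction as F

def pixel_hits(x, p):          # closed pixel [p - 1/2, p + 1/2]
    return p - F(1, 2) <= x <= p + F(1, 2)

def gel_hits(x, g):            # ('pt', h): {h};  ('iv', p): ]p-1/2, p+1/2[
    kind, v = g
    return x == v if kind == 'pt' else v - F(1, 2) < x < v + F(1, 2)

x = F(1, 2)                    # a half-integer point
pixel_cover = [p for p in range(-3, 4) if pixel_hits(x, p)]
print(pixel_cover)             # [0, 1]: two pixels hit x, so not minimal

gels = ([('pt', F(2*p + 1, 2)) for p in range(-3, 3)]
        + [('iv', F(p)) for p in range(-3, 4)])
gel_cover = [g for g in gels if gel_hits(x, g)]
print(gel_cover)               # [('pt', Fraction(1, 2))]: the unique minimal cover
```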
5 Properties
This section contains the main results of the paper. We first show that an analytic characterization of G³-supercovers of planes can be given, by adapting the result of Andrès for the “classical” supercover. Such an analytical characterization is essential for designing fast algorithms that generate discrete planar objects. We consider the plane π defined by: π = {(x, y, z) ∈ R³ / ax + by + cz + d = 0}, where a, b, c, d belong to R. The following result, adapted from Andrès ([1], Th. 18), gives an analytical characterization of the elements of G³₃(π). Recall that G³₃(π) is the set of the elements (open cubes) of G³₃ that have a non-empty intersection with π.
Fig. 4. The fifteen ways for a plane to hit a 3-gel and its θ-neighborhood. The 1-gels and 0-gels that are hit by the plane are highlighted. The 2-gels (squares) that are hit by the plane have not been highlighted, in order to preserve the readability of the figure.
Property 4 Let a, b, c, d ∈ R, with ab ≠ 0 or bc ≠ 0 or ac ≠ 0, and let π = {(x, y, z) ∈ R³ / ax + by + cz + d = 0}. Let (p, q, r) ∈ Z³; we denote by Gpqr the 3-gel ]p − ½, p + ½[ × ]q − ½, q + ½[ × ]r − ½, r + ½[. Then:
G³₃(π) = {Gpqr ∈ G³₃, −(|a| + |b| + |c|)/2 < ap + bq + cr + d < (|a| + |b| + |c|)/2}.
Let x₀ ∈ R, and let π = {(x, y, z) ∈ R³ / x = x₀}. If x₀ − ½ ∈ Z, then G³₃(π) = ∅, else G³₃(π) = {Gpqr ∈ G³₃, |p − x₀| ≤ ½}. For planes defined by y = y₀ or z = z₀, a similar statement holds.
In order to have a complete characterization of the elements of G³(π), we must also characterize the elements of G³₂(π), of G³₁(π) and of G³₀(π). Let s = {(p, q, r)} be an element of G³₀. We denote by π(s) the index which characterizes the position of s relative to π: π(s) = −1 if ap + bq + cr + d < 0; π(s) = 0 if ap + bq + cr + d = 0; π(s) = +1 if ap + bq + cr + d > 0.
Property 5 Let a, b, c, d ∈ R, and let π = {(x, y, z) ∈ R³ / ax + by + cz + d = 0}.
a) If b = c = 0 and −d/a − ½ ∈ Z, then:
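The first formula of Property 4 can be cross-checked numerically. In the sketch below (our code, not the paper's), the "geometric" test evaluates the linear form at the eight corners of the closed cube: since the form is linear, its range over the open cube is the open interval between the corner minimum and maximum, so the open cube meets the plane exactly when the form takes both a negative and a positive corner value. Exact rationals avoid boundary rounding.

```python
from fractions import Fraction as F
from itertools import product
import random

def analytic_hit(a, b, c, d, p, q, r):
    """Property 4: |ap+bq+cr+d| < (|a|+|b|+|c|)/2."""
    w = (abs(a) + abs(b) + abs(c)) / 2
    return -w < a*p + b*q + c*r + d < w

def corner_hit(a, b, c, d, p, q, r):
    """The open cube is hit iff the form changes sign over its corners."""
    h = F(1, 2)
    vals = [a*(p + sx*h) + b*(q + sy*h) + c*(r + sz*h) + d
            for sx, sy, sz in product((-1, 1), repeat=3)]
    return min(vals) < 0 < max(vals)

random.seed(0)
for _ in range(200):
    a, b, c, d = (F(random.randint(-9, 9), random.randint(1, 5)) for _ in range(4))
    if a == b == 0 or b == c == 0 or a == c == 0:
        continue               # Property 4 assumes at least two non-zero coefficients
    for p, q, r in product(range(-2, 3), repeat=3):
        assert analytic_hit(a, b, c, d, p, q, r) == corner_hit(a, b, c, d, p, q, r)
print("agreement on all tested planes and cubes")
```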
G³₀(π) = {{(−d/a, q + ½, r + ½)}, q, r ∈ Z}, and
G³₁(π) = {{−d/a} × ]q − ½, q + ½[ × {r + ½}, q, r ∈ Z} ∪ {{−d/a} × {q + ½} × ]r − ½, r + ½[, q, r ∈ Z}, and
Fig. 5. Neighborhoods of a 3-gel (open cube).

Fig. 6. Neighborhoods of a 2-gel (square).
G³₂(π) = {{−d/a} × ]q − ½, q + ½[ × ]r − ½, r + ½[, q, r ∈ Z}.
Similar characterizations are obtained for the cases a = b = 0 and a = c = 0.
b) Other cases: let v ∈ G³₀, let i ∈ G³₁ with v1, v2 the two 0-gels in α(i), and let s ∈ G³₂. Then:
v ∈ G³₀(π) iff π(v) = 0;
i ∈ G³₁(π) iff (π(v1) = π(v2) = 0) or (π(v1)·π(v2) < 0);
s ∈ G³₂(π) iff at least two vertices of s have different non-null indices.
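The case-(b) tests translate directly into code. The sketch below (our own encoding) computes the index π(v) of a 0-gel as the sign of the linear form at its point, then applies the endpoint test for an open segment and the corner test for an open square, on a concrete plane.

```python
from fractions import Fraction as F
from itertools import product

def index(plane, v):                      # pi(v) for the 0-gel v = {(x, y, z)}
    a, b, c, d = plane
    s = a*v[0] + b*v[1] + c*v[2] + d
    return (s > 0) - (s < 0)              # sign: -1, 0 or +1

def segment_hit(plane, v1, v2):           # 1-gel with endpoint 0-gels v1, v2
    i1, i2 = index(plane, v1), index(plane, v2)
    return (i1 == i2 == 0) or i1 * i2 < 0

def square_hit(plane, corners):           # 2-gel with its four corner 0-gels
    idx = [index(plane, v) for v in corners]
    return 1 in idx and -1 in idx         # two different non-null indices

h = F(1, 2)
plane = (1, 1, 1, 0)                      # pi: x + y + z = 0
# the open square ]-1/2,1/2[ x ]-1/2,1/2[ x {1/2} is hit
corners = [(sx*h, sy*h, h) for sx, sy in product((-1, 1), repeat=2)]
print(square_hit(plane, corners))         # True: the line x + y = -1/2 crosses it
# but the segment {1/2} x {1/2} x ]-1/2,1/2[ is not hit
print(segment_hit(plane, (h, h, -h), (h, h, h)))   # False: both indices are +1
```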
Now we are ready to state the first main result of this paper. It says that the discretization of a plane is a surface, in the sense defined in Section 2. This result can easily be transposed to 2D, where it states that the discretization of a straight line is a curve.
Property 6 Let a, b, c, d ∈ R, and let π = {(x, y, z) ∈ R³ / ax + by + cz + d = 0}. The order |G³(π)| is a surface.
The proof of Prop. 6 involves the examination of the different configurations of a plane π hitting an open cube, a square, a line segment and a single point, and
Fig. 7. (a,b,c): neighborhoods of a 1-gel (segment); (d,e,f): neighborhoods of a 0-gel (singleton).
their respective θ-neighborhoods. For the open cube, there are only 15 such configurations (up to rotations and symmetries); they are depicted in Fig. 4. We can easily check (see Fig. 5) that for each of these configurations, the punctured neighborhood θ•(x) = θ(x) \ {x} of any 3-gel x, taken in the order |G³(π)|, is a curve. For the cases of a single point, a line segment and a square, the numbers of possible configurations (up to rotations and symmetries) are respectively 3, 3 and 5. Figs. 6 and 7 show the θ•-neighborhood of such an element in each possible configuration; again we can verify that it forms a curve.
Our second main result states that, for any “regular” object (in a sense that will be defined below), the boundary operator commutes with the discretization operator. Let (O, α) be an order, and let P be a subset of O. We define the θ-boundary of P in O (or simply the boundary of P) as the set B(P) of elements p of P such that θ(p) ∩ (O \ P) ≠ ∅. Let X be a subset of R³. Let C₀ be the unit closed cube centered at the origin: C₀ = [−½, ½]³. Let Cu be the translate of C₀ by the vector u of R³. The set X is morphologically open by the structuring element C₀ if X is equal to the union of all the translates Cu of C₀ which are included in X (see [24]). Notice that this notion is close to the notion of a par(r, +)-regular set defined by Latecki in the continuous plane [19]. The (topological) closure of X is denoted by cl(X), and the boundary of X is defined by b(X) = cl(X) ∩ cl(R³ \ X).
Property 7 If X is a closed subset of R³, then B(G³(X)) ⊆ G³(b(X)). Furthermore, if cl(X) is morphologically open by C₀, then the boundary of the discretization of X is equal to the discretization of the boundary of X, in other words, B(G³(X)) = G³(b(X)).
A corollary can be immediately derived from this property, concerning planes and half-spaces. We consider the closed half-space γ = {(x, y, z) ∈ R³ / ax + by + cz + d ≥ 0}, where a, b, c, d belong to R.
The boundary of γ is the plane π defined by: π = {(x, y, z) ∈ R³ / ax + by + cz + d = 0}. Then we have:
Corollary 8 G³(b(γ)) = G³(π) = B(G³(γ)).
From this, we can easily deduce a more general corollary which holds for any closed convex set:
Corollary 9 If X is a convex closed subset of R³, then B(G³(X)) = G³(b(X)).
Fig. 8 illustrates these properties in 2D.
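The commutation of Property 7 can be observed on a toy transcription to 1D (our own construction, for illustration; the property itself is stated in R³). For the closed interval X = [−1, 1], the θ-boundary of G¹(X) equals G¹(b(X)) with b(X) = {−1, 1}; note that X is indeed morphologically open by the 1D structuring element [−½, ½].

```python
# Gels of R as ('pt', h) for {h} and ('iv', p) for ]p-1/2, p+1/2[.
from fractions import Fraction as F

H = F(1, 2)

def closure(g):                      # closure of a gel, as a closed interval
    kind, v = g
    return (v, v) if kind == 'pt' else (v - H, v + H)

def hits_interval(g, lo, hi):        # does the gel meet the closed [lo, hi]?
    kind, v = g
    return lo <= v <= hi if kind == 'pt' else (v - H < hi and v + H > lo)

def hits_points(g, pts):             # does the gel meet the finite set pts?
    kind, v = g
    return any(p == v if kind == 'pt' else v - H < p < v + H for p in pts)

def theta(g, gels):                  # theta(g) = alpha(g) union beta(g)
    def contains(x, y):              # cl(y) included in cl(x)?
        return closure(x)[0] <= closure(y)[0] and closure(y)[1] <= closure(x)[1]
    return {h for h in gels if contains(g, h) or contains(h, g)}

gels = [('iv', F(p)) for p in range(-4, 5)] + [('pt', p + H) for p in range(-4, 4)]
GX = {g for g in gels if hits_interval(g, F(-1), F(1))}      # G^1(X)
B = {g for g in GX if theta(g, gels) - GX}                   # B(G^1(X))
Gb = {g for g in gels if hits_points(g, [F(-1), F(1)])}      # G^1(b(X))
print(B == Gb)   # True: both are the two open intervals around -1 and 1
```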
Fig. 8. Illustration of Prop. 7 and Cor. 9 in 2D. We see the discretizations of three objects: a disc, a convex polygon and a third convex set. The boundaries of these objects appear as continuous solid lines. The discretizations of the boundaries, which coincide with the boundaries of the discretizations, are represented by light gray squares, black segments and black dots.
Acknowledgements. The authors wish to thank both reviewers for their interesting and useful comments.
References
1. E. Andrès, Modélisation analytique discrète d'objets géométriques, Thèse de HDR, Université de Poitiers (France), 2000.
2. E. Andrès, C. Sibata, R. Acharya, “Supercover 3D polygon”, Conf. on Discrete Geom. for Comp. Imag., Vol. 1176, Lect. Notes in Comp. Science, Springer Verlag, pp. 237-242, 1996.
3. P.S. Alexandroff, “Diskrete Räume”, Mat. Sbornik, 2, pp. 501-518, 1937.
4. P.S. Alexandroff, H. Hopf, Topologie, Springer Verlag, 1937.
5. G. Bertrand, “New notions for discrete topology”, 8th Conf. on Discrete Geom. for Comp. Imag., Vol. 1568, Lect. Notes in Comp. Science, Springer Verlag, pp. 216-226, 1999.
6. G. Bertrand, M. Couprie, “A model for digital topology”, 8th Conf. on Discrete Geom. for Comp. Imag., Vol. 1568, Lect. Notes in Comp. Science, Springer Verlag, pp. 229-241, 1999.
7. J. Bresenham, “Algorithm for computer control of a digital plotter”, IBM Systems Journal, Vol. 4, pp. 25-30, 1965.
8. V.E. Brimkov, E. Andrès, R.P. Barneva, “Object discretization in higher dimensions”, 9th Conf. on Discrete Geom. for Comp. Imag., Vol. 1953, Lect. Notes in Comp. Science, Springer Verlag, pp. 210-221, 2000.
9. J.M. Chassery, A. Montanvert, Géométrie discrète en imagerie, Hermès, Paris, France, 1991.
10. D. Cohen-Or, A. Kaufman, “Fundamentals of surface voxelization”, Graphical Models and Image Processing, 57(6), pp. 453-461, 1995.
11. A.V. Evako, R. Kopperman, Y.V. Mukhin, “Dimensional Properties of Graphs and Digital Spaces”, Jour. of Math. Imaging and Vision, 6, pp. 109-119, 1996.
12. E.D. Khalimsky, “On topologies of generalized segments”, Soviet Math. Doklady, 10, pp. 1508-1511, 1969.
13. E.D. Khalimsky, R. Kopperman, P.R. Meyer, “Computer Graphics and Connected Topologies on Finite Ordered Sets”, Topology and its Applications, 36, pp. 1-17, 1990.
14. R. Klette, “m-dimensional cellular spaces”, internal report, University of Maryland, CAR-TR-6, MCS-82-18408, CS-TR-1281, 1983.
15. R. Klette, “The m-dimensional grid point space”, Computer Vision, Graphics, and Image Processing, 30, pp. 1-12, 1985.
16. T.Y. Kong, A. Rosenfeld, “Digital topology: introduction and survey”, Comp. Vision, Graphics and Image Proc., 48, pp. 357-393, 1989.
17. J. Koplowitz, “On the performance of chain codes for quantization of line drawings”, IEEE Trans. on PAMI, 3, pp. 180-185, 1981.
18. V.A. Kovalevsky, “Topological foundations of shape analysis”, in Shape in Pictures, NATO ASI Series, Series F, Vol. 126, pp. 21-36, 1994.
19. L.J. Latecki, Discrete Representation of Spatial Objects in Computer Vision, Kluwer Academic Publishers, 1998.
20. J.-P. Reveillès, Géométrie discrète, calcul en nombres entiers et algorithmique, Thèse d'état, Université Louis Pasteur, Strasbourg (France), 1991.
21. C. Ronse, M. Tajine, “Hausdorff discretization of algebraic sets and diophantine sets”, 9th Conf. on Discrete Geom. for Comp. Imag., Vol. 1953, Lect. Notes in Comp. Science, Springer Verlag, pp. 216-226, 2000.
22. C. Ronse, M. Tajine, “Hausdorff discretization for cellular distances, and its relation to cover and supercover discretizations”, Journal of Visual Communication and Image Representation, Vol. 12, no. 2, pp. 169-200, 2001.
23. A. Rosenfeld, A.C. Kak, Digital Picture Processing, Academic Press, 1982.
24. J. Serra, Image Analysis and Mathematical Morphology, Academic Press, 1982.
25. J. Webster, “Cell complexes and digital convexity”, Digital and Image Geometry, Vol. 2243, Lect. Notes in Comp. Science, Springer Verlag, pp. 268-278, 2002.
Defining Discrete Objects for Polygonalization: The Standard Model
Eric Andres
Université de Poitiers, Laboratoire IRCOM-SIC, Bât. SP2MI, BP 30179, 86962 Futuroscope Cedex, France
Abstract. A new description model, called the standard model, for discrete linear objects in dimension n is proposed. Standard objects are tunnel-free and (n−1)-connected. The discrete objects are defined analytically as unions of intersections of half-spaces. The standard 3D polygons are well suited for polygonalization; this is the main reason why this model has been developed.
1 Introduction
The polygonalization of discrete objects has been one of the major research problems of the discrete geometry community for many years. The main approach used in practice is the “marching cubes” type of approach [16], in which local neighbourhoods of voxels are replaced by Euclidean polygons. This means that the resulting number of polygons is proportional to the number of boundary voxels of the discrete object. As the number of boundary voxels is usually extremely large, people tend to apply simplification schemes to reduce the number of polygons, and thus lose approximation quality. For a couple of years now, another approach, which we call discrete analytical polygonalization, has been investigated by a number of research groups. In this approach, the aim is to decompose the boundary of a discrete object into discrete analytical polygons, and then these discrete polygons into Euclidean polygons. By discrete analytical polygons we mean discrete polygons that are not defined as sets of discrete points but by an analytical description that is independent of the number of discrete points composing them. The aim here is to decompose the boundary of a discrete object into a number of discrete polygons that is not, in general, directly proportional to the number of boundary voxels. Potential applications can be found in compression, visualization, transformations, medical imaging, etc. Numerous papers have brought new insight and new ideas on how to tackle this difficult problem. However, only a small part of the problem has been solved. In this paper we propose a new element of the solution that was missing so far: the definition of a discrete analytical polygon. Several authors have proposed algorithms that determine whether a given set of discrete points belongs to the same Reveillès analytical discrete plane [17,2]. At the same time they provide the analytical description (two inequalities) of the discrete plane [9,13,12]. Some others
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 313–325, 2002. © Springer-Verlag Berlin Heidelberg 2002
propose a description of the equivalence class of all the possible discrete analytical planes the set of boundary points belongs to [11,19]. This only leads to decompositions of the boundary of a discrete object into planar sections. All these different approaches could not go beyond a planar decomposition, since nobody knew how to define a discrete polygon, and therefore nobody could propose a complete discrete polygonalization algorithm. Existing discrete polygons, such as the ones proposed by A. Kaufman [14], are well suited for visualization purposes but cannot be used for polygonalization: they are not planar (the points do not all belong to a same discrete plane), not topologically consistent (sometimes with holes), and not analytically defined. As already stated, we propose in this paper a definition of a 3D discrete analytical polygon and also, more generally, a discrete analytical model for all linear objects in dimension n (discrete points, m-flats and geometrical simplices). To the best of the authors' knowledge, it is the first time that a discrete model is proposed that defines discrete objects in arbitrary dimensions. The new discrete analytical model proposed is called the standard model and is derived from the supercover model [1,3,4,18,5]. In fact, a standard object is obtained by a rather simple rewriting process of the inequalities defining a supercover object analytically [5]. The name “standard” model derives from the “standard plane” introduced by J. Françon [10]. The standard model is called a discrete analytical model because the discrete objects (points, m-flats, simplices) are defined analytically by a finite number of inequalities that is independent of the set of discrete points of the object. For instance, a 3D standard triangle is defined by 17 or fewer inequalities, independently of its size. The model we propose has many interesting properties.
It has been shown that the standard model is in fact a 0-discretisation in the sense of Brimkov, Andres and Barneva [6,7], and therefore is (n − 1)-connected and tunnel-free. In our notation, in 3D, our 2-connectivity corresponds to the classical 6-connectivity. This means that our model fits particularly well polygonalization approaches such as the one proposed by L. Papier and J. Françon [12] in Khalimsky–Kovalevsky spaces [15,10]. The model is by definition geometrically consistent: for instance, the vertices of a 3D standard polygon are 3D standard points, the edges of a 3D standard polygon are 3D standard line segments, and the 3D standard polygon is a piece of a 3D standard plane. In Section 2, we start by introducing the main notations of the paper, before briefly recalling the main properties of the supercover model. In Section 3, we introduce the orientation convention that forms the basis of the definition of the standard model, before we formally define the standard model. In Section 3.3, the main properties of the standard objects are presented, especially the tunnel-freeness and the (n − 1)-connectivity. In Section 3.5, we examine the different classes of standard linear objects to see how the definition translates in practice and how the different inequalities defining the objects are established. We conclude in Section 4.
2 Preliminaries

2.1 Basic Notations
Most of the following notations correspond to those given by Cohen and Kaufman in [8] and those given by Andres in [5]. We provide only a short recall of these notions. Let Zⁿ be the subset of the nD Euclidean space Rⁿ that consists of all the integer coordinate points. A discrete (resp. Euclidean) point is an element of Zⁿ (resp. Rⁿ). A discrete (resp. Euclidean) object is a set of discrete (resp. Euclidean) points. A discrete inequality is an inequality with coefficients in R from which we retain only the integer coordinate solutions. A discrete analytical object is a discrete object defined by a finite set of discrete inequalities. An m-flat is a Euclidean affine subspace of dimension m. Let us consider a set P of m + 1 linearly independent Euclidean points P⁰, . . . , Pᵐ. We denote by Aᵐ(P) the m-flat induced by P (i.e. the m-flat containing P). We denote by Sᵐ(P) the geometrical simplex of dimension m in Rⁿ induced by P (i.e. the convex hull of P). For a geometrical simplex S = Sᵐ(P), we denote by S̄ = Aᵐ(P) the corresponding m-flat. For an n-simplex S = Sⁿ(P), we denote by E(S, Pⁱ) the half-space whose boundary is the (n−1)-flat Aⁿ⁻¹(P \ {Pⁱ}) and which contains Pⁱ. We denote by pᵢ the i-th coordinate of a point or vector p. Two discrete points p and q are k-neighbours, with 0 ≤ k ≤ n, if |pᵢ − qᵢ| ≤ 1 for 1 ≤ i ≤ n, and k ≤ n − (|p₁ − q₁| + ··· + |pₙ − qₙ|). The voxel V(p) ⊂ Rⁿ of a discrete nD point p is defined by V(p) = [p₁ − ½, p₁ + ½] × ··· × [pₙ − ½, pₙ + ½]. For a discrete object F, V(F) = ∪_{p∈F} V(p).
A k-path in a discrete object A is a sequence of discrete points, all in A, such that consecutive pairs of points are k-neighbours. A discrete object A is k-connected if there is a k-path between any two points of A. A k-component is a maximal k-connected set. Let D be a subset of a discrete object E. If E \ D is not k-connected, then D is said to be k-separating in E. Let E be a k-separating discrete object in Zⁿ such that Zⁿ \ E has exactly two k-components. A k-simple point in E is a discrete point p such that E \ {p} is still k-separating. A k-separating discrete object in Zⁿ is called k-minimal if it does not contain any k-simple point. Let us consider two objects F of dimension n and G of dimension m. The cartesian product of F and G is defined by F × G = {(f, g) | f ∈ F, g ∈ G}. The Minkowski sum of F and G is defined by F ⊕ G = {f + g | f ∈ F, g ∈ G}. We denote by σⁿ the set of all the permutations of {1, . . . , n}. Let us denote by Jₙᵐ the set of all the strictly increasing sequences of m integers between 1 and n: Jₙᵐ = {j ∈ Zᵐ | 1 ≤ j₁ < j₂ < . . . < jₘ ≤ n}. This defines a set of multi-indices. Let us consider an object F in the n-dimensional Euclidean space Rⁿ, with n > 1.
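The k-neighbour definition is a one-liner to transcribe (our code, not the paper's). As noted in the introduction, in 3D the 2-neighbours are the six face neighbours of classical 6-connectivity:

```python
def k_neighbours(p, q, k):
    """p and q are k-neighbours: every coordinate differs by at most 1,
    and k <= n - sum_i |p_i - q_i|."""
    n = len(p)
    diffs = [abs(a - b) for a, b in zip(p, q)]
    return all(d <= 1 for d in diffs) and k <= n - sum(diffs)

p = (0, 0, 0)
print(k_neighbours(p, (1, 0, 0), 2))   # True : face neighbour (6-connectivity)
print(k_neighbours(p, (1, 1, 0), 2))   # False: an edge neighbour is only a 1-neighbour
print(k_neighbours(p, (1, 1, 0), 1))   # True
print(k_neighbours(p, (1, 1, 1), 0))   # True : corner neighbour
```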
The orthogonal projection is defined by: πᵢ(F) = {(q₁, . . . , qᵢ₋₁, qᵢ₊₁, . . . , qₙ) | q ∈ F}, for 1 ≤ i ≤ n; and πⱼ(F) = (πⱼ₁ ∘ πⱼ₂ ∘ ··· ∘ πⱼₘ)(F), for j ∈ Jₙᵐ. The orthogonal extrusion is defined by: εⱼ(F) = πⱼ⁻¹(πⱼ(F)), for j ∈ Jₙᵐ. We define an axis arrangement application rⱼ, for j ∈ Jₙᵐ, by: rⱼ : Rⁿ → Rⁿ, x ↦ (x_{σⱼ(1)}, x_{σⱼ(2)}, . . . , x_{σⱼ(n)}), where the permutation σⱼ ∈ σⁿ is defined by: σⱼ(jᵢ) = i for 1 ≤ i ≤ m; otherwise σⱼ(kᵣ) = m + r, where k₁ < k₂ < . . . < kₙ₋ₘ are the indices such that kᵣ ≠ jₛ for all 1 ≤ r ≤ n − m and all 1 ≤ s ≤ m. The axis arrangement application has been designed specifically so that it verifies the two following properties: πⱼ(F) = π(1,2,...,m)(rⱼ⁻¹(F)) and εⱼ(F) = rⱼ(ε(1,2,...,m)(rⱼ⁻¹(F))), for all F in Rⁿ and j ∈ Jₙᵐ.
2.2 Recalls on Supercover
A discrete object G is a cover of a Euclidean object F if F ⊂ V(G) and ∀p ∈ G, V(p) ∩ F ≠ ∅. The supercover S(F) of a Euclidean object F is defined by S(F) = {p ∈ Zⁿ | V(p) ∩ F ≠ ∅}. S(F) is by definition a cover of F. It is easy to see that if G is a cover of F, then G ⊂ S(F). The supercover of F can be defined in different ways: S(F) = (F ⊕ B∞(½)) ∩ Zⁿ = {p ∈ Zⁿ | d∞(p, F) ≤ ½}, where B∞(r) is the ball of radius r centered on the origin for the distance d∞. This links the supercover to mathematical morphology [18]. The supercover has many properties. Let us consider two Euclidean objects F and G, and a multi-index j ∈ Jₙᵐ; then: S(F) = ∪_{α∈F} S(α), S(F × G) = S(F) × S(G), rⱼ(S(F)) = S(rⱼ(F)), πⱼ(S(F)) = S(πⱼ(F)) and εⱼ(S(F)) = S(εⱼ(F)) = rⱼ(Zᵐ × S(πⱼ(F))) [5].
Definition 1. (Bubble) A k-bubble, with 1 ≤ k ≤ n, is the supercover of a Euclidean point that has exactly k half-integer coordinates. A half-integer is a real l + ½, with l an integer. A k-bubble is formed of 2ᵏ discrete points.
Definition 2. (Bubble-free) The cover of an m-flat is said to be bubble-free if it has no k-bubbles for k > m. The cover of a simplex S is said to be bubble-free if S̄ is bubble-free.
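The characterization S(F) = {p ∈ Zⁿ | d∞(p, F) ≤ ½} is easy to evaluate for simple objects. The sketch below (our code) computes the supercover of the 2D segment from (0,0) to (2,2); because the segment crosses the half-integer points (½,½) and (3/2,3/2), the supercover contains the bubble companions (0,1), (1,0), (1,2) and (2,1) in addition to the three diagonal points.

```python
from fractions import Fraction as F

def d_inf_to_diag(p, lo=0, hi=2):
    """Exact Chebyshev distance from p to the segment {(t, t) : lo <= t <= hi}."""
    # the unconstrained minimizer of max(|x - t|, |y - t|) is t = (x + y)/2
    t = min(max(F(p[0] + p[1], 2), lo), hi)
    return max(abs(p[0] - t), abs(p[1] - t))

S = sorted((x, y) for x in range(-1, 4) for y in range(-1, 4)
           if d_inf_to_diag((x, y)) <= F(1, 2))
print(S)   # [(0, 0), (0, 1), (1, 0), (1, 1), (1, 2), (2, 1), (2, 2)]
```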
There are two types of bubbles in the supercover of an m-flat F. The k-bubbles, for k ≤ m, are discrete points that are part of all the covers of F: if we remove any of these points, the discrete object is not a cover anymore. The k-bubbles, for k > m, contain discrete points that are “simple” points. The aim of this paper is to propose discrete analytical objects that are bubble-free, obtained by removing one of the points, as illustrated by the figure.
Lemma 1. A discrete point p belongs to a k-bubble, k > m, of the supercover of an m-flat F if and only if there exists a point α ∈ F with k half-integer coordinates such that p ∈ S(α).
The proof of this lemma is obvious.
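Definition 1 can be checked directly: a point with exactly k half-integer coordinates has a supercover of 2ᵏ integer points. The sketch below (our code) confirms this on a few 2D and 3D points.

```python
from fractions import Fraction as F
from itertools import product

def supercover_of_point(alpha, radius=3):
    """S(alpha): the integer points at d_inf <= 1/2 from alpha (window search)."""
    n = len(alpha)
    return [p for p in product(range(-radius, radius + 1), repeat=n)
            if all(abs(p[i] - alpha[i]) <= F(1, 2) for i in range(n))]

for alpha in [(F(1, 4), F(0)), (F(1, 2), F(0)), (F(1, 2), F(1, 2)),
              (F(1, 2), F(1, 2), F(1, 2))]:
    k = sum(1 for c in alpha if (c - F(1, 2)) % 1 == 0)   # half-integer coordinates
    assert len(supercover_of_point(alpha)) == 2**k
print("each point with k half-integer coordinates yields a 2^k point bubble")
```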
3 Standard Model
The aim of this paper is to propose a new cover class, called the standard cover. This cover is so far only defined for linear objects, in all dimensions. This discrete analytical model has been designed to conserve most of the properties of the supercover, to be bubble-free, and to be (n − 1)-connected.
3.1 Orientation Convention
The standard model, contrary to the supercover, is not unique: it depends on the choice of an orientation convention. We need one orientation convention per dimension Rᵐ, m > 0. This choice must then remain unchanged for all the primitives handled. The choice of an orientation convention per dimension has to be coherent with the operator π. This means that we want the following property to be verified: St(πⱼ(F)) = πⱼ(St(F)). If this is not the case, we won't have correct modeling properties. In general, with arbitrary orientation conventions, there is no reason for this property to be verified, and it can sometimes be tricky to find a “good” set of orientation conventions. We propose a set of orientation conventions, denoted Oₙ and called the basic orientation conventions, that verify the above-mentioned property.
Definition 3. (Standard orientation) Let us consider a discrete analytical half-space E : C₁X₁ + ··· + CₙXₙ ≤ B and the basic orientation convention Oₙ. We say that E has a standard orientation if:
– C₁ > 0;
– or if C₁ = 0 and C₂ > 0;
– ...
– or if C₁ = ··· = Cₙ₋₁ = 0 and Cₙ > 0.
If E does not have a standard orientation, then we say that E has a supercover orientation.
We consider from now on, without loss of generality, only the basic orientation conventions, for all n > 0. All the standard primitives are defined with these basic orientation conventions. The basic orientation conventions are coherent with respect to the operators π: after πⱼ, for j ∈ Jₙᵐ, the orientation convention Oₙ in Rⁿ becomes Oₙ₋ₘ in Rⁿ⁻ᵐ.
3.2 Standard Model Definition
We have now gathered all the elements we need to define the standard discretisation model of linear objects in Rⁿ.
Definition 4. (Standard Model) Let F be a linear Euclidean object in Rⁿ whose supercover is described analytically by a finite set of inequalities Fₖ : C₁,ₖX₁ + ··· + Cₙ,ₖXₙ ≤ Bₖ. The standard model St(F) of F, for the basic orientation convention Oₙ, is the discrete object described analytically by the finite set of discrete inequalities F′ₖ obtained by substituting each inequality Fₖ by F′ₖ defined as follows:
– if Fₖ has a standard orientation, then F′ₖ : C₁,ₖX₁ + ··· + Cₙ,ₖXₙ < Bₖ;
– else F′ₖ : C₁,ₖX₁ + ··· + Cₙ,ₖXₙ ≤ Bₖ.
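The rewriting of Definition 4, together with the orientation test of Definition 3, can be sketched in a few lines (our code, with our own data layout). We apply it to the 2D line D : x1 − x2 = 0, whose supercover is −1 ≤ x1 − x2 ≤ 1; the standard model obtained is −1 ≤ x1 − x2 < 1, as discussed in Section 3.3 below.

```python
def standard_orientation(C):
    """Definition 3: the first non-zero coefficient must be positive."""
    for c in C:
        if c != 0:
            return c > 0
    return False

def standard_model(supercover_ineqs):
    """Each inequality is (C, B, strict); supercover ones have strict=False.
    Definition 4 turns <= into < exactly for the standard orientations."""
    return [(C, B, standard_orientation(C)) for C, B, _ in supercover_ineqs]

def satisfies(p, ineqs):
    for C, B, strict in ineqs:
        s = sum(c*x for c, x in zip(C, p))
        if not (s < B if strict else s <= B):
            return False
    return True

# supercover of D: x1 - x2 <= 1 and -x1 + x2 <= 1
supercover = [((1, -1), 1, False), ((-1, 1), 1, False)]
standard = standard_model(supercover)
pts = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]
removed = [p for p in pts if satisfies(p, supercover) and not satisfies(p, standard)]
print(removed)   # exactly the points with x1 - x2 = 1
```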
This definition is algorithmically easy to set up: once a discrete analytical description of an object is available, the transition from the supercover model to the standard model, and vice versa, is trivial.
3.3 Properties
We are now going to present the most important properties of standard objects. Let us consider a Euclidean linear object F of topological dimension m in Rⁿ. We have by definition St(F) ⊂ S(F); even more precisely, if p ∈ S(F) \ St(F), then d∞(p, F) = ½. A standard object is thus a supercover object from which we have removed some discrete points, all at distance ½ from the Euclidean primitive. We have St(F) = S(F) if no point with at least m + 1 half-integer coordinates belongs to the boundary of F. The differences between the supercover of F and the standard model of F are located in the k-bubbles of F, for k > m. One of the immediate consequences is that the standard model remains a cover: F ⊂ V(St(F)). It is because of this property that the standard model is also sometimes called the standard cover. The standard model retains most of the set properties of the supercover. It is easy to deduce from Definition 4 that, if F and G are two Euclidean linear objects in Rⁿ, then: St(F ∪ G) = St(F) ∪ St(G); St(F ∩ G) ⊂ St(F) ∩ St(G); F ⊂ G ⇒ St(F) ⊂ St(G); St(F × G) = St(F) × St(G); St(πⱼ(F)) = πⱼ(St(F)); St(εⱼ(F)) = εⱼ(St(F)).
Defining Discrete Objects for Polygonalization: The Standard Model
319
The first property ensures that we will be able to construct complex discrete objects out of basic elements such as simplices. These last properties are characteristic of correct orientation conventions. The properties are only verified if the orientation conventions are defined for all dimensions lower than or equal to $n$ and if they are coherent with respect to the operator $\pi$. This is the case for the basic orientation conventions $O_k$, for $k \le n$. It is important to notice that, in general, $\mathrm{St}(F) \ne \bigcup_{\alpha \in F} \mathrm{St}(\alpha)$. This property of the supercover is not conserved. We have $\mathrm{St}(F \cup G) = \mathrm{St}(F) \cup \mathrm{St}(G)$ only for a union of a finite number of objects. This comes simply from the fact that the standard model is not defined for an analytical description that has an infinite number of discrete inequalities. One simple example is given by the 2D line $D : x_1 - x_2 = 0$. The standard model of
the line is $\mathrm{St}(D) : -1 \le x_1 - x_2 < 1$, while $\bigcup_{\alpha \in D} \mathrm{St}(\alpha) = \{x \in \mathbb{Z}^2 \mid x_1 - x_2 = 0\}$. One of the main properties of the standard model concerns connectivity and tunnel-freeness:

Theorem 1 (Connectivity and tunnel-freeness). Let $F$ be a Euclidean linear object of topological dimension $m$ in $\mathbb{R}^n$. Its standard model $\mathrm{St}(F)$ is $(n-1)$-connected and tunnel-free.

The standard model is a particular case of the $k$-discretisations introduced by Brimkov, Andres and Barneva in [6,7]. It is shown that the standard model is in fact a 0-discretisation (Theorem 3 in [7]) and that 0-discretisations are $(n-1)$-connected and tunnel-free (Proposition 3 in [6] and Theorem 4 in [7]). Another property, proved in [18,7], is that the standard model minimizes the Hausdorff distance to the Euclidean object. See [18,6,7] for details.

3.4 Description of Standard Primitives
We will now examine the discrete analytical description of the different classes of standard linear primitives (half-space, point, m-flat and m-simplex) and how they can be computed. Our purpose here is to propose a discretisation scheme that can be used in practical applications. By Definition 4, every analytical description of a standard linear primitive is based on the analytical description of a standard half-space, which is the one we present first. We deduce from it the formulas for the standard point, m-flat and m-simplex in the sections that follow.

Standard Half-space. The standard half-space is given by:

Proposition 1 (Standard half-space). Let us consider a Euclidean half-space $E : \sum_{i=1}^{n} C_i X_i \le B$. The standard model $\mathrm{St}(E)$ of $E$, according to an orientation convention, is analytically described by:
320
E. Andres
– If $E$ has a standard orientation then
$$\mathrm{St}(E) = \left\{ p \in \mathbb{Z}^n \ \middle|\ \sum_{i=1}^{n} C_i p_i < B + \frac{\sum_{i=1}^{n} |C_i|}{2} \right\};$$
– else
$$\mathrm{St}(E) = \left\{ p \in \mathbb{Z}^n \ \middle|\ \sum_{i=1}^{n} C_i p_i \le B + \frac{\sum_{i=1}^{n} |C_i|}{2} \right\}.$$
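Proposition 1 translates directly into a membership test. A minimal sketch with exact rational arithmetic, assuming integer coefficients $C_i$:

```python
from fractions import Fraction

def in_standard_halfspace(p, C, B, standard_orientation):
    # Membership test for St(E), where E : sum_i C[i]*X[i] <= B.
    # p is an integer point; C has integer entries so the bound stays exact.
    lhs = sum(c * x for c, x in zip(C, p))
    bound = B + Fraction(sum(abs(c) for c in C), 2)
    return lhs < bound if standard_orientation else lhs <= bound
```

For the half-space x1 − x2 ≤ 0 with standard orientation, the discrete points satisfy x1 − x2 < 1; combined with the opposite half-space this recovers the standard line −1 ≤ x1 − x2 < 1 of the text's example.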
The proposition is an immediate extension to dimension $n$ of results on the supercover [1,3,4,5] and of Definition 4.

Standard point. The analytical description of a standard point can easily be deduced from that of the standard half-space. It is however interesting to notice that the standard discretisation of a Euclidean point is always composed of one and only one discrete point, contrary to what happens with a supercover discretisation of a Euclidean point, which can be formed of $2^k$ points, $0 \le k \le n$.

Proposition 2 (Standard point). Let us consider a Euclidean point $\alpha \in \mathbb{R}^n$ and the basic orientation convention $O_n$. The standard model $\mathrm{St}(\alpha)$ of $\alpha$ is the discrete point:
$$\mathrm{St}(\alpha) = \left( \left\lceil \alpha_1 - \tfrac{1}{2} \right\rceil, \ldots, \left\lceil \alpha_n - \tfrac{1}{2} \right\rceil \right)$$
The proof is obvious. In figure 1, the cross represents the Euclidean point. The black dot represents the corresponding discrete standard point. The square with the dotted lines represents the zone covered by the 4 inequalities corresponding to the analytical description of a standard point.
Fig. 1. Different configurations of 2D standard points.
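The formula of Proposition 2 is a coordinate-wise rounding. A small sketch (each coordinate is rounded to the nearest integer, half-integers being resolved downwards; this tie-breaking direction follows from the basic orientation convention and should be treated as an assumption of the sketch):

```python
import math

def standard_point(alpha):
    # St(alpha) = (ceil(alpha_1 - 1/2), ..., ceil(alpha_n - 1/2)):
    # ordinary rounding, with half-integer coordinates rounded down,
    # so exactly one discrete point is produced (unlike the supercover).
    return tuple(math.ceil(a - 0.5) for a in alpha)
```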
Standard m-flat. One of the consequences of the properties $\mathrm{St}(\pi_j(F)) = \pi_j(\mathrm{St}(F))$ and $\mathrm{St}(\varepsilon_j(F)) = \varepsilon_j(\mathrm{St}(F))$ is that the formulas that lead to the discrete analytical description of a standard m-flat or standard m-simplex are simple transpositions of the formulas that have been established for the supercover [5].

Proposition 3 (Standard m-flat). Let us consider an m-flat $F$ in $\mathbb{R}^n$ and the basic orientation conventions $O_k$, for all $k > 0$.
a) If $F$ is a 0-flat in $\mathbb{R}^n$, we apply Proposition 2;
b) if $F$ is an $(n-1)$-flat, we apply Proposition 1 twice;
c) else the analytical description of the standard model of $F$ is given by:
$$\mathrm{St}(F) = \bigcap_{j \in J_n^{n-1-m}} \mathrm{St}(\varepsilon_j(F)) = \bigcap_{j \in J_n^{n-1-m}} r_j\left( \mathbb{Z}^m \times \mathrm{St}(\pi_j(F)) \right)$$
We then reapply, recursively, Proposition 3 on $\mathrm{St}(\pi_j(F))$ for all $j \in J_n^{n-1-m}$. This proposition is composed of several steps corresponding to the algorithm that yields the analytical description of the standard model of an m-flat. Let us discuss step c). The formula $\mathrm{St}(F) = \bigcap_{j \in J_n^{n-1-m}} \mathrm{St}(\varepsilon_j(F))$ alone is not sufficient to describe the standard m-flat, with $0 < m < n-1$, since $\varepsilon_j(F)$ is not necessarily a hyperplane in $\mathbb{R}^n$. We might even have $F = \varepsilon_j(F)$ for some $j \in J_n^{n-1-m}$. The way around this problem is to examine $\pi_j(F)$ in $\mathbb{R}^{m+1}$. The new orientation convention for $\mathbb{R}^{m+1}$ after $\pi_j$ is $O_{m+1}$. Different cases occur:
– If $\pi_j(F)$ is a hyperplane in $\mathbb{R}^{m+1}$ then $\varepsilon_j(F)$ is a hyperplane in $\mathbb{R}^n$. We do not actually need to consider $\pi_j(F)$; we could directly use case b) on $\mathrm{St}(\varepsilon_j(F))$.
– If $\pi_j(F)$ is a point in $\mathbb{R}^{m+1}$ then we consider case a) in $\mathbb{R}^{m+1}$, with the basic orientation convention, and the formula $r_j(\mathbb{Z}^m \times \mathrm{St}(\pi_j(F)))$.
– If $\pi_j(F)$ is a $k$-flat, $0 < k < m$, in $\mathbb{R}^{m+1}$ then we consider again case c), with the basic orientation convention. We have, by definition, $\mathrm{St}(\pi_j(F)) = \bigcap_{j' \in J_{m+1}^{m-k}} \mathrm{St}(\varepsilon_{j'}(\pi_j(F)))$. We repeat the operation described in case c) for $\pi_{j'}(\pi_j(F))$ in $\mathbb{R}^{k+1}$.
We know that this process ends since each time we repeat case c) we consider a new object in a space of strictly lower dimension.

Standard Simplex. Let us finish with the formulas describing a standard simplex. These formulas are a direct transposition of the formulas obtained for the supercover [5].
Fig. 2. Supercover and Standard 3D line.
Proposition 4 (Standard Simplex). Let us consider a simplex $S = \{P^0, \ldots, P^m\}$ of dimension $m$ and the basic orientation conventions $O_k$, for all $k > 0$. The standard model of $S$ is defined by:
a) If $m = n$ then $\mathrm{St}(S) = \bigcap_{i=0}^{n} \mathrm{St}\left(E(S, P^i)\right) \cap \bigcap_{j=1}^{n} \mathrm{St}(\varepsilon_j(S))$;
b) if $m = n - 1$ then $\mathrm{St}(S) = \mathrm{St}(\langle S \rangle) \cap \bigcap_{j=1}^{n} \mathrm{St}(\varepsilon_j(S))$;
c) if $m \le n - 2$ then $\mathrm{St}(S) = \bigcap_{j \in J_n^{n-m-1}} \mathrm{St}(\varepsilon_j(S))$.
Let us just recall some notations. If $S$ is a simplex of dimension $m$, then we denote $\langle S \rangle$ the $m$-flat containing all the points defining $S$. If $S$ is a simplex of dimension $n$ in $\mathbb{R}^n$, then we denote $E(S, P^i)$ the half-space that contains $P^i$ and whose boundary is the $(n-1)$-flat containing $\{P^0, \ldots, P^{i-1}, P^{i+1}, \ldots, P^n\}$. Figure 3 shows three views of a standard 3D triangle: figure 3(a) presents a classical voxel view of the standard triangle; figure 3(b) presents the same triangle in a $K^2$-space representation; finally, figure 3(c) represents what we have called the analytical view, showing the 17 inequalities describing the standard 3D triangle.
4 Conclusion
We have defined in this paper the standard model for half-spaces, m-flats and m-simplices in dimension n. This is, to the author's best knowledge, one of the first times that discrete primitives are described analytically in dimension n. The standard model is geometrically consistent and defined analytically, and standard objects are tunnel-free and (n − 1)-connected. It seems to us that the path
towards discrete polygonalization will be much easier with standard polygons than with other notions, and this for several reasons: the standard objects are all topologically consistent. This is not the case for the discrete naïve model, for instance. Most planar recognition algorithms have been designed to recognize discrete naïve plane pieces [9,19]. However, it is not very difficult to show that if the model is geometrically consistent then 3D discrete naïve edges of polygons are not connected in general [5], and 3D naïve vertices might be composed of zero discrete points. This makes it very difficult to perform any polygon reconstruction process. Designing a polygon reconstruction algorithm is the next step towards polygonalization. Indeed, the following steps need to be accomplished in order to perform a discrete polygonalization: first, we need to decompose the boundary of the discrete object into discrete plane pieces. This part, as we have recalled in the introduction, has been realised by several different approaches in the past. Secondly, one needs to describe the plane pieces as discrete polygons. This supposes that we know what a discrete polygon is (our paper) and that we perform some edge and vertex recognition algorithm. This is still an open and somewhat difficult question. In order to facilitate the implementation and the testing of such polygonalization algorithms, we are developing a discrete modeling tool, called SpaMod (for Spatial Modeler), at the University of Poitiers (France). The standard model, as well as the supercover model, are part of the discrete object models handled by SpaMod. SpaMod is still in the preliminary stages of its development; however, the images of Figure 3 have been produced with this software.
Fig. 3. Standard triangle with (a) Voxel view (b) K2 -space view (c) Analytical view
Acknowledgments. The images of Figure 3 have been produced in SpaMod with the help of the algorithms developed by Martine Dexet.
References
1. E. Andres, C. Sibata and R. Acharya, Supercover 3D Polygon, in: Proc. 6th Int. Workshop on Discrete Geometry for Computer Imagery, Lyon (France), Lecture Notes in Computer Science, vol. 1176 (Springer, Berlin-Heidelberg, 1996) 237-242.
2. E. Andres, R. Acharya and C. Sibata, Discrete Analytical Hyperplanes, Graphical Models and Image Processing 59-5 (1997) 302-309.
3. E. Andres, Ph. Nehlig and J. Françon, Tunnel-free supercover 3D polygons and polyhedra, in: Proc. Eurographics '97, Budapest (Hungary), Computer Graphics Forum 16-3 (1997) C3-C13.
4. E. Andres, Ph. Nehlig and J. Françon, Supercover of Straight Lines, Planes and Triangles, in: Proc. 7th Int. Workshop on Discrete Geometry for Computer Imagery, Montpellier (France), Lecture Notes in Computer Science, vol. 1347 (Springer, Berlin-Heidelberg, 1997) pp. 243-253.
5. E. Andres, Modélisation analytique discrète d'objets géométriques, Habilitation (in French), Laboratoire IRCOM-SIC, University of Poitiers (France), 8 Dec. 2000.
6. V.E. Brimkov, E. Andres, R.P. Barneva, Object discretizations in high dimensions and membership recognition, in: Proc. 9th Int. Workshop on Discrete Geometry for Computer Imagery, Uppsala (Sweden), Lecture Notes in Computer Science, vol. 1953 (Springer, Berlin-Heidelberg, 2000) pp. 210-221.
7. V.E. Brimkov, E. Andres, R.P. Barneva, Object discretizations in high dimensions, accepted for publication in Pattern Recognition Letters.
8. D. Cohen and A. Kaufman, Fundamentals of Surface Voxelization, Graphical Models and Image Processing 57-6 (1995).
9. I. Debled-Rennesson, J.-P. Reveillès, A linear algorithm for digital plane recognition, in: Proc. 4th Int. Workshop on Discrete Geometry for Computer Imagery, Grenoble (France), Sept. 1994.
10. J. Françon, Discrete Combinatorial Surfaces, Graphical Models and Image Processing, vol. 57, Jan. 1995.
11. J. Françon, J.-M. Schramm, M. Tajine, Recognizing arithmetic straight lines and planes, in: Proc. 6th Int.
Workshop on Discrete Geometry for Computer Imagery, Lyon (France), Lecture Notes in Computer Science, vol. 1176, Nov. 1996, pp. 141-150.
12. J. Françon, L. Papier, Polyhedrization of the boundary of a voxel object, in: Proc. 8th Int. Workshop on Discrete Geometry for Computer Imagery, Marne-la-Vallée (France), Lecture Notes in Computer Science, vol. 1568 (Springer, Berlin-Heidelberg, 1999) 425-434.
13. Y. Gérard, Local configurations of digital hyperplanes, in: Proc. 8th Int. Workshop on Discrete Geometry for Computer Imagery, Marne-la-Vallée (France), Lecture Notes in Computer Science, vol. 1568, March 1999, pp. 365-374.
14. A. Kaufman, An algorithm for 3D scan conversion of polygons, in: Proc. Eurographics '87, Amsterdam (1987).
15. V. Kovalevsky, Digital geometry based on the topology of abstract cell complexes, in: Proc. 3rd Discrete Geometry Conference in Imagery, Strasbourg (France), 1993, pp. 259-284.
16. W. Lorensen, H. Cline, Marching Cubes: a high resolution 3D surface construction algorithm, SIGGRAPH '87, Anaheim (USA), Computer Graphics J., vol. 21, no. 4, July 1987, pp. 163-169.
17. J.-P. Reveillès, Géométrie discrète, calculs en nombres entiers et algorithmique, State Thesis (in French), Département d'Informatique, Université Louis Pasteur, Strasbourg (France), 1991.
18. M. Tajine, D. Wagner, C. Ronse, Hausdorff discretization and its comparison to other discretization schemes, in: Proc. 8th Int. Workshop on Discrete Geometry for Computer Imagery, Marne-la-Vallée (France), Lecture Notes in Computer Science, vol. 1568 (Springer, Berlin-Heidelberg, 1999) 399-410.
19. J. Vittone, J.-M. Chassery, (n − m)-cubes and Farey nets for naïve planes understanding, in: Proc. 8th Int. Workshop on Discrete Geometry for Computer Imagery, Marne-la-Vallée (France), Lecture Notes in Computer Science, vol. 1568, March 1999, pp. 76-87.
Visibility in Discrete Geometry: An Application to Discrete Geodesic Paths

David Coeurjolly

Laboratoire ERIC, Université Lumière Lyon 2
5, av. Pierre Mendès-France, 69676 Bron cedex
[email protected]

Abstract. In this article, we present a discrete definition of the classical visibility in computational geometry. We present algorithms to compute the set of pixels in a non-convex domain that are visible from a source pixel. Based on these definitions, we define discrete geodesic paths in a discrete domain with obstacles. This allows us to introduce a new geodesic metric in discrete geometry.
Introduction

In discrete geometry, many Euclidean geometric tools are redefined to take into account the specificities of the discrete grid. In this article, we propose a definition of the classical Euclidean visibility based on discrete objects. The interest is twofold: on one hand we extend discrete geometry with a new tool, and on the other hand, since this visibility allows us to define discrete geodesic paths and discrete shortest paths, we obtain a practical tool needed by many applications in medical imaging or image analysis to estimate geodesic distances in non-convex domains. The visibility definition we propose is based on classical Discrete Straight Lines (DSL for short). Many algorithms exist for the DSL recognition problem. Some of these approaches are based on chain code analysis [24], on links between the chain code and arithmetical properties of DSL [6,7], on links between the chain code and the feasible region in the dual (or parameter) space [9,15,23], and others on linear programming tools such as Fourier-Motzkin's algorithm [10]. All these algorithms present a solution either to decide if a given set of pixels is a discrete straight segment (DSS for short), or to segment a discrete curve into DSS, or both. In our case, the problem is quite different: we want to decide if there exists a DSS between two pixels in a non-convex domain. We present definitions and algorithms to compute the set of pixels which are visible from a source. Then, we define a notion of discrete geodesic path and a metric associated to such paths based on this visibility definition. We also propose an efficient implementation of the geodesic distance labelling from a source pixel.

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 326–337, 2002. © Springer-Verlag Berlin Heidelberg 2002
1 Visibility

1.1 Notions and Definitions
Let us denote $D$ a discrete domain, that is, an $n$-connected set of pixels. We denote $\bar{D}$ the complement of $D$; we call this set indifferently the background or the set of obstacles. In the following, we consider $D$ an 8-connected domain. In this domain, we define the discrete visibility by analogy to the continuous definition.

Definition 1 (Discrete Visibility). Let $s$ and $t$ be two pixels in $D$. We define the discrete visibility as a binary relationship $v : D \to D$ such that we have $v(s, t)$ if and only if there exists an 8-connected discrete straight segment from $s$ to $t$ whose pixels belong to $D$.

Before introducing the visibility problem in non-convex domains, we recall classical parameter space characterizations of DSL [15,16,23]. If we consider a Euclidean straight line $y = \alpha x + \beta$, the digitization of this line using the Grid Intersect Quantization (see [11] for a survey on digitization schemes) is the set of discrete points such that:
$$\Delta(\alpha, \beta) = \{(x, y) \in \mathbb{Z}^2 \mid -\tfrac{1}{2} \le \alpha x + \beta - y < \tfrac{1}{2}\}$$
Note that all classical digitization schemes (GIQ, Object Boundary Quantization or Background Boundary Quantization) can be used, and such a choice will not interfere with our algorithms. We choose the GIQ scheme because of its symmetry properties. In the parameter space of the previous definition, we can describe the set of Euclidean straight lines whose digitization contains a pixel $p(x, y)$:
$$S_p = \{(\alpha, \beta) \in \mathbb{R}^2 \mid -\tfrac{1}{2} + y \le \alpha x + \beta < \tfrac{1}{2} + y\}$$
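These characterizations can be evaluated exactly. A minimal sketch of the GIQ membership test with rational arithmetic:

```python
from fractions import Fraction

def giq_contains(alpha, beta, pixel):
    # (x, y) is in Delta(alpha, beta)  iff  -1/2 <= alpha*x + beta - y < 1/2.
    # Using Fraction keeps the half-open boundary test exact.
    x, y = pixel
    r = alpha * x + beta - y
    return Fraction(-1, 2) <= r < Fraction(1, 2)
```

For instance, the line y = (2/3)x passes the test for the pixels (1,1) and (3,2) but not for (1,0).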
A pixel in $D$ defines a strip in the $(\alpha, \beta)$-space delimited by the two lines $L_1 : \alpha x + \beta - y \ge -\tfrac{1}{2}$ and $L_2 : \alpha x + \beta - y < \tfrac{1}{2}$. If we want to know if a set of pixels belongs to a DSL, a classical way is to compute the intersection in the $(\alpha, \beta)$-space of the strips associated to each pixel. If the feasible domain is not empty, it describes all the DSL containing the pixels (cf. figure 1 for an example). In the following, we define the domain $S(s, t)$ associated to pixels $s$ and $t$ as the intersection $S_s \cap S_t$. In order to compute the visibility in non-convex domains, the main idea is to check in the dual space that the domains associated to obstacle pixels do not hide the current pixel $t$ from the source $s$.

1.2 Visibility Domain
Let o denote an obstacle pixel. If we want to describe the set of Euclidean straight lines whose digitizations do not contain o, we also introduce a strip in
Fig. 1. An example of S(s, t) domain with pixels (0,0) and (3,2); the S(s, t) domain in the parameter space is defined by the inequalities {β < 1/2, β ≥ −1/2, β < −3α + 5/2, β ≥ −3α + 3/2}.
the parameter space such that the inequalities are reversed. Hence, an obstacle $o$ is associated to the constraints $\bar{L}_1(o) : \alpha x + \beta - y < -\tfrac{1}{2}$ and $\bar{L}_2(o) : \alpha x + \beta - y \ge \tfrac{1}{2}$. If we want to know whether this obstacle blocks the visibility from $s$ to $t$, we just have to compute in the $(\alpha, \beta)$-space $L_1(s) \cap L_2(s) \cap L_1(t) \cap L_2(t) \cap (\bar{L}_1(o) \cup \bar{L}_2(o))$. If this intersection is empty then $t$ is not visible from $s$. More generally, consider a non-convex domain $D$ and a set of obstacle pixels $O = \{o_i\}_{i=1..n}$, the restriction of $\bar{D}$ to the points whose abscissas lie between the abscissa of $s$ and the abscissa of $t$ (all other points can be omitted for the visibility problem). We have the lemma:

Lemma 1. Let $s$ be the source and $t$ a pixel in $D$; $t$ is visible from $s$ in $D$ if and only if:
$$S(s, t) \cap \bigcap_{i=1..n} \left( \bar{L}_1(o_i) \cup \bar{L}_2(o_i) \right) \ne \emptyset$$
The proof of this lemma can be deduced from the visibility definition and the construction of $S$. Obviously, we do not have to consider all obstacle pixels. We first define:

Definition 2. A pixel $o$ in $O$ is called a "blocking pixel" for the visibility problem $v(s, t)$ if
$$S(s, t) \cap \left( \bar{L}_1(o) \cup \bar{L}_2(o) \right) \ne S(s, t)$$
and the abscissa of $o$ is between the abscissas of $s$ and $t$.

These blocking pixels are those which interfere in the visibility problem. Non-blocking pixels in $O$ can be removed from the $v(s, t)$ test. We can characterize the shape of the domain when a blocking pixel modifies it:

Lemma 2. If $o$ is a blocking pixel for the $v(s, t)$ problem, then the domain $S(s, t) \cap (\bar{L}_1(o) \cup \bar{L}_2(o))$ is either empty or has only one connected component.

Proof: we consider the domain $S(s, t)$ and a blocking pixel $o$ such that $o$, $s$ and $t$ are not collinear (in that case, the domain is empty). We show that either $\bar{L}_1(o)$ or $\bar{L}_2(o)$ crosses the domain. We have different cases (cf. figure 2-a) that induce
two components, but the left and the middle cases are excluded because they imply that the abscissa of $o$, denoted $x_o$, is not between $x_s$ and $x_t$, and thus $o$ is not a blocking pixel according to Definition 2. As a matter of fact, if $x_o$ is between $x_s$ and $x_t$, then the slope of $\bar{L}_1(o)$ is between the slope of $L_1(s)$ and the slope of $L_1(t)$. By construction of the strips, the vertical distance between $L_1$ and $L_2$ is equal to 1. Hence, in figure 2-b, the intersection in $a$ of $\bar{L}_1$ with the vertical line going through $b$ implies that $b'$ must be outside the interval $[a, b]$ on the vertical line. Since the slope of $\bar{L}_2$ is greater than the slope of the edge $cb$, $\bar{L}_2$ cannot cross the domain. The same idea applies when $\bar{L}_2$ crosses the domain. Hence, all cases of figure 2-a are impossible, and thus $S(s, t) \cap (\bar{L}_1(o) \cup \bar{L}_2(o))$ has only one connected component. ∎
Fig. 2. a) Different cases that induce two connected components; the left case and the middle case are impossible by definition of blocking points, and the third case must be taken into account. b) Illustration of the proof of lemma 2.
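Since $S(s,t)$ is the intersection of two non-parallel strips ($x_s \ne x_t$), it is a parallelogram with four easily enumerated vertices, which yields a simple blocking-pixel test: the strip of $o$ meets the parallelogram iff the vertex values of $\alpha x_o + \beta - y_o$ straddle $[-1/2, 1/2)$. A hedged sketch (the half-open boundaries of the strips are ignored here):

```python
from fractions import Fraction

def domain_vertices(s, t):
    # S(s,t) = S_s ∩ S_t is a parallelogram (xs != xt); its vertices are the
    # pairwise intersections of the boundary lines  alpha*x + beta = y ± 1/2.
    xs, ys = s
    xt, yt = t
    verts = []
    for bs in (Fraction(-1, 2), Fraction(1, 2)):
        for bt in (Fraction(-1, 2), Fraction(1, 2)):
            alpha = (Fraction(yt) + bt - ys - bs) / (xt - xs)
            beta = ys + bs - alpha * xs
            verts.append((alpha, beta))
    return verts

def is_blocking(s, t, o):
    # o is blocking iff its abscissa lies between those of s and t, and its
    # strip  -1/2 <= alpha*xo + beta - yo < 1/2  meets the parallelogram.
    if not (min(s[0], t[0]) < o[0] < max(s[0], t[0])):
        return False
    vals = [a * o[0] + b - o[1] for a, b in domain_vertices(s, t)]
    return min(vals) < Fraction(1, 2) and max(vals) >= Fraction(-1, 2)
```

On the figure 1 example (s = (0,0), t = (3,2)), the pixel (1,1) is blocking while (2,0) lies entirely in $\bar{L}_2$ and is not.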
According to this lemma, if a straight line $\bar{L}_1$ (resp. $\bar{L}_2$) of an obstacle crosses the domain, the other constraint $\bar{L}_2$ (resp. $\bar{L}_1$) can be removed for the visibility problem. Geometrically, an obstacle such that $\bar{L}_1$ crosses the domain is above the Euclidean segment $[s, t]$, and an obstacle such that $\bar{L}_2$ crosses the $S(s, t)$ domain is beneath the segment $[s, t]$ (cf. figure 3 for an example). We denote $U(s, t)$ the set of blocking pixels above $[s, t]$ and $L(s, t)$ the set of blocking pixels beneath the segment $[s, t]$.

1.3 Visibility Algorithm
In this section, we present algorithms that compute the equivalence class associated to the visibility binary relationship of a source s. We propose two algorithms: the first one computes the equivalence class with the visibility definition given above, and the second one introduces a new visibility definition that is a restriction of the previous one, but whose algorithmic complexity justifies this new version of the visibility. The first algorithm we propose is a really straightforward computation of the visibility. Indeed, we can use classical linear programming tools to solve the
Fig. 3. Visibility domain associated to a set of blocking pixels. The black feasible region in the parameter space is the visibility domain associated to the grey pixels, constrained by the black blocking pixels.
linear inequality system given by the obstacle constraints. Such tools are for example the Fourier-Motzkin system simplification algorithm [10], the Simplex algorithm, or Megiddo's algorithm [17]. Note that the complexity of Megiddo's algorithm is linear in the number of inequalities, but the problem comes with the dimension of the system. In our case, the constraint system is in dimension 2 and thus the implementation of Megiddo's algorithm is tractable, with a complexity bounded by 4n where n is the number of inequalities. We consider a source s and a domain D. We label all pixels in D using a breadth-first tracking of the domain, using for example the 8-adjacency. During the propagation process, if we meet an obstacle we store its coordinates in a list O. At each pixel visited in the breadth-first tracking, we extract from O the set of blocking pixels and we solve the visibility problem using Megiddo's algorithm.

Straightforward visibility algorithm
Input: a domain D and a source s
Output: the set of pixels which satisfy v(s, t)
Let Q be a FIFO queue
Let O be the obstacle list
Append_last(s, Q)
While Q is not empty
    t := remove_first(Q)
    For each 8-neighbor n of t not labelled closed or visible
        If n is an obstacle then
            Append(n, O)
        else
            Let B be the set of blocking points of O according to the pixel n
            Compute the linear inequality system S with the constraints L̄1 or L̄2 of each point of B
            If Megiddo(S) ≠ ∅ then
                Label n as visible
                Append_last(n, Q)
            else
                Label n as closed    // n is not visible and the point is closed
    endFor
endWhile
If we denote n the number of pixels in D and m the number of obstacles in O, each step in the while loop has a complexity bounded by O(m). Hence, the global cost of this algorithm is O(nm). Due to the difficulty of providing an efficient data structure to propagate blocking points from a point to its neighbors, this algorithm has a quite high complexity and is not efficient for the geodesic computation. Thus, we propose a new definition of the discrete visibility which is a weak version of the definition presented above, but which leads to an efficient algorithm for the visibility computation and the discrete geodesic problem.

Definition 3 (Weak Discrete Visibility). Let $s$ and $t$ be two pixels in $D$. We define the weak discrete visibility as a binary relationship $v^* : D \to D$ such that we have $v^*(s, t)$ if and only if there exists a Euclidean straight line going through $s$ whose digitization contains $t$ and no pixels of $\bar{D}$ between $s$ and $t$.
Note that the polar sort can be done with integer arithmetic. We can present the algorithm associated to this definition: Weak visibility algorithm Input: a domain D and a source s Output: the set of pixels which satisfy v(s, t) Let Q be a FIFO queue Let O be the obstacle list sorted in a polar trigonometric order of center s Append last(s,Q) While Q is not empty t:=remove first(Q) For each 8-neighbor n of t not labelled closed or visible If n is an obstacle then Append sort(n,O) else Let (u, l) be the localization of n in the sorted set O ¯ 1 (u) ∩ L ¯ 2 (l) If S ∗ (s, t) ∩ L = ∅ then Label n as visible Append last(n,Q)
            else
                Label n as closed    // n is not visible and the point is closed
    endFor
endWhile
The visibility test has a constant time cost and, with this data structure, both localization and obstacle insertion have a cost in O(log(m)). Thus, the global cost of this algorithm is O(n log(m)). Moreover, the cone (s, u, l) associated to a point t can be propagated for both localization and insertion to reduce the expected complexity of the algorithm, which makes this labelling very efficient.
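Under the weak definition, only the slope $\alpha$ of the lines $y = y_s + \alpha(x - x_s)$ through $s$ matters, so $S^*(s, t)$ is a half-open interval of slopes and each obstacle removes such an interval. A minimal sketch with exact rationals; it assumes $x_s < x_o < x_t$ for the obstacles considered, and uses brute-force interval subtraction rather than the paper's sorted structure:

```python
from fractions import Fraction

def slope_interval(s, p):
    # Slopes a such that the GIQ digitization of y = ys + a*(x - xs)
    # contains p:  -1/2 <= a*(xp - xs) + ys - yp < 1/2   (xp > xs assumed).
    dx = p[0] - s[0]
    c = s[1] - p[1]
    return ((Fraction(-1, 2) - c) / dx, (Fraction(1, 2) - c) / dx)  # [lo, hi)

def subtract(intervals, hole):
    # Remove the half-open interval `hole` from a list of half-open intervals.
    a, b = hole
    out = []
    for lo, hi in intervals:
        if b <= lo or hi <= a:          # no overlap
            out.append((lo, hi))
            continue
        if lo < a:                      # left remainder
            out.append((lo, a))
        if b < hi:                      # right remainder
            out.append((b, hi))
    return out

def weakly_visible(s, t, obstacles):
    feasible = [slope_interval(s, t)]
    for o in obstacles:
        if s[0] < o[0] < t[0]:          # only obstacles between the abscissas matter
            feasible = subtract(feasible, slope_interval(s, o))
    return any(lo < hi for lo, hi in feasible)
```

For s = (0,0) and t = (3,2), the feasible slopes form [1/2, 5/6); an obstacle at (1,1) removes [1/2, 3/2) and blocks t, while an obstacle at (2,1) removes only [1/4, 3/4) and leaves [3/4, 5/6).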
2 Discrete Shortest Path and Discrete Geodesic Metric
Based on these definitions of the visibility, we can define discrete shortest paths and discrete geodesic paths.

2.1 Definition and Previous Works
We first recall some classical facts on discrete metrics that approximate the Euclidean one. All discrete metrics are based on:
– either a mask approach, where elementary steps in the neighborhood graph are weighted in order to approximate the Euclidean distance of these steps. For example, elementary steps of the Manhattan distance (or $d_4$) are horizontal or vertical moves weighted to 1; the chess-board distance (or $d_8$) also considers diagonal moves weighted to 1. More generally, chamfer metrics first list elementary moves and then associate weights to each move (see [1,22] for initial works);
– or a vector approach that leads to the exact Euclidean metric, where the displacement vector $(dx, dy)$ is stored and then the distance can be exactly computed as $d = \sqrt{dx^2 + dy^2}$; the main goal is to design distance map algorithms that only deal with the integer displacements [5,20,4].

For the discrete geodesic problem, the mask based approach leads to efficient algorithms because a weighted graph can be computed from the metric and the adjacency graph of the domain D, and thus classical shortest path algorithms can be applied, such as Dijkstra's graph search algorithm [19]. In the following, we use the data structure and the implementation of the geodesic mask given by Verwer et al. [21]. The authors describe a bucket sorting implementation of Dijkstra's graph search algorithm which leads to a uniform cost algorithm. In [4], Cuisenaire proposes a region growing Euclidean distance transform using the same structures, but the buckets are indexed by the square distance $dx^2 + dy^2$. For all the visible pixels from the source, this algorithm provides a
good estimation of the Euclidean distance metric. This algorithm is not error-free, but we will discuss this point later. In [18,2], Moreau presents an algorithm for the geodesic metric problem based on a discrete arc chain code propagation scheme, but some operations needed to maintain the data structure are expensive. In our case, we use a uniform cost data structure from which we can extract arc chain codes, but the visibility property is propagated instead of iso-metric points.

2.2 Algorithm
The main idea of our discrete geodesic algorithm is the following: for all pixels which are visible from the source, we do not have any problem computing their distance because there exists a discrete straight line between the source and these points; thus, we can compute the displacement vector and return $\sqrt{dx^2 + dy^2}$. If a pixel $p$ is not visible, we start a new visibility computation such that $p$ is a new source, and each pixel $t$ such that $v(p, t)$ will be labelled by the distance from $p$ to the source plus the distance between $p$ and $t$. More formally, we have the following purely discrete definition of a geodesic path in $D$:

Definition 4 (Discrete Geodesic Path). A discrete geodesic path between a point $t$ and a source $s$ is a sequence of pixels in $D$ denoted $\{p_i\}_{i=0..n+1}$ with $p_0 = s$ and $p_{n+1} = t$ such that:
iff
k = {−1, 0, 1}
with i = 1..n
And such that the geodesic distance dgeodes (s, t) is minimal. The geodesic distance is defined by: dgeodes (s, t) =
n
deuc (pi , pi+1 )
i=0
where deuc (a, b) denotes the Euclidean distance between pixels a and b. The discrete geodesic path is thus a 8-connected curve that is segmented into DSS by construction. The metric we associate to this curve have been intensively studied and both the stability and multigrid convergence have been proved [14, 13,3]. In order to design an efficient algorithm based on the Verwer’s bucket structure [21], we consider rounded geodesic distance to index the buckets: a pixel p belongs to the bucket d if and only if: dgeodes (s, p) = d This estimated metric is still consistent for the Verwer’s algorithm (A∗ algorithm) because it satisfies the triangular inequality [18,2]: for a, b, c ∈ R
a + b ≥ c ⇒ a + b ≥ c
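The geodesic distance of Definition 4 and the rounded bucket index can be sketched directly (the ceiling-based bucket indexing follows the rounding discussed above and should be treated as an assumption of this sketch):

```python
import math

def geodesic_distance(path):
    # d_geodes(s, t): sum of the Euclidean lengths of the DSS steps p_i -> p_{i+1}
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def bucket_index(d):
    # a pixel with rounded geodesic distance d is stored in bucket ceil(d)
    return math.ceil(d)
```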
For computational efficiency, we implement the $v^*$ visibility. Hence, to each pixel $p$ in bucket $d$, we associate a data structure that contains: its coordinates, the current source pixel $p_i$ such that $v(p_i, p)$, and the distance $d_{geodes}(s, p_i)$. We also have an obstacle data structure associated to each new source. Each obstacle list contains the set of obstacles, sorted by polar angles, met during the visibility propagation associated to each source. We can now present the discrete geodesic algorithm. Note that some steps of this pseudo-code are not detailed for the sake of clarity.

Discrete Geodesic Algorithm
Input: a domain D, a source s and a goal g
Output: the geodesic distance for each pixel of D
Let Bucket[i] be an array of FIFO queues
Let O[i] be an array of double-linked lists of obstacles
Let d denote the current bucket (d := 0)
Append_last(s, Bucket[d])
While there are pixels left in the buckets
    If the bucket d is empty then increment(d)
    t := remove_first(Bucket[d])
    For each 8-neighbor n of t not labelled closed or visible
        If n is an obstacle then
            Add n to the obstacle list associated to the source of t
        else
            Let (u, l) be the localization of n in the sorted set O[i] associated to the current source
            If n is visible then
                Label n as visible
                Compute the geodesic distance d' of n
                Append_last(n, Bucket[d']) if d' ≥ d
            else
                Label n as closed
                Initialize the new source n, whose obstacle list is empty
                Compute the geodesic distance d' of n
                Append_last(n, Bucket[d']) if d' ≥ d
    endFor
endWhile
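The pseudo-code above propagates visibility through the buckets; as a simpler self-contained illustration of the Verwer-style bucket structure (not the paper's algorithm), here is a chamfer 3-4 variant:

```python
def chamfer_geodesic(domain, source):
    # Bucket-sorted Dijkstra (Verwer-style) with 3-4 chamfer weights.
    # domain: set of pixel tuples; returns {pixel: chamfer geodesic distance}.
    moves = [((1, 0), 3), ((-1, 0), 3), ((0, 1), 3), ((0, -1), 3),
             ((1, 1), 4), ((1, -1), 4), ((-1, 1), 4), ((-1, -1), 4)]
    dist = {source: 0}
    buckets = [[source]]        # buckets[d] holds pixels with tentative distance d
    d = 0
    while d < len(buckets):
        for p in buckets[d]:
            if dist[p] != d:    # stale entry: already improved by a shorter path
                continue
            for (dx, dy), w in moves:
                q = (p[0] + dx, p[1] + dy)
                if q in domain and dist.get(q, d + w + 1) > d + w:
                    dist[q] = d + w
                    while len(buckets) <= d + w:
                        buckets.append([])
                    buckets[d + w].append(q)
        d += 1
    return dist
```

Because the weights are small integers, the bucket array replaces a priority queue and each pixel is popped in non-decreasing distance order, which is the uniform cost property used in the paper.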
3 Experiments and Discussions
In our experiments, we compute the geodesic distance labelling of a binary image according to the coordinates of a source. In Figure 4, we present the distance labelling with three metrics, d4, d8, and dgeodes, in various domains. Geodesic distances are represented using a circular gray scale map in order to visualize the wave front propagation. In Figure 5, instead of labelling the pixels according to their distance, pixels with the same color belong to the same equivalence class for the visibility problem. These figures can be interpreted as the minimum number of guards needed to control a room, together with the visibility associated to each guard (the first guard is given here). In Figure 6, we present the discrete geodesic metric on a blood vessel network; the domain is computed from a segmented angiography image. Given this geodesic distance algorithm, we would naturally like to apply it to compute the discrete Voronoi diagram or the Euclidean distance
Visibility in Discrete Geometry: An Application to Discrete Geodesic Paths
335
Fig. 4. From the left column to the right column: the discrete domains and the source point (isolated white pixels), the geodesic labelling using d8 , the geodesic labelling using d4 , the geodesic labelling using dgeodes .
Fig. 5. Global visibility graph: pixels with the same color are in the same visibility equivalence class; the source points of the domains are the same as in Figure 4.
Fig. 6. Application of the geodesic labelling in medical imaging: left, an angiography image; middle, the binary image obtained by segmenting the blood vessels; right, the geodesic labelling.
transform by simply considering multiple sources. However, since this algorithm uses a local propagation scheme (like Cuisenaire's algorithm [4]), the classical errors of Danielsson's algorithm are not avoided in this approach. Hence, this algorithm provides a solution to these problems, but errors may occur.
4 Conclusion
In this article, we have presented a discrete definition of the visibility notion of classical computational geometry. This definition is based on well-known discrete objects (DSS) and is computed only with integers. Based on this definition, we have presented several algorithms: deciding whether there exists a DSS between two pixels costs O(m), linear in the number of obstacle pixels; labelling all pixels of a domain visible from a source costs O(nm). Using the weak visibility definition, we reduce these complexities to O(log(m)) and O(n log(m)) respectively. We have also presented a definition of discrete geodesic paths and an algorithm that computes the geodesic distance of each point in the domain according to a source. This article also raises open problems: is it possible to find an efficient data structure for the straightforward visibility algorithm? How can this approach be generalized to 3D domains and to discrete surfaces? For this last problem, solutions exist in mask-based approaches [12,8]; for the proposed method, discrete straight lines in 3D are well studied [3], and thus a similar visibility algorithm is expected.
References

1. G. Borgefors. Distance transformations in digital images. Computer Vision, Graphics, and Image Processing, 34(3):344–371, June 1986.
2. J.P. Braquelaire and P. Moreau. Error free construction of generalized euclidean distance maps and generalized discrete Voronoï diagrams. Technical report, Université Bordeaux, Laboratoire LaBRI, 1994.
3. D. Coeurjolly, I. Debled-Rennesson, and O. Teytaud. Segmentation and length estimation of 3d discrete curves. In Digital and Image Geometry, Springer Lecture Notes in Computer Science, 2001. To appear.
4. O. Cuisenaire. Distance Transformations: Fast Algorithms and Applications to Medical Image Processing. PhD thesis, Université Catholique de Louvain, October 1999.
5. P.E. Danielsson. Euclidean distance mapping. Computer Graphics and Image Processing, 14:227–248, 1980.
6. I. Debled-Rennesson. Etude et reconnaissance des droites et plans discrets. PhD thesis, Université Louis Pasteur, Strasbourg, 1995.
7. I. Debled-Rennesson and J.P. Reveillès. A linear algorithm for segmentation of digital curves. International Journal of Pattern Recognition and Artificial Intelligence, 9:635–662, 1995.
8. G. Sanniti di Baja and S. Svensson. Detecting centres of maximal discs. In Discrete Geometry for Computer Imagery, pages 443–452, 2000.
9. L. Dorst and A.W.M. Smeulders. Decomposition of discrete curves into piecewise straight segments in linear time. In Contemporary Mathematics, volume 119, 1991.
10. J. Françon, J.M. Schramm, and M. Tajine. Recognizing arithmetic straight lines and planes. In Discrete Geometry for Computer Imagery, 1996.
11. A. Jonas and N. Kiryati. Digital representation schemes for 3d curves. Pattern Recognition, 30(11):1803–1816, 1997.
12. N. Kiryati and G. Székely. Estimating shortest paths and minimal distances on digitized three-dimensional surfaces. Pattern Recognition, 26(11):1623–1637, 1993.
13. R. Klette and J. Zunic. Convergence of calculated features in image analysis. Technical Report CITR-TR-52, University of Auckland, 1999.
14. V. Kovalevsky and S. Fuchs. Theoretical and experimental analysis of the accuracy of perimeter estimates. In Robust Computer Vision, pages 218–242, 1992.
15. M. Lindenbaum and A. Bruckstein. On recursive, O(n) partitioning of a digitized curve into digital straight segments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9):949–953, September 1993.
16. M.D. McIlroy. A note on discrete representation of lines. AT&T Technical Journal, 64(2):481–490, February 1985.
17. N. Megiddo. Linear programming in linear time when the dimension is fixed. Journal of the ACM, 31(1):114–127, January 1984.
18. P. Moreau. Modélisation et génération de dégradés dans le plan discret. PhD thesis, Université Bordeaux I, 1995.
19. J. Piper and E. Granum. Computing distance transformations in convex and nonconvex domains. Pattern Recognition, 20:599–615, 1987.
20. I. Ragnemalm. Contour processing distance transforms, pages 204–211. World Scientific, 1990.
21. B.J.H. Verwer, P.W. Verbeek, and S.T. Dekker. An efficient uniform cost algorithm applied to distance transforms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(4):425–429, April 1989.
22. B.J.H. Verwer. Local distances for distance transformations in two and three dimensions. Pattern Recognition Letters, 12:671–682, November 1991.
23. J. Vittone and J.M. Chassery. Recognition of digital naive planes and polyhedization. In Discrete Geometry for Computer Imagery, number 1953 in Lecture Notes in Computer Science, pages 296–307. Springer, 2000.
24. L.D. Wu. On the chain code of a line. IEEE Transactions on Pattern Analysis and Machine Intelligence, 4:347–353, 1982.
Multi-scale Discrete Surfaces

Jasmine Burguet and Rémy Malgouyres
LLAIC1 - IUT Département Informatique
BP 86, 63172 Aubière Cedex
{Burguet,Remy.Malgouyres}@llaic.u-clermont1.fr
Abstract. In this article, we first propose a method to discretize a surface represented by a polyhedron. We then define a data structure used to work on such a discrete surface, which allows us to consider multi-scale discrete surfaces. Next, we explain how to perform boolean set operations easily and quickly on this data structure. Finally, we present a method to reconstruct the whole surface and we display some results obtained from the boolean set operations.
Keywords: discretization, multi-scale, discrete surface, boolean set operations.
1 Introduction
In this article, we work on the surfaces of objects, instead of working directly on the volumes; indeed, the data sizes are then generally smaller. We explain the contexts in Geometric Modeling in which our work makes sense ([4], [10]). In the field of Constructive Solid Geometry (CSG), the principle is to construct complicated objects from elementary ones (like spheres or cubes), using boolean set operations such as union, intersection or set difference. These complicated objects are not explicitly encoded, but are represented by trees whose nodes are boolean set operations and whose leaves are elementary objects. There is no method to display CSG objects that would allow real-time navigation programs; but such programs (based on OpenGL for example) are available for surfaces represented by polyhedra. So, it could be interesting to compute explicitly the surfaces of CSG objects. In the field of Computer Aided Design (CAD) (see [5]), the aim is to construct real objects automatically with machines, and it could also be useful to use a representation of the surfaces of these objects. The algorithms used to compute set operations on continuous surfaces encounter problems in some cases, for example when the considered surfaces are tangent. Our idea is to introduce a discrete step, since discrete surfaces are naturally adapted to boolean set operations. In [1], a complete method to discretize polyhedra, to compute the boolean set operations on the obtained discrete surfaces and to polyhedrize them is described. But a multi-scale definition of the discrete surfaces is necessary: for example, if we want to compute the union of a small nail and a wall, it is interesting to be able to represent

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 338–349, 2002.
© Springer-Verlag Berlin Heidelberg 2002
these two objects at different scales: a small one for the nail and a bigger one for the wall. The notion of multi-scale discrete surface is a major original feature of our work. The main purpose of this article is first to propose a quick method to obtain a discrete multi-scale approximation of a surface made of polyhedra, then to define a data structure to represent such discrete surfaces, whatever their size and discretization level, and finally to compute the boolean set operations on such data structures. The discretization process and the resulting data structure are presented in Section 3. The discretization method is derived from the z-buffer algorithm (see [3]). To represent a discrete surface, we use the notion of quad-trees, briefly described in Section 2 (see [2], [9]), at different levels. Using the structure we define, we know the surfels orthogonal to a given direction, one of the axes of the coordinate space. In Section 4, we explain the method to compute set operations on the discrete surfaces, and some results are proposed in Section 4.4. Finally, from our representation, we have to construct the missing surfels, i.e. the lateral surfels; the method used is described in Section 5.
2 Basic Notions

2.1 Classical Notion of Discrete Surface
A voxel v is a point of Z3. We can represent such a voxel by a cube centered on its coordinates. A discrete 3D object is a set of voxels. We can define an adjacency relation on voxels: two voxels are said to be 6-adjacent if they share a common face. Let us define our discrete surfaces. First, a surfel is a pair of 6-adjacent voxels; we can see a surfel as a square shared by two 6-adjacent voxels. Then, the surface of a given object O is the set of surfels {(v1, v2) | v1 ∈ O, v2 ∉ O}. We want the discrete surfaces to have the Jordan property. Indeed, this property allows us to define an object from its surface, so that the surface separates an interior (the object) from an exterior (its complement). In [6] and [7], the properties that the elements composing the object and those of its complement have to satisfy for the Jordan property to hold are exposed. In [8], G.T. Herman deals with the required properties of the elements of the surface of the object. In the following, we work on surfaces without considering the object itself. Furthermore, the surfels composing a given surface will not always have the same width, so we will use a tool adapted to our multi-scale surfaces: quad-trees.
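The surface definition above translates directly into a few lines of Python. This is a minimal sketch; the tuple-based voxel encoding is our own choice for illustration, not the paper's.

```python
def surface(obj):
    # Surfels of a discrete object: pairs (v1, v2) of 6-adjacent
    # voxels with v1 inside the object and v2 outside (Sect. 2.1).
    # obj: set of (x, y, z) integer triples.
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    surfels = set()
    for v1 in obj:
        for dx, dy, dz in neighbors:
            v2 = (v1[0] + dx, v1[1] + dy, v1[2] + dz)
            if v2 not in obj:
                surfels.add((v1, v2))   # a face of the boundary
    return surfels
```

For instance, a single voxel yields its 6 faces, and two 6-adjacent voxels yield 10 surfels, since the two shared faces are interior.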
2.2 Quad-Tree
The quad-trees ([2]) were originally used to represent binary images; the method is based on successive subdivisions. If an image I is not entirely made of 0's or entirely made of 1's, we subdivide I into four quadrants (squares), and so on until each quadrant is composed only of 0's or only of 1's. We represent the successive subdivisions by a quaternary tree. The root is the whole image, each node
Fig. 1. Example of the quad-tree of a binary image.
Fig. 2. The bounding box and the coordinate space.
represents a quadrant of the 2D image, and the leaves are 1 or 0, depending on the values of the pixels composing the quadrant (see Figure 1). In the following, we will always use the same ordering of the children of a node. In our case, we will use quad-trees to represent the 3D space according to one of the coordinate axes. This choice is motivated by the multi-scale character of our discrete surfaces. Moreover, the structures used to represent two surfaces with different sizes will be the same.
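The recursive subdivision of Figure 1 can be sketched as follows; a hedged illustration where a quad-tree is encoded as a nested list (a 0/1 leaf, or a list of the four children in a fixed NW, NE, SW, SE order — an encoding of our own choosing).

```python
def build_quadtree(image):
    # Recursively build the quad-tree of a square binary image whose
    # side is a power of 2. A uniform quadrant becomes a 0/1 leaf;
    # otherwise we subdivide into four quadrants and recurse.
    flat = [v for row in image for v in row]
    if all(v == flat[0] for v in flat):
        return flat[0]                         # uniform quadrant -> leaf
    h = len(image) // 2
    quads = [[row[:h] for row in image[:h]],   # NW
             [row[h:] for row in image[:h]],   # NE
             [row[:h] for row in image[h:]],   # SW
             [row[h:] for row in image[h:]]]   # SE
    return [build_quadtree(q) for q in quads]
```

For a 4×4 image whose NW quadrant is full and whose only other set pixel is the bottom-right corner, the tree has one leaf per uniform quadrant and one subdivided SE node.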
3 Discretization and Definition of the Surface Data Structures
After the discretization of a polyhedron, the result will be a discrete surface composed of surfels of the same width, as defined in Section 2.1. In the following, we denote by Oxy the plane containing the origin O of the coordinate space and the axes x and y. First, we compute a bounding box B which contains the whole initial polyhedron. In the following, we assume that the width of the box is a power of 2, denoted by CB. Furthermore, the origin of the coordinate space is a vertex of the bounding box and the axes are aligned with the edges of the box (see Figure 2). We also assume that the width of the surfels composing our discrete surface is a power of 2, ranging from 1 to CB. We consider a surface represented by a closed polyhedron P. We want to compute a discrete approximation of P, in other words a set of surfels which is close to P. Let us denote by wP the desired width of each surfel after the discretization of P. The idea is the following: first we build a 2-dimensional array A of lists of floating point numbers. The dimensions of A are (CB/wP) × (CB/wP); each cell of the array represents a square of width wP in Oxy. For each i, j such that 0 ≤ i, j < CB/wP, a list Lij contains the depths z of the intersections of the line defined by x = i, y = j with the faces of the surface. To compute these lists, we use a method derived from the z-buffer algorithm ([3]): we project each face F of the polyhedron onto the plane z = 0. Then we use a polygon filling algorithm: for each projected face Fp, we cover the integer points k = (ki, kj) contained in Fp and we compute the intersections of the corresponding lines x = ki and y = kj with F, which we put in Lij. Once we have computed the lists of floating point numbers, we can construct the surfels of the surface, considering integer approximations d of the intersections
Fig. 3. The surfels of the tree that represents the discrete version of a sphere.
represented by the elements of Lij. The value of d depends on whether we enter into the surface or get out of it (we use a parity rule), and on the width w of the surfels composing the discretized surface. If we enter, then d = ⌊z/w⌋ × w, and d = ⌈z/w⌉ × w if we get out. Figure 3 shows the surfels obtained by the discretization algorithm. Note that only surfels which are orthogonal to the z axis are computed; this enables us to reconstruct a voxelized volume, still using a parity rule.
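The parity rule for one line of sight can be sketched as follows. Note the floor/ceiling split is our reading of the rounding rule (the brackets were lost in the source text), so treat it as an assumption.

```python
import math

def surfel_depths(z_list, w):
    # Round the intersection depths of one vertical line to surfel
    # depths. Parity rule: depths alternate between entering and
    # leaving the surface. Assumption: round down when entering,
    # round up when leaving, to a multiple of the surfel width w.
    out = []
    for k, z in enumerate(sorted(z_list)):
        if k % 2 == 0:                       # entering the surface
            out.append(math.floor(z / w) * w)
        else:                                # leaving the surface
            out.append(math.ceil(z / w) * w)
    return out
```

With this convention the rounded interval always contains the exact one, so the voxelized volume slightly over-approximates the solid.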
3.1 A Quad-Tree of Virtual Surfels
For each discretized surface, we have an array that contains some of the surfels composing this surface. But our motivation is to work on surfaces that have different scales, and to obtain multi-scale surfaces. An array is static in nature and is not adapted to this, so we use a better-suited tool for our purpose: quad-trees. First, let us consider the bounding box B of a fixed surface (see above). We build a quad-tree of virtual surfels as follows: the root of the tree represents a face F of B (the one contained in Oxy); we subdivide F to build a quad-tree, and we stop the construction of a branch when we reach the size of the surfels of the surface. The leaves of this tree are lists of special structures, the quad-trees of surfels (see Figure 4 and Section 3.2). Note that there are two kinds of quad-trees: quad-trees of virtual surfels and quad-trees of surfels. Considering that the depth of the root is 0, the surfels of the list of a leaf at depth d in the tree have a size equal to CB/2^d, and the position of these surfels with respect to the plane Oxy depends on the position of the list in the virtual tree. In order to build only the useful parts of the tree, we stop the construction of the branches which end with empty lists in A (see Figure 5). Note that a virtual tree of depth n has at most 2^(2n) leaves. In the following, we call the tree of virtual surfels the virtual tree.

3.2 Construction of the Lists at the Leaves of the Virtual Tree
Let us consider a virtual tree V. Up to now, from the discretization, each list at the leaves of V contains surfels which all have the same size wP. But in the sequel (after multi-scale boolean set operations for example), it may happen
Fig. 4. The quad-tree of virtual surfels. We continue the construction of the tree until the size of the surfels is reached.
Fig. 5. The virtual tree, according to the bounding box and the object composed of 4 voxels, does not contain useless branches.
Fig. 6. Virtual tree and lists of surfels corresponding to the example of Figure 5.
that such a list has to contain surfels with different sizes w ≤ wP, with w = wP, wP/2, wP/4, ..., 1. So, each leaf is a list of structures composed of a boolean quad-tree of surfels together with an integer depth along the z axis (see Figure 6). In the following, we simply call these lists lists of surfels. We have just defined a tree data structure which can represent a discretized surface according to a given direction: we know all the surfels which are parallel to the plane Oxy. We now explain how to perform boolean set operations on surfaces discretized at various scales using such structures, thus obtaining a multi-scale surface.
4 Boolean Set Operations on the Tree Representation of Surfaces
Before performing boolean set operations on the trees representing the surfaces, we make the assumption that the bounding boxes of the considered surfaces are the same. This is not really a drawback since the virtual tree does not contain useless branches (see Section 3.1). Furthermore, the representations of the surfaces follow the same axis of the coordinate space, and we recall that the widths of the surfels of the surfaces we consider are always powers of 2. Since our data structure is not translation invariant, we also assume that the different objects are correctly located relative to each other in the bounding boxes.
The method is more or less the same whatever the considered boolean set operation, so we only explain the process for the union (the word union can be replaced by intersection or difference in the following).

4.1 Traversal of the Virtual Quad-Trees
We denote by A1 and A2 the virtual trees representing two surfaces S1 and S2. We apply a depth-first traversal simultaneously to the two trees, so that we always consider the same part of the plane Oxy. There are 3 cases. If neither of the trees is a leaf, we continue the traversal with the first children of A1 and A2, then the second ones, and so on. If both trees are leaves, we compute the union of the corresponding lists L1 and L2 (see Section 4.3). If only one of the trees is a leaf (say A2), then the result of the union of A1 and A2 is the union of the children of A1 and the corresponding children of the surfel trees of the list of surfels of A2 (if a surfel tree is a node with value v, it is transformed into a tree with four leaves of value v), and so on. In practice, we replace A1 by a leaf whose list of surfels is constructed recursively from the lists of the children of A1 (see Figure 7). It remains to compute the union of two lists at the leaves of the virtual quad-trees. First, we introduce a useful tool.
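The simultaneous depth-first traversal can be sketched on plain boolean quad-trees. This is a hedged simplification: leaves here are 0/1 values, whereas the leaves of the virtual trees carry lists of surfels whose merging is the subject of Section 4.3; the nested-list encoding (a 0/1 leaf or a list of four children) is our own.

```python
def quadtree_union(a, b):
    # Simultaneous depth-first union of two quad-trees encoded as
    # nested lists: a node is either a 0/1 leaf or a list of its
    # four children.
    if a == 1 or b == 1:
        return 1                   # a full quadrant dominates the union
    if a == 0:
        return b                   # an empty quadrant contributes nothing
    if b == 0:
        return a
    kids = [quadtree_union(ca, cb) for ca, cb in zip(a, b)]
    # re-merge a node whose four children became identical leaves
    return kids[0] if kids in ([0] * 4, [1] * 4) else kids
```

The case where only one tree is a leaf is handled implicitly: a 0 or 1 leaf distributes over the four children of the other tree, mirroring the expansion of a node with value v into four leaves of value v.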
4.2 The Booltrees
In the following, in order to perform the boolean set operations, we need to know, during the traversal of a list of surfels, whether we enter or leave the surface when we meet a quad-tree of surfels. For this, we associate to each list of surfels a quad-tree that we call a booltree, which represents a boolean matrix. At the beginning of the traversal of a list of surfels, the booltree B is just a node with boolean value 0. When we meet a quad-tree of surfels, for each surfel in the tree (i.e. each leaf with value 1), we update B by reversing the boolean values of the corresponding part of the booltree. We then know whether a surfel is opening (we enter the surface), closing (we leave it), or partly opening or closing. For example, if a part of the booltree passes from 0 to 1, the surfel is opening for this part. Moreover, a booltree indicates the parts of space which are in the interior or the exterior of the surface: between two quad-trees of surfels with depths d1 and d2, the parts of the booltree with value 1 are in the interior of the surface between d1 and d2. For a better understanding, let us consider the example of Figure 8 with the list L. The booltree B is initially a matrix of 0's and is represented by a white square. The first tree of surfels t1 is a node with value 1, so the whole booltree is reversed and B is now a matrix of 1's (the square is gray). Hence we know that the unique surfel in t1 is an opening one (we pass from 0 to 1 in the booltree). The next tree of surfels t2 is not a leaf, so the booltree is partially transformed according to the parts of the tree that have value 1. Since these parts become 0, we know that the two corresponding surfels are closing
Fig. 7. Example of the construction of a list of surfels from the children of a virtual tree. We replace A1 by a leaf with a list constructed from the children of A1.
Fig. 8. Evolution of a booltree during the traversal of a list.
ones. The last tree t3 contains two closing surfels. Since there are no more trees in L, the booltree is made of 0's again. We are now ready to perform the boolean set operations on lists of surfels.
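The booltree update amounts to a quad-tree XOR: a minimal sketch, with booltrees and surfel trees both encoded as nested lists (0/1 leaves or four children, an encoding of our own choosing), and without the re-merging of uniform nodes.

```python
def booltree_xor(b, s):
    # Update a booltree b when meeting a quad-tree of surfels s:
    # every part covered by a surfel (a leaf 1 in s) has its boolean
    # value reversed (Sect. 4.2).
    if s == 0:
        return b                           # nothing to flip in this quadrant
    if s == 1:
        if b in (0, 1):
            return 1 - b                   # flip a uniform quadrant
        return [booltree_xor(c, 1) for c in b]
    if b in (0, 1):
        b = [b] * 4                        # split a uniform leaf before recursing
    return [booltree_xor(cb, cs) for cb, cs in zip(b, s)]
```

Comparing b before and after the update tells which parts are opening (0 to 1) or closing (1 to 0), as in the walkthrough of Figure 8.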
4.3 Boolean Set Operations on Two Lists of Surfels
Let us consider two lists of surfels L1 and L2 at the same level of the virtual surfel tree. If one of the two lists is empty, the result of the union is the other list; we now assume both are non-empty. Initially, the respective booltrees B1 and B2 of L1 and L2 are entirely made of 0's. Let us consider the first elements e1, of depth d1, of L1 and e2, of depth d2, of L2. We also construct a booltree B representing L = L1 ∪ L2, initially made of 0's. There are three cases. First, if d1 < d2, we update B1 using e1. Then, we compute the union B∪ of B1 and B2. If B is not equal to B∪, then for each part of B which differs from B∪ we have a surfel of L; we can therefore construct the quad-tree of surfels at depth d1. If B∪ = B, we do not construct a quad-tree. Then B becomes B∪, e1 becomes the next element of L1, and we repeat the process with the new values. The case d2 < d1 is similar. If d1 = d2, we update B1 using e1 and B2 using e2, we compute the union B∪ of B1 and B2, we compare B∪ and B, and we construct the quad-tree of surfels, if it exists, at depth d1. Then, we update B with B∪, advance in L1 and L2, and repeat the process with the new values. This procedure ends when L1 and L2 are both exhausted. If at some step one of the lists becomes empty, the rest of the result list is equal to the rest of the other one. As an example, we consider the union of the objects O1 and O2 of Figure 9. The lists L1 and L2 correspond to the objects and are at the same place in
Fig. 9. Union of two objects represented by their lists of surfels using the corresponding booltrees.
the respective virtual quad-trees representing the objects. This figure shows the evolution of the booltrees B1 and B2 according to depth, together with their union B at each depth. Initially, the booltrees are white. At depth 2, we meet the first surfel of L1: the booltree B1 becomes gray, B2 remains the same (since there is no surfel at this depth in L2), and so B, the union of B1 and B2 at depth 2, becomes gray. From the comparison of the initial B and B at depth 2, we conclude that we have to construct a surfel of L whose quad-tree is a node with value 1 and d = 2. At depth 3, we meet a surfel of L2, and we consequently update B2; the booltree B1 is unchanged. The union of B1 and B2 at depth 3 is equal to B at depth 2, so we do not have to create a surfel at this level. At depth 4, B1 becomes empty and B2 is unchanged. The comparison of the union of these booltrees with the last B shows that we have to create 3 surfels corresponding to the places of B that have changed at level 4. Finally, at depth 5, B1 and B2 become white, and so we have to build the last surfel, as shown in Figure 9. We have thus obtained the list L, the union of L1 and L2.
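The three-case merge can be sketched in a degenerate 1-D setting: each list is reduced to the sorted depths at which a single-pixel column enters or leaves a surface (parity rule), so the booltrees degenerate to plain booleans and we emit a depth whenever the union boolean changes. A hedged simplification of the full quadrant-wise procedure, not the paper's implementation.

```python
def union_lists(L1, L2):
    # Merge two sorted depth lists; emit a depth each time the union
    # b = b1 or b2 of the two interior/exterior states changes.
    out, i, j = [], 0, 0
    b1 = b2 = b = False
    while i < len(L1) or j < len(L2):
        d1 = L1[i] if i < len(L1) else float("inf")
        d2 = L2[j] if j < len(L2) else float("inf")
        d = min(d1, d2)
        if d1 == d:
            b1, i = not b1, i + 1      # crossing a surfel of L1
        if d2 == d:
            b2, j = not b2, j + 1      # crossing a surfel of L2
        if (b1 or b2) != b:            # the union state changed
            b = b1 or b2
            out.append(d)
    return out
```

For overlapping intervals such as [2, 5) and [4, 8), the inner boundaries at 5 and 4 disappear from the union, exactly as a surfel is skipped when B∪ equals B.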
4.4 Results of the Boolean Set Operations
Figure 10 shows an example of the result of the set operations. The surfels of the lists, which are orthogonal to the z axis, are shown. The bounding box has a width equal to 32, and this is the result of the difference between the surface of a voxel of width 16 and a sphere made of surfels of width 1.
Fig. 10. Surfels of the lists of the leaves of a virtual tree. This object has been obtained by the difference of a voxel of width 16 and a sphere composed of voxels of width 1.
As we can see, the upper face of the cube has been subdivided into smaller surfels, but the lower one is unchanged.
5 Reconstruction of the Whole Surface
To build and display the whole surface from the quad-tree structure V, we once again use booltrees. Note that there exists more than one possibility to reconstruct the surface, depending on the chosen surfel sizes. First, there are four orientations of the lateral surfels, one per edge of the surfels of the lists of surfels. We denote them by N for north, S for south, E for east and W for west. We explain how we build the N surfels; the other orientations are similar. First, we construct a CB × CB array denoted by A. Each cell of A contains a list of integers constructed from the traversal of each list of surfels of V. These lists represent the intervals of space which are in the interior of the surface and are obtained from the booltrees of the lists. If we consider a list of surfels L of width 1, and if the list at the north of L is composed of the 2 elements 0 and (CB − 1), then the whole north of L is in the interior of the surface and there is no N surfel to construct for L. We construct the N surfels from A and the virtual tree, in such a way that the surfels are as large as possible for the considered list (see the right part of Figure 11). To do this, we treat the lists at the leaves of V one by one. Let us consider a list of surfels L and the corresponding booltree B. Let us denote by Cmax the maximal width of the surfels of L (depending on the place of L in V), by e the first element of L and by d the depth of e. We consider L by successive intervals of length Cmax, beginning at the depth p = ⌊d/Cmax⌋ × Cmax. So, the first interval we consider is I = [p, p + Cmax]. We denote by w the width of the surfels we consider; initially, w = Cmax. Let us describe the procedure. If L is empty or contains only one element, we stop the process for the current list. If d = p, we update B using e, advance L to the element after e, and let d be the depth of the new first element of L.
We consider several cases for I. If, according to A, the north of L is not completely full from p to p + w (i.e. is not entirely in the interior of the surface)
and if all the following properties are satisfied:
– B is not a leaf,
– in the successive transformations of B using the possible elements of L that have a depth from p to p + w − 1, the north parts of B are always true,
– the north of the list is completely free (according to A),
then we create a N lateral surfel of width w. Else, if one of the following cases holds:
1. d < p + w,
2. B is not a leaf,
3. the north of L is neither completely full nor completely free,
then we subdivide. By subdividing, we mean that we do the process again for each n ∈ {1, 2, 3, 4} with: w = w/2, I = [p + i, p + i + w] for i = 0 and i = w, the nth child of B and the nth children of the elements of L. Indeed, if one of these hypotheses holds, we cannot put a surfel of the initial width w, and we try to put surfels of smaller widths. Else, if B is a leaf: if the value of B is true and the north of the list is completely free (according to A), then we create a N lateral surfel of width w. We update B using the elements of L which have a depth strictly less than p + w, we update L accordingly, and we continue with the next interval.
Let us consider, in Figure 11, the reconstruction of the foreground faces of the object represented by the list of surfels L, with Cmax = 4. Since the second element of the list is at depth 3, we cannot put a surfel of size 4. We then subdivide L into 4 lists l1, l2, l3 and l4, numbered as indicated in the figure. For each sub-list, Cmax = 2. From l1 and l3, we cannot put any surfel because the foregrounds of these lists are hidden by l2 and l4 respectively. The list l2 is composed of 3 elements, at depths 0, 3 and 4. We update the initial booltree using the first surfel; the booltree is now made of 1's. We also update l2. We can put a foreground surfel between 0 and 2 because the foreground side is completely free. After that we consider the next interval, between 2 and 4.
The first element of l2 is at depth 3, so we cannot put a surfel of width 2. We then subdivide again, and consider the sub-lists ll1, ll2, ll3 and ll4 of l2. Considering ll1, we put only one surfel of width 1, between 3 and 4, and from the lists ll2 and ll4, we put three surfels of width 1 between 2 and 4. From the list l4, we put two foreground surfels of width 2.
6 Final Results
Figure 12 shows the result of the reconstruction for the previous example of Figure 10.
Fig. 11. Reconstruction of the foreground faces from a list of surfels.
Fig. 12. Final result of the difference between a big voxel and a sphere, after the reconstruction of the lateral faces.
Figure 13 shows the result obtained by the difference between a wall of surfels of width 8 and a torus made of surfels of width 1, in a bounding box of width 64. As we can see, only the surfels of the wall "in contact" with the torus are subdivided into smaller surfels. Moreover, for each list composing the surface, we place the surfels with the maximal possible width. This avoids subdividing the larger surfels of the surfaces more than necessary.
7
Conclusion and Perspectives
In this paper, we presented a method to discretize surfaces represented by polyhedra. This method is easy to implement and is derived from the z-buffer algorithm. Next, we proposed a structure to represent the surfaces obtained by the discretization method. The quadtree nature of this structure allows us to represent multi-scale surfaces. Then, we designed a process to easily perform Boolean set operations on these tree structures, using booltrees. Owing to the discrete nature of the surfaces, no class of surfaces causes difficulties. Furthermore, since we consider only the surfaces of the objects, the data sizes are quite small and we can compute the set operations very quickly. We also proposed a method to reconstruct the whole surface, and some experimental results were shown. There are many perspectives to this work. Indeed, we want to be able to polyhedrize the obtained discrete multi-scale surface. As in the case of the polyhedrization of a discrete surface presented in [1], we have to construct a graph of surfels to represent the surface. So, from the structure that represents a multi-scale surface, we will construct the corresponding multi-scale graph of the surface.
Multi-scale Discrete Surfaces
349
Fig. 13. Result of the difference between a wall and a torus. Some surfels of the wall have been subdivided into smaller ones.
We will also define a topology over the multi-scale surfaces. Then, we should be able to define a polyhedrization method analogous to the one exposed in [1]. We will then have a complete method to perform Boolean set operations on surfaces represented by polyhedra, using a discrete step. After that, we will be able to use programs that perform real-time navigation on the final polyhedrized surfaces.
References
1. J. Burguet, R. Malgouyres: Strong Thinning and Polyhedric Approximation of a Discrete Surface. Proceedings of DGCI'2000, Uppsala, Sweden, Lecture Notes in Computer Science, vol. 1953, Springer, pp. 222-234, 2000.
2. R.A. Finkel, J.L. Bentley: Quad trees: A data structure for retrieval on composite keys. Acta Informatica, vol. 4, pp. 1-9, 1974.
3. J.D. Foley, A. van Dam, S.K. Feiner, J.F. Hughes: Computer Graphics: Principles and Practice (second edition in C), Addison-Wesley.
4. R.A. Goldstein, R. Nagel: 3-D Visual Simulation. Simulation, 16(1), pp. 25-31, 1971.
5. T. Henderson, C. Hansen: CAD-Based Computer Vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(10), pp. 1181-1193, 1989.
6. G.T. Herman: Discrete Multidimensional Jordan Surfaces. CVGIP: Graphical Models and Image Processing, 55(5), pp. 507-515, 1992.
7. G.T. Herman: Boundaries in Digital Spaces: Basic Theory. In L.N. Kanal and A. Rosenfeld (Eds.), Topological Algorithms for Digital Image Processing, vol. 19 of Machine Intelligence and Pattern Recognition, pp. 233-262, 1996.
8. G.T. Herman: Finitary 1-Simply Connected Digital Spaces. GMIP, 1(60), 1998.
9. A. Klinger, C.R. Dyer: Experiments on picture representation using regular decomposition. Computer Graphics and Image Processing, vol. 5, pp. 68-105, 1976.
10. A. Rappoport, S. Spitz: Interactive Boolean Operations for Conceptual Design of 3-D Solids. SIGGRAPH 97 Conference Proceedings, Annual Conference Series, pp. 269-278, 1997.
Invertible Minkowski Sum of Polygons
Kokichi Sugihara
Department of Mathematical Informatics, University of Tokyo
Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
[email protected] Abstract. The paper gives a new formulation of the Minkowski sum of polygons. In the conventional Minkowski sum, the inverse operation is not well defined unless the polygons are restricted to be convex. In the proposed formulation, on the other hand, the set of polygons is extended to the set of “hyperpolygons” and the Minkowski sum forms a commutative group. Consequently, every polygon has its unique inverse, and the sum and the inverse operations can be taken freely. An example of a physical interpretation of the hyperpolygon is also given.
1
Introduction
The Minkowski sum is one of the most fundamental operations on geometric objects, and has a wide range of applications in data compression [4], generation of collision-free paths [10], and others. The Minkowski sum is also useful for constructing efficient geometric algorithms [6]. Algorithms for computing the Minkowski sum have also been studied extensively; they include algorithms for restricted classes of polygons [8,13], for general polygons [6,12], and for higher-dimensional figures [1,4,9]. However, there is a big open problem concerning the "inverse" of the Minkowski sum, because no clear definition of the inverse is known yet for a general class of figures. An inverse-like operation called the Minkowski difference is known, but it behaves as the inverse only in strongly restricted cases, such as the case of convex figures [3]. Guibas et al. [6] studied objects obtained by the inverse-like operation and pointed out that the boundaries of those objects change their directions partially. Ghosh [3,4] tried to define the inverse for nonconvex figures, but his definition is valid only for a certain restricted class. Sugihara et al. [14] solved this problem partially. They introduced a class of closed curves whose tangent direction is continuous and monotone, and defined an algebraic operation in this class. This operation is a generalization of the Minkowski sum in that, if we replace these closed curves with the regions bounded by the curves, the operation coincides with the Minkowski sum. Moreover, the operation is invertible in this class, and thus the inverse of the Minkowski sum is defined for a much wider class than the class of convex figures. In this paper, we apply a similar idea to polygons, and construct an algebraic system in which a generalization of the Minkowski sum and its inverse is well defined. In this new system, the set of polygons is extended to a set of more
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 350–359, 2002.
© Springer-Verlag Berlin Heidelberg 2002
general objects, which we call “hyperpolygons”. The extension from polygons to hyperpolygons has the same structure as the extension from integers to rationals. After we review conventional Minkowski algebra in Section 2, we introduce a new representation of the boundary of polygons in Section 3. Next, we define the new Minkowski sum on this representation in Section 4, and extend it to a larger invertible world in Section 5. In Section 6, we give an example of an interpretation of a physical meaning of a hyperpolygon, and make some concluding remarks in Section 7.
2
Conventional Minkowski Sum
We fix an (x, y) Cartesian coordinate system Σ, and represent a point by its radial vector with respect to the origin of Σ. We call a set of points a figure. For two figures A and B, the new figure defined by A ⊕ B = {a + b | a ∈ A, b ∈ B}
(1)
is called the Minkowski sum of A and B, where a + b represents the sum of the radial vectors a and b. Let A and B be the convex figures in Fig. 1(a) and (b). Then, the Minkowski sum of A and B is the figure C shown in Fig. 1(c). The Minkowski sum can be intuitively understood in the following way. Consider the figures A and B in Fig. 1(a) and (b) again. Let us choose and fix an arbitrary point in B, shown as the dotted point in Fig. 1(b), and call it the reference point. As shown in Fig. 1(d), the Minkowski sum is the union of the figure A and the region swept by B when the reference point of B moves along the boundary of A. Next suppose that we are given two figures B and C. We want to find the figure X that satisfies

X ⊕ B = C.    (2)

This is the inverse problem of the Minkowski sum. The inverse problem can be partially solved by another operation called the Minkowski difference. The Minkowski difference of two figures C and B, denoted C ⊖ B, is defined as

C ⊖ B = ⋂_{b ∈ B} {c − b | c ∈ C}.    (3)
For the figures B and C in Fig. 1(b) and (c), respectively, C ⊖ B coincides with the figure in Fig. 1(a), and hence in this particular case the solution of eq. (2) is given by X = C ⊖ B. However, the Minkowski difference does not give the complete solution of eq. (2). Actually, the solution of eq. (2) is not necessarily unique. For example, let A′ be the figure in Fig. 1(e). Then we get A′ ⊕ B = C, and hence X = A′ is also a solution of eq. (2). Furthermore, eq. (2) does not necessarily admit a solution.
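As a concrete illustration of eqs. (1) and (3), here is a minimal Python sketch on finite point sets (the paper works with continuous figures; the finite-set version is an assumption made purely for illustration):

```python
from functools import reduce

def minkowski_sum(A, B):
    # eq. (1): A ⊕ B = {a + b | a ∈ A, b ∈ B}
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

def minkowski_difference(C, B):
    # eq. (3): C ⊖ B = intersection over b ∈ B of the translates C − b
    return reduce(set.intersection,
                  [{(cx - bx, cy - by) for (cx, cy) in C} for (bx, by) in B])

A = {(0, 0), (1, 0), (0, 1), (1, 1)}   # a small discrete "square"
B = {(0, 0), (1, 0)}                   # a discrete "segment"
C = minkowski_sum(A, B)
# In this convex example the difference recovers A: minkowski_difference(C, B) == A,
# but as the text explains, this fails for general (nonconvex) figures.
```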
A’ A
B
C
(a)
(b)
(c)
(d)
(e)
Fig. 1. Minkowski sum of two polygons.
3
Angle-Parametric Polygonal Curves
While the conventional Minkowski sum is defined on polygons, we consider a class of closed polygonal curves, a typical example of which is the boundary curve of a polygon. Let P be a closed polygonal curve with n vertices v0, v1, · · ·, vn−1, vn = v0 and n edges e1, e2, · · ·, en such that edge ei is a line segment connecting vi−1 to vi. We consider that the edge ei has the direction from vi−1 to vi. Examples of closed polygonal curves are shown in Fig. 2. The polygonal curve in Fig. 2(a) is considered the boundary of a polygon traversed counterclockwise, while the one in Fig. 2(b) has a self-intersection and hence does not correspond to a polygon in the ordinary sense.
Fig. 2. Polygon.
For notational convenience, let us assume that we travel along P endlessly in either direction, and name the vertices and the edges accordingly, i.e., vi+kn = vi and ei+kn = ei for k = ±1, ±2, ±3, · · ·. For each i = 0, ±1, ±2, · · ·, let φi be the angle by which we turn counterclockwise from the direction of ei to the direction of ei+1. We call φi the left-turn
angle at vi . Since we measure the angle counterclockwise, we have 0 < φi < π if P bends to the left at vi as in Fig. 3(a), while we have π < φi < 2π if P bends to the right at vi as in Fig. 3(b). We call the former vertex a convex vertex and the latter vertex a reflex vertex.
Fig. 3. Left-turn angle
Let us choose and fix an arbitrary edge, say es, and let θs be the angle of es with respect to the positive x direction, measured counterclockwise with the convention that 0 ≤ θs < 2π. Next, for each i = 0, ±1, ±2, · · ·, we define

θi = θs + φs + φs+1 + · · · + φi−1   if i > s,
θi = θs − φs−1 − φs−2 − · · · − φi   if i < s.    (4)

In other words, θi basically represents the direction of the edge ei, but the ambiguity of a multiple of 2π is resolved by accumulating the left-turn angles starting with θs, as shown in Fig. 2(a). For θ ∈ R, let f(θ) be the set of points defined by

f(θ) = {vi}   if θi < θ < θi+1,
f(θ) = {t vi−1 + (1 − t) vi | 0 < t < 1}   if θ = θi.    (5)

That is, if θ = θi, f(θ) represents the set of points on the open line segment ei, and otherwise f(θ) represents the single point vi satisfying θi < θ < θi+1. We call f(θ) the angle-parametric polygonal curve. The function f(θ) is periodic. If P is the counterclockwise boundary of a convex polygon, the total left turn we make while we travel along P is equal to 2π, and hence

f(θ) = f(θ + 2π).    (6)

On the other hand, if P is the counterclockwise boundary of a polygon with k reflex vertices, the total left turn we make while we travel along P is equal to (k + 1)2π, and hence we get

f(θ) = f(θ + (k + 1)2π).    (7)
We call the smallest positive integer m satisfying f(θ) = f(θ + 2mπ) the winding number of the polygonal curve. If P is the boundary of a convex polygon, its angle-parametric representation f(θ) is unique. However, for a general polygonal curve P, the angle-parametric representation f(θ) depends on the choice of the starting edge es. For example, for the polygonal curve in Fig. 2(a), if we choose e1, e2, e3, or e4 as es, we get the same angle-parametric representation, while if we choose e5 or e6 as es, the corresponding angle-parametric representation is different from the former. In general, two edges ei and ei+k (k > 0) give the same angle-parametric representation if and only if we do not face the positive x direction when we turn from the direction of ei to the direction of ei+k counterclockwise. Hence, if the angle-parametric representation f(θ) of the closed polygonal curve P has the winding number m, there are m different angle-parametric representations for P. We consider them different angle-parametric polygonal curves. In order to distinguish between the closed polygonal curve P and its angle-parametric representation f(θ), we call P the geometric polygonal curve. In what follows, f(θ) represents the set of points at the particular value θ of the parameter, and the function itself is represented by f or {f(θ)}. Let Π + be the set of all angle-parametric polygonal curves. Next, we introduce a binary operation on Π +, which corresponds to the Minkowski sum.
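The left-turn angles φi, the accumulated directions of eq. (4), and the winding number can be sketched as follows. This is a hedged illustration: vertex lists are assumed counterclockwise with no zero-turn (collinear) vertices, since the text requires 0 < φi < 2π:

```python
import math

def left_turn_angles(vertices):
    """Left-turn angle φ_i at each vertex of a closed polygonal curve,
    normalised to (0, 2π): 0 < φ < π at a convex vertex, π < φ < 2π at a
    reflex one.  Collinear (zero-turn) vertices are assumed absent."""
    n = len(vertices)
    edges = [(vertices[(i + 1) % n][0] - vertices[i][0],
              vertices[(i + 1) % n][1] - vertices[i][1]) for i in range(n)]
    phis = []
    for i in range(n):
        a = math.atan2(edges[i][1], edges[i][0])              # direction of e_i
        b = math.atan2(edges[(i + 1) % n][1], edges[(i + 1) % n][0])
        phis.append((b - a) % (2 * math.pi))                  # counterclockwise turn
    return phis

def winding_number(vertices):
    # the total left turn of a counterclockwise curve is 2mπ (eqs. (6)-(7))
    return round(sum(left_turn_angles(vertices)) / (2 * math.pi))

winding_number([(0, 0), (1, 0), (1, 1), (0, 1)])                   # convex square: 1
winding_number([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)])   # one reflex vertex: 2
```

The second example, an L-shaped polygon with k = 1 reflex vertex, has winding number k + 1 = 2, matching eq. (7).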
4
New Minkowski Sum
For {f (θ)}, {g(θ)} ∈ Π + , we introduce a new operation ⊕ in the following way. First, for each value of the parameter θ, we define f (θ) ⊕ g(θ) = {a + b | a ∈ f (θ), b ∈ g(θ)},
(8)
where a + b denotes the sum of the radial vectors a and b. That is, if both f(θ) and g(θ) consist of a single point, f(θ) ⊕ g(θ) also consists of a single point; if one of f(θ) and g(θ) consists of a single point and the other is an edge, f(θ) ⊕ g(θ) is the line segment obtained by translating the edge by the radial vector representing the point; if both f(θ) and g(θ) are edges, f(θ) ⊕ g(θ) is an open line segment connecting the sum of the start points to the sum of the end points. For convenience in the subsequent discussion, we introduce the following notation. For each value of θ, we consider f(θ) a line segment; that is, if f(θ) is a point, we consider it a line segment of length 0. Let f s(θ) and f e(θ) be the start point and the end point of the line segment f(θ). Hence, if f(θ) is a point, we have f s(θ) = f e(θ). Next, we define {f(θ)} ⊕ {g(θ)} as
(9)
that is, {f (θ)} ⊕ {g(θ)} is a function of the angle parameter θ, and its value at each θ is specified by eq. (8). As θ increases, almost everywhere {f (θ)} ⊕ {g(θ)} stays a constant singleton representing a vertex, and at some discrete values of
θ it represents an edge connecting the vertex immediately before and the vertex immediately after. Moreover, {f (θ)} ⊕ {g(θ)} is periodic. Indeed its winding number is the least common multiple of the winding numbers of {f (θ)} and {g(θ)}. Hence, {f (θ)} ⊕ {g(θ)} also belongs to Π + . We call {f (θ)} ⊕ {g(θ)} the Minkowski sum of {f (θ)} and {g(θ)}. For example, let {f (θ)} and {g(θ)} be the angle-parametric representations of the polygon A in Fig. 1(a) and the polygon B in Fig. 1(b), respectively. Then, {f (θ)} ⊕ {g(θ)} coincides with the angle-parametric representation of the polygon C in Fig. 1(c). Fig. 4 shows another example. The polygonal curves in Fig. 4(a) and (b) both have winding number 2, and hence they have two different angle-parametric representations. Let f1 and f2 be the two angle-parametric representations of the polygonal curve in Fig. 4(a), and g1 and g2 be the two angle-parametric representations of the polygonal curve in Fig. 4(b). Since f1 ⊕ g1 = f2 ⊕ g2 and f1 ⊕ g2 = f2 ⊕ g1 , there are essentially two different Minkowski sums. They are shown in Fig. 4(c) and (d).
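For two convex polygons, the sum of eq. (9) coincides with the conventional Minkowski sum, which can be computed by the classical edge-merge construction: each curve contributes its edge at the angle where it is an edge, and the edge vectors are accumulated in angular order. A sketch of this convex special case (an illustration, not the author's implementation):

```python
import math

def convex_minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons given as counterclockwise
    vertex lists, by merging the edge vectors of P and Q in increasing
    direction angle.  Edges with equal direction are kept separate, so
    collinear vertices may appear in the output."""
    def prep(V):
        i = V.index(min(V, key=lambda v: (v[1], v[0])))   # bottom-most vertex
        V = V[i:] + V[:i]
        n = len(V)
        return V[0], [(V[(k + 1) % n][0] - V[k][0],
                       V[(k + 1) % n][1] - V[k][1]) for k in range(n)]
    (p0, ep), (q0, eq) = prep(P), prep(Q)
    angle = lambda e: math.atan2(e[1], e[0]) % (2 * math.pi)
    out, (x, y) = [], (p0[0] + q0[0], p0[1] + q0[1])
    for dx, dy in sorted(ep + eq, key=angle):
        out.append((x, y))
        x, y = x + dx, y + dy
    return out

# square ⊕ triangle: 7 output vertices (two of them collinear, since
# shared edge directions are not merged here)
convex_minkowski_sum([(0, 0), (1, 0), (1, 1), (0, 1)], [(0, 0), (1, 0), (0, 1)])
```

Starting both traversals at the bottom-most vertex guarantees that each polygon's own edges already appear in increasing angle, so a single sort merges them correctly.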
Fig. 4. Minkowski sums of two polygons with winding number 2.
In this case, too, we can consider the polygons bounded by the outermost parts of the polygonal curves. The conventional Minkowski sum of the polygons in Fig. 4(a) and (b) corresponds to the union of the polygons in Fig. 4(c) and (d).
5
Extension to the Invertible World
As shown in eqs. (8) and (9), the Minkowski sum is defined in terms of the sum of radial vectors. Consequently, the algebraic structure of the vector space is inherited by Π +. In particular, the following hold for any f, g, h ∈ Π +.
Property 1. f ⊕ g = g ⊕ f (commutativity).
Property 2. (f ⊕ g) ⊕ h = f ⊕ (g ⊕ h) (associativity).
Property 3. f ⊕ g = f ⊕ h implies g = h (injectivity).
There is a big theorem in algebra which says that, if Properties 1, 2 and 3 are satisfied, the set Π + can be augmented in such a way that all the elements have their inverses in it [2]. A well-known example is the extension of the set of natural numbers with the multiplication operation to the set of positive rational numbers. More precisely, we can augment Π + to a larger set Π in the following way. Let Π + × Π + be the set of all ordered pairs of the elements of Π + . For any (f1 , f2 ), (g1 , g2 ) ∈ Π + × Π + , we define (f1 , f2 ) ⊕ (g1 , g2 ) by (f1 , f2 ) ⊕ (g1 , g2 ) = (f1 ⊕ g1 , f2 ⊕ g2 ).
(10)
[In the case of the natural numbers, eq. (10) corresponds to (f1 /f2 ) · (g1 /g2 ) = (f1 · g1 )/(f2 · g2 ).] Next, we write (f1 , f2 ) ∼ (g1 , g2 ) if and only if f1 ⊕ g2 = f2 ⊕ g1 . [In the case of the natural numbers, f1 /f2 ∼ g1 /g2 if and only if f1 · g2 = f2 · g1 .] Then, the binary relation ∼ is an equivalence relation on Π + × Π +. Finally, we define Π as the set of all equivalence classes, that is, the quotient of Π + × Π + by ∼:
(11)
[In the case of the natural numbers, f1 /f2 is considered a rational number, and f1 /f2 and g1 /g2 are considered identical if f1 /f2 ∼ g1 /g2 .] Elements of our new world Π can be understood more concretely in the following way. Let e(θ) be the origin for any θ. Then {e(θ)} (or e for short) is considered the "angle-parametric closed curve" consisting of the single point at the origin. Hence e s(θ) = 0 and e e(θ) = 0 for any θ. The curve e acts as the unit in Π, because for any f ∈ Π +, f(θ) ⊕ e(θ) is the line segment from f s(θ) + e s(θ) = f s(θ) to f e(θ) + e e(θ) = f e(θ), that is, f ⊕ e = f. For any f ∈ Π +, the inverse f⁻¹ of f in Π is the angle-parametric closed curve for which the line segment f⁻¹(θ) at each value of θ starts at −f s(θ) and ends at −f e(θ). In other words, we get {f(θ)}⁻¹ = {−f(θ)}. More generally, for any f, g ∈ Π +, the equivalence class containing (f, g) can be considered the angle-parametric closed curve whose value at each θ is the line segment from f s(θ) − g s(θ) to f e(θ) − g e(θ). Namely, the equivalence class containing (f, g) can be written {f(θ) − g(θ)}. The algebraic structure of the extension from Π + to Π is the same as that of the extension from the natural numbers to the positive rationals. A similar extension was applied when the world of functions was augmented to the world of hyperfunctions [11]. By analogy, we call the elements of Π − Π + hypercurves or hyperpolygons.
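The pair construction of eqs. (10)-(11) can be illustrated with the text's own analogy, the natural numbers under multiplication extended to the positive rationals. The class below represents an equivalence class of pairs; it is a hypothetical illustration of the construction, not an implementation of hyperpolygons:

```python
from math import gcd

class Hyper:
    """Equivalence class of pairs (f1, f2), with (f1,f2) ~ (g1,g2) iff
    f1 ⊕ g2 = f2 ⊕ g1.  Here ⊕ is multiplication of natural numbers,
    the analogy used in the text."""
    def __init__(self, f1, f2):
        g = gcd(f1, f2)                     # canonical class representative
        self.f1, self.f2 = f1 // g, f2 // g
    def __mul__(self, other):               # componentwise ⊕, eq. (10)
        return Hyper(self.f1 * other.f1, self.f2 * other.f2)
    def inverse(self):                      # swap the two components
        return Hyper(self.f2, self.f1)
    def __eq__(self, other):                # the relation ~
        return self.f1 * other.f2 == self.f2 * other.f1

# every element of the augmented world now has an inverse:
assert Hyper(6, 1) * Hyper(6, 1).inverse() == Hyper(1, 1)   # the unit
assert Hyper(2, 3) == Hyper(4, 6)                           # same class
```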
6
Interpretation of Hypercurves
Let us consider a numerically controlled polishing machine as shown in Fig. 5. In this figure, A is a rail, B is a support plate, C is an arm, D is a polishing disc, and E, F, G constitute the material to be polished. The upper part of Fig. 5 is the
Fig. 5. Numerically controlled polishing machine.
top view (the projection on the xy plane), while the lower part is the side view (the projection on the xz plane). The rail A is fixed to the outside world, and the support plate B slides on it in the y direction. The arm C is held by B and can slide in the x direction. The polishing disc D is attached to the arm at P and can rotate around P. The material consists of two mutually parallel plates E and F, connected by a small rod G. The material is fixed to the outside world, and the upper surface of the plate F is to be polished by D. Note that the arm C cannot enter the material region E, because C and E are on the same z level, as shown in the side view of Fig. 5. If the arm C enters the material region E, they collide and break each other. Hence the arm C can move only outside E. We are interested in which part of the surface of E can be polished by this machine. To answer this question, we usually apply the Minkowski algebra in the following way. Let R(C) be the polygon obtained when we rotate the polygon C by 180 degrees around the point P, and let {fR(C)(θ)}, {fD(θ)} and {fE(θ)} be the angle-parametric polygonal curves representing the shapes of R(C), D and E, respectively. We assume that the point P is chosen as the origin of the coordinate system used to represent fR(C) and fD. Then the Minkowski sum g = fE ⊕ fR(C) represents the region which the reference point P of the arm C cannot enter. Next, let {h(θ)} be the Minkowski difference; h is expressed by

h = g ⊕ fD⁻¹ = (fE ⊕ fR(C)) ⊕ fD⁻¹.    (12)
The region represented by h corresponds to the region which the polishing disc cannot reach. Hence, the intersection of this region with the material region E is the region that cannot be polished by this machine. This is the conventional
way of finding the unpolishable region of the material surface (although we used our new notations). Note that the above computation can be done only after we are given the shape of the material E. In our new algebra, on the other hand, we can compute the Minkowski sums in any order, because the algebra forms a group and the uniqueness of the final result is guaranteed. Now, let us rewrite eq. (12) into

h = (fR(C) ⊕ fD⁻¹) ⊕ fE.    (13)
Let {w(θ)} denote the first part of the right-hand side of the equation: w = fR(C) ⊕ fD⁻¹. w is a hypercurve. If we compute according to eq. (12), the intermediate result g and the final result h are both in Π +. On the other hand, if we compute according to eq. (13), the intermediate result w leaves Π +, and hence the computation becomes invalid in the conventional algebra. This difference is important, because w can be computed even if we are not given the actual shape of the material. Actually, w depends only on the polishing machine, and hence we can interpret the hypercurve w as representing the ability of the polishing machine. This example shows the following two important points. First, the hypercurve has its own physical meaning, just as the "ability of the polishing machine" in the above example. Secondly, the hypercurve can save computational cost, because we need to compute the intermediate result w only once and can apply it to any material shape, whereas in the conventional method we have to compute the intermediate result g every time a new material E is given.
7
Concluding Remarks
We have reformulated the Minkowski algebra in such a way that the new algebra forms a group, and consequently the sum and its inverse can always be taken freely. In this new algebra, the conventional polygonal curves are extended to more general geometric objects, which we name "hypercurves". We also gave a physical interpretation of a hypercurve, and discussed the practical use of this new concept. There still remain many related problems to be solved. They include (i) efficient algorithms for this algebra, (ii) optimal representation of a given polygonal region by an angle-parametric polygonal curve, (iii) physical interpretation of hypercurves generated by nonconvex curves, and (iv) extension to higher dimensions. Acknowledgements. This work was supported by the Grant-in-Aid for Scientific Research of the Japanese Ministry of Education, Science, Sports and Culture.
References
1. H. Bekker and J.B.T.M. Roerdink: An efficient algorithm to calculate the Minkowski sum of convex 3D polyhedra. Lecture Notes in Computer Science 2073, pp. 619–628, 2001.
2. N. Bourbaki: Éléments de Mathématique, Algèbre 1. Hermann, Paris, 1964.
3. P. K. Ghosh: An algebra of polygons through the notion of negative shapes. CVGIP: Image Understanding, vol. 54 (1991), pp. 119–144.
4. P. K. Ghosh: Vision, geometry, and Minkowski operators. Contemporary Math., vol. 119 (1991), pp. 63–83.
5. P.K. Ghosh and R.M. Haralick: Mathematical morphological operations of boundary-represented geometric objects. J. Math. Imaging and Vision, vol. 6 (1996), pp. 199–222.
6. L. J. Guibas, L. Ramshaw and J. Stolfi: A kinetic framework for computational geometry. Proc. 24th IEEE Symp. Foundations of Computer Science, (1983), pp. 100–111.
7. S. Har-Peled, T. M. Chan, B. Aronov, D. Halperin and J. Snoeyink: The complexity of a single face of a Minkowski sum. Proc. 7th Canadian Conf. Comput. Geom., (1995), pp. 91–96.
8. A. Hernández-Barrera: Computing the Minkowski sum of monotone polygons. IEICE Transactions on Information and Systems, vol. E80-D (1997), pp. 218–222.
9. M.-S. Kim and K. Sugihara: Minkowski sums of axis-parallel surfaces of revolution defined by slope-monotone closed curves. IEICE Transactions on Information and Systems, vol. E84-D (2001), pp. 1540–1547.
10. D. Leven and M. Sharir: Planning a purely translational motion for a convex object in two-dimensional space using generalized Voronoi diagrams. Discrete and Comput. Geom., vol. 2 (1987), pp. 9–31.
11. J. Mikusinski: Operational Calculus. Pergamon Press, London, 1957.
12. D. Mount and R. Silverman: Combinatorial and computational aspects of Minkowski decompositions. Contemporary Math., vol. 119 (1991), pp. 107–124.
13. G. D. Ramkumar: An algorithm to compute the Minkowski sum outer-face of two simple polygons. Proc. 12th ACM Symp. Comput. Geom., (1996), pp. 234–241.
14. K. Sugihara, T. Imai and T. Hataguchi: An algebra for slope-monotone closed curves. Int. J. Shape Modeling, vol. 3 (1997), pp. 167–183.
Thinning Grayscale Well-Composed Images: A New Approach for Topological Coherent Image Segmentation
Jocelyn Marchadier, Didier Arquès, and Sylvain Michelin
Université de Marne-la-Vallée, Equipe Image, Institut Gaspard Monge,
5, Boulevard Descartes, 77454 Champs-sur-Marne Cedex
[email protected] Abstract. Usual approaches for constructing topological maps on discrete structures are based on cellular complexes topology. This paper aims to construct a coherent topological map defined on a square grid from a watershed transformation. We propose a definition of well-composed grayscale images based on the well-composed set theory and the cross-section topology. Properties of two different thinning algorithms are studied within this scope, and we show how to obtain a thin crest network. We derive an efficient algorithm that permits the construction of a meaningful topological map. Finally, we demonstrate the usefulness of this algorithm for multilevel image segmentation. Keywords. Topological map, thinning, well-composed images.
1
Introduction
A digital image may be seen as the digitization of a piecewise continuous function. The discontinuities of the underlying continuous function are of primary importance in many shape recognition processes, as they usually describe the shapes of the objects appearing in an image. A piecewise continuous function may be represented by a topological map, which can be viewed as a partition of the plane into three sets of points: a finite set S of points, a finite set A of disconnected Jordan arcs having elements of S as extremities, and a set of connected domains, the faces, whose boundaries are unions of elements of S and A. The definition and the extraction, from a digital grayscale image, of a topological map coherent with the one describing the underlying continuous function would be a major achievement. Although such a structure can be defined straightforwardly on a hexagonal grid, the square grid poses consistency problems. The notion of discontinuity is lost once a function is digitized. Segmentation may be viewed as a process that tries to catch the discontinuities of an underlying continuous function. Two main approaches to segmentation may be distinguished. The first approach consists of approximating the discrete function by a piecewise continuous function, and is usually referred to as region-oriented segmentation. The second approach tries to catch directly the discontinuities of an underlying continuous function using a heuristic provided by a
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 360–371, 2002.
© Springer-Verlag Berlin Heidelberg 2002
gradient operator defined on discrete functions, and is referred to as contour-oriented segmentation. Recent works [4,7] aim to develop a coherent topological structure describing a digital image from the information provided by regions. The structures described use a discrete topology based on the decomposition of the support domain of an image into three kinds of elements of different dimensions, i.e. surface elements, associated with the discrete points of the support, edge elements, which are the edges separating two surface elements, and the vertices of the so-defined grid [6,10]. On the one hand, such a partition has nice topological properties; on the other hand, it suffers from several practical drawbacks, such as the amount of memory needed to store the entire partition and the difficulties faced when trying to construct the partition from the contour information. Alternatively, a topological partition of an image may be directly defined by a digital topology involving only points of the square grid. In order to face the connectivity paradox, several neighborhood systems are usually used together. This is done either by considering different adjacency relations for the points belonging to a set and to its complement [11], or by assigning different neighborhoods to each point of ZZ² in a data-independent manner, which can be formally stated using the framework proposed in [9] (this framework can also be used in the cellular complex approach [10]). Watersheds, or more generally graytone skeletons, can be used in this context to retrieve a vertex/arc network (crest lines) and faces (catchment basins) from a discrete topographic surface, such as the modulus of the gradient viewed as a relief; this network is exactly the sought partition of the image. However, many consistency problems are encountered on a square grid.
Approaches that work by suppressing points from a potential crest network (grayscale thinning) or by adding points to connected sets of points [1,3] do not usually guarantee that the extracted crest network is thin (figure 1). Thick configurations of crests pose
Fig. 1. Thick irreducible configuration of points [1]
obvious problems when one tries to link points from the resulting crest network in order to obtain the digital curves and vertices of the topological partition. Approaches that work by linking potential crest points [16,17], constructing a raster graph, do not usually guarantee that the faces defined by the cycles of the graph are composed of a unique connected component (figure 2).
Fig. 2. Connected components associated with a cycle of a raster graph
Latecki et al. proposed to face the problem of the thickness of skeletons on digital binary images by forbidding some configurations of points [15]. They propose a thinning operator that preserves the properties of the so-called well-composed binary images, resulting in a thin skeleton. They also demonstrated a Jordan theorem that holds on well-composed sets of points, and extended the property of well-composedness to multicolor images. In this contribution, we first recall some classical notions of digital topology (section 2) and some properties of well-composed sets (section 3). We redefine grayscale well-composed images using the cross-section topology formalism [2,3,16], adapt a grayscale thinning to well-composed graylevel images, and prove some of its properties (section 4). We derive an algorithm which constructs a topological partition from an ultimate thinning of a well-composed image, and finally present an application (section 5).
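Although the formal definitions of section 3 are not reproduced in this excerpt, Latecki's characterization reduces the well-composedness test for binary images to checking that no 2×2 block contains exactly a diagonal pair of foreground points. A minimal sketch:

```python
def is_well_composed(img):
    """Check Latecki's characterization of a well-composed binary image:
    no 2x2 block may contain exactly a diagonal pair of foreground (1)
    points.  `img` is a list of equal-length rows of 0/1 values."""
    h, w = len(img), len(img[0])
    for y in range(h - 1):
        for x in range(w - 1):
            block = (img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1])
            if block in ((1, 0, 0, 1), (0, 1, 1, 0)):   # critical configurations
                return False
    return True

is_well_composed([[1, 1], [0, 1]])   # well-composed
is_well_composed([[1, 0], [0, 1]])   # not: diagonal critical configuration
```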
2 Digital Topology: Basic Notions
A discrete image I is a function from ZZ 2 to a set E. When E = {0, 1}, I is said to be a binary digital image. When E = {0, . . . , k}, I is said to be a grayscale image. Two points p1 = (x1 , y1 ) ∈ ZZ 2 and p2 = (x2 , y2 ) ∈ ZZ 2 are
– 4-adjacent if and only if d4 (p1 , p2 ) = |x1 − x2 | + |y1 − y2 | = 1;
– 8-adjacent if and only if d8 (p1 , p2 ) = max(|x1 − x2 |, |y1 − y2 |) = 1.
The n-neighborhood Γn (p) of a point p is the set of all the points n-adjacent to p, with n = 4 or n = 8. An n-connected path is an ordered set C = {p1 , . . . , pm } such that each point pi , 1 < i ≤ m, is n-adjacent to pi−1 .

Theorem 4. The set composed of all the points belonging to regional minima of a well-composed image I is well-composed. Let p and p′ be two 8-adjacent but not 4-adjacent points in regional minima of I. Suppose that Γ4 (p) ∩ Γ4 (p′ ) is not included in a regional minimum of I. Then I is not well-composed, since this configuration is forbidden for well-composed images. This leads to a contradiction. ✷

Theorem 5. There exists a bijection between the regional minima of a well-composed image I and the regional minima of a thinning of I. We consider a thinning I′ of a well-composed image I. The points belonging to a regional minimum of the well-composed image I are not destructible. If a point
of a regional minimum of I is destructible, then it is 4-adjacent to at least one point with a lower value of I, and thus it is not a point of a regional minimum. The regional minima of I are then subsets of the regional minima of I′ . Let us suppose that a regional minimum S′ of I′ contains several regional minima of I. Let c be the value of the highest regional minimum S of I such that S ⊂ S′ . The set Fc+1 ∩ S′ contains at least two 4-connected components bounded by points adjacent to S (Jordan theorem). Thus, Fc+1 contains more connected components than the set F′c+1 , where F′c+1 is the (c + 1)-section of I′ . Hence I′ is not a thinning of I, which contradicts the hypotheses. ✷ The complementary sets of regional minima are arbitrarily thick. This is partly due to configurations like the one depicted in figure 6. In this figure,
Fig. 6. Regional quasi-minimum
the set composed of the points marked by a circle can be used to define an interesting face of a topological partition. We define a regional quasi-minimum as a 4-connected component of the set Q = {p ∈ ZZ 2 , Γ − (p) = ∅} defined on I. Note that a regional minimum is a regional quasi-minimum. The set Q defined on an irreducible well-composed image is not generally well-composed. For example, in figure 7, Q is composed of all the points marked by a circle, and Q is not well-composed. This is due to the presence of peaks, i.e. points without upper neighbors. This leads to the following developments.
Fig. 7.
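The definition of Q is easy to turn into code. The following is a hedged sketch, not taken from the paper: it assumes Γ−(p) is taken over the 8-neighborhood, which this excerpt does not fix.

```python
# Sketch: regional quasi-minima = 4-connected components of
# Q = {p : no neighbor of p has a strictly lower value}.
# Assumption (not fixed by this excerpt): Gamma^-(p) uses the 8-neighborhood.

def quasi_minima(img):
    """img: dict {(x, y): value} on a finite domain."""
    def nbrs(p, moves):
        return [q for q in ((p[0] + dx, p[1] + dy) for dx, dy in moves) if q in img]

    n8 = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    n4 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    q_set = {p for p in img if all(img[q] >= img[p] for q in nbrs(p, n8))}

    comps, seen = [], set()
    for p in q_set:
        if p in seen:
            continue
        comp, stack = set(), [p]
        while stack:                      # flood fill with 4-adjacency
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u); seen.add(u)
            stack.extend(v for v in nbrs(u, n4) if v in q_set and v not in comp)
        comps.append(comp)
    return comps

img = {(x, y): v for y, row in enumerate([[2, 2, 2], [2, 0, 2], [2, 2, 2]])
       for x, v in enumerate(row)}
# the single regional (quasi-)minimum is the centre point
assert [c for c in quasi_minima(img) if (1, 1) in c] == [{(1, 1)}]
```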
A leveling of an image I [3] is a graylevel image obtained from I by iteratively removing destructible points and peaks from I.
Thinning Grayscale Well-Composed Images
Theorem 6. Let I be a graylevel well-composed image, and let I′ be a leveling of I. Then I′ is well-composed. The destruction of a destructible point from a graylevel well-composed image I results in a thinning of I which is well-composed (c.f. theorem 3). Now consider a peak p of a well-composed image I. Let p′ be one of its neighbors such that ∀p′′ ∈ Γ8 (p), I(p′′ ) ≤ I(p′ ). As p is a peak, we have I(p) > I(p′ ). Let I′ be the image obtained by destructing p from I. Then I′ (p) ≥ I′ (p′ ), and the ordering of the other points of Γ8 (p) is not changed. Thus, we do not create a forbidden configuration for well-composed graylevel images, and I′ is well-composed. As the two operations preserve well-composedness, their iterative application on a well-composed image will yield a well-composed image. ✷ By applying the leveling transformation on a well-composed graylevel image until stability, we obtain a well-composed irreducible graylevel image such that none of its points is a peak. On such images, regional quasi-minima are well-composed. Theorem 7. Let I be an irreducible well-composed graylevel image such that no point of I is a peak. The set Q = {p ∈ ZZ 2 , Γ − (p) = ∅} defined on I is well-composed. Let I be a well-composed grayscale image. Let p and p′ be two points, 8-adjacent but not 4-adjacent, belonging to the set Q = {p ∈ ZZ 2 , Γ − (p) = ∅}, such that the two points p1 and p2 that are 4-adjacent to both p and p′ do not belong to Q. Necessarily, I(p) = I(p′ ), I(p1 ) ≥ I(p) and I(p2 ) ≥ I(p). Moreover, as I is well-composed, I(p1 ) = I(p) or I(p2 ) = I(p). Let us suppose, without loss of generality, that I(p1 ) = I(p), which corresponds to figure 8.
Fig. 8.
I(p3 ) ≥ I(p) by hypothesis. I(p3 ) = I(p), otherwise p3 is either destructible or I is not well-composed. I(p5 ) ≥ I(p3 ) and I(p6 ) < I(p3 ), otherwise p3 is again destructible. Then p4 is destructible, which leads to a contradiction. ✷ We can now state the following result, which asserts that each cut of an irreducible graylevel image without peaks is thin. Theorem 8. Let I be an irreducible well-composed graylevel image from ZZ 2 to a set E, such that no point of I is a peak, and let the set Q = {p ∈ ZZ 2 , Γ − (p) = ∅} be defined on I. For all c ∈ E, the 8-connected components of the 4-interior of the set Ec \ Q are composed of at most one point, where Ec is the c-cut of I.
Let c ∈ E. Consider a point p with Γ − (p) = ∅ and I(p) = c. If {p′ ∈ Γ8 (p), I(p′ ) > I(p)} = ∅, then the c-cut of I respects one of the local configurations of the irreducible well-composed sets depicted in figure 4. We now consider two points p and p′ belonging to the same c-cut, 8-adjacent to each other, such that p is adjacent to at least one point of lower value and one point of upper value. Thus, one of the local configurations depicted in figure 9 holds.
Fig. 9.
The point denoted D is destructible in these configurations, and consequently I is not irreducible. ✷
5 Constructing a Topological Partition and Application
The topological map defined on the digital plane is composed of:
– a set S of points, which are the vertices of the map;
– a set A of arcs, which are disjoint 4-connected digital curves whose extremities are elements of S;
– a set F of faces, which are 4-connected sets of points whose boundaries are elements of S and A.
Such a structure can be built from an irreducible well-composed grayscale image without peaks, obtained by a leveling transformation of a well-composed grayscale image. The faces of the topological partition are the regional quasi-minima. The vertex/arc network can be retrieved by linking the points that have at least one lower neighbor in the following way:
– points belonging to the same c-cut are linked together using one of the configurations of figure 4;
– points belonging to different c-cuts are linked according to one of the configurations of figure 10.
Fig. 10.
We can prove, by reasoning on local configurations, that the cycles of the graph constructed in this way correspond to the regional quasi-minima. Figure 11 represents an irreducible well-composed image without peaks and its constructed vertex/arc network.
Fig. 11.
Different structures can be used to store this topological map, such as a combinatorial map [7] or a raster graph [17]. Note that both structures can be obtained from an irreducible well-composed image without peaks with a single-pass algorithm. Moreover, the raster graph structure can be obtained with a parallel algorithm, as the linking process involves only local configurations. The correspondence between the map defined in this way and the map that can be constructed from the linear discontinuities of the underlying piecewise continuous function is achieved when the topology of the interesting set of points is preserved under digitization. Note also that the digitization of a continuous curve is a digital well-composed curve [13]. That motivates the use of the 4-neighborhood connectivity. Figure 12 demonstrates the usefulness of these results for image segmentation. The first image of the figure is the original image. The second image is the irreducible well-composed image without peaks that has been constructed from the modulus of a gradient of the original image. An algorithm, not detailed in this article, has been applied to the image of the modulus, turning it into a well-composed graylevel image. Thus, the second image is well-composed. The third image is the extracted crest network, where some arcs have been removed using a threshold strategy. The approach was found to be stable and very fast on different kinds of images. On all tested images, the crest network was found to be a thin set of points, although some thick configurations can be constructed. However, the linking is realized without ambiguity, and the topological map obtained in this way has been found to be useful for shape recognition applications.
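As an illustration of the first storage option, here is a minimal, generic combinatorial-map skeleton (darts paired by an edge involution α and permuted around vertices by σ; faces are orbits of φ = σ∘α). This encoding is an assumption of the illustration — the exact structure used in [7] may differ in its details.

```python
# Minimal 2-D combinatorial map: darts, alpha (edge involution),
# sigma (counter-clockwise order around vertices), faces = orbits of sigma o alpha.

class CombinatorialMap:
    def __init__(self, alpha, sigma):
        self.alpha, self.sigma = alpha, sigma  # dicts: dart -> dart

    def orbits(self, perm):
        seen, res = set(), []
        for d in perm:
            if d in seen:
                continue
            orbit, x = [], d
            while x not in seen:
                seen.add(x); orbit.append(x); x = perm[x]
            res.append(orbit)
        return res

    def faces(self):
        phi = {d: self.sigma[self.alpha[d]] for d in self.alpha}
        return self.orbits(phi)

# two vertices joined by two parallel edges: 2 faces (Euler: 2 - 2 + 2 = 2)
m = CombinatorialMap(alpha={1: 2, 2: 1, 3: 4, 4: 3},
                     sigma={1: 3, 3: 1, 2: 4, 4: 2})
assert len(m.faces()) == 2
```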
Fig. 12.
The definition of destructible points presented in this article is not well suited to non-well-composed images. For such images, destructible points are defined with the classical 4/8-neighborhood system [3]. The thinning and leveling transformations defined with that system result in images with drawbacks such as thick configurations of points (when the 8-neighborhood system is used to characterize crest points) or 8-connected isolated points.
6 Conclusion
In this contribution, we have used the cross-section topology formalism in order to define a thinning operator that is internal to well-composed graylevel images. Moreover, we have proposed a way to construct a topological map from the resulting thin image, and have shown that the obtained map is coherent in the sense that:
– a bijection exists between the cycles of the map and the 4-connected regions of the thin image;
– each Jordan arc of the map is a 4-connected digital curve.
The map can be constructed from a graylevel image very efficiently, using previously published algorithms.
References
1. Arcelli C., Pattern Thinning by Contour Tracing, Computer Graphics and Image Processing, Vol. 17 (1981) 130–144
2. Bertrand G., Everat J.C., Couprie M., Topological Approach to Image Segmentation, SPIE Vision Geometry V Proceedings, Vol. 2826 (1996)
3. Bertrand G., Everat J.C., Couprie M., Image Segmentation Through Operators Based upon Topology, Journal of Electronic Imaging, Vol. 6(4) (1997) 395–405
4. Braquelaire J.-P., Brun L., Image Segmentation with Topological Maps and Interpixel Representation, Journal of Visual Communication and Image Representation, Vol. 9(1) (1998) 62–79
5. Couprie M., Bertrand G., Topological Grayscale Watershed Transformation, SPIE Vision Geometry V Proceedings, Vol. 3168 (1997) 136–146
6. Fiorio C., Approche interpixel en analyse d'images, une topologie et des algorithmes de segmentation, PhD Dissertation, Université de Montpellier, France (1995) 198 pages
7. Fiorio C., A Topologically Consistent Representation for Image Analysis: the Frontiers Topological Graph, DGCI'96, Lecture Notes in Computer Science, no. 1176 (1996) 151–162
8. Gangnet M., Hervé J.-C., Pudet T., Van Tong J.-M., Incremental Computation of Planar Maps, SIGGRAPH Proc., Computer Graphics, Vol. 23(3) (1989) 345–354
9. Khalimsky E., Kopperman R., Meyer R., Computer Graphics and Connected Topologies on Finite Ordered Sets, Topology and its Applications, Vol. 36 (1990) 1–17
10. Kovalevsky V. A., Finite Topology as Applied to Image Analysis, Computer Vision, Graphics and Image Processing, Vol. 46 (1989) 141–161
11. Kong T. Y., Rosenfeld A., Digital Topology: Introduction and Survey, Computer Vision, Graphics, and Image Processing, Vol. 48 (1989) 357–393
12. Latecki L., Multicolor Well-Composed Pictures, Pattern Recognition Letters, Vol. 16 (1995) 425–431
13. Latecki L., Discrete Representation of Spatial Objects in Computer Vision, Computational Imaging and Vision, Vol. 11, Kluwer Academic Publishers (1999) 216 pages
14. Latecki L., Well-Composed Sets, Advances in Imaging and Electron Physics, Vol. 112, Academic Press (2000) 95–163
15. Latecki L., Eckhardt U., Rosenfeld A., Well-Composed Sets, Computer Vision and Image Understanding, Vol. 61 (1995) 70–83
16. Meyer F., Skeletons and Perceptual Graphs, Signal Processing, Vol. 16 (1989) 335–363
17. Pierrot Deseilligny M., Stamon G., Suen C., Veinerization: A New Shape Description for Flexible Skeletonization, IEEE Trans. on PAMI, Vol. 20(5) (1998) 505–521
An Incremental Linear Time Algorithm for Digital Line and Plane Recognition Using a Linear Incremental Feasibility Problem

Lilian Buzer

LLAIC, Université Clermont 1, IUT département Informatique, B.P. 86, 63172 Aubière cedex, France,
[email protected]

Abstract. We present a new linear incremental method for digital hyperplane¹ recognition. The first linear incremental algorithm was given for 8-connected planar lines in [DR95]. Our method recognizes any subset of a line in the plane or of a plane in space. We present Megiddo's linear-time linear programming (LP) algorithm and describe its adaptation to our problem. Then we explain its improvement toward a linear incremental method.
Keywords: digital line recognition, digital plane recognition, feasibility problem, incremental, linear time.
Conference Topic: Models for Discrete Geometry. Type of Presentation: oral presentation.
1 Introduction
We study digital line and plane recognition. The seminal definition of these objects was given by Reveillès in [Rev91]. A set of points P of Zd is a digital hyperplane if it verifies:

∃(ai )1≤i≤d ∈ Zd∗ , ∃γ ∈ Z, ∀x ∈ P : γ ≤ Σ_{i=1..d} ai · xi < γ + ‖a‖∞ , (1)
where ‖.‖∞ denotes the infinite norm, equal to sup{|ai |, 1 ≤ i ≤ d}. We present a new algorithm which recognizes any set of points. Our technique is based on an LP algorithm, more precisely the linear-time Megiddo method, which is described in Section 2. In Section 3, we transform our recognition problem into a feasibility problem solvable by Megiddo's algorithm. The improvement to linear incremental complexity is then presented in Section 4.
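The membership test in this definition is immediate to write down for a given normal and intercept; a small self-check (not part of the paper's algorithm, which finds such parameters rather than verifying them):

```python
# Direct check of the digital-hyperplane definition for a GIVEN normal a and
# intercept gamma: every point x of P must satisfy
#   gamma <= sum_i a_i*x_i < gamma + ||a||_inf.

def satisfies_definition(points, a, gamma):
    w = max(abs(c) for c in a)                     # ||a||_inf
    return all(gamma <= sum(c * x for c, x in zip(a, p)) < gamma + w
               for p in points)

# naive digitization of the line y = x/2: points (x, floor(x/2))
line = [(x, x // 2) for x in range(10)]
assert satisfies_definition(line, a=(1, -2), gamma=0)
assert not satisfies_definition([(0, 0), (5, 0)], a=(1, -2), gamma=0)
```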
¹ By extension of the notion of a Euclidean hyperplane in a d-dimensional vector space. Recall that this object refers to a (d − 1)-dimensional affine subspace.
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 372–381, 2002. © Springer-Verlag Berlin Heidelberg 2002
2 Megiddo Algorithm in R2 and R3

2.1 Preliminaries
Our digital line and plane recognition algorithm requires solving LP problems in two and three dimensions. For this, we will use Megiddo's linear-time algorithm. History. Working in the field of computational geometry, Megiddo gave the first deterministic algorithm for LP whose running time is linear in the number of constraints when the dimension is fixed. The decimation technique was first introduced in [Meg83] for the two- and three-dimensional cases. Later, in [Meg84], he extended his method to develop an O(2^(2^d) · n) time algorithm for LP in Rd. The factor depending on d was improved by Clarkson to 3^(d^2) (see [Cla86]). In recent years, no progress has been made on this front; nevertheless, new developments occurred in randomized and parallel algorithms for linear programming. Numerous simpler and more practical randomized algorithms have been given (see [Cla98,Sei91]). An introduction can be read in [BKOS00] and a comprehensive summary of this field can be found in [AS98]. We will hereafter present Megiddo's technique. Note that randomized methods are unusable in the incremental approach. Summary of the prune-and-search technique of linear programming. We know that a limited number of constraints are tight at an optimal solution of an LP problem. This method tries to eliminate input constraints that do not affect the optimum value. At each step, a constant fraction of the n constraints is eliminated from the current set in O(n) time. Therefore, after a logarithmic number of steps, the size of the problem becomes constant. By using any strongly polynomial LP algorithm, we solve the remaining set of constraints in constant time. Because of this decimation, the global cost remains bounded by a constant times the cost of the first pruning step. Problem statement. We want to find the optimum value of a d-dimensional LP problem of n constraints. We can always transform the gradient of the objective function into (0, . . . , 0, 1) by rotation.
For presentation convenience, we only consider constraints that have a strictly positive coefficient associated to xd (wlog). We want to solve:

Minimize xd so that xd ≥ Σ_{j=1..d−1} aij · xj + bi (i = 1, . . . , n) . (2)

Deletion criterion. The core of the technique consists in coupling constraints. Under the previous assumption, if we take two non-parallel constraints, there exists a vertical hyperplane passing through their intersection that divides the space into two half-spaces. Such a hyperplane is called a separating line (SL) or a separating plane (SP). If the optimal solutions are located in a certain half-space, then we may discard one of the two constraints.
2.2 The Two-Dimensional Case
We give an overview of Megiddo's two-dimensional algorithm described in [Meg83]. A comprehensive description of this method is given in [PS85] and in [Ede87]. We hereafter describe its inner loop in four steps (see Fig. 1). The algorithm steps.
1. Coupling: we create couples of constraints (except for at most one constraint) and their associated SLs. Under the assumption of 2.1, if two inequalities are parallel, one of them is redundant and can be immediately suppressed.
2. Selection of a test line: the vertical SLs are determined by their horizontal coordinates. We compute in linear time the median of these values, and we select as test line the particular SL that corresponds to this median.
3. Testing a line: we want to know on which side of the test line the optima are located. For this, we first compute the optimum of the LP problem restricted to this line. We determine the right and left slopes given by the constraints passing through this point. By convexity of the feasibility polyhedron, if a decreasing slope exists, the optima are located on its side. Otherwise this point is a minimum and the problem is solved.
4. Pruning: as the test line is a median for the SL set, we deduce the optima location relative to one half of the SLs. We then apply the deletion criterion to each couple of constraints associated with these SLs. After this, we iterate as long as the number of constraints is above a fixed constant.
1. Coupling. 2. Selection of a test line. 3. Testing a line. 4. Pruning.
Fig. 1. Steps of Megiddo two-dimensional algorithm.
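The four steps above can be sketched in code. This is a hedged, minimal sketch, not the paper's implementation: medians are found by sorting (O(n log n) rather than linear time), and the problem is assumed bounded and non-degenerate.

```python
# Prune-and-search sketch for: minimize y subject to y >= a*x + b, constraints
# given as (a, b) pairs. Returns the abscissa of the optimum.

def restricted_opt(x0, cons):
    """Step 3, 'testing a line': optimum on x = x0, plus the smallest and
    largest slopes among the constraints tight there."""
    y0 = max(a * x0 + b for a, b in cons)
    tight = [a for a, b in cons if abs(a * x0 + b - y0) <= 1e-9 * (1.0 + abs(y0))]
    return y0, min(tight), max(tight)

def minimize(cons):
    cons = list(set(cons))
    while len(cons) > 3:
        odd = cons.pop() if len(cons) % 2 else None   # at most one unpaired constraint
        kept, pairs = ([odd] if odd else []), []
        for c1, c2 in zip(cons[0::2], cons[1::2]):    # step 1: coupling
            if c1[0] == c2[0]:                        # parallel: lower one is redundant
                kept.append(max(c1, c2, key=lambda c: c[1]))
            else:                                     # abscissa of the separating line
                pairs.append((c1, c2, (c2[1] - c1[1]) / (c1[0] - c2[0])))
        if not pairs:
            cons = kept
            continue
        xm = sorted(p[2] for p in pairs)[len(pairs) // 2]   # step 2: median SL
        _, smin, smax = restricted_opt(xm, kept + [c for p in pairs for c in p[:2]])
        if smin <= 0 <= smax:                         # test line hits the minimum
            return xm
        left = smin > 0                               # optimum lies left of xm
        for c1, c2, x in pairs:                       # step 4: pruning
            lo, hi = (c1, c2) if c1[0] < c2[0] else (c2, c1)
            if left and x >= xm:
                kept.append(lo)                       # hi is dominated left of x
            elif not left and x <= xm:
                kept.append(hi)                       # lo is dominated right of x
            else:
                kept += [c1, c2]
        cons = kept
    best = None                                       # constant-size base case
    for i, (a1, b1) in enumerate(cons):
        for a2, b2 in cons[i + 1:]:
            if a1 != a2:
                x = (b2 - b1) / (a1 - a2)
                y = max(a * x + b for a, b in cons)
                if best is None or y < best[1]:
                    best = (x, y)
    return best[0]

# y >= x, y >= -x + 2, y >= 0.5*x + 0.2: minimum at (1, 1)
assert abs(minimize([(1.0, 0.0), (-1.0, 2.0), (0.5, 0.2)]) - 1.0) < 1e-9
```

The pruning rule is exactly the deletion criterion of 2.1: once the optima are known to lie on one side of a separating line, the pair member dominated on that side can never be the binding constraint there.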
Fig. 2. Testing a plane. Fig. 3. Point location.
Global complexity. One quarter of the inequalities is rejected from the current set at each iteration. Each of the four steps has linear time complexity in the number of constraints. This implies that the runtime T(n) of our algorithm satisfies T(n) = O(n) + T(3n/4). Therefore this algorithm solves a linear program in two variables and n constraints in O(n) time.
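The recurrence telescopes into a geometric series, which a few lines of arithmetic confirm:

```python
# T(n) = c*n + T(3n/4) telescopes:
# T(n) <= c*n * (1 + 3/4 + (3/4)^2 + ...) = 4*c*n, hence O(n).

def total_work(n, c=1.0):
    work = 0.0
    while n >= 1:
        work += c * n
        n *= 0.75
    return work

assert total_work(1_000_000) <= 4.0 * 1_000_000
```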
2.3 The Three-Dimensional Case
We keep the technique described in the two-dimensional method. We test particular planes which enable us to drop a constant fraction of the inequalities. After this, we iterate. We precisely describe steps 2 and 3 in the next sections. Step 2: Selection of testing planes. As the separating planes (SP) are all vertical, we represent them by lines in the Oxy plane. Our problem thus becomes a two-dimensional search problem. Megiddo describes a technique to solve it effectively in [Meg83] and in [Meg84]. Readers can refer to [Ede87]. Suppose we have two planar lines of opposite slopes, and let I denote their intersection. If we know the position of a point relative to the vertical and horizontal lines passing through I, we can determine the location of this point relative to at least one of the two given lines (see Fig. 3). The trick consists in finding in linear time the median of all the present slopes. To shorten explanations, we consider the median direction to be vertical. We can then couple the lines (except for at most one of them) to create couples of opposite slopes. So we use the previous remark and we obtain a set of vertical lines. We compute their median line and test it. We then know the optimum position relative to one half of the vertical lines. We select the horizontal lines coupled with these vertical lines, take their median and test it. We thus determine the optima position relative to 1/8 of the SPs, and we can therefore delete 1/16 of the constraints. All these steps are shown in Fig. 4. Step 3: Testing a plane. We want to know on which side of a plane P the optima lie (see Fig. 2). We first solve our LP problem restricted to this
plane and obtain a minimum called M. If this problem is unbounded, the three-dimensional problem is unbounded too. Now, we have to test both sides of P. Let C denote the set of constraints passing through M. Because of the finite number of inequalities, there is a neighbouring ball of radius r around M where no other constraint can interfere, so the local decrease around M only depends on the constraints in C. Without loss of generality, we can assume that P is Oxz and M is the origin. Let P1 and P2 denote the solutions of the two following systems:

Minimize z under C with y = r ;  Minimize z under C with y = −r . (3)

If one solution is better than M, it indicates the optima location. If both are worse, M is a solution of our problem. Because of the convexity of the feasibility polyhedron, we cannot have two better solutions. These two LP problems are linear in the number of constraints, as we saw in the previous section. Remark 1. By definition of C, we do not have to determine a value for r. We can also choose r equal to 1.
1. Coupling and creation of SPs. 2. Test of the vertical median line. 3. Test of the horizontal median line. 4. Optima location relative to two SPs.
Fig. 4. Solving the search problem.
3 Digital Line and Plane Recognition
We know we have to solve Diophantine constraints of the form γ ≤ N · P(x1 , . . . , xd ) < γ + ‖N‖∞ . Each point P is linked to two inequalities. We
cannot solve integer programming with an LP method. We hereafter describe a rewriting of the three-dimensional problem that allows the use of LP. As the normal vector is nonzero, the following transform is always possible:

γ/‖N‖∞ ≤ Σ_{i=1..d} (Ni /‖N‖∞) · xi < γ/‖N‖∞ + 1 . (4)
We now have a set of linear inequalities with at most d variables. For instance, in the three-dimensional case, we can solve this system using the three-dimensional Megiddo algorithm. We will hereafter only consider the case where ‖N(u, v, w)‖∞ = w. We are not concerned with negative values of w because N is valid iff −N is. We have:

γ ≤ N · P(x, y, z) < γ + ‖N‖∞ with ‖N(u, v, w)‖∞ = w ⇔ γ/w ≤ (u/w) · x + (v/w) · y + z < γ/w + 1, with |u/w| ≤ 1, |v/w| ≤ 1 . (5)

Linear programming consists in optimizing a linear function. Nevertheless, our recognition problem only requires solving a feasibility problem:

Find (a, b, h) so that a · xi + b · yi − h ≥ −zi and a · xi + b · yi − h < −zi + 1 (i = 1, . . . , n), with |a| ≤ 1, |b| ≤ 1 . (6)
Remark 2. This transform is equivalent to a linear separability problem. Let S be a set of points; we want to find a plane passing between S and a virtual set of forbidden points S + ez . If such a plane exists, the convex hull of S and the convex hull of S + ez do not intersect (see [PS85] for details). The vertical distance between these two convex hulls is in ]0, 1[. This implies that S lies in a band of vertical thickness less than 1. By definition, S is then a subset of a digital plane. Mixing strict and large inequalities in the same system may cause problems. Let P denote the polyhedron defined with the inequalities of (6) transformed into large inequalities, and P′ the polyhedron associated to (6). The difference between P and P′ is equal to the superior border of P, denoted Γsup . From this observation, (6) is equivalent to the problem (∗):

Find (a, b, h) ∉ Γsup so that a · xi + b · yi − h ≥ −zi and a · xi + b · yi − h ≤ −zi + 1 (i = 1, . . . , n), with |a| ≤ 1, |b| ≤ 1 . (7)
Remark 3. (∗) requires just a modification of the algorithm. In the two-dimensional problem, when we cut by a vertical line, we only have to keep the highest and lowest feasible points on the cut and check that they are different. Otherwise, no solution lies on this cut. The three-dimensional problem calls the two-dimensional algorithm and uses its improvement.
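For comparison, the feasibility problem (6) can also be checked with an off-the-shelf LP solver; the sketch below handles the strict inequality in the spirit of Remark 3 by maximizing a slack t, declaring the system strictly feasible iff the optimal t is positive. Using scipy here is an assumption of this illustration — the paper's contribution is precisely a linear-time combinatorial solver that avoids such general-purpose tools.

```python
# Feasibility check of (6) via LP over variables (a, b, h, t), maximizing t.
from scipy.optimize import linprog

def is_digital_plane(points):
    """points: iterable of (x, y, z) integer triples."""
    A_ub, b_ub = [], []
    for x, y, z in points:
        A_ub.append([-x, -y, 1.0, 0.0]); b_ub.append(z)        # a*x + b*y - h >= -z
        A_ub.append([x, y, -1.0, 1.0]); b_ub.append(1.0 - z)   # a*x + b*y - h <= -z + 1 - t
    res = linprog([0.0, 0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1, 1), (-1, 1), (None, None), (0, None)])
    return bool(res.success) and res.x[3] > 1e-9

pts = [(x, y, -((x + y) // 2)) for x in range(4) for y in range(4)]
assert is_digital_plane(pts)                          # digitization of z = -(x+y)/2
assert not is_digital_plane([(0, 0, 0), (1, 0, 5)])   # jump of 5: no plane fits
```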
4 The Incremental Problem
We are able to recognize digital lines and planes in linear time using Megiddo's technique. Nevertheless, the direct incremental use of this algorithm is quadratic. We adapt our feasibility problem to create a linear-time incremental method. The core of this method uses the following lemma: Lemma 1. Let H be a separating hyperplane associated to a couple of constraints. If the set of feasible solutions lying on H is included in a hyperplane of H, then one of the two constraints is inactive. Remark 4. An inactive constraint supports no face of the polyhedron and its suppression does not affect the polyhedron.
4.1 The Starting Point
To apply Megiddo's technique in Zd (d = 2 or 3 in our study), we must suppose that the i-th coefficient of each possible normal is equal to its infinite norm. In other words, we want to know if all the possible hyperplanes can be written as a function of the same d − 1 coordinates. The two-dimensional case is quite easy. Suppose the first inserted point M is the origin. If the next point P(x, y) verifies |y| < |x| (resp. >), all the possible normals N(u, v) will verify |u| > |v| (resp. <). Formally,

Bk = { Σ_{i=1..k} ai β0^{−i} | ai ∈ {0, 1} } . (7)

Then
E. Balogh et al.
ri ∈ Bn , i = 1, . . . , m,
and
sj ∈ Bm , j = 1, . . . , n,
(8)
are necessary conditions for the existence of a matrix A with Rβ0 (A) = (r1 , . . . , rm ) 2.1
and
Sβ0 (A) = (s1 , . . . , sn ) .
(9)
Switching in β0 -Representations
The β0 -representation is generally nonunique, because there are binary words with the same length representing the same number. For example, on the base of (4) it is easy to check the following equality between the 3-digit-length β0 representations 100 = 011 .
(10)
As direct consequences of (10), it is easy to see that 100 10x3 00 10x3 0x5 00 10x3 0x5 0x7 00
= = = = ...
011 01x3 11 01x3 1x5 11 01x3 1x5 1x7 11
(11)
where x3 , x5 , x7 , · · · denote the positions where both β0 -representations have the same (but otherwise arbitrary) binary digit. (That is, such kind of transformation 1(0x)k−1 00 → 0(1x)k−1 11 (k ≥ 1) between the subwords of the β0 representations can be performed without changing the represented value and without changing the values in the positions indicated by x’s.) The transformations described by (10) and (11) are called switchings. It is proved that any finite β0 -representation of a number can be get from its any other β0 -representation by switchings. Lemma 1. [3] Let a1 · · · ak and b1 · · · bk be different, k-digit-length β0 representations of the same number. Then b1 · · · bk can be get from a1 · · · ak by a finite number of switchings. Consequence. If a1 · · · ak and b1 · · · bk are different, k-digit-length β0 -representations of the same number, then there are positions i, i + 1, i + 2 (1 ≤ i ≤ k − 2) such that there is a switching between a1 · · · ak and b1 · · · bk on these positions. 2.2
β0 -Expansion
The k-digit-length β0 -expansion is a particular k-digit-length β0 -representation that can be computed by the "greedy algorithm": let r ∈ Bk ; then its β0 -expansion a1 · · · ak is determined as

r0 := r ,  ai := ⌊β0 · ri−1 ⌋ ,  ri := {β0 · ri−1 } ,  i = 1, . . . , k , (12)
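The greedy rule (12) can be sketched directly, again with β0 = (1 + √5)/2, the positive root of x² = x + 1 as implied by the switching 100 = 011:

```python
# Greedy beta0-expansion: a_i = floor(beta0*r), r = frac(beta0*r), repeated k times.
import math

BETA0 = (1 + math.sqrt(5)) / 2

def beta0_expansion(r, k):
    digits = []
    for _ in range(k):
        x = BETA0 * r
        a = int(math.floor(x))
        digits.append(a)
        r = x - a
    return digits

def value(digits):
    return sum(a * BETA0 ** -(i + 1) for i, a in enumerate(digits))

d = beta0_expansion(0.3, 20)
assert set(d) <= {0, 1}                              # binary digits only
assert abs(value(d) - 0.3) < BETA0 ** -20 + 1e-12    # truncation error bound
```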
Reconstruction of Binary Matrices from Absorbed Projections
where ⌊.⌋ and {.} denote the integer and fractional part of the argument, respectively. It is clear that the k-digit-length β0 -expansion of any number r ∈ Bk is uniquely determined (this is not the case with k-digit-length β0 -representations, as we saw in the previous subsection). The finite β0 -expansion is characterized by the following property. Proposition 1. [3] Let a1 , . . . , ak ∈ {0, 1} (k ≥ 1). The word a1 · · · ak is the β0 -expansion of a number r ∈ Bk if and only if it has the form a1 · · · ak = T U V , where

T = 0 · · · 0,  T = 1 · · · 1,  or  T = λ , (13)
(λ denotes the empty symbol),

U = U1 · · · Uu , u ≥ 0, such that Ui = 10 · · · 0, i = 1, . . . , u, (14)
and each Ui contains at least one 0,

V = 1  or  V = λ , (15)
and at least one of T , U , and V is not the empty symbol λ.
3 β0 -Representation and 3SAT Clauses
We are going to describe the β0 -representation by 3SAT expressions, that is, by Boolean expressions in conjunctive normal form with at most three literals in each clause. Let r be a real number having a k-digit-long β0 -representation a1 · · · ak . Let z1 , . . . , zk be Boolean variables and let L be a Boolean function of z1 , . . . , zk , that is, L = L(z1 , . . . , zk ). We say that the Boolean values a1 , . . . , ak satisfy L if L(z1 = a1 , . . . , zk = ak ) is true. Now we are going to give the set of clauses, denoted by K, by which all k-digit-length β0 -representations of any r ∈ Bk can be described, for any k > 1. Let a1 · · · ak be the k-digit-length β0 -expansion of r. Then, by Proposition 1, a1 · · · ak = T U V , where T , U , and V are given by (13), (14), and (15), respectively. Accordingly, K = T T ∪ U U ∪ V V ,
(16)
where T T , U U , and V V denote the subsets of clauses describing the corresponding parts T , U , and V . First, consider the non-constant part of the β0 -representations, U = U1 · · · Uu (u ≥ 0). On the base of Lemma 1 we know that all β0 -representations of any r ∈ Bk can be generated from the β0 -expansion of r by elementary switchings. Accordingly, the clauses U U have to describe the set of binary words generated from Uk by elementary switchings (see Fig. 1). The elementary switchings done in U can be classified into two classes according to the places of switchings:
(i) The switchings done in the positions of one Ui . (ii) The switchings done in the positions of Ui and Ui+1 , i.e. the last 1 of Ui “overflows” into the first position of Ui+1 as a consequence of switchings. There can be such a switching if the length of Ui is even and the length of Ui+1 is not less than 3 (see the β0 -representations in Fig. 1 indicated by arrows). There are two consequences of overflowing switchings: We have different clauses for Ui having even or odd length li and the sets of clauses of Ui , i = 1, · · · u, are not completely independent. The clauses of U U are given with the help of the Boolean variables γj , δj , ϕj , ψj , and χj , j = w1 , w1 + 1, · · · , wu + lu − 1, i.e. for all the variables of U U . For each j exactly one of these variables has value 1 (see the clauses of P OSIT ION S later). For this reason each binary word satisfying the clauses described by these variables can be represented in a 1-to-1 correspondence by a word of the alphabet {γ, δ, ϕ, ψ, χ}, indicating which variable has value 1 on that position. For example, z1 z2 z3 = ψγδ means that γ1 = 0, γ2 = 1, γ3 = 0, δ1 = 0, δ2 = 0, δ3 = 1, ϕ1 = 0, ϕ2 = 0, ϕ3 = 0, ψ1 = 1, ψ2 = 0, ψ3 = 0, χ1 = 0, χ2 = 0, χ3 = 0. The variables γj , δj , ϕj , ψj , and χj describing the clauses of U U will be transformed to 0’s and 1’s as follows: ϕj ⇒ aj = 0, δj ⇒ aj = 1, ψj ⇒ aj = 0, γj ⇒ aj = 1, χj ⇒ aj = 1 .
(17)
Continuing the previous example, ψγδ = 011.

  Ui     | Ui+1       corresponding representations
  100000 | 10000      δϕϕϕϕϕ | δϕϕϕϕ
  011000 | 10000      ψγδϕϕϕ | δϕϕϕϕ
  010110 | 10000      ψγψγδϕ | δϕϕϕϕ
  100000 | 01100      δϕϕϕϕϕ | ψγδϕϕ
  011000 | 01100      ψγδϕϕϕ | ψγδϕϕ
  010110 | 01100      ψγψγδϕ | ψγδϕϕ
  010101 | 11100      ψγψγψγ | χγδϕϕ  ←
  100000 | 01011      δϕϕϕϕϕ | ψγψγδ
  011000 | 01011      ψγδϕϕϕ | ψγψγδ
  010110 | 01011      ψγψγδϕ | ψγψγδ
  010101 | 11011      ψγψγψγ | χγψγδ  ←

Fig. 1. All β0-representations of Ui Ui+1 generated by elementary switchings and the corresponding representations with the variables γ, δ, ϕ, ψ, and χ (when li = 6 and li+1 = 5). The positions of Ui and Ui+1 are separated by vertical lines. The “overflowing” 1’s are indicated by χ in the rows with arrows.
Let B(Ui) denote the set of (binary) sequences of Ui. Clearly,

B(Ui) = {01}^b 1 {0}^c ,   (18)
Reconstruction of Binary Matrices from Absorbed Projections
397
where b and c are nonnegative integers such that 2b + 1 + c = li. Then the binary sequences of Ui Ui+1, B(Ui Ui+1), can be given as

B(Ui Ui+1) = B(Ui)B(Ui+1), if li is odd;
B(Ui Ui+1) = B(Ui)B(Ui+1) ∪ {01}^(li/2) 1 B^(0)(Ui+1), if li is even,   (19)

where B^(0)(Ui+1) denotes the set of subsequences created from those sequences of B(Ui+1) whose first element is 0, by omitting just this first 0. For example, if li = 6 and li+1 = 5 then B(Ui) = {100000, 011000, 010110}, B(Ui+1) = {10000, 01100, 01011}, and B^(0)(Ui+1) = {1100, 1011}. We can describe these sequences with the letters γ, δ, ϕ, ψ, and χ as follows. Corresponding to (18) and (19),

B(Ui) = {ψγ}^b δ {ϕ}^c ,   (20)

B(Ui Ui+1) = B(Ui)B(Ui+1), if li is odd;
B(Ui Ui+1) = B(Ui)B(Ui+1) ∪ {ψγ}^(li/2) χ B^(ψ)(Ui+1), if li is even.   (21)
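The sets in (18) and (19) can be enumerated directly and checked against the example above; a short sketch (the function names are ours, and the exponent bound 2b + 1 + c = li is the length constraint read off from the example sets):

```python
def B(l):
    """B(U_i) from (18): words {01}^b 1 {0}^c of length l (so 2b + 1 + c = l)."""
    return {"01" * b + "1" + "0" * (l - 2 * b - 1) for b in range((l + 1) // 2)}

def B0(l):
    """B^(0) from (19): members of B(l) that start with 0, with that 0 removed."""
    return {w[1:] for w in B(l) if w[0] == "0"}

def B_pair(li, lj):
    """B(U_i U_{i+1}) from (19); the overflow term appears only for even li."""
    words = {u + v for u in B(li) for v in B(lj)}
    if li % 2 == 0:
        words |= {"01" * (li // 2) + "1" + w for w in B0(lj)}
    return words

print(sorted(B(6)))          # ['010110', '011000', '100000']
print(len(B_pair(6, 5)))     # 11, as in Fig. 1
```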
According to (17), ψ and ϕ denote 0, while γ, δ, and χ denote 1. B and B^(ψ) in (20) and (21) are defined for these letter sequences analogously to (18) and (19). Examples of sequences generated in this way and the corresponding β0-representations are shown in Fig. 1. The following sets of clauses will define a subword Ui.
DELTA = ⋀_{j=wi}^{wi+li−2} (δj ⇒ ϕj+1) ∧ ⋀_{j=wi+1}^{wi+li−1} (δj ⇒ γj−1) ∧ ⋀_{j=1}^{⌊li/2⌋} ¬δwi+2j−1 ∧ (ϕwi+1 ⇒ δwi) .
The position of δ is crucial: knowing this position, all the elements succeeding δ can be computed as described in the first part of this rule, and all elements preceding δ can be computed as described in the second part. δ cannot be on an even position in the subword Ui. The last part of DELTA expresses that if there is a ϕ in the second position then there is a δ in the first one.

PHI = ⋀_{j=wi+1}^{wi+li−2} (ϕj ⇒ ϕj+1) ∧ ¬ϕwi .
In other words, ϕ can be followed only by ϕ, and ϕ cannot stand on the first position of the subword.

GAMMAPSI = ⋀_{j=wi+2}^{wi+li−1} (γj ⇒ ψj−1) ∧ ⋀_{j=wi}^{wi+li−2} (ψj ⇒ γj+1) .
The only predecessor of γ is ψ, and the only successor of ψ is γ.

CHI = ⋀_{j=wi+1}^{wi+li−1} ¬χj ∧ (χwi ⇒ γwi+1) .
χ can stand only on the first position.

GAMMA = ⋀_{j=wi}^{wi+li−1} (γj ⇒ ¬ϕj+1) .

γ cannot be followed by ϕ.
POSITIONS = ⋀_{j=1}^{⌊li/2⌋} (ϕwi+2j−1 ∨ γwi+2j−1) ∧ (δwi ∨ ψwi ∨ χwi) ∧ ⋀_{j=1}^{⌊(li−1)/2⌋} (δwi+2j ∨ ψwi+2j ∨ ϕwi+2j) .
An even position in a subword can hold ϕ or γ, the first position of the subword can hold δ, ψ, or χ, and the other odd positions can hold δ, ψ, or ϕ. These are the only clauses containing 3 variables.

EVEN = (γwi+li−1 ⇒ χwi+li) .

Actually wi + li = wi+1, the first position of the subword Ui+1. This means that a subword with even length can influence the next subword; in this case the first element of the next subword is a χ followed by γ.

ODD = ¬χwi+li ∧ ¬ψwi+li−1 .

A subword with odd length cannot influence the next subword: the first element of the next subword cannot be χ and the last element of the subword cannot be ψ.

DISJ = ⋀_{j=wi}^{wi+li−1} (Aj ⇒ ¬Bj), for all symbols A, B ∈ {ϕ, ψ, γ, δ, χ} with A ≠ B .

These clauses ensure that at most one of the variables ϕj, ψj, γj, δj, and χj has the value 1 at each position j; together with POSITIONS, exactly one of them has the value 1.
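The combination of DISJ (pairwise exclusions) and POSITIONS (at least one variable per position) is the standard pairwise exactly-one CNF encoding; a sketch with the five variables of one position numbered 1..5 (the encoding details are ours):

```python
from itertools import combinations

def at_most_one(lits):
    """DISJ-style pairwise encoding: (A => not B) as binary clauses (-A, -B)."""
    return [(-a, -b) for a, b in combinations(lits, 2)]

def exactly_one(lits):
    """POSITIONS supplies the at-least-one clause; DISJ the at-most-one part."""
    return [tuple(lits)] + at_most_one(lits)

# One position carries one of phi, psi, gamma, delta, chi (variables 1..5):
clauses = exactly_one([1, 2, 3, 4, 5])
print(len(clauses))  # 11 clauses: one at-least-one plus C(5,2) = 10 binary ones
```

A brute-force check over all 32 assignments confirms that exactly the five one-hot assignments satisfy these clauses.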
The clauses for a subword Ui. Knowing the length of the subword Ui we can construct a corresponding 3SAT expression:

Ki = DELTA ∧ PHI ∧ GAMMAPSI ∧ CHI ∧ GAMMA ∧ POSITIONS ∧ ODD, if li is odd;
Ki = DELTA ∧ PHI ∧ GAMMAPSI ∧ CHI ∧ GAMMA ∧ POSITIONS ∧ EVEN, if li is even.   (22)

The clauses describing UU. Let Γ = (γ1, · · · , γk), ∆ = (δ1, · · · , δk), Φ = (ϕ1, · · · , ϕk), Ψ = (ψ1, · · · , ψk), and X = (χ1, · · · , χk) be the vectors of Boolean variables. Then UU = UU(r; Γ, ∆, Φ, Ψ, X) is defined as follows:

UU = ⋀_{i=1}^{u} Ki .
The clauses describing TT and VV. These clauses use the same variables as UU. Since the subwords corresponding to T and V have constant values in each β0-representation of the same r ∈ Bk, the clauses describing these parts are

TT = γ1 = · · · = γlt = 0, δ1 = · · · = δlt = 0, ϕ1 = · · · = ϕlt = 0, ψ1 = · · · = ψlt = 0, χ1 = · · · = χlt = 1, if T = 0 · · · 0;
TT = γ1 = · · · = γlt = 0, δ1 = · · · = δlt = 1, ϕ1 = · · · = ϕlt = 0, ψ1 = · · · = ψlt = 0, χ1 = · · · = χlt = 0, if T = 1 · · · 1;
TT = φ, if T = λ ,   (23)

and

VV = γk = 0, δk = 1, ϕk = 0, ψk = 0, χk = 0, if V = 1;
VV = φ, if V = λ .   (24)
The clauses describing K. As we saw, TT, UU, VV, and thus K, are defined with the help of r, Γ, ∆, Φ, Ψ, and X, i.e., K = K(r; Γ, ∆, Φ, Ψ, X). K is given explicitly by (16).
Theorem 1. Let r ∈ Bk and let a1 · · · ak be a binary word. Then a1 · · · ak is a β0-representation of r if and only if there are vectors Γ, ∆, Φ, Ψ, and X of Boolean values such that a1 · · · ak is obtained from these vectors by (17) and K(r; Γ, ∆, Φ, Ψ, X) is true.

Proof. Let a1 · · · ak be a k-digit-length β0-representation of r. The corresponding word over γ, δ, ϕ, ψ, and χ is uniquely determined on the basis of the forms (20) and (21). It is easy to check that all clauses of K (i.e. TT, VV, DELTA, · · · , DISJ) are satisfied by any word given by (20) and (21). In order to prove the other direction, consider an arbitrary word W satisfying the clauses of K. W has the uniquely determined structure TUV, where T is as in (13), V is as in (15), and U is a word over γ, ψ, ϕ, δ, and χ. We have to show that U is a sequence of subsequences Ui, each of them satisfying (20) and (21). Knowing r we can determine the lengths li and positions of all Ui, i = 1, · · · , u. Now we identify the subsequence Ui with length li, starting from the end of U.
1. li is odd. According to POSITIONS, the li-th position can hold δ, ψ, or ϕ.
a. The li-th position holds a δ. We have to prove that before this δ there are only pairs ψγ. From DELTA it follows that position li − 1 holds a γ. Let this γ be at a position 2j before δ. From GAMMAPSI it follows that position 2j − 1 holds a ψ. From POSITIONS it follows that position 2j − 2 can hold ϕ or γ. If position 2j − 2 held a ϕ, then according to PHI position 2j − 1 should hold ϕ, which is a contradiction (by DISJ). This means that position 2j − 2 holds a γ; let j := j − 1. This step is repeated while j > 1. If j = 1, i.e. the second position holds γ, then from POSITIONS the first position can hold δ, ψ, or χ. If the first position held δ, then from DELTA the second position should hold ϕ, which is a contradiction.
In accordance with equations (20) and (21), the first position can hold ψ or χ; in the latter case there is an overflow.
b. The li-th position holds a ψ. This contradicts ODD.
c. The li-th position holds a ϕ. From POSITIONS it follows that the previous position can hold ϕ or γ. If it were γ, then from GAMMA the li-th position could not hold ϕ, which is a contradiction. This means that position li − 1 holds ϕ. If li − 2 = 1 then this position holds δ (from DELTA). If li − 2 > 1 then from POSITIONS it follows that position li − 2 can hold δ, ψ, or ϕ. If position li − 2 holds δ then, similarly to Case a, we can prove that Ui satisfies (20) and (21). If position li − 2 holds ϕ then, similarly to Case c, we can prove that Ui satisfies (20) and (21). If position li − 2 held ψ then from GAMMAPSI position li − 1 would hold γ, which contradicts DISJ.
2. li is even. According to POSITIONS, position li can hold ϕ or γ. If it is ϕ then, using a deduction similar to Case c, we can prove that Ui satisfies (20) and (21). If position li holds γ, then by EVEN the
next position holds χ, and by CHI position li + 2 holds γ, which means that Ui satisfies (20) and (21). □
4 The Reconstruction Algorithm
In order to solve the reconstruction problem DA2D(β0) we express the β0-representations of the absorbed row and column sums with 3SAT clauses. Boolean variables Γ^(h) = (γ^(h)_ij)_{m×n}, ∆^(h) = (δ^(h)_ij)_{m×n}, Φ^(h) = (ϕ^(h)_ij)_{m×n}, Ψ^(h) = (ψ^(h)_ij)_{m×n}, and X^(h) = (χ^(h)_ij)_{m×n} describe the relations of the row sums (h stands for horizontal), and Γ^(v) = (γ^(v)_ij)_{m×n}, ∆^(v) = (δ^(v)_ij)_{m×n}, Φ^(v) = (ϕ^(v)_ij)_{m×n}, Ψ^(v) = (ψ^(v)_ij)_{m×n}, and X^(v) = (χ^(v)_ij)_{m×n} describe the relations of the column sums (v stands for vertical). Let, furthermore, Γ^(h)_i· = (γ^(h)_i1, · · · , γ^(h)_in) be the i-th row of Γ^(h), i = 1, · · · , m, and Γ^(v)_·j = (γ^(v)_1j, · · · , γ^(v)_mj)^T be the j-th column of Γ^(v), j = 1, · · · , n. Let ∆^(h)_i·, Φ^(h)_i·, Ψ^(h)_i·, X^(h)_i·, ∆^(v)_·j, Φ^(v)_·j, Ψ^(v)_·j, and X^(v)_·j be defined similarly.

The clauses describing the rows and columns. Now we can describe a whole row of the discrete set to be reconstructed by the following subset of clauses:

K^(h)(ri; Γ^(h)_i·, ∆^(h)_i·, Φ^(h)_i·, Ψ^(h)_i·, X^(h)_i·) = TT ∧ UU ∧ VV, i = 1, · · · , m ,

where TT, UU, and VV are defined in the previous section. All clauses describing the absorbed row sums are given by

L^(h) = L^(h)(R, Γ^(h), ∆^(h), Φ^(h), Ψ^(h), X^(h)) = ⋀_{i=1}^{m} K^(h)(ri; Γ^(h)_i·, ∆^(h)_i·, Φ^(h)_i·, Ψ^(h)_i·, X^(h)_i·) .   (25)
Similarly, the columns can be described by

K^(v)(sj; Γ^(v)_·j, ∆^(v)_·j, Φ^(v)_·j, Ψ^(v)_·j, X^(v)_·j) = TT ∧ UU ∧ VV, j = 1, · · · , n ,

and

L^(v) = L^(v)(S, Γ^(v), ∆^(v), Φ^(v), Ψ^(v), X^(v)) = ⋀_{j=1}^{n} K^(v)(sj; Γ^(v)_·j, ∆^(v)_·j, Φ^(v)_·j, Ψ^(v)_·j, X^(v)_·j) .   (26)
The clauses describing the binary matrix. The last step is to define the connections between the Boolean matrices:

CONN = ⋀_{i,j} [ (ϕ^(h)_ij ⇒ ¬γ^(v)_ij) ∧ (ϕ^(h)_ij ⇒ ¬δ^(v)_ij) ∧ (ϕ^(h)_ij ⇒ ¬χ^(v)_ij) ∧ (ψ^(h)_ij ⇒ ¬γ^(v)_ij) ∧ (ψ^(h)_ij ⇒ ¬δ^(v)_ij) ∧ (ψ^(h)_ij ⇒ ¬χ^(v)_ij) ∧ (γ^(h)_ij ⇒ ¬ϕ^(v)_ij) ∧ (γ^(h)_ij ⇒ ¬ψ^(v)_ij) ∧ (δ^(h)_ij ⇒ ¬ϕ^(v)_ij) ∧ (δ^(h)_ij ⇒ ¬ψ^(v)_ij) ∧ (χ^(h)_ij ⇒ ¬ϕ^(v)_ij) ∧ (χ^(h)_ij ⇒ ¬ψ^(v)_ij) ] ,

that is, at each position (i, j) a row variable that is transformed to 0 by (17) excludes every column variable that is transformed to 1, and vice versa, so that the row and column representations define the same binary value. The 3SAT expression describing the whole discrete set is

L^(h) ∧ L^(v) ∧ CONN .   (27)
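The coupling in CONN pairs every row variable with the column variables of the opposite binary class under (17); the clauses can be emitted mechanically. A sketch (the naming scheme is ours):

```python
# Classes under the transformation (17): phi and psi map to digit 0;
# gamma, delta and chi map to digit 1.
ZERO, ONE = ("phi", "psi"), ("gamma", "delta", "chi")

def conn_cell():
    """Implications coupling the horizontal and vertical variables of one
    cell (i, j): a 0-class variable on one side excludes every 1-class
    variable on the other side, forcing a_ij^(h) = a_ij^(v)."""
    imps = [(z + "_h", o + "_v") for z in ZERO for o in ONE]   # e.g. phi^(h) => not gamma^(v)
    imps += [(o + "_h", z + "_v") for o in ONE for z in ZERO]  # e.g. gamma^(h) => not phi^(v)
    return imps

print(len(conn_cell()))  # 12 implications per cell, as in CONN
```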
That is, in order to solve the reconstruction problem DA2D(β0) we have to perform the following steps:

1. Determine the β0-expansions of ri, i = 1, · · · , m, and sj, j = 1, · · · , n.
2. On the basis of the β0-expansions, construct the 3SAT expression (27).
3. Solve the 3SAT problem using an efficient SAT solver (e.g. CSAT, see [7]).
4. If there is a solution of the 3SAT problem, derive the binary matrix solution on the basis of (17).
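Step 1 can be carried out greedily: β0 is the root of β⁻¹ = β⁻² + β⁻³, i.e. the golden ratio, and the β0-expansion takes the largest available power at each digit. A sketch (assuming the input sum is exactly representable with k digits; the tolerance handles floating-point error):

```python
BETA0 = (1 + 5 ** 0.5) / 2   # beta0 satisfies beta0**-1 == beta0**-2 + beta0**-3

def beta0_expansion(r, k, eps=1e-9):
    """Greedy k-digit beta0-expansion of an absorbed sum r = sum_j a_j * beta0**-j."""
    digits = []
    for j in range(1, k + 1):
        p = BETA0 ** (-j)
        if p <= r + eps:     # take beta0**-j whenever it still fits
            digits.append("1")
            r -= p
        else:
            digits.append("0")
    return "".join(digits)

# beta0**-2 + beta0**-3 equals beta0**-1, so its 3-digit expansion is 100:
print(beta0_expansion(BETA0 ** -2 + BETA0 ** -3, 3))  # '100'
```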
5 Discussion
A method is given to solve the reconstruction problem DA2D(β0), i.e., to reconstruct a binary matrix from its absorbed row and column sums, when the absorption can be represented by the special value β0. It is shown that the problem DA2D(β0) can be transformed to a 3SAT expression such that any solution of the 3SAT expression also yields a solution of the reconstruction problem (see Section 4). A natural question is how this method can be extended to other values of β. We believe that this idea is specific and cannot be generalised directly to all possible values of β. However, it is relatively easy to show that very similar results hold for β’s having the property β−1 = β−2 + β−3 + · · · + β−l, where l ≥ 3. Then the switchings can be described by relations similar to (11), β-representations can be given as in Section 3, and so the reconstruction problem can again be reduced to a 3SAT problem in such cases.
Acknowledgements. This work was supported by the grant OTKA T 032241.
References
1. Brualdi, R.A.: Matrices of zeros and ones with fixed row and column sums. Linear Algebra and Its Applications 33 (1980) 159–231.
2. Herman, G.T., Kuba, A. (Eds.): Discrete Tomography: Foundations, Algorithms, and Applications. Birkhäuser, Boston (1999).
3. Kuba, A., Nivat, M.: A sufficient condition for non-uniqueness in binary tomography with absorption. Technical Report, University of Szeged (2001).
4. Kuba, A., Nivat, M.: Reconstruction of discrete sets with absorption. Accepted for publication in Linear Algebra and Its Applications (2001).
5. Chrobak, M., Dürr, C.: Reconstructing hv-convex polyominoes from orthogonal projections. Information Processing Letters 69 (1999) 283–289.
6. Boufkhad, Y., Dubois, O., Nivat, M.: Reconstructing (h,v)-convex two-dimensional patterns of objects from approximate horizontal and vertical projections. To appear in Theoretical Computer Science.
7. Dubois, O., André, P., Boufkhad, Y., Carlier, J.: SAT versus UNSAT. In: Johnson, D., Trick, M.A. (eds.): Second DIMACS Implementation Challenge. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, AMS (1993).
A Simplified Recognition Algorithm of Digital Planes Pieces

Mohammed Mostefa Mesmoudi

Department of Computer and Information Science (DISI), University of Genoa, Via Dodecaneso 35, 16146 Genoa, Italy
[email protected]

Abstract. Debled proposed an efficient algorithm for the recognition of rectangular pieces of digital planes. However, in some cases, called strongly exterior cases, it uses validity criteria which are only sufficient but not necessary. In this paper we give necessary and sufficient conditions (including the strongly exterior cases) to recognize pieces of digital planes, and we build a simplified form of Debled’s algorithm. Furthermore, our approach is independent of the rectangular form of the pieces considered by Debled.
1 Introduction
Let ν be a 18-connected bounded convex subset of Z3 which projects injectively onto a subset Π of the plane Oxy. Let ω be a positive integer. If there exist integers a, b, c, µ in Z such that ν is the set of solutions of the double Diophantine inequality

µ ≤ ax + by + cz < µ + ω, (x, y) ∈ Π ,   (1)

then we say that ν is a piece of a digital plane P(a, b, c, µ, ω) with characteristics a, b, c and lower bound µ. The number ω is called the arithmetic thickness of the plane. When ω = sup(|a|, |b|, |c|), the plane P(a, b, c, µ, ω) is called a naïve plane and denoted by P(a, b, c, µ) [1]. Let us consider a family {νt}, t ∈ [µ, µ+ω−1], of real parallel planes νt defined by ax + by + cz = t. The digital plane ν is the intersection of the above family with Z3, which can be geometrically represented by a set of voxels as shown in Figure 1. Let us suppose that the greatest common divisor of a, b, c is 1. We say that ν is recognized if it contains sufficiently many points to compute all the characteristics a, b, c, µ, ω. To this aim, it is sufficient that ν possesses four affinely independent points that generate the bounding real planes νµ and νµ+ω−1. The problem of studying pieces of digital naïve planes can be reduced to the case where 0 ≤ a ≤ b ≤ c; indeed, the general case can be obtained from the former by rotations and symmetries [3]. There is a wide literature on the problem of recognizing digital plane pieces. Kim and Rosenfeld showed in [6] that a digital surface is a piece of a naïve digital plane if and only
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 404–416, 2002. © Springer-Verlag Berlin Heidelberg 2002
if there exists a face of the convex hull of the surface such that the distance between the points of the surface and the plane supporting the face is less than 1. They proposed an algorithm based on this property with complexity O(p4), where p is the number of points of the surface. In 1991, Kim and Stojmenović [7] improved this algorithm to obtain another algorithm of complexity O(p2 log p). In the same year, Stojmenović and Tošić [11] presented two other algorithms, the first with complexity O(p log p), based on the construction of two convex hulls, and the second with complexity O(p), based on linear programming in 3D. These algorithms have a low complexity but they are not incremental, which is a drawback in applications. Furthermore, the construction of the convex hull in 3D is a delicate and expensive operation. In [13,12], Veelaert, relying on a generalization of a regularity property of digital straight lines introduced by Hung [5], developed a simple algorithm of complexity O(p2) which is satisfactory for small sets (p ≤ 100).
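The definition above is easy to work with computationally: for 0 ≤ a ≤ b ≤ c the naïve plane has exactly one voxel above each (x, y), and membership reduces to the double inequality (1). A sketch (function names are ours):

```python
import math

def naive_plane_voxel(a, b, c, mu, x, y):
    """The unique voxel of the naive plane P(a, b, c, mu) above (x, y),
    assuming 0 <= a <= b <= c and gcd(a, b, c) = 1: z is the only integer
    with (mu - a*x - b*y)/c <= z < (mu - a*x - b*y)/c + 1."""
    z = math.ceil((mu - a * x - b * y) / c)
    return x, y, z

def in_plane(a, b, c, mu, omega, x, y, z):
    """Membership test taken directly from the double Diophantine inequality (1)."""
    return mu <= a * x + b * y + c * z < mu + omega

# Every voxel produced for P(9, 13, 21, 0) satisfies (1) with omega = c = 21:
pts = [naive_plane_voxel(9, 13, 21, 0, x, y) for x in range(5) for y in range(5)]
print(all(in_plane(9, 13, 21, 0, 21, *p) for p in pts))  # True
```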
Fig. 1. The shape of a possible digital plane piece
The arithmetic definition of digital planes given by relation (1) was introduced for the first time by Reveillès in 1991 [10]. In 1995, Debled proposed, in her PhD thesis [3], an efficient algorithm for the recognition of rectangular pieces of digital planes. This algorithm has more advantages than the other algorithms quoted above: it uses simple and intuitive geometric properties of digital planes, it has a quadratic complexity, and it is incremental. It consists in sweeping a piece by sections that are parallel to a coordinate plane. At the beginning, one fixes, for instance, y = 0 and lets x vary. In this case, Debled’s algorithm tries to recognize digital straight lines until x reaches its maximum value in Π. Then, y is incremented by 1 and x varies again, and Debled’s algorithm tries to recognize pieces of digital planes. At each step the algorithm tries to compute the characteristics of the new plane. Three cases may occur:
– If the added point M(x, y) satisfies relation (1), then the same characteristics of the last recognized piece are kept.
– If the added point M satisfies one of the relations

ax + by + cz = µ − 1  or  ax + by + cz = µ + c ,
where (a, b, c) is the normal vector of the last recognized piece, the point M is said to be 1-exterior to the last piece. Debled conjectured that the new piece is recognized in a new plane; by means of other conjectures, it is possible to construct the new characteristics. All these conjectures have been checked on numerous examples, and mathematical proofs of them have been presented in [2,9,8]. The complexity of the part of Debled’s algorithm corresponding to this step is at most linear in the number of points of the piece, see [3] p. 180.
– The third possibility is that M satisfies one of the inequalities

ax + by + cz < µ − 1  or  ax + by + cz > µ + c .
In this case we say that M is strongly k-exterior, where k is equal to µ − (ax + by + cz) or (ax + by + cz) − µ − c + 1, respectively. The new piece belongs to a digital plane if M is not too distant from the piece. Debled gave three validity criteria to be checked for the new piece in order to conclude its flatness. These criteria are sufficient but not necessary, see [3] p. 172. In this paper we present necessary and sufficient conditions to recognize pieces of digital planes. We obtain a new algorithm that simplifies and generalizes Debled’s algorithm. Its complexity is at most quadratic, since we exploit, for each added point, only the part of Debled’s algorithm used for the 1-exterior case. The next section introduces some fundamental notions of digital planes that will be used throughout the paper. Section 3 surveys our fundamental results on the 1-exterior case. Section 4 addresses the problem of recognition in the strongly exterior case, and two recognition theorems are presented. Section 5 describes our algorithm, which simplifies Debled’s one, whereas Section 6 illustrates it by an example.
2 Background
Let us begin by giving the definitions of some notions related to digital planes that we will use in this paper. Let M(x, y, z) be a point of a digital plane P(a, b, c, µ); the quantity r(M) = ax + by + cz is called the remainder of M with respect to P. The Diophantine inequality (1) allows us to define P(a, b, c, µ) as the set of points (x, y, z) ∈ Z3 such that (µ − ax − by)/c ≤ z < (µ − ax − by)/c + 1. Since the plane projects injectively onto Oxy, we can represent P(a, b, c, µ) in Oxy by level lines corresponding to the values of z. We can also represent the plane P(a, b, c, µ) by the remainders of its points. In Figure 2, we combine the remainder and level-line representations to represent P(9, 13, 21, 0) in the plane Oxy. The real plane defined by r(M) = k is called the plane of index k. The plane of index µ is called the lower leaning plane of P and the plane of index µ + c − 1 is called the upper leaning plane of P; we denote them by (Pi) and (Ps) respectively. A piece of plane is a convex subset of voxels of a naïve digital plane. A piece of a digital plane is said to be recognized if it possesses four leaning points that satisfy one of the following two cases:
– Three upper (resp. lower) leaning points and one lower (resp. upper) leaning point. This configuration is referred to as CAS3.1, see Figure 2(a).
– Two upper leaning points and two lower leaning points. This configuration is referred to as CAS2.2, see Figure 2(b).
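The remainder representation of Fig. 2 can be reproduced numerically: each voxel of the naïve plane has remainder in [µ, µ + c), and for P(9, 13, 21, 0) the first row of remainders is 0, 9, 18, 6, 15, 3, . . . A sketch (the function name is ours):

```python
def remainder_grid(a, b, c, mu, nx, ny):
    """Remainders r(M) = a*x + b*y + c*z of the naive-plane voxels over an
    nx-by-ny patch; by construction each remainder lies in [mu, mu + c)."""
    grid = []
    for y in range(ny):
        row = []
        for x in range(nx):
            z = -((a * x + b * y - mu) // c)   # z = ceil((mu - a*x - b*y)/c)
            row.append(a * x + b * y + c * z)
        grid.append(row)
    return grid

g = remainder_grid(9, 13, 21, 0, 6, 4)
print(g[0])  # first row of remainders for P(9, 13, 21, 0): [0, 9, 18, 6, 15, 3]
```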
Fig. 2. (a) CAS3.1. The remainder and level lines representation of a piece of P(9, 13, 21, 0) that contains three lower leaning points and one upper leaning point. (b) CAS2.2. The remainder and level lines representation of a piece of P(3, 5, 22, 0) that contains two lower leaning points and two upper leaning points.
When the added point M is k-exterior, several geometric constructions are built and key voxels are extracted. These constructions depend on the leaning polygons and their positions with respect to M. The leaning polygons are defined as follows:
– If r(M) < µ, we denote by PS the convex hull of the upper leaning points of the piece and call it the upper leaning polygon. We define, in this case, the convex polygon of pivots CVP to be the upper leaning polygon PS. In the same way, PI denotes the lower leaning polygon, and we define the convex polygon of antipodes CVA to be the lower leaning polygon PI. Thus, we have CVP = PS and CVA = PI.
– If r(M) ≥ µ + c, the upper leaning polygon PS is called the convex polygon of antipodes CVA and the lower leaning polygon PI is called the convex polygon of pivots CVP. We have CVA = PS and CVP = PI.

In Figure 3(a) we give an example of the convex polygons of pivots and antipodes for a recognized piece of the plane P(5, 6, 7, −1). The polygonal line of pivot vectors L is constructed depending on the added k-exterior point and its associated convex polygon of pivots. All constructions are realized on the projections in the plane Oxy. Four cases are possible:
1. The CVP is reduced to one point. In this case, the polygonal line of pivot vectors L is reduced to this point.
2. The CVP is formed by points that are not all collinear. In this case, the polygonal line L of pivot vectors is composed of those points of the CVP whose projections in the plane Oxy are located on the part of the boundary of the convex hull of the CVP projection that disappears when the point M is added, see Figure 3(b).
3. The CVP is composed of points that are collinear in the plane Oxy:
Fig. 3. The point M is 1-exterior with remainder −2. In (a) the parallelogram in bold is the convex polygon of pivots CVP and the dashed triangle is the convex polygon of antipodes CVA. In (b) U ∪ V forms the polygonal line of pivot vectors L; point N is an antipode for both pivot vectors U and V, and it is separating for the vector U; point N′ is an antipode for the vector U only.
• If the projection of M is collinear with the projected points of the CVP, then the polygonal line L of pivot vectors is reduced to the point of the CVP nearest to M.
• If the projection of M is not collinear with the projected points of the CVP, then L consists of all points of the CVP.

An antipode A of a pivot vector V associated to a k-exterior point M is a vertex of the CVA at maximal distance, among all points of the CVA, from the line directed by V and containing M. Furthermore, if the end points of V are separated by the line MA, then A is called a separating antipode.
3 The 1-Exterior Case
Let S be a recognized piece of the plane P(a, b, c, µ) and M a point added to S. If the remainder of M with respect to P(a, b, c, µ) is between µ and µ + c − 1, then S′ = S ∪ {M} is still recognized in the same plane. If the point M is 1-exterior, Debled’s conjectures [3,4] assure the existence of a new plane in which S′ is recognized. Furthermore, following the shape of the polygonal line of pivot vectors, these conjectures give methods to construct the new characteristics A ≤ B ≤ C such that C is the smallest over all possible planes that contain S′. These conjectures have been mathematically proven in recent works [2,8,9]. In fact, we proved in [8] the following key theorem, which assures the existence of planes that contain S′; we also gave a method to construct the smallest characteristics.

Theorem ([8]). Let S be a recognized piece of the plane P(a, b, c, µ) and M(x0, y0, z0) ∈ Z3 be a point such that S′ = S ∪ {M} is convex. If M is 1-exterior to S, then there exists (A, B, C, µ′) ∈ Z4 with A ∧ B ∧ C = 1 such that S′ = S ∪ {M} is a piece of the plane P(A, B, C, µ′).

The proof of this theorem provides two necessary and sufficient conditions on the
choice of the points that allow the construction of the new basis. These conditions are

β2(x2 − x0) − α2(y2 − y0) = c ,   (2)

where (x2, y2), (x1, y1) and (α2, β2) are respectively the projections on Oxy of a pivot point M2, an antipode M1, and a vector V2 formed by M2 and another pivot point, and

dm − (µ + c − 1) + h2/H ≤ εm hm/H ≤ dm − (µ − 1)  ∀m ∈ S ,   (3)

where εm is the sign of β2(x − x0) − α2(y − y0), and the numbers H, h2 and hm are the Euclidean distances between the points M1, M2 and m, respectively, and the real line directed by V2 and containing M.
4 The Strongly Exterior Case
Let us now suppose that M is strongly exterior to the piece S ⊂ P(a, b, c, µ). This implies that there exists an integer P ≥ 2 such that M satisfies one of the following equalities: ax0 + by0 + cz0 = µ − P (i.e., M is located under the piece S) or ax0 + by0 + cz0 = µ + c − 1 + P (i.e., M is located over the piece S). An axial rotation of the plane by π reverses the positions of its points: points located under the plane get over it and vice versa, while the characteristics of the plane remain unchanged. This fact allows us to reduce our study to one case, say M being located under the plane; all relations we obtain remain valid in the case where M is located over the plane. The following lemma is a direct consequence of convexity.

Lemma 1. Let S be a piece of a discrete plane P(a, b, c, µ) and M(x0, y0, z0) a point such that S′ = S ∪ {M} is convex. If M is strongly P-exterior then P ≤ c.

Proof. Suppose that the remainder of M is r(M) = µ − P, and let us consider the point M′(x0, y0, z0 + 1). The remainder of M′ is r(M′) = µ − P + c. If r(M′) < µ, then M′ is disconnected from S and S′ is not convex. Hence, r(M′) ≥ µ, which gives P ≤ c.
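The arithmetic behind Lemma 1 is easy to check numerically: moving a voxel one step up in z raises its remainder by exactly c, so a point of remainder µ − P has a vertical neighbour of remainder µ − P + c, and convexity forces µ − P + c ≥ µ, i.e. P ≤ c. A sketch on the plane P(5, 6, 7, −1) used in Fig. 3 (the sample point is ours):

```python
def remainder(a, b, c, mu, p):
    """Remainder r(M) = a*x + b*y + c*z of a point with respect to P(a, b, c, mu)."""
    x, y, z = p
    return a * x + b * y + c * z

a, b, c, mu = 5, 6, 7, -1
M = (2, -1, 0)  # an arbitrary sample point
above = (M[0], M[1], M[2] + 1)
print(remainder(a, b, c, mu, above) - remainder(a, b, c, mu, M))  # 7, i.e. exactly c
```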
4.1 Recognition Theorems
The piece S is bounded by two real leaning planes, of indices µ and µ + c − 1. When the added point M is exterior to S, located for instance under S, then, in order to geometrically contain S′ = S ∪ {M} in another digital plane, one should lean the plane of index µ in a suitable direction until M is reached. This latter plane becomes the lower leaning plane of the “eventual” digital plane we are looking for. Then let us consider another plane located over S, tangent to S and parallel to the new lower leaning plane. The distance between these two planes should be less than or equal to 1; otherwise we do not get a digital plane.
The operation of leaning the planes can be done by revolving all the planes of indices between µ and µ + c − 1 on some parallel straight lines. If the line on which the plane of index µ + c − 1 pivots is too distant from M, then after the rotation we may lose some upper leaning points (i.e., points of CVP) located between this line and M, see Figure 4(a). Thus, this line should be chosen as close as possible to the point M. In the same way, the rotation of the lower leaning plane should not exclude the lower leaning points; therefore, the line on which we pivot this plane should be chosen as far as possible from M, so that all points of the CVA are located between this line and M, see Figure 4(b). This explains the importance of the polygonal lines of pivots and antipodes defined above.
Fig. 4. Profile view: axes of rotation are shown by (), the result is delimited by the dashed lines. In 3D space, points such as M and M″2 are not piled up. In (a) the rotation excludes the leaning points M1, M2 and possibly internal points such as M″1 and M″2. In (b) the rotation does not exclude any point of the piece. In (c) the rotation R1 excludes point M0, while the rotation R′1 includes all points of S and its axis contains M1; the leaning point M4 becomes an internal point.
After this operation M naturally becomes a lower leaning point of the new digital plane that contains S′. In practice, this operation does not always give a recognized piece of a digital plane. This is due to the fact that when M is too distant from S, the rotation of the above upper (resp. lower) leaning plane may exclude some points (i.e., their remainders fall beyond the bounds given by relation (1)). To remedy this problem one could choose the lines on which the leaning planes pivot outside the piece. But then either the new digital plane is not naïve (the distance between the resulting leaning planes is greater than 1) or the piece is not recognized (the number of new leaning points in the piece is strictly less than 4). Thus, to get a satisfactory solution we have to handle the operation of leaning the planes carefully. We proceed as follows. We rotate the upper leaning plane, by a suitable angle, as explained below, on the line Lp nearest to M. We rotate the lower leaning plane on the line La farthest from M which is parallel to Lp (in the sense that it is directed by the same vector as Lp) and contains a lower leaning point of S. These lines strongly depend on the polygonal line of pivots L and the polygonal line of antipodes L′. Since each of L and L′ contains no more than two independent vectors [3,9], the lines Lp and La are chosen in one of the following configurations:
1. If the polygonal line of pivots L is not reduced to a point and there exists a separating antipode M1, then Lp is the line containing the vector V2 of L which
A Simplified Recognition Algorithm of Digital Planes Pieces
411
is separated by M1. We take La to be the line directed by V2 and containing M1.
2. If L is reduced to one point then, since S is recognized, there exist four leaning points that define the normal vector of S and its thickness. These points define at least two independent vectors. Since the leaning points on L are collinear, other leaning points (antipodes) exist on the CVA that define a direction transverse to L. Among the antipodes in this transverse direction we take the line La which contains the antipode farthest from M. Then we define the line Lp to be parallel to La and to contain the pivot point nearest to M (which is, in this case, equal to L).
3. If L is not reduced to a point and there is no separating antipode, then we proceed in the same way as in the previous case to define La and Lp.

The angle of rotation is chosen such that (i)
if M is reached without excluding any point of S (i.e., all points of S still satisfy a relation of type (1) after the rotation), then this rotation, say R1, gives a new naïve digital plane in which the piece S′ is recognized. (ii) if the previous rotation excludes a point only on Lp, then, since all possible points on Lp have the same remainder, all these points are excluded. Thus M is too distant. To keep the points of Lp in the new plane we have to stop rotation R1 before reaching M. In this case, Lp and La are still the pivot and antipode lines of the new plane and there is no way to reach M by a rotation. We note that, in this case, M becomes less strongly exterior but never 1-exterior, because in this latter case the piece would become recognizable, which is impossible. (iii) if the previous rotation R1 excludes a point M2 of Lp (or La) after having reached a point M0 of remainder µ + c − 2 (or µ + 1), then we reduce the angle of rotation until M0 is reached. Let R1′ be the corresponding rotation. Algorithmically, this can be done by moving line La to the next line La′ that is parallel to La, contains antipodes, and whose distance to M is greater than that of La, see Figure 4(c). The leaning points of La and Lp always remain leaning points of the new digital plane, the other leaning points in general become internal points, and M0 becomes a new leaning point. Thus, we get enough leaning points so that the piece S is still recognized in the new digital plane. • If point M becomes 1-exterior for the new characteristics, then Debled's algorithm allows us to recognize the piece S′ in another digital plane. • If point M remains strongly exterior, then we have to apply a new rotation. But in this case, point M2 would be excluded and we are in situation (ii). Therefore, the piece S ∪ {M} cannot be recognized in a digital plane.
(iv) If rotation R1 excludes a point M0 of remainder µ + c − 2 (or µ + 1) while the points of Lp and La remain included by R1, then we have to stop the rotation before excluding M0. This corresponds to an extremal position, and the piece S ∪ {M} cannot be recognized.
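The interior / 1-exterior / strongly exterior trichotomy that drives these cases can be made concrete with a small helper. This is our own illustrative sketch, not code from the paper, assuming the convention of relation (1): a voxel m(x, y, z) belongs to the naïve plane P(a, b, c, µ) iff µ ≤ ax + by + cz ≤ µ + c − 1.

```python
def classify(point, a, b, c, mu):
    """Classify a voxel with respect to the naive digital plane P(a, b, c, mu)."""
    x, y, z = point
    r = a * x + b * y + c * z                # remainder of the voxel
    if mu <= r <= mu + c - 1:
        return "interior"                    # relation (1) holds
    # P = distance of r to the admissible remainder interval
    P = (mu - r) if r < mu else (r - (mu + c - 1))
    return "1-exterior" if P == 1 else f"strongly {P}-exterior"

# The added point of Section 6: M(3, 5, -7) against P(18, 21, 23, 0)
print(classify((3, 5, -7), 18, 21, 23, 0))   # strongly 2-exterior
```

With r(M) = 18·3 + 21·5 − 23·7 = −2, the point falls 2 below the admissible interval [0, 22], matching the 2-exterior situation discussed above.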
412
M.M. Mesmoudi
All these cases imply that strongly exterior cases can be recognized only through 1-exterior cases. This proves our first theorem of recognition. Moreover, before performing any rotation, there is an analytic way to check which situation we are in; it is given by condition (5) in the second theorem of recognition.

Theorem 1. Let S be a recognized piece of a digital plane P(a, b, c, µ) and M ∈ Z³ be a point such that S′ = S ∪ {M} is convex. Suppose that M is strongly P-exterior (P ≥ 2). Then there exists at most one naïve digital plane in which S is recognized and admits M as strongly q-exterior for some integer 2 ≤ q < P. When such a number q exists, S′ cannot be recognized in a naïve digital plane.

Theorem 2. Let S be a recognized piece of a digital plane P(a, b, c, µ) and M(x0, y0, z0) ∈ Z³ be a point such that S′ = S ∪ {M} is convex. Suppose that M is strongly P-exterior (P ≥ 2). Let M2(x2, y2, z2) be a point on the polygonal line of pivots and V2(α2, β2, γ2) a vector based at M2 and located either on the polygonal line of pivots, if it is not reduced to a point, or parallel to the polygonal line of antipodes. Let V1(α1, β1, γ1) be a vector linking M to an antipode M1 sufficiently distant from M so that V1 extends at least beyond the polygonal line of antipodes. Let A = β1γ2 − β2γ1, B = α2γ1 − α1γ2, C = α1β2 − α2β1 and k = A ∧ B ∧ C. Then the piece S′ belongs to a digital plane P(A/k, B/k, C/k, µ′) such that M is a leaning point (q = 0) or 1-exterior (q = 1) if and only if M1 is chosen arbitrarily on a line, among at most three parallel lines, directed by V2 such that the following relations are satisfied:

0 ≤ A ≤ B ≤ C,
(4)
P[β2(x2 − x0) − α2(y2 − y0)] − C(P − 1) = kc(1 − q),
(5)
(dm − µ − c + P)/P + k(1 − q)c/(|V2|H) ≤ hm/H ≤ (dm − µ + P)/P;  ∀m ∈ S′,
(6)
with q = 0 or 1, and H (resp. hm) being the height of point M1 (resp. m) with respect to the real line passing through M and directed by V2. The number of lines that may contain M1 is 1 if k < P and at most 3 if k = P.

Remarks. 1. In the algorithm that we propose, only relations (4) and (5) are needed explicitly; relation (6) serves mathematically to prove Debled's conjectures and explains how to choose M1 among all the antipodes. 2. We note that if in Theorem 2 we allow P to take the value 1, relations (4), (5) and (6) become equivalent to those quoted in the theorem of the previous section. By Lemma 2 below, the number k = A ∧ B ∧ C divides P, which is assumed to be 1 here. This fact also holds for the 1-exterior case. Thus Theorem 2 includes the 1-exterior case.
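The quantities A, B, C and k of Theorem 2 are straightforward to compute; the following sketch (the function name is ours) evaluates them on the vectors used in the example of Section 6.

```python
from math import gcd

def candidate_normal(V1, V2):
    """(A, B, C) = V1 ^ V2 as in Theorem 2, and k = A ^ B ^ C (the gcd)."""
    a1, b1, g1 = V1
    a2, b2, g2 = V2
    A = b1 * g2 - b2 * g1
    B = a2 * g1 - a1 * g2
    C = a1 * b2 - a2 * b1
    return (A, B, C), gcd(gcd(abs(A), abs(B)), abs(C))

# Vectors from the example of Section 6:
print(candidate_normal((-3, -5, 7), (8, 3, -9)))  # ((24, 29, 31), 1)
print(candidate_normal((2, -6, 4), (8, 3, -9)))   # ((42, 50, 54), 2)
```

The second case gives k = 2, which indeed divides P = 2 as Lemma 2 requires.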
Proof Idea. Let us suppose that the added point M is located under the plane P(a, b, c, µ). The point M satisfies the relation ax0 + by0 + cz0 = µ − P. Let M1 be a lower leaning point of P(a, b, c, µ) and let V1 = MM1 =: (α1, β1, γ1) and (A, B, C) =: V1 ∧ V2. Vector V1 satisfies the relation aα1 + bβ1 + c(γ1 − P/c) = 0. Suppose that V2 is linearly independent from V1′ = (α1, β1, γ1 − P/c). The vector product V1′ ∧ V2 = (A′, B′, C′) is a rational multiple, say λ, of N(a, b, c). The numbers A′, B′, C′ are given by:

A′ = A + β2 P/c = λa;  B′ = B − α2 P/c = λb;  C′ = C = λc.

We can always assume (by considering −V2 in place of V2) that 0 ≤ A′. Since 0 ≤ a ≤ b ≤ c, we have 0 ≤ A′ ≤ B′ ≤ C′, which is equivalent to 0 ≤ A + β2 P/c ≤ B − α2 P/c ≤ C. If we increase the length of V1, the numbers A′, B′, C′ increase, and so do the gaps between them. Note that the quantities β2 P/c and α2 P/c do not change when we increase the length of V1. Let us take V1 sufficiently long so that

0 ≤ A ≤ B ≤ C
(4)
By dividing A, B, C by their greatest common divisor, we obtain the characteristics of the eventual new plane which may contain the piece S′ = S ∪ {M}.

Lemma 2. If the numbers α2, β2 and γ2 are coprime (α2 ∧ β2 ∧ γ2 = 1), then PGCD(A, B, C) = A ∧ B ∧ C divides P.

Let us look for conditions under which there exists µ′ such that the point M is q-exterior to S ⊂ P(A, B, C, µ′), with q ∈ {0, 1}. This means that for all m(x, y, z) ∈ S we have:

µ′ ≤ Ax + By + Cz < µ′ + C,
Ax0 + By0 + Cz0 = µ′ − P + q, where q = 0 or 1.

For all m(x, y, z) ∈ S ⊂ P(a, b, c, µ) we have µ ≤ ax + by + cz ≤ µ + c − 1. The added point M satisfies the relation ax0 + by0 + cz0 = µ − P. Then µ − P ≤ ax + by + cz ≤ µ + c − 1 for all points of S′. By multiplying all members by λ we obtain:

λ(µ − P) − P(β2 x − α2 y)/c ≤ Ax + By + Cz ≤ λ(µ − 1) − P(β2 x − α2 y)/c + C.

Let us take k = PGCD(A, B, C). By dividing the previous inequality by k we obtain:

λ″(µ − P) − P″(β2 x − α2 y)/c ≤ A″x + B″y + C″z ≤ λ″(µ − 1) − P″(β2 x − α2 y)/c + C″,

where P″ = P/k, A″ = A/k, B″ = B/k, C″ = C/k and λ″ = λ/k = C″/c. Let us take µ(x, y) = λ″(µ − P) − P″(β2 x − α2 y)/c. Then for all points in S′ we have:

µ(x, y) ≤ A″x + B″y + C″z ≤ µ(x, y) + C″ + λ″(P − 1).

Using the facts that M is P-exterior and should become q-exterior, and that M2 (one of the endpoints of vector V2) is an upper leaning point of the pieces S and S′, we obtain the condition
P [β2 (x2 − x0 ) − α2 (y2 − y0 )] − C(P − 1) = kc(1 − q).
(5)
This condition expresses the position of the points M2, M2′ and M1 with respect to M. Now let us determine in which region we can choose V1 so that the surface S′ is a recognized piece of the digital plane of parameters (A″, B″, C″, µ(x0, y0) + q), with M q-exterior, where q = 0 or 1. The former double inequality can be transformed to get

(dm − µ − c + P)/P + kc(1 − q)/(PC) ≤ (β2(x − x0) − α2(y − y0))/(β2α1 − α2β1) ≤ (dm − µ + P)/P.

This expression can be rewritten in terms of heights:

(dm − µ − c + P)/P + k(1 − q)c/(|V2|H) ≤ hm/H ≤ (dm − µ + P)/P,  (6)

with dm ∈ [µ − P, µ + c − 1]. This is the condition that all points m(x, y, z) ∈ S′ should satisfy so that S′ is a recognized piece of the digital plane of characteristics (A″, B″, C″, µ′ = µ(x0, y0)). By studying the sign of the first member of relation (5), we can show that this relation can be satisfied by at most three lines directed by V2 and containing antipodes.
5 Simplified Recognition Algorithm
The algorithm that we describe in this section derives directly from the discussion and the proofs of the two previous theorems. It uses only the part of Debled's algorithm corresponding to the 1-exterior case, with small modifications to recognize the general case of rectangular pieces. The complexity of the simplified algorithm decreases to become at most quadratic in the number of points in the piece. Thanks to the necessary and sufficient conditions quoted in Theorem 2, the simplified algorithm becomes completely decidable where the validity criteria given by Debled fail in the strongly exterior case. These two advantages come in addition to the other advantages of Debled's algorithm.

The algorithm begins by sweeping the piece to be recognized along sections parallel to one coordinate plane, Oxz for instance, by successively adding voxels. At the beginning we initialize y = 0 and let x vary over its interval of definition. At each added voxel the algorithm tries to recognize a piece of a digital straight line. When all values of x have been considered we increment y by 1 and let x sweep again over all its possible values. At each step the algorithm tries to recognize a piece of a digital plane and computes its characteristics. Three cases are possible:

1. If the added point M satisfies the double inequality (1) for the plane constructed before adding M, then we keep the same characteristics and the updated piece is still recognized in this plane with the same leaning points, with the possible addition of M.

2. If the added point is 1-exterior to the plane, then we apply Debled's algorithm to recognize the new piece. This step consists of computing the polygonal pivot and antipode lines to determine the vector V2 which satisfies relation (2), and also searching for an adequate antipode M1 that satisfies (3).

3. In the third case, if the added point is strongly exterior, we compute the polygonal lines of pivots and antipodes and then check relation (5). If this relation
is not satisfied for any point M2 on the polygonal line of pivots, then the piece is not recognizable. If relation (5) is satisfied only for some q different from 0 and 1, the piece is again not recognizable. If relation (5) is satisfied for q = 0, then we apply the 1-exterior part of Debled's algorithm; in this case the piece is recognized with M as a leaning point. If relation (5) is satisfied for q = 1, then we again apply Debled's algorithm for the 1-exterior case, at most three times, to decide the recognition of the piece. Note that in this case the choice of the antipode M1 differs from the choice given by Debled. In the first run of the algorithm, point M1 is taken as in Debled's algorithm. If M becomes 1-exterior, then we apply Debled's algorithm once more and the piece is recognized. If after one run some points are excluded, then we choose the antipode M1 on the next line proposed by Theorem 2; if the point M becomes 1-exterior, then we are done. If some points are still excluded, then we take M1 on the last line proposed by the theorem and the point becomes 1-exterior. Usually, only two lines are sufficient.
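As an independent sanity check for an implementation of this algorithm, recognition can also be decided by brute force directly from the definition: S is a recognized piece of some naïve plane P(a, b, c, µ) when µ ≤ ax + by + cz ≤ µ + c − 1 holds for every voxel, under the normalization 0 ≤ a ≤ b ≤ c of relation (4). The following exhaustive search over a bounded coefficient range is only an illustrative oracle of ours, not the incremental algorithm itself.

```python
from itertools import product

def recognizable(S, cmax):
    """Exhaustively search for a naive plane containing S, with c <= cmax."""
    for a, b, c in product(range(cmax + 1), repeat=3):
        if c == 0 or not a <= b <= c:
            continue
        remainders = [a * x + b * y + c * z for (x, y, z) in S]
        if max(remainders) - min(remainders) <= c - 1:   # slab of thickness c
            return (a, b, c, min(remainders))            # mu = min remainder
    return None

# A flat 2x2 piece lies in the plane z = 0, i.e. P(0, 0, 1, 0):
print(recognizable([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)], 3))
```

Such an oracle is far too slow for real use, but it is handy for cross-checking the incremental recognizer on small pieces.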
6 Example
In Figure 5(a), we show a recognized piece S of the plane P(18, 21, 23, 0). The added point M(3, 5, −7) is 2-exterior, of remainder r(M) = 2. The pivot point nearest to M is M2(4, 2, −4). The polygonal line of antipodes contains only two points, M1(0, 0, 0) and M1′(8, 3, −9). The vector V2 is then (8, 3, −9) and V1 = (−3, −5, 7). The vector product of V1 and V2 gives (A, B, C) = (24, 29, 31). With this choice relation (6) is not satisfied. Take for instance the point m = (9, 0, −7). The quantity hm/H = (β2(x − x0) − α2(y − y0))/C = 58/31 ≈ 1.87 is greater than (dm − µ + P)/P = 3/2 = 1.5. Let us take the antipode located on the line directed by V2 that comes just after V1. Let M1″(5, −1, −3) be such an antipode. The point M1″ is located outside the piece S. In this case the vector V1′ = MM1″ = (2, −6, 4). The vector product of V1′ with V2 is (42, 50, 54) = 2(21, 25, 27). Note that PGCD = k = 2 divides P = 2. The quantity P[β2(x2 − x0) − α2(y2 − y0)] − C(P − 1) is equal to 2[3(4 − 3) − 8(2 − 5)] − 54(2 − 1) = 54 − 54 = 0. Moreover, for q = 1 we obtain kc(1 − q) = 0. Thus, relation (5) is satisfied for q = 1. We can check that all points of S satisfy relation (6). For m(9, 0, −7), which did not satisfy (6) for the first basis, we get hm/H = 58/54 ≈ 1.07, which is smaller than (dm − µ + P)/P = 3/2. The value of µ′ = µ(x0, y0) + q is −1 + 1 = 0. The piece S is recognized in the plane P(21, 25, 27, 0), see Figure 5(b). Thus, we return to the 1-exterior case. The polygonal line of pivots is reduced to the point M2(4, 2, −4). The polygonal line of antipodes contains two points, M1(0, 0, 0) and M1′(9, 0, −7). Vector V2 is then equal to M1M1′ = (9, 0, −7) and V1 = MM1 = (−3, −5, 7). The vector product of V1 with V2 gives (35, 42, 45). The piece S′ = S ∪ {M} becomes recognized in the plane P(35, 42, 45, 0), with at least three lower leaning points M, M1, M1′ and at least one upper leaning point M2, see Figure 5(c).
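The arithmetic of this example is easy to replay; the sketch below (the helper name is ours) evaluates the left-hand side of relation (5) for both bases of the example.

```python
def relation5_lhs(P, V2, M2, M, C):
    """P[b2(x2 - x0) - a2(y2 - y0)] - C(P - 1), the left side of relation (5)."""
    a2, b2, _ = V2
    x2, y2, _ = M2
    x0, y0, _ = M
    return P * (b2 * (x2 - x0) - a2 * (y2 - y0)) - C * (P - 1)

# P = 2, V2(8, 3, -9), pivot M2(4, 2, -4), added point M(3, 5, -7), C = 54:
print(relation5_lhs(2, (8, 3, -9), (4, 2, -4), (3, 5, -7), 54))  # 0, so q = 1 works
```

With C = 31 (the first basis) the same expression gives 23, which cannot equal kc(1 − q) for q ∈ {0, 1}; this is why the antipode had to be moved to the next line.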
416
M.M. Mesmoudi
Fig. 5. In (a), piece S is recognized in the plane P(18, 21, 23, 0). In (b), piece S is recognized in the plane P(21, 25, 27, 0) and M is 1-exterior. In (c), piece S′ is recognized in the plane P(35, 42, 45, 0)
References

[1] E. Andrès. Cercles discrets et rotations discrètes. PhD thesis, Louis Pasteur University, December 1992.
[2] Y. Boukhatem. Sur la reconnaissance des plans discrets. Master's thesis, Mostaganem University, April 2001.
[3] I. Debled-Rennesson. Etude et reconnaissance des droites et plans discrets. PhD thesis, Louis Pasteur University, October 1995.
[4] I. Debled-Rennesson and J.-P. Reveillès. Incremental algorithm for recognizing pieces of digital planes. In SPIE's Internat. Symp. on Optical Science, Engineering and Instrumentation, Technical Conference Vision Geometry 5, Denver, USA, August 1996.
[5] S. H. Hung. On the straightness of digital arcs. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-7, pages 203–215, 1985.
[6] C. E. Kim and A. Rosenfeld. Convex digital solids. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6, pages 639–645, 1984.
[7] C. E. Kim and I. Stojmenović. On the recognition of digital planes in three-dimensional space. Pattern Recognition Letters, volume 12, pages 665–669, North-Holland, 1991.
[8] M. M. Mesmoudi and I. Debled-Rennesson. Contribution à la reconnaissance des plans discrets. Technical report, LORIA Laboratory, INRIA-Lorraine, April 2000.
[9] M. M. Mesmoudi and I. Debled-Rennesson. Sur la reconnaissance des plans discrets. In Colloque d'Analyse et Application, Mostaganem University, October 1999.
[10] J.-P. Reveillès. Géométrie discrète, calculs en nombres entiers et algorithmique. Doctorat d'Etat thesis, Louis Pasteur University, Strasbourg, 1991.
[11] I. Stojmenović and R. Tošić. Digitization schemes and the recognition of digital straight lines, hyperplanes and flats in arbitrary dimensions. In Vision Geometry, Contemporary Mathematics series, volume 119, pages 197–212. AMS, Providence, RI, 1991.
[12] P. Veelaert. Digital planarity of rectangular surface segments. IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 16, pages 647–652, 1994.
[13] P. Veelaert. On the flatness of digital hyperplanes. Journal of Mathematical Imaging and Vision, volume 3, 1993.
Ridgelet Transform Based on Reveillès Discrete Lines

Philippe Carré and Eric Andres

Laboratoire IRCOM-SIC, bât. SP2MI, av. Marie et Pierre Curie, BP 30179, 86960 Chasseneuil-Futuroscope Cédex, France
Abstract. In this paper we present a new discrete implementation of ridgelet transforms based on Reveillès discrete 2D lines. Ridgelet transforms are particular invertible wavelet transforms. Our approach uses the arithmetical thickness parameter of Reveillès lines to adapt the ridgelet transform to specific applications. We illustrate this with a denoising and a compression algorithm. The broader aim of this paper is to show how results of discrete analytical geometry can be successfully used in image analysis.
1 Introduction
Image analysis is traditionally aimed at understanding digital signals obtained by sensors (in our case cameras). Digital information is considered as sampled continuous information, and the theoretical background for it is signal theory. This is sometimes referred to as "digital geometry", in opposition to "discrete geometry" for computer graphics. Over the last ten years, since J.-P. Reveillès introduced it [1], discrete analytical geometry has made important progress in defining and studying classes of discrete objects and transformations. This has greatly enhanced our understanding of the links between the discrete world Zⁿ and the continuous world Rⁿ. At the same time, a new discrete signal decomposition has been developed in image analysis: the wavelet representation. This representation has many applications such as denoising, compression and analysis. One of the aims of this paper is to apply this new insight in discrete geometry to image analysis, and more specifically to a particular wavelet transform: the ridgelet transform.

Wavelets are very good at representing point singularities; however, they are significantly less efficient when it comes to linear singularities. Because edges are an extremely common phenomenon in natural images, an efficient multiresolution representation of images with edges would be quite advantageous in a number of applications. A team at Stanford has recently developed an alternative system of multiresolution analysis specifically designed to efficiently represent edges in images [2]. Their aim was to design a new system, called the ridgelet transform, in the continuous domain, so that an image could be approximated within a certain margin of error with significantly fewer coefficients than would be required

A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 417–427, 2002. © Springer-Verlag Berlin Heidelberg 2002
after a wavelet decomposition. However, most of the work done with ridgelets has been theoretical in nature and discussed in the context of continuous functions. The important bridge to a digital implementation is tenuous at best. To our knowledge, only two solutions for the digital ridgelet decomposition can be found in the literature [3], [4] (notice that the study proposed by Guédon et al. is similar [5]). This paper presents a new approach that aims at representing linear singularities with a discrete ridgelet transform based on Reveillès discrete lines.

In this article, we propose a new approach to the ridgelet transform based on several types of Reveillès discrete line definitions in the Fourier domain. Our decomposition has an exact inverse reconstruction process, and the redundancy of our ridgelet representation can be adjusted with the arithmetical thickness of the Reveillès discrete lines. To illustrate this new decomposition, we propose a method of restoration of noisy images which uses an undecimated wavelet method defined in [6].
2 The Ridgelet Transform

2.1 The Wavelet Transform
The discrete wavelet transform (DWT) stems from multiresolution analysis and filter bank theory [7]. A multiresolution analysis is a decreasing sequence of closed subspaces {Vj}j∈Z that approximates L²(R) (f ∈ L²(R) if ∫ |f(x)|² dx < ∞). A function s ∈ L²(R) is projected, at each step l, onto the subset Vl. This projection is defined as the scalar product, noted cl, of s with a scaling function, noted φ, that is dilated and translated:

cl(k) = ⟨s(x), 2^(−l/2) φ(2^(−l) x − k)⟩ = ⟨s(x), φl,k(x)⟩  (1)

k is the translation parameter and l the dilatation parameter, with k, l ∈ Z. At each step (as l grows), the signal is smoothed. The lost information can be restored using the complementary subspace Wl+1 of Vl+1 in Vl. This subspace is generated by a wavelet function ψ with integer translations and dyadic dilatations; the projection of s on Wl is defined as the scalar product, noted dl:

dl(k) = ⟨s(x), 2^(−l/2) ψ(2^(−l) x − k)⟩ = ⟨s(x), ψl,k(x)⟩  (2)

The analysis is then defined as:

cl(k) = (1/√2) Σn h(n − 2k) cl−1(n),  dl(k) = (1/√2) Σn g(n − 2k) cl−1(n)

with cl the coarse approximation, dl the decimated wavelet coefficients at scale l and c0 the original signal; the sequence {h(k), k ∈ Z} is the impulse response
of a low-pass filter, and the sequence {g(k), k ∈ Z} is the impulse response of a high-pass filter. Notice that with the conditions required on the filters, we get an exact restoration. Mallat's multiresolution analysis is connected with the so-called "pyramidal" algorithms in image processing [8]. Because of the decimation after filtering, Mallat's decomposition is completely time-variant. A way to obtain a time-invariant system is to compute all the integer shifts of the signal. Since the decomposition is then not decimated, the filters are dilated between each projection. This algorithm presents many advantages, in particular the knowledge of all wavelet coefficients: the coefficients removed during downsampling are not necessary for a perfect reconstruction, but they may contain information useful for denoising.
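These analysis and synthesis steps can be exercised on a toy signal; the following sketch uses the Haar filter pair (our choice, purely for illustration) and checks the exact-restoration property.

```python
import math

SQRT2 = math.sqrt(2.0)
h = [1.0, 1.0]    # Haar low-pass taps h(0), h(1)
g = [1.0, -1.0]   # Haar high-pass taps g(0), g(1)

def analyze(c_prev):
    """One step: c_l(k) = 1/sqrt2 sum_n h(n - 2k) c_{l-1}(n), likewise with g."""
    K = len(c_prev) // 2
    c = [(h[0] * c_prev[2*k] + h[1] * c_prev[2*k + 1]) / SQRT2 for k in range(K)]
    d = [(g[0] * c_prev[2*k] + g[1] * c_prev[2*k + 1]) / SQRT2 for k in range(K)]
    return c, d

def synthesize(c, d):
    """Invert the orthonormal 2x2 step: exact restoration of c_{l-1}."""
    out = []
    for a, b in zip(c, d):
        out += [(a + b) / SQRT2, (a - b) / SQRT2]
    return out

c0 = [4.0, 2.0, 5.0, 7.0]
c1, d1 = analyze(c0)
print([round(v, 10) for v in synthesize(c1, d1)])  # [4.0, 2.0, 5.0, 7.0]
```

Longer filters work the same way, with proper handling of the borders; the 2-tap Haar pair keeps the sketch free of boundary issues.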
2.2 Continuous Theory of the Ridgelet Transform
A substantial foundation for ridgelet analysis is documented in the Ph.D. thesis of Candès [2]. We briefly review the ridgelet transform and illustrate its connections with the Radon and wavelet transforms in the continuous domain. The continuous ridgelet transform of s ∈ L²(R²) is defined by:

r(a, b, θ) = ∫_{R²} ψa,b,θ(x) s(x) dx

with ψa,b,θ(x) the 2-D ridgelet function defined from a 1-D wavelet function ψ as:

ψa,b,θ(x) = a^(−1/2) ψ((x1 cos θ + x2 sin θ − b)/a)

b is the translation parameter, a the dilatation parameter and θ the direction parameter. The function is oriented at the angle θ and is constant along the lines x1 cos θ + x2 sin θ = cst. Transverse to these ridges it is a wavelet. In comparison, the continuous 2-D analysis wavelet functions are tensor products of 1-D wavelets ψa,b:

ψa,b(x) = ψa1,b1(x1) ψa2,b2(x2)

The ridgelet transform is thus similar to the 2-D wavelet transform, except that the translation parameters (b1, b2) are replaced by the line parameters (b, θ). Hence, wavelets are adapted to analyse isolated point discontinuities, while ridgelets are adapted to analyse discontinuities along lines. A basic tool for calculating ridgelet coefficients is to view ridgelet analysis as a form of wavelet analysis in the Radon domain: in 2-D, points and lines are related via the Radon transform, thus the wavelet and ridgelet transforms are linked via the Radon transform. The Radon transform of s is defined as:

Rs(θ, t) = ∫_{R²} s(x) δ(x1 cos θ + x2 sin θ − t) dx1 dx2
where δ is the Dirac distribution. The ridgelet coefficients r(a, b, θ) of s are given by applying the 1-D wavelet transform to the projections of the Radon transform, where the direction θ is constant and x is varying:

r(a, b, θ) = ∫_R ψa,b(x) Rs(θ, x) dx

Notice that the Radon transform can be obtained by applying the 1-D inverse Fourier transform to the 2-D Fourier transform restricted to radial lines going through the origin (this is exactly what we are going to do in the discrete Fourier domain with the help of discrete Reveillès lines):

ŝ(ω cos θ, ω sin θ) = ∫_R e^(−jωx) Rs(θ, x) dx

with ŝ(ω) the 2-D Fourier transform of s. This is the projection-slice formula, which is used in image reconstruction from projection methods. These relations are shown in Figure 1.
Fig. 1. Relation between transforms
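The projection-slice relation of Figure 1 can be checked numerically in its simplest discrete form: summing an image along one axis (the Radon projection for one particular angle) and taking the 1-D FFT of the result yields one slice of the 2-D FFT. A small NumPy sketch of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

projection = img.sum(axis=0)            # discrete Radon projection, theta = 0
slice_2d = np.fft.fft2(img)[0, :]       # matching slice of the 2-D spectrum
print(np.allclose(np.fft.fft(projection), slice_2d))  # True
```

Other angles require resampling the spectrum along oblique lines, which is exactly the step the discrete lines of Section 3 make exact.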
2.3 Discrete Ridgelet Transform
As we have seen, a basic strategy for calculating the continuous ridgelet transform is first to compute the Radon transform Rs(θ, t) and secondly to apply a 1-D wavelet transform to the slices Rs(θ, ·). The discrete procedure uses the same principle. As presented in the first section, the discrete wavelet decomposition is easy to implement, is stable and invertible, and can be associated with a discrete orthogonal representation. The discretization of the Radon transform is more difficult to achieve. The majority of the methods proposed in the literature were devised to approximate the continuous formula. However, none of them were specifically designed to be invertible transforms for discrete images, and they cannot be used for the discrete
ridgelet transform. Recently, some articles have studied the implementation of the digital ridgelet transform. Two approaches have been developed:

– Spatial strategy for the digital Radon transform: the Radon transform is defined as summations of image pixels over a certain set of lines. Those lines are defined in a finite geometry, in a similar way as the lines for the continuous Radon transform in Euclidean geometry:

Rs(p, q, b) = Σx Σy s(x, y) δ(b + px − qy), with (p, q) the direction of projection.

In [5] an inverse transform based on erosion and dilation operations is proposed. Vetterli et al. proposed in [3] an orthonormal ridgelet transform.

– Fourier strategy for the digital Radon transform: the projection-slice formula suggests that approximate Radon transforms for digital data can be based on the discrete Fast Fourier transform (FFT). This is a widely used approach in the literature on medical imaging and synthetic aperture radar imaging. The Fourier-domain computation of an approximate digital Radon transform is defined as:
1. Compute the 2-D FFT of f.
2. Extract the Fourier coefficients which fall on lines Lθ going through the origin.
3. Compute the 1-D FFT on each line Lθ (defined for each value of the angular parameter).

In this strategy too, discrete lines must be defined. In [4], Starck et al. proposed an interpolation scheme which substitutes the sampled values of the Fourier transform obtained on the square lattice with sampled values on a polar lattice.

In this paper, we propose to define the lines Lθ with discrete geometry in the Fourier domain. This solution allows us to have different ridgelet decompositions according to the arithmetical thickness of the discrete Reveillès lines. Our transformation is redundant, but the repetition of information depends on the type of discrete lines used and can be adapted to the application. Moreover, we obtain an exact reconstruction.
3 Digital Radon Transform Based on Reveillès Discrete 2D Lines

3.1 Definition of Discrete Lines

The discrete lines that are used in our application are neither classical discrete lines such as, for instance, Bresenham lines, nor the classical Reveillès lines. These lines are not suitable for our purpose because they do not provide a central symmetry in the Fourier domain. Without central symmetry, the inverse Fourier transform would produce imaginary values during the Radon transform. Central symmetry is obtained easily by using closed Reveillès discrete lines, defined as follows:

L^ω_(p,q) = {(x, y) ∈ Z² : |px + qy| ≤ ω/2}
with (p, q) ∈ Z² the direction of the line (direction of the Radon projection) and ω the arithmetical thickness. The parameter ω defines the connectivity of the discrete line. The closed discrete lines have many interesting properties. One of the most important is that each type of closed discrete line is directly linked to a distance: for instance,

L^√(p²+q²)_(p,q) = {(x, y) ∈ Z² : |px + qy| ≤ √(p²+q²)/2}

is equal to {M ∈ Z² : d₂(M, L(p,q)) ≤ 1/2}, where L(p,q) : px + qy = 0 is the Euclidean line of direction (p, q) and d₂ the Euclidean distance [9].
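The definition is direct to evaluate on a finite window; the helper below (ours) lists the points of a closed line and compares the three thicknesses used later, for the direction (2, 3).

```python
def closed_line(p, q, omega, N):
    """Points of the closed line |px + qy| <= omega/2 inside [-N, N]^2."""
    return {(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)
            if 2 * abs(p * x + q * y) <= omega}

naive = closed_line(2, 3, max(2, 3), 6)                   # omega = sup(|p|, |q|)
pythagorean = closed_line(2, 3, (2**2 + 3**2) ** 0.5, 6)  # omega = sqrt(p^2 + q^2)
supercover = closed_line(2, 3, 2 + 3, 6)                  # omega = |p| + |q|
print(naive <= pythagorean <= supercover)                 # thicker contains thinner
```

Since the membership predicate is monotone in ω, the three line types are nested, which is what the redundancy comparison of Figure 2 reflects.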
Fig. 2. Redundancy on the cover of the Fourier lattice by (a) closed na¨ıve lines (b) supercover lines
3.2 Closed Reveillès Discrete Lines for the Digital Radon Transform

Our digital Radon transform is defined by:

R^ω s(p, q, b) = Σ_{k=0}^{K} ŝ(f^k) e^(2πj(k/K)b), with f^k = (f1^k, f2^k) such that |p f1^k + q f2^k| ≤ ω/2

and K the length of a line segment of L^ω_(p,q). We must define the set of discrete directions (p, q) in order to provide a complete representation. The set of line segments must cover the whole square lattice in the Fourier domain. For this, we define the directions (p, q) according to pairs of symmetric points on the boundary of the 2-D discrete Fourier spectrum.
Proposition 1. Let a square lattice be defined as Ω²_N = [−N, N] × [−N, N]. Consider the set of directions (pm, qm) with (pm, qm) = (N, m − N) for 0 ≤ m ≤ 2N, and (pm, qm) = (m − 3N + 1, N) for 2N + 1 ≤ m ≤ 4N − 2. The set of all the closed lines defined by |pm f1 + qm f2| ≤ ωm/2 with ωm ≥ 2 sup(|pm|, |qm|) provides a complete cover of the lattice Ω²_N.

The proof of this proposition is a direct consequence of a well-known result in discrete analytical geometry, which states that a closed discrete line of direction (p, q) is connected if and only if ω ≥ sup(|p|, |q|) [1]. For thinner (non-connected) discrete lines, with values ω < sup(|p|, |q|), it is possible but not certain that we also achieve a complete cover of the lattice Ω²_N, depending on the value of ω compared to N. However, for our applications, we preferred working with connected discrete lines.

Figure 2 illustrates the cover of the Fourier lattice (on the first octant) by two different types of discrete lines. The grey value of a pixel represents the redundancy in the projection (the number of times the pixel belongs to a discrete line). One isolated line is drawn to illustrate the arithmetical thickness of each type of line. Three different types of closed discrete lines have been tested:

– closed naïve discrete lines: ω = sup(|p|, |q|). These lines are the thinnest connected closed discrete lines. They are 8-connected and therefore provide the smallest redundancy, as we can see in Figure 2(a). Closed naïve discrete lines are related to the distance d₁: L^sup(|p|,|q|)_(p,q) = {M ∈ Z² : d₁(M, L(p,q)) ≤ 1/2}, where d₁(A, B) = |xA − xB| + |yA − yB|;

– supercover lines: ω = |p| + |q|. These lines are the thickest connected closed discrete lines that have been considered in our applications. They are the thinnest closed lines that are 4-connected and that cover the Euclidean line they approximate. They provide, of course, an important redundancy, as we can see in Figure 2(b). Supercover lines are related to the distance d∞: L^(|p|+|q|)_(p,q) = {M ∈ Z² : d∞(M, L(p,q)) ≤ 1/2}. The supercover lines are of great theoretical importance;

– closed Pythagorean lines: ω = √(p² + q²). These lines are 8-connected and offer a medium redundancy, in between the naïve and supercover lines. They are related to the Euclidean distance d₂: L^√(p²+q²)_(p,q) = {M ∈ Z² : d₂(M, L(p,q)) ≤ 1/2}. These lines possess the property of having a number of pixels per period close to the period's length. This means, in practice, that if the pixels of the discrete line held energy, this energy would be distributed evenly along the line, independently of the slope of the line.
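The covering property of Proposition 1 can be checked numerically on small lattices. The sketch below (ours) enumerates the directions exactly as stated in the proposition and tests the supercover thickness ω = |p| + |q|:

```python
def covers_lattice(N, omega):
    """Check that the closed lines of Proposition 1 cover [-N, N]^2."""
    directions = [(N, m - N) for m in range(2 * N + 1)]
    directions += [(m - 3 * N + 1, N) for m in range(2 * N + 1, 4 * N - 1)]
    lattice = [(f1, f2) for f1 in range(-N, N + 1) for f2 in range(-N, N + 1)]
    return all(any(2 * abs(p * f1 + q * f2) <= omega(p, q)
                   for (p, q) in directions)
               for (f1, f2) in lattice)

supercover_omega = lambda p, q: abs(p) + abs(q)
print(covers_lattice(2, supercover_omega), covers_lattice(3, supercover_omega))
```

The same harness can be pointed at the naïve or Pythagorean thicknesses to reproduce the redundancy comparison of Figure 2.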
P. Carré and E. Andres

3.3 Discrete Ridgelet Transform
Now, to obtain the ridgelet transform, we simply apply the 1-D wavelet transform to each set of discrete Radon coefficients Rω s(p, q, b) obtained on the line segment L^ω_{p,q}. This transform is easily invertible. The reconstruction procedure works as follows:
1. Compute the inverse 1-D wavelet transform followed by the inverse 1-D FFT for each set Rω s(p_m, q_m, ·) with m ∈ [0, 4N − 2];
2. Substitute the sampled values of f on the lattice points that fall on the lines L^ω_{p,q} with the sampled values of f on the square lattice.
The preceding procedure yields an exact reconstruction if the set of M = 4N − 2 lines provides a complete cover of the square lattice.
Fig. 3. (a) Noisy image “object” (b) denoised by ridgelet decomposition with Pythagorean discrete lines, ω = √(q² + p²) (c) noisy image “woman” (d) denoised by ridgelet decomposition with naive lines, ω = max(|p|, |q|) (e) denoised by ridgelet decomposition with supercover lines, ω = |p| + |q|.
Now with our invertible discrete Radon transform, we can obtain an invertible discrete Ridgelet transform by taking the discrete wavelet transform on
Ridgelet Transform Based on Reveill`es Discrete Lines
each Radon projection sequence {Rω s(p_m, q_m, b)}_{b∈[0,K−1]} where the direction (p_m, q_m) is fixed. This wavelet transform can be decimated or undecimated, and the wavelet basis can be adapted to the application, as for the classical wavelet decomposition. Notice that our strategy generalizes and unifies the methods proposed in the literature that use particular forms of discrete lines (see the section on the discrete ridgelet transform).
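The per-projection wavelet step can be any invertible 1-D transform; the paper leaves the basis open. As a minimal sketch (our own choice, not the authors'), one level of an average/detail (Haar-like) pair and its exact inverse:

```python
def haar_fwd(s):
    """One level of an average/detail (Haar-like) transform; len(s) even."""
    h = len(s) // 2
    averages = [(s[2 * i] + s[2 * i + 1]) / 2 for i in range(h)]
    details  = [(s[2 * i] - s[2 * i + 1]) / 2 for i in range(h)]
    return averages + details

def haar_inv(c):
    """Exact inverse: recover each pair from its average and detail."""
    h = len(c) // 2
    out = []
    for a, d in zip(c[:h], c[h:]):
        out += [a + d, a - d]
    return out
```

Applied to each sequence {Rω s(p_m, q_m, b)} with (p_m, q_m) fixed, the invertibility of this step is what keeps the whole ridgelet transform invertible.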
4 Illustration and Discussion
To illustrate the different applications that can be achieved with the new discrete ridgelet transform based on closed Reveillès discrete lines, we have developed two examples: a denoising and a compression algorithm. Denoising by ridgelet transform simply consists of thresholding the ridgelet coefficients and computing the inverse ridgelet transform. The thresholding is performed with the help of an undecimated method developed for the wavelet decomposition [6]. The redundancy of the wavelet decomposition, associated with this method, reduces the artifacts which appear after thresholding [6].
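The two coefficient-selection rules used in our examples, hard thresholding for denoising and keeping the largest fraction of coefficients for compression, can be sketched as follows (a hedged illustration with our own helper names; the actual threshold selection of [6] is more elaborate):

```python
def hard_threshold(coeffs, t):
    """Denoising: zero every coefficient whose magnitude is below t."""
    return [c if abs(c) >= t else 0 for c in coeffs]

def keep_fraction(coeffs, frac=0.3):
    """Compression: keep only the frac largest-magnitude coefficients,
    e.g. frac=0.3 keeps 30% of the coefficients (a 70% compression rate)."""
    k = max(1, int(len(coeffs) * frac))
    cut = sorted((abs(c) for c in coeffs), reverse=True)[k - 1]
    return [c if abs(c) >= cut else 0 for c in coeffs]
```

In both cases the processed image is obtained by applying the inverse ridgelet transform to the modified coefficients.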
Fig. 4. (a) Original image (b) noisy image (c) denoising with naive lines (d) denoising with Pythagorean lines (e) denoising with supercover lines
We present in the figure two results of our denoising method. With the first example, we can see that this method can reconstruct very noisy images. Because of the adaptation of this decomposition to linear singularities, the edges of
objects are preserved and the noise seems to be removed. The second example illustrates the results for different definitions of the lines L^ω_{p,q}. As for the first image, the features are generally correctly reconstructed and the noise is smoothed. But if we study the result on the woman’s hat more precisely, we see that the denoising is better for ω = |p| + |q| (supercover lines) than for ω = max(|p|, |q|) (naive discrete lines). The first choice of arithmetical thickness ω introduces more redundancy into the decomposition. Due to this redundancy, we obtain an average value during the reconstruction process that reduces the artifacts. In order to illustrate more precisely the result of the denoising algorithm with the different types of closed discrete lines, we have generated an artificial image (Figure 4(a)) and added strong white noise (Figure 4(b)). To show the effect of the noise we have added a vertical slice of each image (at the left of (a) and the right of (b)). Figures 4(c), (d) and (e) are the results obtained with the denoising algorithm for the three definitions of closed discrete lines. As we can see, the denoising is better for a more redundant decomposition (supercover discrete lines, Figure 4(e)) than for a less redundant one (Figure 4(c)).
Fig. 5. (a) Original image (b) image compressed at 70% with naive discrete lines (c) image compressed at 70% with supercover lines
Contrary to the denoising problem, redundancy is of course not desirable for an efficient compression algorithm (more redundancy means more information and thus less compression). In our example (Figure 5), in order to compress the image, we have selected the 30% most important (highest) coefficients of the ridgelet decomposition. This leads to a 70% compression rate. Of course, this is not a very sophisticated procedure, and post-processing would be applied in real applications. It illustrates, however, how the arithmetical thickness of the discrete lines employed in our ridgelet transform influences the quality of the compressed image. As expected, the lower redundancy representation (naive discrete lines) preserves all the features of the original image after thresholding (Figure 5(b)). On the other hand, with the higher redundancy representation (supercover lines) we lose features and the image is globally of lower quality.
This work can be extended in several directions. One of the more theoretical discrete geometry questions is that of the smallest value of ω for which one could obtain a full cover of the Fourier lattice. This is still an open, and seemingly difficult, arithmetical problem. We are also considering extending our denoising and compression algorithms with more sophisticated filters and parameters. It is clear, for instance, that the quality of the compression result, where we have performed a simple thresholding, can be improved.
References
1. Reveillès, J.P.: Géométrie discrète, calcul en nombres entiers et algorithmique. Habilitation, Université Louis Pasteur de Strasbourg (1991)
2. Candès, E.: Ridgelets: Theory and Applications. PhD thesis, Stanford (1998)
3. Do, M., Vetterli, M.: Discrete ridgelet transforms for image representation. Submitted to IEEE Trans. on Image Processing (2001)
4. Starck, J.L., Candès, E.J., Donoho, D.L.: The curvelet transform for image denoising. Technical report, Department of Statistics, Stanford (2000)
5. Normand, N., Guédon, P.: Transformée Mojette : une transformée redondante pour l'image. Comptes rendus de l'Académie des Sciences (1998)
6. Carré, P., Leman, H., Marque, C., Fernandez, C.: Denoising the EHG signal with an undecimated wavelet transform. IEEE Trans. on Biomedical Engineering 45 (1998) 1104–1113
7. Mallat, S.: A theory for multiresolution signal decomposition: the wavelet transform. IEEE Trans. on PAMI 11 (1989) 674–693
8. Burt, P., Adelson, E.: The Laplacian pyramid as a compact image code. IEEE Trans. Comm. 31 (1983) 532–540
9. Andres, E.: Modélisation analytique discrète d'objets géométriques. Habilitation, Université de Poitiers (2000)
A Discrete Radiosity Method
Rémy Malgouyres
LLAIC, Université Clermont 1, IUT département Informatique, B.P. 86, 63172 Aubière cedex, France.
[email protected]
Abstract. We present a completely new principle for the computation of radiosity values in a 3D scene. The method is based on a voxel approximation of the objects, and all occlusion calculations involve only integer arithmetic operations. The method is proved to converge. Some experimental results are presented.
Keywords: Discrete Graphical Models, Voxel, Global Illumination, Radiosity.
Introduction
Radiosity ([SP94]) is a technique which has proved to be efficient (in spite of its large complexity) and accurate for simulating several physical processes involving exchanges of energy. Its fields of application include weather forecasting, heat transfer simulation and light propagation, in particular 3D rendering in computer graphics. However, radiosity remains so expensive that it cannot yet be used extensively in the movie industry and similar applications. Moreover, the basic radiosity algorithms rely on the so-called ideal diffuse hypothesis, which limits the range of applications, and often implies a complicated combination of radiosity with other techniques (such as ray-tracing) in order to obtain a correct simulation.
On the other hand, integer-only geometric computations improve every day, and several techniques have been introduced for the use of discrete geometric models in the fields of modeling and computer graphics ([YCK92], [SC95], [ANF97], [BM00]). Though discrete ray-tracing has not become very widely used, it remains an interesting discrete geometric model and a good first attempt to use discrete geometric models for 3D rendering.
The purpose of this paper is to introduce a completely new radiosity method, which is based on a voxel representation of objects, and whose occlusion calculations involve only integer arithmetical operations. The objects of the scene are first approximated by a set of voxels, which are stored in an octree data structure. Then the radiosity computations are performed in the discrete voxel space. Finally, some classical techniques such as z-buffer or ray-tracing are used to display the results.
The paper is organized as follows: first we remind the reader of the required notions about classical radiosity, then we set out the basics of discretization. Afterwards, we explain how we can approximate the so-called diffuse illumination
A. Braquelaire, J.-O. Lachaud, and A. Vialard (Eds.): DGCI 2002, LNCS 2301, pp. 428–438, 2002. © Springer-Verlag Berlin Heidelberg 2002
equation by a discrete equation, and propose a numerical solution which is proved to converge towards a solution of our discrete equation. Then comes a section about implementation and complexity of the method, and finally we present some experimental results.
1 Basic Notions for Radiosity
In this section we recall the basics of the radiosity method. We refer to [SP94] for more details on classical radiosity methods.

1.1 The Data of a 3D Scene
We assume that we are given a 3D scene, which consists of a set of polyhedra (also called objects), or surfaces approximated by polyhedra, each polyhedron P of the scene being provided with a reflectance coefficient ρP ∈ [0, 1[. Intuitively, the physical definition of this reflectance coefficient is that ρP is the ratio of the total power of the outgoing light at a given point of the polyhedron to the power of the incoming light at the same point. Note that the hypothesis ρP < 1 means that no energy is created while light is reflected by the object. For convenience, given a point x of the object P, we denote ρ(x) = ρP.
In this paper, we adopt the simplifying hypothesis that the reflectance ρP does not depend on the point of the polyhedron P, which means that the objects are made of a uniform material: they are not textured. This is not an intrinsic limitation of the presented method, but rather a hypothesis that we made for the first investigations. There is another hypothesis, namely that the reflectance depends neither on the incoming nor on the outgoing direction of the light. This hypothesis is known as the ideal diffuse reflectors hypothesis, and is classically the basic hypothesis in radiosity methods. Intuitively, an ideal reflector is an object such that the light emitted from a point x of this object has the same properties in all the directions of the half-space limited by the tangent plane at x and containing the outgoing normal vector at x.
A light source is a particular object P which is provided with a positive real number EP called the exitance of P. Intuitively, the exitance is the total power of light leaving the object per unit of area. Like the reflectance, the exitance is assumed to depend neither on the point of the object, nor on the direction of emission. Given a point x of the object P, we denote E(x) = EP.

1.2 The Diffuse Illumination Equation
Here we describe a continuous equation which expresses the power of light leaving a point x of an object P (the radiosity at point x) as a function of the power of light of all other points in the scene. Let us denote by B(x) the radiosity at point x. Given a point y in the scene, we denote by V(x, y) the number equal to 1 if y is visible from x in the scene (i.e. if no object of the scene intersects the straight
line segment [x, y]), and equal to 0 otherwise. The function V thus defined is called the visibility function. We denote by θ(x, y) the angle between the vector →xy and the normal vector →n at point x if this angle is less than π/2, and π/2 otherwise (so that cos θ(x, y) = 0 if y is not in the half-space limited by the tangent plane at x and containing the outgoing normal vector). The diffuse illumination equation at point x, which corresponds to the physical ideal diffuse reflection model, is the following (see [SP94]):

B(x) = E(x) + ρ(x) ∫_{y∈scene} B(y) (cos θ(x, y) cos θ(y, x) / (π r²)) V(x, y) dy    (1)

where r denotes the distance between x and y.
The problem of simulating light propagation and reflections in the scene, which consists of computing the radiosity B(x) at each point x of the scene, reduces to solving the diffuse illumination equation. However, this integral equation cannot be solved analytically except in some very particular cases which are of no use in the field of computer graphics. Therefore, we have to find an approximate numerical solution. A classical way to do this, called the radiosity method, is to break down the objects of the scene into a finite number of patches, and to solve a discrete version of the diffuse illumination equation, which amounts to solving a (huge) matrix equation. However, in this solution, the coefficients of the involved matrix depend on the so-called form factors, which are double integrals over the surfaces of couples of patches. The computation of these form factors constitutes the main part of the runtime of radiosity programs, and is either very slow or quite rough. The idea of this paper is to discretize the scene to obtain voxels and, roughly speaking, to consider each voxel as a single point in order to replace the computation of integrals by the computation of sums.
2 Discretization of a 3D Scene

2.1 Basic Notions of Discrete Geometry
A voxel v = (i, j, k) is a point of Z³, i.e. a point with integer coordinates. Classically, such a voxel v = (i, j, k) can be seen as a unit cube centered at the point (i, j, k) and whose edges are parallel to the coordinate axes. Given two voxels v = (i, j, k) and v′ = (i′, j′, k′), we say that v and v′ are 26-adjacent if max(|i − i′|, |j − j′|, |k − k′|) = 1. We say that v and v′ are 18-adjacent if they are 26-adjacent and have at least one coordinate in common. Finally, v and v′ are said to be 6-adjacent if they are 26-adjacent and differ by only one of their coordinates. Given a voxel v and n ∈ {6, 18, 26}, we call the n-neighborhood of v, and denote by Nn(v), the set of all voxels which are n-adjacent to v.
Let n ∈ {6, 18, 26}. Using the notion of n-adjacency, we can define n-connectivity as follows. Let X ⊂ Z³ be a set of voxels. An n-path in X
from v₀ to v_p is a finite sequence (v₀, . . . , v_p) of elements of X such that for i = 1, . . . , p the voxel v_i is n-adjacent to v_{i−1}. Now let v and v′ be two elements of X. We say that v and v′ are n-connected in X if there exists an n-path in X from v to v′. The relation “to be n-connected in X” is an equivalence relation, and we call the n-connected components of X the equivalence classes of this relation. The set X is called n-connected if it has exactly one n-connected component. Given X ⊂ Z³, we denote by X̄ the complement Z³\X of X in Z³. The set X is said to be n-separating if X̄ has exactly two n-connected components.
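The three adjacency relations reduce to a few integer comparisons; as a small sketch (our own helper, not from the paper), returning the tightest relation that holds:

```python
def adjacency(v, w):
    """Tightest of the 6-, 18-, 26-adjacency relations between voxels v, w,
    or None if they are not 26-adjacent (note 6 implies 18 implies 26)."""
    d = [abs(a - b) for a, b in zip(v, w)]
    if max(d) != 1:
        return None        # identical voxels, or too far apart
    common = d.count(0)    # number of shared coordinates
    if common == 2:
        return 6           # differ in exactly one coordinate
    if common == 1:
        return 18          # 26-adjacent with a coordinate in common
    return 26
```

An n-path is then simply a sequence of voxels in which each consecutive pair is n-adjacent.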
2.2 Discretizing a Polyhedron
The main problem is the following: given a closed polyhedron P, with an interior and an exterior, how do we generate a list of voxels which approximates the polyhedron P (in the sense, for instance, that the Hausdorff distance between the obtained set of voxels and P is less than 1), and such that the obtained set of voxels is 6-separating? There is no solution to this problem if P is an arbitrary polyhedron (just think of pinched, thin or highly curved polyhedra), but, as we shall see from experimental results, we can find practically acceptable solutions for thick enough polyhedra. We use a discretization scheme similar to the one described in [BM02]: for each face of the polyhedron, we use a polygon filling algorithm ([FVD96]) to go over the pixels of the projection of the face onto (say) the z = 0 plane, and for each of these pixels we sample the height z of the face over this pixel.
2.3 Our Discrete Data-Structure
In order to represent the discrete scene obtained after discretization, we have chosen an octree data structure, for the following reasons. First, it is much less memory consuming than a boolean matrix representation of discrete objects, and second, it is compatible with a hierarchical version of the method, i.e. a version in which some parts of the scene are discretized more roughly than others. As we shall see in the conclusion, such a hierarchical method is the only way to obtain a technique which can seriously be compared to the most recent and advanced versions of classical radiosity methods.
Since octrees have been very widely used in computer graphics, and more generally in computer imagery, for a long time (see [S85] and [W95] among many others), we do not describe them here in much detail. We just mention that an octree is a tree structure in which each node has at most eight children, each child of a node representing an eighth portion of the space represented by the node. The root of the tree thus represents the whole matrix. A node is a leaf of the tree either if the portion of space it represents is composed only of 1’s (object leaf case), or if this portion of space is composed only of 0’s (complement leaf case). Thus, it is possible to represent an n × n × n boolean matrix by a tree of depth (at most) log n. Then, determining whether a given element of the matrix is 1 can be done in O(log n).
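A minimal octree over an n × n × n boolean matrix (n a power of two) can be sketched as below; the names are ours, and uniform subcubes collapse into object/complement leaves so that membership costs O(log n):

```python
class Octree:
    """Octree over an n*n*n boolean matrix (n a power of two).  A node is
    True (object leaf), False (complement leaf) or a list of 8 children."""

    def __init__(self, n, voxels):
        self.n = n
        self.root = self._build(0, 0, 0, n, voxels)

    def _build(self, x, y, z, s, voxels):
        if s == 1:
            return (x, y, z) in voxels
        h = s // 2
        kids = [self._build(x + dx * h, y + dy * h, z + dz * h, h, voxels)
                for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
        if all(k is True for k in kids):
            return True       # uniform object leaf
        if all(k is False for k in kids):
            return False      # uniform complement leaf
        return kids

    def contains(self, x, y, z):
        """Membership test in O(log n): descend one child per level."""
        node, s = self.root, self.n
        while isinstance(node, list):
            s //= 2
            dx, dy, dz = x // s, y // s, z // s
            node = node[dx * 4 + dy * 2 + dz]
            x, y, z = x - dx * s, y - dy * s, z - dz * s
        return node
```

A real implementation would also store per-voxel data (normal vector, reflectance) at the object leaves.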
In our case, since 1’s correspond to voxels approximating surfaces, we are likely to find many wide portions of space composed only of 0’s, and thus many low-depth complement leaves. This is another argument for the use of the octree data structure. Indeed, as was previously pointed out by authors using octrees for ray-tracing or discrete ray-tracing ([SC95], [YCK92]), and as we shall see below, the existence of low-depth complement leaves enables us to speed up the computation of the intersection between a ray (i.e. a half-line) and the objects of the scene.
3 Discretizing and Solving the Diffuse Illumination Equation

3.1 Transforming the Integral Equation
Now we are going to explain how, by slightly transforming the diffuse illumination equation (Equation 1), and by using sums to approximate integrals, we can obtain a discrete equation which can be numerically solved. The first term on the right side of the diffuse illumination equation, due to exitance, can easily be handled; henceforth we concentrate on the second term, which is an integral representing the light leaving the point x due to the reflection of light arriving from other points y of the scene.
We denote by SR(x) the sphere centered at the point x with radius R, with R ∈ R⁺. Moreover, given σ ∈ SR(x), we denote by y(x, σ) the first point of the scene met when going over the ray (i.e. half-line) having x as extremity and containing σ.

∫_{y∈scene} B(y) (cos θ(x, y) cos θ(y, x) / (π r²)) V(x, y) dy
  = ∫_{σ∈SR} B(y(x, σ)) cos θ(x, y) cos θ(y, x) d(y(x, σ)) / (π ‖x − y(x, σ)‖²)
  = ∫_{σ∈SR} B(y(x, σ)) cos θ(x, σ) dσ / (π R²)

Note that we can identify the terms cos θ(y, x) d(y(x, σ)) / (π ‖x − y(x, σ)‖²) and dσ / (π R²) because both can be recognized as an element of solid angle viewed from the point x.
3.2 The Discrete Sphere Method
Now we explain how we approximate the latter integral over a sphere of radius R by a sum over the voxels of a discrete sphere. The idea is simply that the integral of a function can be approximated by a sum, over small patches, of the area of the patch multiplied by the value of the function at some point of the patch. We use an idea similar to the so-called hemicube method (see [SP94] for instance) in order to discretize the set of directions in space.
We consider a discrete sphere ΣR with radius R ∈ R, centered at a voxel x = (a, b, c), i.e. the set of all voxels at distance less than or equal to R from x and having a 6-neighbor at distance more than R from x. In our method, we construct the voxels of such a discrete sphere as an initialization. We can use a straightforward construction algorithm, since the time for constructing the
sphere will anyway be very small compared to the overall radiosity method runtime. Then we consider, for each voxel v ∈ ΣR, the set F(v) of all the faces of the voxel v which are shared with a voxel at distance greater than R from x. All these faces constitute the frontier of the discrete ball with radius R. For each face f ∈ F(v), we consider the solid angle A(f) formed by f viewed from x. The solid angle A(f) can be approximated as follows: assume for instance that f is the face shared by v = (a + i, b + j, c + k) and the voxel v′ = (a + i, b + j, c + k + 1). Then

A(f) ≈ (k + 0.5) / (π (i² + j² + (k + 0.5)²)^{3/2}).

Now, consider SR the continuous sphere centered at x with radius R over which we want to compute the integral as above. Consider, for each v ∈ ΣR and each f ∈ F(v), the patch p(f) which is the central projection (with center x) of f on the sphere SR. We have:

∫_{σ∈SR} B(y(x, σ)) cos θ(x, σ) dσ / (π R²)
  = Σ_{v∈ΣR, f∈F(v)} ∫_{σ∈p(f)} B(y(x, σ)) cos θ(x, σ) dσ / (π R²)
  ≈ Σ_{v∈ΣR, f∈F(v)} B(y(x, v)) cos θ(x, v) A(f)

The latter approximation is obtained by considering the integrand B(y(x, σ)) cos θ as constant on the patch p(f); the integral obtained is B(y(x, v)) cos θ(x, v) multiplied by the solid angle of p(f) viewed from x, this solid angle, divided by π, being equal to A(f). Now, y(x, v) can be approximated by the first voxel I(x, v) encountered by going over a discrete line from the voxel x through the voxel v. Finally, the diffuse illumination equation of Section 1.2 is approximated by the following discrete linear equation for each voxel x of the discrete scene:

B(x) = E(x) + ρ(x) Σ_{v∈ΣR, f∈F(v)} B(I(x, v)) cos θ(x, v) A(f)    (2)
3.3 Numerical Solution of the Discrete Equation
The Proposed Algorithm

Lemma 1. We have

lim_{R→+∞} Σ_{v∈ΣR, f∈F(v)} cos θ(x, v) A(f) = 1.
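This limit can be checked numerically (our own sketch, with our own helper names): we enumerate the frontier faces of the discrete ball, apply the approximation of A(f) per face axis, and sum cos θ(x, v) A(f) over the hemisphere of a fixed normal; the total should approach 1 for moderate R.

```python
import math

FACE_DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def frontier(R):
    """(voxel, face-centre) pairs on the frontier of the discrete ball of
    radius R: faces shared by a voxel at distance <= R and one at distance > R."""
    out, r2 = [], R * R
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            for k in range(-R, R + 1):
                if i * i + j * j + k * k > r2:
                    continue
                for dx, dy, dz in FACE_DIRS:
                    if (i + dx) ** 2 + (j + dy) ** 2 + (k + dz) ** 2 > r2:
                        out.append(((i, j, k),
                                    (i + 0.5 * dx, j + 0.5 * dy, k + 0.5 * dz)))
    return out

def A(c):
    """A(f) for the unit face centred at c: |n . c| / (pi |c|^3), where the
    face normal n is the axis carrying the half-integer coordinate."""
    num = max(abs(t) for t in c if abs(2 * t) % 2 == 1)
    return num / (math.pi * (c[0] ** 2 + c[1] ** 2 + c[2] ** 2) ** 1.5)

def lemma_sum(R, normal=(0, 0, 1)):
    """Sum of cos(theta(x, v)) * A(f) over the discrete sphere of radius R,
    with cos clamped to 0 on the back hemisphere."""
    total = 0.0
    for v, c in frontier(R):
        nv = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
        if nv == 0:
            continue
        cos_t = max(0.0, sum(a * b for a, b in zip(v, normal)) / nv)
        total += cos_t * A(c)
    return total
```

The residual error comes from evaluating cos θ at the voxel center rather than on the face, and shrinks as R grows.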
An immediate consequence of this lemma is that our linear system (Equation 2) satisfies the formal properties under which both the Jacobi and Gauss-Seidel relaxations converge to a solution of the system (see [SP94] for the similar use of Gauss-Seidel relaxation in classical radiosity). Now we explain the iterative scheme. We set B₀(x) = E(x) for each voxel x of the scene. Then we inductively define B_i(x) for i ≥ 1 by:

B_i(x) = E(x) + ρ(x) Σ_{v∈ΣR, f∈F(v)} B_{i−1}(I(x, v)) cos θ(x, v) A(f)    (3)
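The iterative scheme (3) is a Jacobi-style relaxation; on a toy system with precomputed cos θ · A(f) weights it can be sketched as follows (a hypothetical miniature, not the voxel implementation):

```python
def relax(E, rho, W, iters=50):
    """Jacobi relaxation B_i = E + rho * (W B_{i-1}), cf. Equation 3.
    W[x][y] plays the role of the cos(theta) * A(f) transport terms;
    convergence is geometric when rho[x] < 1 and rows of W sum to <= 1."""
    B = E[:]
    for _ in range(iters):
        B = [E[x] + rho[x] * sum(W[x][y] * B[y] for y in range(len(B)))
             for x in range(len(B))]
    return B

# two fully facing patches: one emitter, both 50% reflective
B = relax(E=[1.0, 0.0], rho=[0.5, 0.5], W=[[0.0, 1.0], [1.0, 0.0]])
```

The fixed point of this toy system, B₀ = 1 + 0.5·B₁ and B₁ = 0.5·B₀, is (4/3, 2/3), and the iterates approach it geometrically.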
When i tends to infinity, the numbers B_i converge exponentially, for all voxels, to a solution of the discrete Equation 2.
4 Implementation and Complexity

4.1 Going over a Discrete Line
We remind the reader that the voxels of the surfaces are encoded in an octree. The problem is, given a voxel x and an integer vector v, to compute the first voxel I(x, v) encountered by following a discrete line from x in the direction of v. First, which discrete line should we choose, since there are several? Here, good arithmetical properties of the chosen line are not really important. The main problem is to perform a fast computation of a 6-connected discrete line which is close to the continuous half-line from x directed by v. To do so, we introduce a current voxel M initialized to x. In fact, we must initialize M to a voxel close to x on the considered discrete line, in such a way that M is not, as x is, a surface voxel, i.e. the leaf L(M) of the octree corresponding to M is a complement leaf. The choice of an initial M is a bit tricky. Then we iterate the following procedure until the leaf L(M) is an object leaf. We assume for instance that the coordinates of the vector v are all positive.
1. Find the limits xmax, ymax, zmax of the cube corresponding to the leaf L(M) of the octree;
2. Determine, using integers, which limit will be crossed first from x by following the line in the direction v;
3. Assume that, say, the limit xmax is crossed first. Compute a new voxel M on the discrete line having xmax + 1 as first coordinate. To ensure 6-connectedness, we choose M such that M.y ≤ ymax and M.z ≤ zmax;
4. Find the new leaf L(M) in the octree data structure.
Let W be the width of the voxel matrix. Since the time for searching a leaf in the octree is at most log W, the time for going over the discrete line is at most W log W. In fact, the required time is generally much less because, when the complement leaf L(M) corresponding to M represents a large cube C, first the depth of L(M) is less than log W, and second, we jump directly to the exit of C without considering intermediate voxels.
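The integer-only comparison of crossing order in step 2 can be sketched as follows (our own simplified voxel-by-voxel walk, ignoring the octree jumps and assuming positive components): the next unit boundary along axis a is crossed at parameter (2·c_a + 1)/(2·v_a), and these fractions are compared after multiplication by a common denominator.

```python
def discrete_ray(x, v, steps):
    """First voxels of a 6-connected discrete line from voxel x along v
    (all components of v positive integers); integer arithmetic only."""
    prod = v[0] * v[1] * v[2]
    pos, c = list(x), [0, 0, 0]
    path = [tuple(pos)]
    for _ in range(steps):
        # crossing parameter along axis a is (2*c[a] + 1) / (2*v[a]);
        # compare via the common denominator, i.e. purely integer keys
        a = min(range(3), key=lambda a: (2 * c[a] + 1) * (prod // v[a]))
        pos[a] += 1
        c[a] += 1
        path.append(tuple(pos))
    return path
```

Each step increments exactly one coordinate, so the walk is 6-connected by construction, and after v₀ + v₁ + v₂ steps it reaches x + v.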
We see here one advantage of the octree data structure: going over a discrete line is fast.

4.2 The Two Main Steps of the Method
The purpose of the method is to compute the numbers B_i(x) for each voxel x of the discretized scene, and for i sufficiently large so that B_i(x) is a correct approximation of the solution of Equation 2. Fortunately, the convergence is exponential with respect to i, and it turns out that, in practice, we obtain an accurate enough estimation for i = 5, . . . , 10. Like classical radiosity techniques, our method consists of two main steps:
1. Computation, for all voxels x of the scene and for all voxels v of the discrete sphere ΣR centered at x, of the first voxel I(x, v) encountered by following the discrete 6-connected line issued from x through v. The address of the corresponding leaf of the octree data structure may be stored.
2. Computation, using the inductive definition of B_i(x) (Equation 3) and the I(x, v) computed during the first step, of the numbers B_i.
The First Step. Note that the result of the first step depends only on the geometry of the scene, and neither on the material properties of the objects (the numbers ρ(x)) nor on the illumination conditions (light sources, exitance). This first step is analogous to the computation of the form factors in classical radiosity methods, and does not need to be performed again in case of a change in the illumination conditions. Note an important difference between our method and other radiosity methods: the first step consists of integer-only computations. The complexity of this step is the time L for going over the longest discrete line segment included in the complement of the scene, multiplied by the number |ΣR| of voxels of the considered discrete sphere (which is O(R²)), multiplied by the number N of voxels of the scene.
The Second Step. Now we come to the second step which, after the first step, is quite straightforward. The second step is called the propagation step. Let us mention that all calculations concerning discrete spheres, including the computation of the solid angles A(f), can be performed once and for all, independently of the voxel x. Note that we stored the normal vector at each voxel (which is used for computing cos θ(x, v)) while discretizing the scene, and that this normal vector is computed by continuous techniques (such as the Phong method).
An interesting point is that, instead of duplicating the variables B_i in order to store both B_{i−1} and B_i as Equation 3 suggests, we can use the previously computed B_i(y) instead of B_{i−1}(y) in order to compute B_i(x). The method thus obtained can also be proved to converge and, practically speaking, it converges even faster. The complexity of this step is the number i of iterations (typically 5 or 6), multiplied by |ΣR|, multiplied by the number of voxels of the scene. Therefore, as we can also observe from experiments, the complexity of the second step is lower than the complexity of the first step. Hence the overall complexity of the method is LN|ΣR| + iN|ΣR|.

4.3 About the Space Complexity
The memory cost of the method, as described above, mainly corresponds to the cost of storing the addresses of I(x, v) for all x and v, which is N|ΣR|. This can be managed by storing the octree data structure (hence the voxel information) in RAM, while storing the addresses I(x, v) on a disk. Indeed, by implementing the algorithm carefully, we can write each address I(x, v) once and for all (possibly in blocks of a few MB) during the first step, and then, during the second step, read these addresses as many times as the number i of propagations, always in
Fig. 1. Color plates: the living-room scene. (a) 25 hours, viewpoint 1. (b) 72 hours, viewpoint 2.
the same order as the addresses were written. Thus, the disk write/read cost is low, and the memory cost affordable.
5 Experimental Results
First, let us describe the computer on which the experiments were made: a 1.2 GHz processor, 500 MB of RAM, and a 50 GB IDE disk. We present two experiments with the same 3D scene:
– The first experiment (Figure 1(a)) uses a 315×315×190 matrix of voxels, the surfaces being approximated by about 1 million voxels; the radius of the discrete sphere used is 30, so that the cardinality of the discrete sphere (which is the number of directions in space taken into account) is 9194. The amount of memory used is less than 120 MB of RAM to store the octree data structure, and about 18 GB were written on the disk. The runtime is about 25 hours.
– The second experiment (Figure 1(b)) uses a 420×420×250 matrix of voxels; the radius of the discrete sphere used is 38, so that the cardinality of the discrete sphere is around 15000. The amount of RAM to encode the octree data structure is about 200 MB, and less than 50 GB were written on the disk. The runtime is about 72 hours.
Conclusion
We have presented a completely new simulation technique for lighting in a 3D scene made of ideal diffuse reflectors. This method is based on a space voxelization and integer-only arithmetic, and gives promising results. However, this is the first paper on this method and a huge amount of work remains to be done, including: first, the evaluation of the method as a simulation technique and its comparison with classical radiosity methods; second, the generalization to an adaptive voxel space by working with higher resolution around highly curved objects (see [BM02]); third, the generalization to specular reflectors and transparency; and fourth, arithmetical optimization.
References
[ANF97] E. Andres, Ph. Nehlig, J. Francon, Tunnel-Free Supercover 3D Polygons and Polyhedra, Eurographics ’97, Budapest, Computer Graphics Forum, ed. Blackwell Publishers, vol. 16, 3, pp. C3–C13, 1997.
[BM00] J. Burguet, R. Malgouyres, Strong Thinning and Polyhedrization of the Surface of a Voxel Object, Proceedings of DGCI’2000, Uppsala, Sweden, Lecture Notes in Computer Science vol. 1953, Springer, pp. 222–234, December 2000.
[BM02] J. Burguet, R. Malgouyres, Multiscale Discrete Surfaces, Proceedings of DGCI’2002, Bordeaux, France, Lecture Notes in Computer Science, to appear, April 2002.
[FVD96] J.D. Foley, A. Van Dam, S.K. Feiner and J.F. Hughes, Computer Graphics: Principles and Practice (second edition in C), Addison-Wesley.
[S85] J. Sandor, Octree Data Structures and Perspective Imagery, Computers & Graphics, Vol. 9, No. 4, pp. 393–405, 1985.
[SP94] F.X. Sillion and C. Puech, Radiosity and Global Illumination, Morgan Kaufmann Publishers, San Francisco, California, 1994.
[SC95] N. Stolte and R. Caubet, Discrete Ray-Tracing of Huge Voxel Spaces, Eurographics ’95, pages 383–394, Maastricht, August 1995. Blackwell.
[W95] K.Y. Whang et al., Octree-R: An Adaptive Octree for Efficient Ray Tracing, IEEE TVCG, Vol. 1, No. 4, pp. 343–349, 1995.
[YCK92] R. Yagel, D. Cohen, and A. Kaufman, Discrete Ray Tracing, IEEE Computer Graphics and Applications, September 1992, 19–28.
Author Index

Agnus V., 155
Amat J., 255
Andres E., 313, 417
Arquès D., 360
Attali D., 57
Balogh E., 392
Bertrand G., 102, 301
Billhardt H., 165
Boissonnat J.D., 197
Borgefors G., 244
Bretto A., 124
Bruckstein A.M., 145
Brun L., 92
Burguet J., 338
Buzer L., 372
Carré P., 417
Chastel S., 124
Ciria J.C., 45
Coeurjolly D., 326
Colantoni P., 124
Cortadellas J., 255
Couprie M., 301
Crespo J., 165
Damiand G., 220
De Floriani L., 69
Del Lungo A., 392
Domínguez E., 45
Francés A.R., 45
Frigola M., 255
Frosini A., 136
Gau C.J., 81
Herman G.T., 279
Hétroy F., 57
Hilton A., 382
Illingworth J., 382
Jonker P.P., 187
Kenmochi Y., 301
Kong T.Y., 81
Köthe U., 22
Kropatsch W.G., 1, 92
Kuba A., 392
Lindblad J., 267
Lohou C., 102
Malandain G., 197
Malgouyres R., 338, 428
Maojo V., 165
Marchadier J., 360
Mesmoudi M.M., 69, 404
Michelin S., 360
Morando F., 69
Muñoz A., 165
Nivat M., 392
Nyström I., 267
Puppo E., 69
Resch P., 220
Ronse C., 155
Ros L., 209
Sanandrés J.A., 165
Simi G., 136
Sintorn I.M., 244
Soille P., 175
Song Y., 114
Starck J., 382
Sugihara K., 209, 350
Thomas F., 209
Thürmer G., 34
Veelaert P., 289
Zhang A., 114
Žunić J., 232